Using OpenCV to process live video from Phantom 4 - iOS

I would like to process live frames in OpenCV from the video feed of a DJI Phantom 4. I've been able to set up OpenCV for iOS in Xcode, but I need a tutorial or instructions on how to send the frames from the DJI camera into OpenCV in the correct format on the fly. Any suggestions?
Thanks

Hello there Ilia Labkovsky,
I think I am in the same boat. I have got a P3 and would like to process the images via OpenCV. I intend to use my laptop PC as an image processor, sending the images directly via TCP/IP and doing my own image processing off-board. I have yet to get this working, though, and may run into some problems.
Is there a way I can privately message you on Stack Overflow?
Best of luck with the programming :)

There is a tutorial for Android among the DJI sample apps on how to parse and obtain the YUV frames. From there you can use OpenCV to process the frames: https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample
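Once you have a raw YUV frame (the DJI sample yields NV21-style YUV420 frames), the OpenCV side of the conversion is short. A minimal C++ sketch, assuming a complete NV21 buffer and known dimensions (the function name and parameters are illustrative, not from the DJI sample):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Convert one NV21 (YUV420sp) frame to BGR for further processing.
// `data` must point to width * height * 3 / 2 bytes (Y plane + interleaved VU).
cv::Mat nv21ToBgr(const unsigned char* data, int width, int height)
{
    // Wrap the raw buffer as a single-channel Mat of height * 3/2 rows.
    cv::Mat yuv(height + height / 2, width, CV_8UC1,
                const_cast<unsigned char*>(data));
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_NV21);
    return bgr; // owns its own data, safe after the source buffer is recycled
}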

Related

How to save a v4l2 buffer image using OpenCV 4.2

I'm using a Raspberry Pi 3B+ with a board equivalent to the Auvidea B101, an HDMI-to-MIPI bridge. The board works correctly and I can save images using the code from my gist ( https://gist.github.com/danriches/80fadc21b5f1b0ca8ba725a8dd598710 ). However, when I try to use OpenCV 4.2.0 (compiled locally, and the samples all work) I get a segfault, as detailed in the comments section of the gist. I can't for the life of me work out why. If someone has a B101 or an equivalent board using the TC358743 chipset and has it working with OpenCV, I'd greatly appreciate any advice or suggestions.
This all needs to run as fast as possible, so apps calling apps is no good; I need to process the buffer and eventually write it to a GStreamer pipe. In fact, if I could take the JPEG I get out of the original code at line 832 and pass it to an instance of gst-rtsp-server, I'd be a happy man.
Any takers??
Dan
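For what it's worth, a common cause of segfaults when handing V4L2 buffers to OpenCV is wrapping an mmap'd buffer in a cv::Mat and touching it after the buffer has been re-queued. A minimal sketch of the safe pattern, assuming the TC358743 delivers packed UYVY frames (the pixel format and dimensions here are assumptions, not taken from the gist):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Wrap a dequeued V4L2 mmap buffer without copying, then convert.
// `buf` must stay mapped and must not be re-queued until `bgr` exists.
cv::Mat uyvyBufferToBgr(void* buf, int width, int height)
{
    // UYVY is 2 bytes per pixel -> two-channel 8-bit Mat.
    cv::Mat uyvy(height, width, CV_8UC2, buf);
    cv::Mat bgr;
    cv::cvtColor(uyvy, bgr, cv::COLOR_YUV2BGR_UYVY); // allocates its own data
    return bgr; // safe to re-queue the V4L2 buffer after this returns
}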

Teensy + IR camera + OpenCV

I have never asked this kind of question on Stack Overflow before, and I wonder if you could help me, because it is a "bit" vague.
I have to design a project that uses a Teensy (a simple ARM platform) to get data from an IR camera (FLIR, resolution 80x60) over SPI, stream that data to a Linux/Windows machine (over USB serial), and do something simple with OpenCV.
THE PROBLEM: The project lacks some "innovation". It should not be something very complicated, but rather a different approach, or an attempt at something new.
Do you have recommendations/tutorials/books/experience with the things mentioned above? Or do you see potential for trying something new?
You might want to check out the OpenCV Cookbook for some ideas.
There is a project using this FLIR with a Teensy. It provides a thermal image on a small LCD screen (without any additional computer).
https://hackaday.io/project/8994-diy-thermocam
So the Teensy can get data over SPI.
Can the Teensy send data over USB then? Probably, but you will have to check whether the rate is high enough.
Using OpenCV directly on the Teensy is not possible because of the size of the library, but you can probably do some basic image processing if the code is small enough.
The FLIR Lepton can be interfaced directly with a Linux or Windows computer, so I don't really see the need for the Teensy.
I would recommend a Raspberry Pi to interface with the FLIR Lepton and then do some image processing. It's well documented on the web.
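If you do keep the Teensy-over-serial route, the PC side can stay quite small. A minimal C++ sketch of reading raw 80x60 frames from a POSIX serial port into OpenCV (the device path, framing, and 8-bit pixel format are assumptions for illustration; the Lepton actually emits 14-bit data you would need to scale, and termios configuration of the port is omitted):

#include <fcntl.h>
#include <unistd.h>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    const int W = 80, H = 60;                 // FLIR Lepton resolution
    int fd = open("/dev/ttyACM0", O_RDONLY);  // typical Teensy USB-serial device
    if (fd < 0) return 1;

    std::vector<unsigned char> frame(W * H);  // assuming 8-bit pixels, no header
    while (true) {
        size_t got = 0;                       // read until a full frame arrives
        while (got < frame.size()) {
            ssize_t n = read(fd, frame.data() + got, frame.size() - got);
            if (n <= 0) { close(fd); return 1; }
            got += n;
        }
        cv::Mat img(H, W, CV_8UC1, frame.data());
        cv::imshow("lepton", img);            // simple OpenCV display
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    close(fd);
    return 0;
}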

Streaming opencv Video

I need some ideas about how to stream video feed coming from opencv to a webpage. I currently have gStreamer, but I don't know if this is the right tool for the job. Any advice on using gStreamer or any hyperlinks to tutorials would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means you'll need some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
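One way to bridge the two, in builds of OpenCV compiled with GStreamer support, is to write frames into a GStreamer pipeline through cv::VideoWriter. A minimal sketch (the pipeline string, host, and port are illustrative; a browser-facing setup would need an HLS, WebRTC, or similar sink rather than plain UDP):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0);                  // any OpenCV frame source
    if (!cap.isOpened()) return 1;

    // appsrc receives frames from OpenCV and feeds the rest of the pipeline.
    cv::VideoWriter out(
        "appsrc ! videoconvert ! x264enc tune=zerolatency "
        "! rtph264pay ! udpsink host=127.0.0.1 port=5000",
        cv::CAP_GSTREAMER, 0 /* fourcc unused */, 30.0,
        cv::Size(640, 480), true /* color */);
    if (!out.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::resize(frame, frame, cv::Size(640, 480)); // match declared size
        out.write(frame);                     // pushed into the pipeline
    }
    return 0;
}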
I do not have any experience with streaming OpenCV output to a website, but I'm sure it is possible using GStreamer.
Using a GStreamer stream, it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using one of the gst_app_sink functions, you can get the buffer data from the sink.
From a quick lookup, you had to use GST_BUFFER_DATA for this.
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
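For the pulling side, here is a minimal sketch against the GStreamer 1.x API, where gst_buffer_map() replaced the old GST_BUFFER_DATA macro mentioned above (it assumes the pipeline ends in an appsink whose caps are raw BGR video of known size):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/core.hpp>

// Pull one frame from an appsink and copy it into a cv::Mat.
// Assumes caps of "video/x-raw,format=BGR,width=W,height=H".
cv::Mat pullFrame(GstAppSink* sink, int width, int height)
{
    GstSample* sample = gst_app_sink_pull_sample(sink); // blocks for a frame
    if (!sample) return cv::Mat();

    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    gst_buffer_map(buffer, &map, GST_MAP_READ);

    // Wrap the mapped memory, then clone so the Mat outlives the buffer.
    cv::Mat frame = cv::Mat(height, width, CV_8UC3, map.data).clone();

    gst_buffer_unmap(buffer, &map);
    gst_sample_unref(sample);
    return frame;
}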

read ip camera from openCV in C

I am trying to capture the MJPEG stream from an IP camera in OpenCV,
but without success so far.
I have the URL that displays the MJPEG video in the Firefox browser:
"http://192.168.2.15/GetData.cgi"
but I cannot open it in OpenCV.
Any ideas or solutions are welcome...
Kind regards,
Thanks in advance
You need the data to be present locally, either on the hard drive or in RAM. It would be slow, but I would recommend writing a web service to download frames of the video to the hard drive and then read them into OpenCV.
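That said, in builds of OpenCV with FFMPEG support, cv::VideoCapture can often open an MJPEG URL directly, without staging frames on disk. A minimal sketch (the question used the C API, but the idea is the same; the URL is the one from the question):

#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    // Works when OpenCV's FFMPEG (or GStreamer) backend handles HTTP MJPEG.
    cv::VideoCapture cap("http://192.168.2.15/GetData.cgi");
    if (!cap.isOpened()) return 1; // backend could not open the stream

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("ip camera", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}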

FFMPEG on windows (for H.264 RTSP decoding)

Has anyone used the latest FFMPEG version for decoding an H.264-based RTSP stream on Windows using OpenCV?
My problem is that I can decode an H.264-based RTSP stream successfully on Linux, but when I use the same code on Windows the output is badly pixelated. Can someone tell me why there is a difference in behaviour (is it due to a version mismatch)? Also, how do I find out which version of FFMPEG is being used by the OpenCV SDK 2.1.0 and 2.2.0 builds available for Windows?
Awaiting your response.
Thanks in advance.
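On the version question: newer OpenCV releases (2.4 and later) can report how they were built, including the FFMPEG section, via cv::getBuildInformation(); for the 2.1.0/2.2.0 SDKs mentioned here you would have to inspect the bundled DLLs instead. A minimal sketch:

#include <iostream>
#include <opencv2/core.hpp>

int main()
{
    // Prints compiler, module, and backend info, including FFMPEG details.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}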
I didn't know that you can decode an RTSP stream using OpenCV.
I have decoded RTSP streams using DirectShow technology, and I'd recommend the DirectShow platform due to its low CPU consumption: the video decoding is mostly handled by the graphics card.
If I were you, I'd choose to decode the RTSP stream using DirectShow.
First install the DirectShow SDK, then install FFDShow.
I'd recommend using filters from Elecard
(I didn't find any other implementation of an RTSP source filter).
Use GraphEdit to watch your stream.
A great tutorial I found is this (please read the continuation of this tutorial).
I'm not sure this would be the right answer for you, since I was using a totally different technology...
