How to embed a Flash (.swf) file in an OpenCV project?

I have a .swf file that I want to embed in my OpenCV project, overlay on the camera stream, and display to the user. So far I have not found a solution through a simple Google search. I would appreciate any ideas on how to approach this.
Thanks

OpenCV doesn't deal with .swf files, so you need to use some other technology like FFmpeg or GStreamer to retrieve the frames and decode them to BGR, so that you can create a valid IplImage (or cv::Mat if you are interested in the C++ interface).
GStreamer also provides a simple mechanism to stream video over the network.
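For the overlay part, here is a minimal C++ sketch, assuming the .swf has first been converted to a container the FFmpeg backend handles well (for example with the command line ffmpeg -i banner.swf banner.avi; the file names and blend weights below are placeholders, not part of any existing project):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture camera(0);               // live camera stream
        cv::VideoCapture overlay("banner.avi");   // converted .swf content (placeholder name)
        if (!camera.isOpened() || !overlay.isOpened())
            return 1;

        cv::Mat camFrame, ovlFrame;
        while (camera.read(camFrame)) {
            if (!overlay.read(ovlFrame)) {        // loop the overlay clip when it ends
                overlay.set(cv::CAP_PROP_POS_FRAMES, 0);
                if (!overlay.read(ovlFrame)) break;
            }
            cv::resize(ovlFrame, ovlFrame, camFrame.size());
            cv::Mat blended;
            cv::addWeighted(camFrame, 0.7, ovlFrame, 0.3, 0.0, blended);  // simple alpha blend
            cv::imshow("camera + overlay", blended);
            if (cv::waitKey(1) == 27) break;      // Esc to quit
        }
        return 0;
    }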

Related

Using OpenCV to process live video from Phantom 4

I would like to process frames live in OpenCV from the video feed of a DJI Phantom 4. I've been able to set up OpenCV for iOS in Xcode, but I need help finding a tutorial or instructions on how to send the frames from the DJI camera into OpenCV in the correct format on the fly. Any suggestions?
Thanks
Hello there Ilia Labkovsky,
I think I am in the same boat: I have a P3 and would like to process the images via OpenCV. I intend to use my laptop PC as the image processor, sending the images directly via TCP/IP and doing my own image processing off-board. I have yet to get this working, though, and may run into some problems.
Is there a way I can privately message you on Stack Overflow?
Best of luck with the programming :)
There is a tutorial for Android among the DJI sample apps on how to parse the stream and obtain the YUV frames. From there you can use OpenCV to process the frames: https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample
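Once you have a raw YUV frame from the decoder, the conversion to a BGR cv::Mat is a one-liner in OpenCV. A minimal sketch, assuming the decoder delivers NV21 (YUV420sp) data and that yuvData, width and height come from your decoding callback (adjust the conversion constant if your device outputs NV12 or I420 instead):

    #include <opencv2/opencv.hpp>

    // yuvData points to width * height * 3 / 2 bytes of NV21 data (assumption).
    cv::Mat yuvToBgr(unsigned char* yuvData, int width, int height) {
        cv::Mat yuv(height + height / 2, width, CV_8UC1, yuvData);
        cv::Mat bgr;
        cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_NV21);
        return bgr;
    }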

OpenCV not working with different RTSP streaming URLs: grab() from VideoCapture always returns 0

I'm working with OpenCV to read frames from an RTSP streaming link via the VideoCapture class. It worked well for one specific RTSP camera, but when I tried to connect to different RTSP cameras on the same network, to my surprise it wouldn't work.
Any thoughts on what could cause this problem? I need to be able to get the stream from any RTSP URL with the same OpenCV code.
The camera that worked is a generic Chinese one, and the code also worked with the Big Buck Bunny clip provided at rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov. The second camera I tried, which gave no output, is an AirCam Dome from Ubiquiti, which has 4 RTSP links. I tried every resolution.
Check whether you have 'opencv_ffmpeg341.dll' (or a similarly named DLL) in the folder your program runs from.
If you are on Windows, put the DLL in the x64 or x86 folder.
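It can also help to force the FFmpeg backend explicitly and check whether the failure happens at open() or at grab(). A small diagnostic sketch, with the URL as a placeholder for one of the AirCam's RTSP links:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // Placeholder URL: substitute one of the camera's RTSP links.
        const std::string url = "rtsp://user:pass@192.168.1.20:554/live/ch00_0";

        cv::VideoCapture cap(url, cv::CAP_FFMPEG);   // force the FFmpeg backend
        if (!cap.isOpened()) {
            std::cerr << "open failed: wrong URL, missing backend DLL, or unsupported codec\n";
            return 1;
        }
        cv::Mat frame;
        for (int i = 0; i < 100; ++i) {
            if (!cap.grab()) {                       // grab() returning false -> no packet arrived
                std::cerr << "grab failed at frame " << i << "\n";
                return 1;
            }
            cap.retrieve(frame);
        }
        std::cout << "received 100 frames of " << frame.cols << "x" << frame.rows << "\n";
        return 0;
    }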

GStreamer frames handler

I'm working on a project involving RTSP streams, merging, and related tasks, and one of the things I need to do is pass frames from an RTSP stream to a handler or buffer of some sort, so that another person can process them with e.g. OpenCV.
How could I do this with GStreamer?
Thanks!
GStreamer already has OpenCV-based plugins, so the best way is to write a similar plugin that applies your OpenCV code. There are elements called appsrc and appsink to source data from an app or hand data to an app, but there is no appfilter element. One could use a pad probe for this, but it is not a good approach.
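If writing a full plugin is overkill, a pragmatic alternative (not the plugin approach described above) is to let OpenCV's own GStreamer backend do the work: build a pipeline string that ends in appsink and hand it to cv::VideoCapture. A sketch, with the RTSP URL and latency as placeholders; it assumes OpenCV was built with GStreamer support:

    #include <opencv2/opencv.hpp>

    int main() {
        // Placeholder RTSP URL; caps and latency can be tuned for your streams.
        std::string pipeline =
            "rtspsrc location=rtsp://192.168.1.20:554/stream latency=100 ! "
            "decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink";

        cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
        if (!cap.isOpened()) return 1;

        cv::Mat frame;
        while (cap.read(frame)) {
            // hand the BGR frame to whatever OpenCV processing comes next
            cv::imshow("rtsp via gstreamer", frame);
            if (cv::waitKey(1) == 27) break;
        }
        return 0;
    }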

Streaming OpenCV video

I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if this is the right tool for the job. Any advice on using GStreamer, or any links to tutorials, would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means that you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
I do not have any experience with streaming OpenCV output to a website, but I'm sure this is possible using GStreamer.
Using a GStreamer stream, it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using a function from gst_app_sink, you can get the buffer data from the sink.
A quick lookup suggests you had to use GST_BUFFER_DATA for this.
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
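Note that GST_BUFFER_DATA belonged to the old GStreamer 0.10 API; with GStreamer 1.x the buffer is accessed through gst_buffer_map. A rough sketch of pulling one frame from an appsink and wrapping it in a cv::Mat; the pipeline string is a placeholder, and requesting BGR caps avoids the manual YCbCr conversion mentioned above:

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv) {
        gst_init(&argc, &argv);

        // Placeholder pipeline: request BGR so no colour conversion is needed on the OpenCV side.
        GstElement* pipeline = gst_parse_launch(
            "videotestsrc ! videoconvert ! video/x-raw,format=BGR ! appsink name=sink", nullptr);
        GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Blocks until a frame is available (run this in a worker thread in a real app).
        GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample) return 1;

        GstCaps* caps = gst_sample_get_caps(sample);
        GstStructure* s = gst_caps_get_structure(caps, 0);
        int width = 0, height = 0;
        gst_structure_get_int(s, "width", &width);
        gst_structure_get_int(s, "height", &height);

        GstBuffer* buffer = gst_sample_get_buffer(sample);
        GstMapInfo map;
        gst_buffer_map(buffer, &map, GST_MAP_READ);
        cv::Mat frame(height, width, CV_8UC3, (void*)map.data);  // wraps the buffer, no copy
        cv::imwrite("frame.png", frame);                          // process or copy before unmapping
        gst_buffer_unmap(buffer, &map);
        gst_sample_unref(sample);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(sink);
        gst_object_unref(pipeline);
        return 0;
    }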

Cannot create larger AVI video using OpenCV

I have a series of about 600 JPEG images with sequential filenames, and I need to create an AVI video from them. cvCreateVideoWriter didn't return NULL! Initially, frames start to form the video, but after a few frames the program terminates... I don't know what I am doing wrong.
Can anyone help? I would really appreciate it. Thanks in advance.
OpenCV uses VFW and only creates standard AVI files, which are limited to 2 GB.
You can use FFmpeg to create either MP4 files or extended OpenDML-type AVIs.
The easiest solution is normally to pipe image frames to something like MEncoder rather than having to deal with the details of the video library yourself - see http://opencv.willowgarage.com/wiki/VideoCodecs
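If you stay inside OpenCV, a sketch using the modern C++ VideoWriter (the file names, naming scheme, and MJPG FourCC below are assumptions; pick a codec your build actually supports) would look like this:

    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main() {
        // Assumed naming scheme: frame_000.jpg ... frame_599.jpg
        cv::Mat first = cv::imread("frame_000.jpg");
        if (first.empty()) return 1;

        cv::VideoWriter writer("output.avi",
                               cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                               25.0, first.size());
        if (!writer.isOpened()) return 1;

        for (int i = 0; i < 600; ++i) {
            char name[64];
            std::snprintf(name, sizeof(name), "frame_%03d.jpg", i);
            cv::Mat img = cv::imread(name);
            if (img.empty()) break;              // stop cleanly if a file is missing
            if (img.size() != first.size())      // VideoWriter needs a constant frame size
                cv::resize(img, img, first.size());
            writer.write(img);
        }
        return 0;
    }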
