As mentioned in the release highlights, OpenCV (4.5.5) now has audio support in the videoio module. However, there is no documentation on this topic.
I've tried a few things on my own like:
cv::VideoCapture cap(fileName,cv::CAP_MSMF);
However, no results so far.
How can I activate Audio Support? Am I missing something?
(It does not work for either cameras or video files.)
Additionally, I normally don't use pre-built binaries, but I also tried the pre-built ones (for Windows) and they didn't work either.
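For completeness, this is the kind of parameter-based open I also experimented with, based on the CAP_PROP_AUDIO_* properties mentioned in the release notes; whether these are actually wired up in my build is exactly what I can't tell, so treat it as a sketch rather than a known-good recipe:

#include <opencv2/videoio.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    const std::string fileName = "media.mp4"; // placeholder file name

    // Ask the MSMF backend for the first audio stream and no video stream.
    std::vector<int> params {
        cv::CAP_PROP_AUDIO_STREAM, 0,
        cv::CAP_PROP_VIDEO_STREAM, -1
    };
    cv::VideoCapture cap(fileName, cv::CAP_MSMF, params);
    if (!cap.isOpened())
    {
        std::cerr << "Could not open " << fileName << " with audio parameters\n";
        return 1;
    }

    // Audio samples are supposed to be exposed as extra "frames"
    // starting at CAP_PROP_AUDIO_BASE_INDEX.
    const int audioBaseIndex = (int)cap.get(cv::CAP_PROP_AUDIO_BASE_INDEX);
    cv::Mat audioFrame;
    while (cap.grab())
    {
        if (cap.retrieve(audioFrame, audioBaseIndex))
            std::cout << "got " << audioFrame.total() << " audio samples\n";
    }
    return 0;
}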
As far as I can see, this question does not make much sense unless they implement an interface; that is why there is no documentation on the topic. I hope they will bring that feature with 4.5.6.
Related
I am looking for a library that can capture streams of images from a webcam or USB camera and convert the image data into multidimensional matrices, so that I can do some mathematical operations on them and afterwards save the result as a PNG file.
I am stuck at the first step. It seems OpenCV is the only option for capturing images from a camera, and it uses highgui.dll for the job. Unfortunately, after installing OpenCV with nimble install opencv and running this simple code
import opencv/imgproc
import opencv/highgui
import opencv/core
var capture = captureFromCam(CAP_ANY)
the error could not load: (lib|)opencv_highgui(249|231|)(d|).dll arises. OpenCV cannot find the library from which to import the necessary functions, and so far I have not found a way around this. In Nim's standard libraries there are two packages, serial and winim, which, if I am not mistaken, handle device ports, but I could not find a simple way to use them. The question is: what is the proper library for handling such devices, and how do I use it in a simple manner?
For the rest of the job (manipulating the image data) I think pixie is a good library to use. It would be good to know if there is a better one in terms of simplicity and performance.
As Christoph said, the Nim package seems years out of date. However, if you download OpenCV 2.4.9 and put the right DLLs into your directory, or link them through your nimble file, your code will run.
For your code you would need to copy opencv_core249.dll, opencv_highgui249.dll and opencv_imgproc249.dll from opencv\build\x64\vc12\bin.
You might instead want to write a quick wrapper yourself for the functions you need from a newer version, since you probably only need a few. You can look at the nim-opencv library to see how to wrap functions.
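For instance, a minimal C++ shim that exposes just the capture-and-save calls you need behind a C ABI (which Nim can then import via importc/dynlib) might look roughly like this; the minicap_* names, the OpenCV version, and the build command are only illustrative:

// minicap.cpp - tiny C-ABI wrapper around cv::VideoCapture, built as a DLL.
// Example build (adjust paths/versions): cl /LD minicap.cpp /I<opencv>\include opencv_world455.lib
#include <opencv2/videoio.hpp>
#include <opencv2/imgcodecs.hpp>

extern "C" {

// Open camera `index`; returns an opaque handle or nullptr on failure.
__declspec(dllexport) void* minicap_open(int index)
{
    auto* cap = new cv::VideoCapture(index);
    if (!cap->isOpened()) { delete cap; return nullptr; }
    return cap;
}

// Grab one frame and save it as a PNG; returns 0 on success.
__declspec(dllexport) int minicap_grab_png(void* handle, const char* path)
{
    auto* cap = static_cast<cv::VideoCapture*>(handle);
    cv::Mat frame;
    if (!cap->read(frame) || frame.empty()) return 1;
    return cv::imwrite(path, frame) ? 0 : 2;
}

__declspec(dllexport) void minicap_close(void* handle)
{
    delete static_cast<cv::VideoCapture*>(handle);
}

} // extern "C"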
Or you could use a different application to capture the footage and Nim to process it.
Does GStreamer for iOS currently support displaying video? I'm following the tutorial, which calls for creating a pipeline
gst_parse_launch("videotestsrc ! warptv ! videoconvert ! autovideosink", &error);
and then connecting the video overlay.
video_sink = gst_bin_get_by_interface(GST_BIN(pipeline), GST_TYPE_VIDEO_OVERLAY);
However, video_sink is always nil. If I change the pipeline to just playbin, that works, but playbin is for playing from a URI, and I need to construct a full GStreamer video pipeline.
I also can't find any video sinks other than autovideosink. Is displaying video from a GStreamer pipeline currently supported on iOS?
This is on iOS 7.1 with GStreamer 1.2.3.
With some help from the mailing list I have got the test video displaying, and I have put up my working version of the iOS video tutorial app.
The short answer is that GStreamer 1.2.3 does support video display using eglglessink. However, you need to modify the #defines in gst_ios_init.h to make sure eglglessink is included, and you need a GLKView to provide the GL surface plus the video_overlay methods to hook it up.
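The overlay wiring ends up looking roughly like this; it is a simplified sketch, with the GLKView created in the view controller and handed down to the GStreamer code as an opaque pointer (the function and variable names are just placeholders):

/* Simplified sketch; gst_ios_init()/gst_init() must already have been called. */
#include <gst/gst.h>
#include <gst/video/videooverlay.h>

static GstElement *pipeline;
static GstElement *video_sink;

/* ui_view is the GLKView* from the view controller, passed as guintptr. */
void app_gst_run_pipeline(guintptr ui_view)
{
    GError *error = NULL;

    pipeline = gst_parse_launch(
        "videotestsrc ! warptv ! videoconvert ! autovideosink", &error);
    if (error) {
        g_printerr("Unable to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return;
    }

    /* With eglglessink enabled in gst_ios_init.h, autovideosink resolves to a
       sink implementing GstVideoOverlay, so this no longer returns NULL. */
    video_sink = gst_bin_get_by_interface(GST_BIN(pipeline),
                                          GST_TYPE_VIDEO_OVERLAY);
    if (!video_sink) {
        g_printerr("No overlay-capable video sink found\n");
        return;
    }

    /* Tell the sink to render into the GLKView. */
    gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink), ui_view);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}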
I found this difficult to discover from the documentation, so hopefully others may find the tutorial useful.
I am new to both OpenCV and Android. I have to detect objects in my project, so I have decided to use ASIFT. However, the code the authors provide here is very lengthy: it contains lots of C files and has no OpenCV support.
Some searching on SO itself suggested that it is easier to connect the ASIFT code to the OpenCV library, but I can't figure out how to do that. Can anyone help me with a link, or with the steps I should follow to add ASIFT to my OpenCV library so that I can use it in my Android application?
Also, I would like to know which would be the more suitable option for my Android object-detection project: using the Android NDK with JNI to call into the C files, or using the Android SDK with a binary package?
Finally, I solved my problem by using the source code given on the ASIFT developers' website. I compiled all the source files together into my own library using make, and then called the required function from that library via JNI.
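The JNI glue is roughly like this (heavily simplified; the package, class, and run_asift_match names are just placeholders for whatever entry point you compiled into your library):

// asift_jni.cpp - JNI bridge between the Android app and the ASIFT library.
// All names below are placeholders for illustration only.
#include <jni.h>

// Hypothetical entry point exported by the ASIFT library built with make;
// it matches two images on disk and returns the number of matches found.
extern int run_asift_match(const char* imagePath1, const char* imagePath2);

extern "C"
JNIEXPORT jint JNICALL
Java_com_example_detect_AsiftNative_match(JNIEnv* env, jobject /*thiz*/,
                                          jstring path1, jstring path2)
{
    const char* p1 = env->GetStringUTFChars(path1, nullptr);
    const char* p2 = env->GetStringUTFChars(path2, nullptr);

    // Call into the native ASIFT code compiled into the shared library.
    const int matches = run_asift_match(p1, p2);

    env->ReleaseStringUTFChars(path1, p1);
    env->ReleaseStringUTFChars(path2, p2);
    return matches;
}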
It worked for me, but execution takes approximately 2 minutes on an Android device. Does anyone have ideas for reducing the running time?
They used very simple and slow brute-force matching (just as a proof of concept). You can use the FLANN library instead and it will help a lot: http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
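A minimal sketch of swapping the brute-force matcher for FLANN, assuming the descriptors are floating point (CV_32F), as SIFT/ASIFT descriptors are:

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// desc1, desc2: CV_32F descriptor matrices, one row per keypoint.
std::vector<cv::DMatch> flannMatch(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::FlannBasedMatcher matcher;                  // KD-tree index by default
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(desc1, desc2, knn, 2);         // 2 nearest neighbours per query

    // Keep only matches that pass Lowe's ratio test.
    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < knn.size(); ++i)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.7f * knn[i][1].distance)
            good.push_back(knn[i][0]);
    return good;
}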
I am interested in using a library that supports lip reading to augment audio/voice recognition. I found that Intel's AVCSR (which was bundled with the OpenCV library) could be an interesting option to consider. Are there any other libraries that can be used to achieve the same goal (lip reading to augment voice recognition)?
Also, I have not been able to locate a source from which to download this library. I already tried the OpenCV package from SourceForge (http://sourceforge.net/projects/opencvlibrary/) but it does not seem to contain the AVCSR packages/files. Could someone who has worked with something similar point me to where I can find these source files (either within OpenCV or elsewhere)?
Thank you.
I'm currently working on a project with OpenEXR and I would like to implement some blob detection algorithms. To do this I figured I could use OpenCV, since its documentation says it can open OpenEXR files.
I have all the libraries installed and working, as I've been doing other things with them. I can open a simple JPG file with OpenCV's cvLoadImage and it works fine, but when I try to open any .exr file it doesn't seem to like it: I get a gray window where the image should be displayed.
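For reference, my test is essentially just this (the file name is a placeholder):

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main()
{
    // Works fine for a .jpg, but shows only a gray window for an .exr
    IplImage* img = cvLoadImage("render.exr");
    if (!img)
        return 1;

    cvNamedWindow("exr test", CV_WINDOW_AUTOSIZE);
    cvShowImage("exr test", img);
    cvWaitKey(0);

    cvReleaseImage(&img);
    return 0;
}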
Has anyone tested the OpenCV and OpenEXR libraries working together? Did they work for you? What do you think?
Thanks.
Yes, that's done. I posted a ticket in the OpenCV project at Willow Garage and they made all the needed changes; you can now use OpenEXR with OpenCV as before!
Great
My HDR tone mapping algorithm will work again, cool
Happy programming now ;o)
Alex
Well Alex!
My news isn't really encouraging... I tried to use OpenEXR with OpenCV but it isn't doing its job. The documentation says that OpenCV 2.0 has OpenEXR support, but...
I've searched the web for examples of working with EXR images in OpenCV, but had no luck.
For now I've written my own function to convert an image read with the OpenEXR libraries: it uses the Imf::Rgba* structure to hold the image's pixels and converts them to the char* buffer that OpenCV uses for images. The IplImage structure is the one I use. Actually, I'm working from an openFrameworks example and using their Image structure...
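Roughly, the conversion I ended up with looks like this; it is a simplified sketch with no error handling, the exrToIplImage name is just for illustration, and the values are simply clamped to [0,1], so real HDR data would need proper tone mapping first:

#include <ImfRgbaFile.h>
#include <ImfArray.h>
#include <opencv/cxcore.h>
#include <algorithm>

IplImage* exrToIplImage(const char* path)
{
    // Read the EXR file through the simple Rgba interface (half-float RGBA).
    Imf::RgbaInputFile file(path);
    Imath::Box2i dw = file.dataWindow();
    const int width  = dw.max.x - dw.min.x + 1;
    const int height = dw.max.y - dw.min.y + 1;

    Imf::Array2D<Imf::Rgba> pixels(height, width);
    file.setFrameBuffer(&pixels[0][0] - dw.min.x - dw.min.y * width, 1, width);
    file.readPixels(dw.min.y, dw.max.y);

    // Copy into an 8-bit BGR IplImage, clamping each channel to [0,1].
    IplImage* img = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
    for (int y = 0; y < height; ++y)
    {
        unsigned char* row = (unsigned char*)(img->imageData + y * img->widthStep);
        for (int x = 0; x < width; ++x)
        {
            const Imf::Rgba& p = pixels[y][x];
            row[3 * x + 0] = (unsigned char)(std::min(1.0f, std::max(0.0f, (float)p.b)) * 255.0f);
            row[3 * x + 1] = (unsigned char)(std::min(1.0f, std::max(0.0f, (float)p.g)) * 255.0f);
            row[3 * x + 2] = (unsigned char)(std::min(1.0f, std::max(0.0f, (float)p.r)) * 255.0f);
        }
    }
    return img;
}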
It's a really early stage in my development because I had to start over...
I hope this can help you... but if you enter the world of OpenEXR it's a pretty dark world in terms of documentation, so all I can say is good luck!
Feel free to contact me and I'll see if I can help you!
This question is rather old now, but whilst reading the OpenEXR manual today I noticed that it says (when talking about reading named channels):
If one of those channels is not present in the image file, the corresponding memory buffer for the pixels will be filled with an appropriate default value.
I'd speculate here that the grey image you are seeing is an "appropriate default value".