Opencv with both TBB and IPP [closed] - opencv

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 9 years ago.
I have built OpenCV with TBB enabled, used "detectMultiScale", and wrote a basic program to detect faces. I couldn't find any change in processing time when there is a face in a frame. I also noticed that the processing time is halved when a frame contains no face (empty).
1) How can I improve the processing time now?
2) Is it worth moving to Intel IPP? What would the actual benefit be?
Can anyone give me some advice?
Update:
I did this with OpenCV 2.4.5.
Update 2:
I posted the same question in the OpenCV community and got a reply saying that TBB is pre-enabled from OpenCV 2.4.5 onward, so there is no need to rebuild OpenCV with TBB enabled. Is that correct?
http://answers.opencv.org/question/14226/opencv-with-both-tbb-and-ipp/?answer=14231#post-id-14231

Use of the IPP is rather deprecated; it is really only in OpenCV for historic reasons (i.e. from when OpenCV was an Intel library!).
As per the most recent documentation, what little benefit remains is that it "may be used to improve the performance of color conversion, Haar training and DFT functions of the OpenCV library."
So you might get some benefit from it - but crucially, remember that the IPP library is not free.
Since you are already using TBB (which is itself rather redundant these days - especially in the Windows version of OpenCV), the only real gains may be in using the GPU or OpenCL modules.
Of those, assuming you are working in C++, OpenCL represents the most up-to-date and accessible way of incorporating further speed-up (transparent to the processor/GPU configuration).
Since you are doing face detection, I guess you are using the Haar classifier functionality (doesn't everyone? :-). In that case, you may want to try the OpenCL version instead...
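As an aside, the timing behaviour you observed (empty frames being roughly twice as fast) is exactly what a Viola-Jones-style cascade predicts: most windows in a face-free frame are rejected by the first stage or two, while face-like regions run through many more stages. A toy sketch of that early-exit logic, in plain Python (not the OpenCV API, just an illustration with made-up scores and thresholds):

```python
# Toy illustration of cascade early rejection: each window is run through
# a sequence of stages and rejected as soon as its score falls below the
# stage threshold, so "background" windows cost far fewer stage evaluations
# than "face-like" windows. The scores and thresholds here are invented.

def cascade_cost(window_scores, thresholds):
    """Count how many stages are evaluated before the window is rejected."""
    stages = 0
    for score, thr in zip(window_scores, thresholds):
        stages += 1          # this stage's features were evaluated
        if score < thr:      # early exit: window rejected here
            break
    return stages

thresholds = [0.2, 0.4, 0.6, 0.8]

# A background window fails the first stage; a face-like window runs all four.
background = [0.1, 0.0, 0.0, 0.0]
face_like  = [0.9, 0.9, 0.9, 0.9]

print(cascade_cost(background, thresholds))  # 1 stage evaluated
print(cascade_cost(face_like, thresholds))   # 4 stages evaluated
```

In a real frame the vast majority of windows behave like `background`, which is why detectMultiScale's cost is dominated by how face-like the frame's content is, not just by frame size.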

Related

OpenCV vs Core Image [closed]

Closed 9 years ago.
I have worked with Core Image, creating filters and stuff. Also I'm aware that Core Image has feature detection capability.
I have also worked a bit with OpenCV, but not on a mobile device. Used it for very basic purposes.
Core Image is a lot simpler than OpenCV in terms of coding, but I still see a lot of activity in the OpenCV community with respect to iOS. I wanted to know about applications where OpenCV would be preferred over Core Image.
The main goal of Core Image is to perform operations on images. As you mentioned in your question, you can create filters and modify images.
OpenCV, however, has a far broader scope. As the name implies, it provides tools for all kinds of computer vision applications. It can be used for facial recognition, object recognition, and 3D scanning, but also for applying filters to images.
I have no idea what you mean by "require it to be used". As far as I'm concerned, every application requires being used; there would be no point in writing an application if you are not going to use it.

hypermedia.video.opencv.capture(III)V [closed]

Closed 9 years ago.
I've downloaded OpenCV version 1 for my Windows 7 laptop from this website. For some reason, whenever I try to run one of the example sketches like the face_detection or threshold, I get this error message:
UnsatisfiedLinkError: hypermedia.video.opencv.capture(III)V
Have I downloaded the wrong version?
Make sure you've also installed OpenCV 1.0 itself, not just the Processing wrapper.
Also make sure the installer sets the PATH variable.
At this point you should be able to run a basic sample which doesn't access the camera.
If not, you can check manually if the PATH variable includes the path to OpenCV in Environment Variables.
If you can run an OpenCV Processing sample without camera access but are getting errors when accessing the webcam, try installing WinVDIG_101.
On Windows, I found the OpenCV Processing wrapper sometimes worked with some Processing versions and not others. Try testing both Processing 2.0+ and Processing 1.5.1.

OpenCV with OpenNi [closed]

Closed 9 years ago.
I have studied NITE 2. While looking for various examples, I came across a few videos where OpenCV was used along with OpenNI.
The question is: what exactly can OpenCV be used for with OpenNI? I understand this is a very vague question, yet I really need to know.
OpenCV is a best-of-breed library for developing computer vision algorithms. It has a great number of optimized algorithms that can be used for analysis of depth maps as well as the RGB images that you capture using OpenNI.
NITE is a closed-source library that contains a set of very well implemented, but limited, algorithms.
So if you want to implement something more than NITE gives you, you will need a handy tool set for that, and in general OpenCV is the best choice.
For instance you can use OpenCV + OpenNI to develop:
Gender, Age or Emotion recognition using fusion of 3D and 2D data;
3D face recognition;
custom gesture recognition;
body shape measuring.
and many other tasks, whose number is limited only by imagination.
Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through the VideoCapture class. The depth map, RGB image, and some other output formats can be retrieved through the familiar VideoCapture interface (see "Using Kinect and other OpenNI compatible depth sensors").
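As one concrete example of the "analysis of depth maps" mentioned above: a common first step is back-projecting a depth image into 3D points with the pinhole camera model. The sketch below uses plain Python with invented intrinsics (fx, fy, cx, cy are example values, not real sensor calibration), just to show the arithmetic you would apply to a depth map retrieved from VideoCapture:

```python
# Back-project a depth map into 3D points using the pinhole camera model:
#   X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy,   Z = depth(u, v)
# The intrinsics below are made-up example values for illustration only.

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of Z values in metres; returns a list of (X, Y, Z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # zero usually marks "no reading" on sensors
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy 2x2 depth map: 1 m everywhere except one missing reading.
depth = [[1.0, 1.0],
         [0.0, 1.0]]
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(len(pts))  # 3 valid points
```

From a point cloud like this, OpenCV's algorithms (plane fitting, registration, feature extraction) can then do the heavy lifting that NITE does not expose.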

OpenGL ES - GLSL returning calculations [closed]

Closed 10 years ago.
I'm trying to create a face tracker for iPhone using the GPU to perform calculations for performance.
To make the tracking more intelligent, I need to be able to retrieve values from the shader.
However, I'm having difficulty doing so. Is this possible with iPhone OpenGL ES?
The only way to get any output from shaders (at least in ES) is by rendering something to the framebuffer and reading back the resulting pixel values from the GPU.
Since I don't know your algorithm and implementation, I cannot tell you how best to structure it to get results back efficiently. Just remember that the only output from shaders is the image rendered into the framebuffer, whatever structure that image may have.
It is therefore usually best to structure your algorithms for minimal CPU-GPU communication. Consider whether you really need those values on the CPU, or whether it is enough to provide them to other parts of your GPU algorithm using textures or VBOs, into which you can render (more or less) efficiently without CPU roundtrips.
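When you do need a non-color value back on the CPU, the usual ES 2.0 workaround is to encode a float in [0, 1) into the four 8-bit channels of an RGBA pixel inside the fragment shader, read the framebuffer with glReadPixels, and decode on the CPU. The real encoding would live in GLSL; the arithmetic itself is shown here in Python as a sketch:

```python
# Pack a float in [0, 1) into four 8-bit channels (base-256 digits) and
# recover it, mimicking the shader-side encode / CPU-side decode used to
# read computed values back from an RGBA8 framebuffer.

def encode_rgba(x):
    """Pack x in [0, 1) into four bytes, most significant digit first."""
    r = int(x * 256) % 256
    g = int(x * 256 ** 2) % 256
    b = int(x * 256 ** 3) % 256
    a = int(x * 256 ** 4) % 256
    return (r, g, b, a)

def decode_rgba(rgba):
    """Recover the float from the four bytes (error below 1/256**4)."""
    r, g, b, a = rgba
    return r / 256 + g / 256 ** 2 + b / 256 ** 3 + a / 256 ** 4

x = 0.3141592
decoded = decode_rgba(encode_rgba(x))
print(abs(decoded - x) < 1e-7)  # True: the round-trip is nearly lossless
```

One pixel carries one value this way, so a W x H output texture can return W x H scalars per pass, which is often enough for tracker state.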

Image processing with Hadoop MapReduce [closed]

Closed 11 years ago.
I am doing a project on motion estimation between two frames of a video sequence using the block matching algorithm with the SAD (sum of absolute differences) metric. It involves computing the SAD between each block of the reference frame and each block of the candidate frame within the search window to get the motion vector between the two frames.
I want to implement the same using MapReduce, splitting the frames into key-value pairs, but I am not able to figure out the logic, because everywhere I look I find the WordCount or query-search problem, which is not analogous to mine.
I would also appreciate it if you could provide me with more MapReduce examples.
Hadoop is used in situations where computations can happen in parallel and a single machine might take a lot of time for the processing. There is nothing stopping you from using Hadoop for video processing. Check this and this for more information on where Hadoop can be used; some of these are related to video processing.
Start by understanding the WordCount example and Hadoop in general, run the example on Hadoop, and then work from there. I would also suggest buying the book Hadoop: The Definitive Guide. Hadoop and its ecosystem are changing at a very fast pace and it's tough to keep up to date, but the book will definitely give you a start on Hadoop.
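To see why the problem parallelizes naturally, here is a minimal sketch of SAD block matching in plain Python (no Hadoop). Each reference block is matched independently, so in a MapReduce port a natural choice is to key on the block position and emit the best motion vector as the value; the function and frame data below are illustrative, not from the question:

```python
# Minimal SAD block matching: for one block of the reference frame, search
# a +/- `search` pixel window in the candidate frame for the offset with
# the smallest sum of absolute differences (SAD).

def sad(ref, cand, rx, ry, cx, cy, bs):
    """SAD between the bs x bs block at (rx, ry) in ref and (cx, cy) in cand."""
    return sum(abs(ref[ry + i][rx + j] - cand[cy + i][cx + j])
               for i in range(bs) for j in range(bs))

def best_vector(ref, cand, rx, ry, bs, search):
    """Motion vector (dx, dy) minimising SAD within +/- search pixels."""
    h, w = len(cand), len(cand[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cx, cy = rx + dx, ry + dy
            if 0 <= cx <= w - bs and 0 <= cy <= h - bs:
                cost = sad(ref, cand, rx, ry, cx, cy, bs)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best[1]

# Toy frames: a bright 2x2 patch at (1, 1) in ref moves right to (2, 1).
ref  = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
cand = [[0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0]]
print(best_vector(ref, cand, rx=1, ry=1, bs=2, search=1))  # (1, 0)
```

In MapReduce terms, the mapper would run `best_vector` for the blocks assigned to it (keyed by block position), and the reducer would simply collect the (position, motion vector) pairs into the motion field; unlike WordCount, no aggregation across keys is needed.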
