I have worked with Core Image, creating filters and the like, and I'm aware that Core Image has feature detection capabilities.

I have also worked a bit with OpenCV, though not on a mobile device, and only for very basic purposes.

Core Image is a lot simpler than OpenCV in terms of coding, but I still see a lot of activity in the OpenCV community with respect to iOS. I wanted to know about the applications where OpenCV would be preferred over Core Image.
The main goal of Core Image is to perform operations on images. As you mentioned in your question, you can create filters and modify images.

OpenCV, however, has a far broader scope. As the name implies, it provides tools for all kinds of computer vision applications. It can be used for facial recognition, object recognition and 3D scanning, but also for applying filters to images.
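To make that difference concrete, here is a minimal sketch, not taken from the answer above, using OpenCV's Python API for brevity (on iOS you would call the equivalent C++ API from Objective-C++). It shows ORB feature matching, one of the object-recognition building blocks OpenCV provides that Core Image does not; the image file names are placeholders.

```python
import cv2

# Placeholder images: a template object and a scene that may contain it.
obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# ORB keypoint detector + binary descriptor (OpenCV 3+ API).
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(obj, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Brute-force Hamming matcher with cross-checking for more reliable matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Many low-distance matches suggest the object appears in the scene.
print("best match distances:", [m.distance for m in matches[:10]])
```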
I have no idea what you mean by "require it to be used". As far as I'm concerned, every application needs to be used; there would be no point in writing an application if you were not going to use it.
I have built OpenCV with TBB enabled and wrote a basic program that uses detectMultiScale to detect faces. I couldn't see any change in processing time when there is a face in a frame, but I did notice that the processing time drops by about half when a frame is empty (no face).

1) How can I improve the processing time now?

2) Is it worth going for Intel IPP? What would the actual benefit be?

Can anyone give me some advice?

Update:

I did this with OpenCV 2.4.5.

Update 2:

I posted the same question in the OpenCV community and got the reply that TBB is pre-enabled from OpenCV 2.4.5 onwards, so there is no need to rebuild OpenCV with TBB enabled. Is that correct?
http://answers.opencv.org/question/14226/opencv-with-both-tbb-and-ipp/?answer=14231#post-id-14231
Use of the IPP is rather deprecated, and it is really only in OpenCV for historical reasons (i.e. from when OpenCV was an Intel library!).

As per the most recent documentation, what little benefit remains is that it "may be used to improve the performance of color conversion, Haar training and DFT functions of the OpenCV library."

So you might get some benefit from it, but crucially remember that the IPP library is not free.

Since you are already using TBB (which is itself rather redundant these days, especially in the Windows version of OpenCV), the only real gains may come from using the GPU or OpenCL modules.

And of those, assuming you are working in C++, OpenCL represents the most up-to-date and accessible way of incorporating further speed-up (transparent to the processor/GPU configuration).

Since you are doing face detection, I guess you are using the Haar classifier functionality (doesn't everyone? :-). In that case, you may want to try the OpenCL version instead...
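For illustration only, and in Python rather than the 2.4.x C++ ocl module the answer refers to: in OpenCV 3+ the old ocl module was replaced by the transparent OpenCL path (T-API), where passing a cv2.UMat into detectMultiScale lets OpenCV offload work to OpenCL when a device is available. Downscaling the frame and setting a sensible minSize are usually the biggest practical wins for processing time; the cascade path below is the one bundled with opencv-python and may differ on other installs.

```python
import cv2

# Stock frontal-face Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cv2.ocl.setUseOpenCL(True)                     # enable the transparent OpenCL path
print("OpenCL available:", cv2.ocl.haveOpenCL())

cap = cv2.VideoCapture(0)                      # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detection cost grows with image area, so downscaling usually helps
    # far more than TBB/IPP tweaks do.
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    gray_ocl = cv2.UMat(gray)                  # UMat -> OpenCL kernels when possible
    faces = cascade.detectMultiScale(gray_ocl, scaleFactor=1.2,
                                     minNeighbors=3, minSize=(40, 40))
    for (x, y, w, h) in faces:
        cv2.rectangle(small, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", small)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break
cap.release()
```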
I have studied NITE 2. While looking at various examples I came across a few videos where OpenCV was used along with OpenNI.

The question is: for what exactly can OpenCV be used together with OpenNI? I understand this is a very vague question, yet I really need to know.
OpenCV is a best-of-breed library for developing computer vision algorithms. It has a great number of optimized algorithms that can be used for analysis of depth maps as well as the RGB images that you can capture using OpenNI.

NITE is a closed-source library that contains a set of very well implemented, but limited, algorithms.

So if you want to implement something beyond what NITE gives you, you will need a handy tool set for that, and in general OpenCV is the best choice.

For instance, you can use OpenCV + OpenNI to develop:
Gender, Age or Emotion recognition using fusion of 3D and 2D data;
3D face recognition;
custom gesture recognition;
body shape measurement;
and many other tasks, whose number is limited only by your imagination.
Depth sensors compatible with OpenNI (Kinect, Xtion PRO, ...) are supported through the VideoCapture class. The depth map, RGB image and some other output formats can be retrieved using the familiar VideoCapture interface (see "Using Kinect and other OpenNI compatible depth sensors"), as sketched below.
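A minimal sketch of that interface, assuming a Python build of OpenCV 3+ compiled with OpenNI/OpenNI2 support (the capture constants are named slightly differently in 2.4):

```python
import cv2

# Open an OpenNI-compatible depth sensor (Kinect, Xtion PRO, ...).
cap = cv2.VideoCapture(cv2.CAP_OPENNI2)
if not cap.isOpened():
    cap = cv2.VideoCapture(cv2.CAP_OPENNI)     # fall back to the OpenNI 1.x backend

while cap.grab():                              # grab one multi-stream frame
    ok_d, depth = cap.retrieve(None, cv2.CAP_OPENNI_DEPTH_MAP)  # 16-bit depth in mm
    ok_c, bgr = cap.retrieve(None, cv2.CAP_OPENNI_BGR_IMAGE)    # ordinary 8-bit BGR
    if not (ok_d and ok_c):
        break
    # From here on any OpenCV routine can be applied to either stream,
    # e.g. face detection on `bgr` or segmentation on `depth`.
    cv2.imshow("depth", cv2.convertScaleAbs(depth, alpha=0.06))  # crude 8-bit preview
    cv2.imshow("rgb", bgr)
    if cv2.waitKey(30) == 27:                  # Esc quits
        break
```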
I want to build an app that recognizes hyperlinks and email addresses when I point the camera at a paper or board containing a lot of information along with hyperlinks and email addresses. Has anybody built such an app before, and is it feasible? Should I use augmented reality for this?
Off the top of my head, the basic algorithm that first comes to mind is: (1) capture the image, (2) run it through OCR looking for the particular strings you want, and (3) do whatever you want with them.

A quick search for "OCR on smartphone" turned up this paper, which discusses OCR on smartphones and mentions a library available from Google, so you might start to get an overview there:

http://www.cs.unc.edu/cms/publications/honors-theses-1/lian09.pdf

The scenario you are describing does not sound like AR in a pure sense, since you are not really "tracking" anything in the real world, but rather taking a picture and then post-processing that image.
Good luck.
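As a rough sketch of steps (2) and (3), assuming Tesseract through the pytesseract wrapper (my choice, not something the answers here prescribe) and simple regular expressions for the link and address patterns; the file name is a placeholder:

```python
import re
import pytesseract                  # assumed OCR wrapper; Tesseract must be installed
from PIL import Image

# Step (2): OCR the captured photo into plain text.
text = pytesseract.image_to_string(Image.open("board_photo.jpg"))

# Step (3): pull out the strings of interest. Deliberately simple patterns;
# robust URL/e-mail matching needs considerably more care.
urls = re.findall(r"https?://\S+|www\.\S+", text)
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)

print("links:", urls)
print("emails:", emails)
```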
Interesting idea. You will have to work with text recognition. For hyperlinks and email addresses you can create some rules for the specific patterns the algorithm should look for ("@", "http://", ...).

However, text recognition (and eventually text extraction by comparing letters against a predefined font) is not easy. I don't really see how you would want to use AR here; in fact this would already be an AR app of sorts, one that gets such information from the real world into digital form.

It would be a great app, but good luck :)
I am doing a project on motion estimation between two frames of a video sequence, using a block matching algorithm with the SAD metric. It involves computing the SAD between each block of the reference frame and each candidate block within the search window to get the motion vector between the two frames.

I want to implement the same thing using MapReduce, splitting the frames into key-value pairs, but I am not able to figure out the logic, because everywhere I look I only find the WordCount or query-search examples, which are not analogous to my problem.

I would also appreciate it if you could point me to more MapReduce examples.
Hadoop is used in situations where computations can happen in parallel and a single machine would take a lot of time for the processing. There is nothing stopping you from using Hadoop for video processing. Check this and this for more information on where Hadoop can be used; some of these are related to video processing.

Start by understanding the WordCount example and Hadoop in general, and run the example on Hadoop. Then work from there. I would also suggest buying the book Hadoop: The Definitive Guide. Hadoop and its ecosystem are changing at a very fast pace and it's tough to keep up to date, but the book will definitely give you a start on Hadoop.
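To make the key-value mapping concrete, here is a hedged sketch, not from the answer above, of the block-matching step written as a Hadoop Streaming mapper in Python. The key is the block position in the reference frame and the value is the best SAD motion vector found in the search window. It assumes both frames are shipped to every mapper (e.g. via the distributed cache) as grayscale images, and that each input line simply carries the coordinates of one block to process; block size and search radius are made-up values.

```python
import sys
import numpy as np
import cv2

B, W = 16, 8                                   # block size and search radius (assumed)
# Placeholder frame files, assumed to be distributed to every mapper node.
ref = cv2.imread("ref_frame.png", 0).astype(np.int32)
cur = cv2.imread("cur_frame.png", 0).astype(np.int32)

def best_vector(by, bx):
    """Exhaustive SAD search in a (2W+1) x (2W+1) window around block (by, bx)."""
    block = ref[by:by + B, bx:bx + B]
    best = (0, 0, float("inf"))
    for dy in range(-W, W + 1):
        for dx in range(-W, W + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + B > cur.shape[0] or x + B > cur.shape[1]:
                continue                       # candidate block falls outside the frame
            sad = int(np.abs(block - cur[y:y + B, x:x + B]).sum())
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

# Hadoop Streaming mapper: each input line is "block_y block_x".
for line in sys.stdin:
    by, bx = map(int, line.split())
    dy, dx, sad = best_vector(by, bx)
    # Emit key \t value; an identity reducer can simply collect the motion field.
    print(f"{by},{bx}\t{dy},{dx},{sad}")
```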
What are the best practices for processing images in enterprise web applications?

I mean:

storing
assigning to an entity
fast loading/caching
delayed / AJAX loading
suitable formats (PNG, JPEG)
on-the-fly editing (resizing, compression)
free libs/helpers
image watermarking/copyrighting on the fly

Production-proven approaches are especially appreciated!
As always, every project has their own requirements, restrictions and resources (The 3Rs). There is no 'super pattern' or 'one size fits all' method.
We cannot tell you how to implement your project, as every project is different. It's up to you to use your skills, knowledge and experience to make informed decisions on implementation.

The 'best practice' is to individually research and learn each of the technologies/methods you have listed and gain the knowledge to know when to use them based on your project's requirements, restrictions and resources.
I use ImageMagickObject in my MVC projects; see the sketch after this list for the editing/watermarking part. It can cover:

suitable formats (PNG, JPEG)
on-the-fly editing (resizing, compression)
free libs/helpers
image watermarking/copyrighting on the fly

For the remaining points:

fast loading/caching: maybe memcached?
delayed / AJAX loading: jQuery is a good solution
assigning to an entity: Entity Framework can work with almost all databases
storing: a hard question, it all depends on the functionality
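Since ImageMagickObject is a COM component and the exact calls depend on your stack, here is only a neutral sketch of the on-the-fly resizing and watermarking points using Pillow instead (a swapped-in library for illustration, not what the answer above uses); file names and the watermark text are placeholders:

```python
from PIL import Image, ImageDraw

def resize_and_watermark(src_path, dst_path, max_size=(800, 800), text="(c) example.com"):
    """Downscale an uploaded image and stamp a simple text watermark before caching/serving it."""
    img = Image.open(src_path).convert("RGB")
    img.thumbnail(max_size)                    # in-place resize, keeps aspect ratio
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), text, fill=(255, 255, 255))
    # JPEG suits photographic content; switch to PNG for graphics with sharp edges.
    img.save(dst_path, "JPEG", quality=85, optimize=True)

resize_and_watermark("upload.jpg", "cache/upload_800.jpg")
```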