I am trying to process faces in images on a device/tablet. I used OpenCV (NDK) a while ago, and I see that there are a couple of other options available for processing faces. I'm just wondering how OpenCV, the android-vision API, and FastCV would compare, specifically for processing faces.
I found a couple of similar posts here, but they did not answer all my questions.
Android API face detection vs. OpenCV/JavaCV face detection
Android Computer Vision JavaCv OpenCV Fastv comparison
My questions:
1) How does the Android face API (face detection and landmark extraction) compare to OpenCV (JavaCV or OpenCV NDK) in terms of accuracy vs. speed?
2) Is FastCV better for this? I presume it comes with license restrictions.
3) Does the Android API work on all Android devices?
I found a couple of ports of open-source CV libraries (other than OpenCV) that are commonly used for PC-based CV applications, but these are not optimized for mobile devices; I found them very slow when used as is.
thanks!
Related
There seems to be a lot of overlap between these 3 Google libraries.
According to their sites:
MediaPipe: MediaPipe offers cross-platform, customizable ML solutions for live and streaming media.
ARCore: With ARCore, build new augmented reality experiences that seamlessly blend the digital and physical worlds.
MLKit Vision: Video and image analysis APIs to label images and detect barcodes, text, faces, and objects.
Could someone with experience working with these explain how they relate to each other and what the use cases are for each?
For example, which would be appropriate for implementing high-level, popular features such as face filters?
(Also perhaps some insight on which of the 3 is most likely to land in Google Graveyard the fastest)
Some simplified & informal explanations:
MediaPipe is a powerful but lower-level library for live and streaming ML solutions, which requires non-trivial setup and customization before it works for your use case.
ML Kit is an end-to-end solution provider, offering mobile-friendly, easy-to-use APIs and pre-built pipelines under the hood. Several ML Kit features are actually powered by MediaPipe internally (e.g. pose detection and selfie segmentation).
There is no direct relationship between ARCore and ML Kit, but there could be shared or similar ML models between them, because both require ML models to power their features, even though the two products have different focuses.
I am using OpenCV 2.4.10 and am wondering: if I hook up a USB 2.0 camera that uses a 10-bit analog-to-digital converter and has a resolution of 1328 x 1048, does OpenCV support that type of camera? If it does, how will it store the pixel information? (I have not purchased the camera yet and would buy a different one if the software won't work with it, so I can't just go test it myself.)
Clearly I didn't google well enough:
https://web.archive.org/web/20120815172655/http://opencv.willowgarage.com/wiki/Welcome/OS/
The list hasn't been updated for a while, though.
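As for the second part of the question (how the pixels get stored): a quick check along these lines, assuming the camera exposes a standard UVC/V4L2 or DirectShow interface and shows up as device 0, tells you what the capture backend actually hands back. With most consumer backends that is 8-bit BGR (CV_8UC3), so a 10-bit sensor is usually truncated by the driver; keeping the full 10 bits generally means going through the vendor SDK and wrapping the raw data in a 16-bit Mat yourself.

    // Minimal sketch (OpenCV 2.4.x): open the camera and inspect how the capture
    // backend actually delivers frames. Device index 0 is an assumption.
    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) {
            std::printf("Camera not accessible through OpenCV's capture backend\n");
            return 1;
        }

        cv::Mat frame;
        if (!cap.read(frame) || frame.empty()) {
            std::printf("Failed to grab a frame\n");
            return 1;
        }

        // Typical result for a UVC webcam: 3 channels, 8 bits per channel (CV_8UC3).
        std::printf("size: %dx%d, channels: %d, bits per channel: %d\n",
                    frame.cols, frame.rows, frame.channels(),
                    (int)(frame.elemSize1() * 8));
        return 0;
    }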
The APIs shipped with the MS Windows Kinect SDK are all about programming around voice, movement, and gesture recognition related to humans.
Are there any open-source or commercial APIs for tracking and recognizing dynamically moving objects, like vehicles, for classification?
Is it feasible, and a good approach, to employ the Kinect for automated vehicle classification rather than traditional image processing approaches?
Even though image processing technologies have made remarkable innovations, why is fully automated vehicle classification not used at most toll collection points?
Why are existing technologies (except the RFID approach) failing to classify vehicles (i.e., they are not yet 100% accurate), or are there other reasons apart from image processing?
You will need to use a regular image processing suite to track objects that are not supported by the Kinect API. A few being:
OpenCV
Emgu CV (OpenCV in .NET)
ImageMagick
There is no library that directly supports the depth capabilities of the Kinect, to my knowledge. As a result, using the Kinect over a regular camera would be of no benefit.
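As a concrete starting point, here is a rough sketch (OpenCV 2.4-era API) of one common way to pick out moving vehicles from a fixed camera: background subtraction followed by contour extraction. The video filename and all thresholds are illustrative only.

    // Track moving vehicles with background subtraction + contours (OpenCV 2.4.x).
    #include <vector>
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap("traffic.avi");              // hypothetical input video
        if (!cap.isOpened()) return 1;

        cv::BackgroundSubtractorMOG2 subtractor(500, 16.0f, true);
        cv::Mat frame, fgMask;

        while (cap.read(frame)) {
            subtractor(frame, fgMask);                    // update model, get foreground mask

            // Drop shadow pixels (127) and clean up noise before looking for blobs.
            cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);
            cv::erode(fgMask, fgMask, cv::Mat(), cv::Point(-1, -1), 2);
            cv::dilate(fgMask, fgMask, cv::Mat(), cv::Point(-1, -1), 2);

            cv::Mat contourInput = fgMask.clone();        // findContours modifies its input
            std::vector<std::vector<cv::Point> > contours;
            cv::findContours(contourInput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

            for (size_t i = 0; i < contours.size(); ++i) {
                if (cv::contourArea(contours[i]) < 1500) continue;   // ignore small blobs
                cv::rectangle(frame, cv::boundingRect(contours[i]), cv::Scalar(0, 255, 0), 2);
            }

            cv::imshow("vehicles", frame);
            if (cv::waitKey(30) == 27) break;             // Esc to quit
        }
        return 0;
    }

The resulting bounding boxes (area, aspect ratio) are what a simple size-based classifier would then work from.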
I need to develop an image processing program for my project in which I have to count the number of cars on the road. I am using GPU programming. Should I go for an OpenCV program with the GPU processing feature, or should I develop my entire program in CUDA without any OpenCV library?
The algorithms which I am using for counting the number of cars are background subtraction, segmentation, and edge detection.
You can use GPU functions in OpenCV.
First, visit the introduction: http://docs.opencv.org/modules/gpu/doc/introduction.html
Secondly, I think the above-mentioned processes are already implemented in OpenCV and optimized for the GPU, so it will be much easier to develop with OpenCV.
Canny edge detection: http://docs.opencv.org/modules/gpu/doc/image_processing.html#gpu-canny
Per-element operations (including subtraction): http://docs.opencv.org/modules/gpu/doc/per_element_operations.html#per-element-operations
For other functions, visit OpenCV docs.
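To make that concrete, here is a rough sketch of the 2.4-era gpu module in use, combining a per-element absdiff against a fixed reference frame (a stand-in for proper background subtraction) with gpu::Canny. The input filename, thresholds, and background strategy are illustrative only, and a CUDA-enabled OpenCV build is required.

    // Per-element subtraction + Canny edge detection on the GPU (OpenCV 2.4-era gpu module).
    #include <opencv2/opencv.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main() {
        if (cv::gpu::getCudaEnabledDeviceCount() == 0) return 1;   // needs a CUDA build + device

        cv::VideoCapture cap("traffic.avi");                       // hypothetical input video
        cv::Mat frame, background;
        if (!cap.read(background)) return 1;

        cv::gpu::GpuMat dBackground, dBgGray, dFrame, dGray, dDiff, dEdges;
        dBackground.upload(background);
        cv::gpu::cvtColor(dBackground, dBgGray, CV_BGR2GRAY);

        while (cap.read(frame)) {
            dFrame.upload(frame);
            cv::gpu::cvtColor(dFrame, dGray, CV_BGR2GRAY);

            cv::gpu::absdiff(dGray, dBgGray, dDiff);   // per-element subtraction on the GPU
            cv::gpu::Canny(dDiff, dEdges, 50, 150);    // GPU Canny edge detection

            cv::Mat edges;
            dEdges.download(edges);                    // bring the result back for display
            cv::imshow("edges", edges);
            if (cv::waitKey(30) == 27) break;
        }
        return 0;
    }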
OpenCV, no doubt, has the biggest collection of image processing functionality, and recently they've started porting functions to CUDA as well. There's a new GPU module in the latest OpenCV with a few functions ported to CUDA.
That being said, OpenCV is not the best option for building a CUDA-based application, as there are many dedicated CUDA libraries, like CUVI, that beat OpenCV in performance. If you're looking for an optimized solution, you should also give them a try.
For the people who have experience with OpenCV: are there any webcams that don't work with OpenCV?
I am looking into the feasibility of a project, and I know I am going to need a high-quality feed (1080p), so I am going to need a webcam that is capable of that. So does OpenCV have problems with certain cameras?
To be analysing a video feed of that resolution on the fly, I am going to need a fast processor; I know this, but will I need a machine that is not consumer-available, i.e., will an i7 do?
Thanks.
On Linux, if it's supported by v4l2, it is probably going to work (e.g., my home webcam isn't listed, but it's v4l2 compatible and works out of the box). You can always use the camera manufacturer's driver to acquire frames, and feed them to your OpenCV code. You can even sub-class the VideoCapture class, and implement your camera driver to make it work seamlessly with OpenCV.
I would think the latest i7 series should work just fine. You may want to also check out Intel's IPP library for more optimized routines. IPP also easily integrates into OpenCV code since OpenCV was an Intel project at its inception.
If you need really fast image processing, you might want to consider adding a high performance GPU to the box, so that you have that option available to you.
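To illustrate the "acquire with the manufacturer's driver, process with OpenCV" route mentioned above, here is a minimal sketch: the raw buffer is wrapped in a cv::Mat header without copying. The synthetic buffer stands in for whatever the real camera SDK would actually return.

    // Wrap an externally acquired BGR buffer in a cv::Mat header (no pixel copy).
    #include <iostream>
    #include <vector>
    #include <opencv2/opencv.hpp>

    // The buffer must stay valid (and keep the same size) while the Mat is in use.
    cv::Mat wrapExternalFrame(unsigned char* data, int width, int height) {
        return cv::Mat(height, width, CV_8UC3, data);
    }

    int main() {
        const int width = 1920, height = 1080;
        std::vector<unsigned char> fakeSdkBuffer(width * height * 3, 128);   // stand-in frame

        cv::Mat frame = wrapExternalFrame(&fakeSdkBuffer[0], width, height);

        // From here on it is ordinary OpenCV processing.
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::cout << "Wrapped a " << frame.cols << "x" << frame.rows
                  << " frame without copying pixels" << std::endl;
        return 0;
    }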
Unfortunately, the page that I'm about to reference doesn't exist anymore. OpenCV evolved a lot since I first wrote this answer in 2011 and it's difficult for them to keep track of which cameras in the market are supported by OpenCV.
Anyway, here is the old list of supported cameras organized by Operating System (this list was available until the beginning of 2013).
It depends on whether your camera is supported by OpenCV, which is determined mainly by the driver model that your camera is using.
A quote from Getting Started with OpenCV capturing:
Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL) and two on Linux: Video for Linux(V4L) and IEEE1394. For the latter there exists two implemented interfaces (CvCaptureCAM_DC1394_CPP and CvCapture_DC1394V2).
So if your camera is VFW or MIL compliant under Windows, or fits into the standard V4L or IEEE1394 driver model, then it will probably work.
But if not, like mevatron says, you can even sub-class the VideoCapture class, and implement your camera driver to make it work seamlessly with OpenCV.
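Whichever driver path applies, a quick runtime check like the sketch below tells you whether OpenCV can open the camera at all and what resolution the backend actually delivers when you ask for 1080p. The property constants are the 2.4-era names (cv::CAP_PROP_* in OpenCV 3 and later), and the backend is free to ignore the request.

    // Ask the first camera for 1920x1080 and report what actually comes back.
    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);                    // first enumerated camera (assumption)
        if (!cap.isOpened()) {
            std::printf("OpenCV could not open the camera at all\n");
            return 1;
        }

        cap.set(CV_CAP_PROP_FRAME_WIDTH, 1920);     // request 1080p
        cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);

        cv::Mat frame;
        if (cap.read(frame)) {
            std::printf("Delivered %dx%d frames\n", frame.cols, frame.rows);
        }
        return 0;
    }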