I'm trying to migrate an algorithm that uses a 2D histogram to run on OpenCV's new G-API. I see that equalizeHist() is available, but not calcHist(). Is there any way to calculate a histogram using the new Graph API?
Please open an issue on the OpenCV GitHub so we can take this into work. Meanwhile, the API is extensible, so you can add the missing operation/kernel locally (within your app): https://docs.opencv.org/master/d0/d25/gapi_kernel_api.html
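For illustration, here is a minimal sketch of that extension mechanism using the Python G-API bindings (OpenCV 4.5+). The operation name, the single-channel/256-bin setup, and the exact decorator usage are my assumptions based on the G-API Python samples, not an official calcHist port:

```python
import cv2 as cv
import numpy as np

# Hypothetical custom op: a 256-bin histogram of a single-channel image.
@cv.gapi.op('custom.calcHist', in_types=[cv.GMat], out_types=[cv.GMat])
class GCalcHist:
    @staticmethod
    def outMeta(desc):
        return cv.empty_gmat_desc()

# CPU backend implementation, delegating to the classic calcHist.
@cv.gapi.kernel(GCalcHist)
class GCalcHistImpl:
    @staticmethod
    def run(img):
        return cv.calcHist([img], [0], None, [256], [0, 256])

g_in = cv.GMat()
comp = cv.GComputation(cv.GIn(g_in), cv.GOut(GCalcHist.on(g_in)))

img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
hist = comp.apply(cv.gin(img),
                  args=cv.gapi.compile_args(cv.gapi.kernels(GCalcHistImpl)))
```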
I have an object I'd like to track using OpenCV. My detection algorithm works well: it can draw bounding boxes around the objects it sees and pick a target object to track. But I can't quite pass that object to a tracking algorithm without rewriting the detection and image-display code. I'm working with an NVIDIA Jetson Nano board and an Intel RealSense camera, if that helps.
The OpenCV DNN module comes with Python samples of state-of-the-art trackers; I've heard good things about the "siamese"-based ones. Have a look.
Also, the OpenCV contrib repo contains a whole module of various trackers. Give those a try first; they have a simple API, as the sketch below shows.
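A minimal Python sketch of that contrib tracking API (CSRT here; the video path and initial box are placeholders, and on some 4.x builds the factory lives under cv2.legacy instead):

```python
import cv2 as cv

cap = cv.VideoCapture("video.mp4")   # placeholder source
ok, frame = cap.read()

# Initial bounding box from your detector: (x, y, width, height).
bbox = (287, 23, 86, 320)

tracker = cv.TrackerCSRT_create()    # cv.legacy.TrackerCSRT_create() on some builds
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv.imshow("tracking", frame)
    if cv.waitKey(1) == 27:          # Esc to quit
        break
```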
Here's some research I have done so far:
- I have used Google Vision API to detect various face landmarks.
Here's the reference: https://developers.google.com/vision/introduction
Here's the link to sample code that gets the facial landmarks using the same Google Vision API: https://github.com/googlesamples/ios-vision
I have gone through various blog posts on the internet which say MSQRD is based on Google Cloud Vision. Here's one: https://medium.com/@AlexioCassani/how-to-create-a-msqrd-like-app-with-google-cloud-vision-802b578b30a0
For Android, here's the reference:
https://www.raywenderlich.com/158580/augmented-reality-android-googles-face-api
There are multiple paid SDKs that fulfil this purpose, but they are highly priced and I can't afford them.
For instance:
1) https://deepar.ai/contact/
2) https://www.luxand.com/
Some might see this question as a duplicate of:
Face filter implementation like MSQRD/SnapChat
But that thread is almost 1.6 years old with no correct answers.
I have gone through this article:
https://dzone.com/articles/mimic-snapchat-filters-programmatically-1
It describes all the essential steps to achieve the desired results, but it advises using the authors' own SDK.
As per my research, there is no good material around that shows how to achieve results like the MSQRD face filters.
There is one more GitHub repository with a similar implementation, but it doesn't give much information about it: https://github.com/rootkit/LiveFaceMask
Now my question is:
If we have the facial landmarks from the Google Vision API (or even from dlib), how can I add 2D or 3D models over them? In what format does this need to be done? Does it require X,Y coordinates with vertex calculations?
NOTE: I have gone through Google's "GooglyEyesDemo", which adds a preview layer over the eyes. It basically adds a view over the face, so I don't want to add one-dimensional UIView preview layers over it. Image attached for reference: https://developers.google.com/vision/ios/face-tracker-tutorial
Creating models: I also want to know how to create models for live filters like MSQRD. I welcome any software or format recommendations.
I hope the research I have done will help others, and that someone else's experience helps me achieve the desired results. Let me know if any more details are required.
Thanks
Harry
On Android, the Canvas class is used for drawing such 2D/3D models; on iOS, Core Graphics can be used.
What you can do is detect the face components, take their location points, and draw images on top of them (a sketch follows). Consider going through this.
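As an illustration of the "draw on top of landmarks" idea, here is a minimal Python/OpenCV sketch; the file names and the eye coordinates are placeholders, since in a real app they would come from the Vision API or dlib detector:

```python
import cv2 as cv

frame = cv.imread("face.jpg")                           # placeholder frame
sprite = cv.imread("glasses.png", cv.IMREAD_UNCHANGED)  # RGBA overlay

# Placeholder eye landmarks; in practice these come from your detector.
left_eye, right_eye = (220, 180), (320, 185)

# Scale the sprite to span the eye distance, then anchor it between the eyes.
width = int(abs(right_eye[0] - left_eye[0]) * 1.8)
height = int(sprite.shape[0] * width / sprite.shape[1])
sprite = cv.resize(sprite, (width, height))

x = (left_eye[0] + right_eye[0]) // 2 - width // 2
y = (left_eye[1] + right_eye[1]) // 2 - height // 2

# Alpha-blend the sprite onto the frame region it covers.
roi = frame[y:y + height, x:x + width]
alpha = sprite[:, :, 3:] / 255.0
roi[:] = (1 - alpha) * roi + alpha * sprite[:, :, :3]

cv.imwrite("face_with_filter.jpg", frame)
```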
You need to either predict x,y,z coordinates (check out this demo), or use x,y predictions and then find the parameters of a universal 3D model and camera that give the closest projection of the current x,y; see the sketch below.
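For that second route, the usual tool is cv2.solvePnP: pick a handful of 3D points on a generic head model, match them to the detected 2D landmarks, and recover the camera pose that projects one onto the other. A minimal sketch using the commonly seen six-point generic head model; the 2D coordinates and image size are placeholder values:

```python
import cv2 as cv
import numpy as np

# Six points of a generic 3D head model (nose tip, chin, eye corners, mouth corners).
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
])

# Matching 2D landmarks from your detector (placeholder values).
image_points = np.array([
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
], dtype=np.float64)

# Approximate the camera matrix from the image size.
w, h = 800, 600
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

ok, rvec, tvec = cv.solvePnP(model_points, image_points,
                             camera_matrix, dist_coeffs)

# Any 3D point of your filter model can now be projected into the image.
nose_end, _ = cv.projectPoints(np.array([(0.0, 0.0, 1000.0)]),
                               rvec, tvec, camera_matrix, dist_coeffs)
print(ok, nose_end.ravel())
```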
Could anybody suggest an automatic way to convert a list of images (without a Kinect) into a point cloud in OpenCV?
Take a look at the OpenCV contrib Structure from Motion (SFM) module. There are two nice examples, trajectory_reconstruction.cpp and scene_reconstruction.cpp.
Also, there is an alternative called Multi-View Environment, which you can find on GitHub at simonfuhrmann/mve and which might meet your criteria too.
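If your OpenCV build has the contrib SFM module with Python bindings (it needs Ceres and is not in the stock pip wheels), the whole pipeline is roughly one call. Treat this as a hypothetical sketch: the argument layout of cv2.sfm.reconstruct below is my assumption based on the C++ samples, so scene_reconstruction.cpp remains the authoritative reference:

```python
import cv2 as cv
import numpy as np

# Hypothetical usage mirroring scene_reconstruction.cpp; cv.sfm only
# exists when OpenCV is built with the contrib SFM module and Ceres.
image_paths = ["view_000.jpg", "view_001.jpg", "view_002.jpg"]  # placeholders
f, cx, cy = 800.0, 400.0, 300.0                 # rough intrinsics guess
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])

# Returns per-view rotations/translations, refined intrinsics, and the cloud.
Rs, Ts, K_refined, points3d = cv.sfm.reconstruct(
    images=image_paths, K=K, is_projective=True)
print(len(points3d), "3D points recovered")
```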
What I did:
1. I installed the opencv-plugin-sample in Kurento Media Server (from https://github.com/Kurento/kms-opencv-plugin-sample) and ran a sample OpenCV face detection.
What I have to do:
1. Now I need to integrate Caffe (http://caffe.berkeleyvision.org/install_apt.html) into KMS, so that I can run more complex OpenCV algorithms in KMS.
What I need to know:
1. Is there a specific way to integrate Caffe into Kurento Media Server?
I'm wondering whether it is possible to use the OpenCV framework to recognise a building.
For example, if I store an image of a building, is it possible to use OpenCV to detect this building through the iPhone camera?
Thanks!
Detecting known objects such as your building in an image can be done using the features2d module in OpenCV.
It works by detecting key points in the known image and computing a set of descriptors for them; these can then be compared to the key points and descriptors computed from the unknown scene image, a process known as matching.
The find_obj.py demo in the samples/python2 folder of OpenCV shows how to detect a known object in an image.
There is also a tutorial in the user guide, see http://docs.opencv.org/doc/user_guide/ug_features2d.html
Note that some of the commonly used algorithms (e.g. SURF and SIFT) are not free and need to be licensed separately if you use them.
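As a concrete illustration of that detect/describe/match pipeline, here is a minimal Python sketch; it uses ORB instead of SURF/SIFT because ORB is free, and the file names are placeholders:

```python
import cv2 as cv

# Known object (your building) and the scene to search in.
img_obj = cv.imread("building.jpg", cv.IMREAD_GRAYSCALE)
img_scene = cv.imread("scene.jpg", cv.IMREAD_GRAYSCALE)

# Detect key points and compute binary descriptors for both images.
orb = cv.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img_obj, None)
kp2, des2 = orb.detectAndCompute(img_scene, None)

# Brute-force Hamming matching with cross-check to reject weak matches.
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Visualize the 30 best matches.
out = cv.drawMatches(img_obj, kp1, img_scene, kp2, matches[:30], None)
cv.imwrite("matches.jpg", out)
```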
It is possible, but you have a long road ahead.
One way to do this is to use visual keypoints to recognise objects:
OpenCV SIFT documentation