I'm working on hand detection using EmguCV. I have successfully detected skin-colored objects in a live video feed. Within that skin-detected region I want to track only the moving hand. Can someone tell me how to achieve this without degrading performance? Code or a step-by-step procedure would be helpful.
Is there a good reference ebook on EmguCV, or any other material with code snippets?
You actually need to perform a number of steps to do it.
1) You need to find the hand; the best way to do that is with Haar cascades.
You can find more about it here: http://www.360doc.com/content/11/1220/16/5087210_173660914.shtml
2) Then you need to use absolute subtraction between the current frame and the previous frame to find the moving part of the video.
For absolute subtraction you can check this link out; a minimal sketch follows below.
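Here is a minimal sketch of the frame-differencing step, written with OpenCV's Python API for brevity (the question is about EmguCV, which exposes an equivalent call, CvInvoke.AbsDiff); the webcam index and threshold value are assumptions you'd tune for your setup:

```python
# Sketch: isolate motion by differencing consecutive frames.
import cv2

cap = cv2.VideoCapture(0)  # assumed: default webcam at index 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Absolute per-pixel difference between consecutive frames:
    # the stationary background cancels out, the moving hand remains.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    cv2.imshow("motion", motion_mask)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

You would then intersect this motion mask with your skin-color mask from step 1, so only the moving skin-colored region (the hand) survives.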
After detecting the hand you can use the mean shift tracking algorithm to track it. There is a good implementation in Accord.NET, along with an example you can use to learn how to use it. As for the Viola-Jones algorithm, there is a HandCascade.xml file from a guy named Nikolas Markou, but when I tried it the performance wasn't good at all, and there are similar complaints from other people using that Haar cascade.
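To illustrate the algorithm (the answer above points at Accord.NET; this is just the same idea in OpenCV's Python API), here is a minimal mean shift sketch. It assumes `frame` is the current BGR frame and `track_window` (x, y, w, h) is the hand region your detector found:

```python
# Sketch: mean shift tracking driven by a hue-histogram back-projection.
import cv2
import numpy as np

def make_hue_histogram(frame, track_window):
    x, y, w, h = track_window
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Ignore dark / low-saturation pixels when building the hue histogram.
    mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)),
                       np.array((180., 255., 255.)))
    hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(frame, hist, track_window):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the histogram to get a probability map of the hand.
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.meanShift(backproj, track_window, criteria)
    return track_window  # updated (x, y, w, h) of the hand
```

Build the histogram once from the detected hand, then call `track_step` on each new frame; mean shift is cheap per frame, which helps with the performance concern.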
About the reference book for EmguCV: to the best of my knowledge, there is only one book so far:
Emgu CV Essentials
http://www.amazon.com/Emgu-CV-Essentials-Shin-Shi-ebook/dp/B00GOMTTHI
I am currently stuck on my project of reconstructing an object from different images of the same object.
So far I have calculated the feature matches between the images using AKAZE features.
Now I need to derive the camera parameters and the 3D point coordinates.
However, I am a little confused: it seems to me that I need the camera parameters to determine the 3D points, and vice versa.
My question is: how can I get the 3D points and the camera parameters in one step?
I have also looked into the bundle adjustment approach described at http://scipy-cookbook.readthedocs.io/items/bundle_adjustment.html, but there you need an initial guess for the camera parameters and 3D coordinates.
Can somebody point me to pseudocode, or suggest a pipeline?
Thanks in advance
As you seem to be new at this, I strongly recommend you first play with an interactive tool to gain an intuition for the issues involved in solving for camera motion and structure. Try Blender: it's free, and you can find plenty of video tutorials on YouTube on how to use it for matchmoving: examples 1, 2
Take a look at VisualSFM (http://ccwu.me/vsfm/). It is an interactive tool for such tasks. It will give you an idea which algorithms to use.
The Computer Vision book by Richard Szeliski (http://szeliski.org/Book/) will give you the theoretical background.
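As for the initial guess that the bundle-adjustment cookbook asks for, a standard starting point is a two-view reconstruction from your existing matches. Here is a minimal sketch with OpenCV, assuming Nx2 float32 arrays of matched points `pts1`/`pts2` (e.g. from your AKAZE matches) and a known intrinsic matrix `K`, which you would need to calibrate or estimate:

```python
# Sketch: two-view pose + triangulation as an initial guess for bundle adjustment.
import cv2
import numpy as np

def initial_reconstruction(pts1, pts2, K):
    # Essential matrix from the matched points (RANSAC rejects outliers).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)

    # Decompose E into relative rotation R and translation t
    # (translation is only recovered up to scale).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Projection matrices: camera 1 at the origin, camera 2 at [R | t].
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate; result is 4xN homogeneous, so divide by the last row.
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d
```

From there you can register additional views against the triangulated points (e.g. with cv2.solvePnPRansac) and refine everything jointly with bundle adjustment, which is exactly the pipeline the cookbook page implements.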
I am working on identifying an object using a Kinect sensor, so as to get the x, y, z coordinates of the object.
I have been trying to find related information on this but have not been able to find much. I have watched videos as well, but nobody shares the details or any sample code.
This is what I want to achieve: https://www.youtube.com/watch?v=nw3yix3XomY
A few people have probably asked the same question, but as I am new to the Kinect and these libraries, I need a little more guidance.
I read somewhere that object detection is not possible using the Kinect v1 alone; we need to use third-party libraries like OpenCV or the Point Cloud Library (PCL).
Can somebody explain how exactly, even with third-party libraries, I can identify an object via a Kinect sensor?
It would be really helpful.
Thank you.
As the author of the video you linked stated in the comment, following this PCL tutorial will help you. As you found out already, realizing this may not be possible using the standalone SDK. Relying on PCL will help you not reinvent the wheel.
The idea there is to:
Downsample the cloud to have less data to deal with in the next steps (this also reduces noise a bit).
Identify keypoints/features (i.e. points, areas, textures that remain somehow invariant to some transformations).
Compute the keypoint descriptors, mathematical representations of these features.
For each scene keypoint descriptor, find the nearest neighbor in the model keypoint descriptor cloud and add it to the correspondences vector (see the sketch after this list).
Perform clustering on the keypoints and detect the model in the scene.
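To make the correspondence step concrete, here is the nearest-neighbor search sketched in Python with SciPy (the tutorial itself does this in C++ with PCL's KdTreeFLANN); `model_desc`, `scene_desc`, and the distance threshold are assumed inputs:

```python
# Sketch: per-scene-descriptor nearest-neighbor search in the model descriptors.
import numpy as np
from scipy.spatial import cKDTree

def find_correspondences(model_desc, scene_desc, max_dist=0.25):
    tree = cKDTree(model_desc)  # index the model descriptors once
    correspondences = []
    for i, d in enumerate(scene_desc):
        dist, j = tree.query(d)  # nearest model descriptor for this scene descriptor
        # Only keep the pair if the descriptors are close enough.
        if dist < max_dist:
            correspondences.append((i, j))
    return correspondences
```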
The software in the tutorial requires the user to manually feed in the model and scene files; it doesn't run on a live feed like the video you linked does.
The process should be pretty similar, though. I'm not sure how CPU-intensive the detection is, so it might require additional performance tweaking.
Once you have frame-by-frame detection in place, you could start thinking about actually tracking an object across the frames. But that's another topic.
Problem: I have a photo of an object (a manufactured part like the attached photo below). Using my Android phone camera, I want to verify whether the object in the camera preview matches the template (in other words, whether it is the same part as the template).
I can make the user move the camera so that the view in the camera preview is similar to the template; however, there will be different noise levels and/or lighting, and maybe a different background.
Question: What do you recommend for solving this problem? I was thinking of Canny edge extraction and then matching the camera frames against the Canny edges extracted from the template. Is this a good idea? If yes, could you please tell me how to implement it? Any resources or samples? (I can do the Canny edge extraction but couldn't find a way to do the matching.)
If it is not a good idea, what do you recommend instead?
Things I have tried:
Feature extraction and matching: I used a few different extractor and matcher implementations from OpenCV, and my app is working and drawing the detected feature points, matches, etc. However, being a beginner at image processing, I cannot make sense of the results, and I don't know how to decide what counts as a match. Any ideas, help, or good resources?
Template matching: I used OpenCV template matching, but the performance was horrible, so I decided this cannot be the solution.
I tried object recognition with my phone on your test image and the results were positive.
Detector used: ORB (binary detector).
Descriptor used: ORB.
Matching technique: brute-force matching.
Image size: 640x480.
I was able to detect around 500 feature points (that number of keypoints is about sufficient, but it might produce false matches when you have more images with similar-looking objects; you need to refine your matching to avoid false matches, as in the sketch below).
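Here is a minimal sketch of the setup described above (ORB + brute-force matching), with Lowe's ratio test added as one way to refine the matches; the file names are placeholders for your template and camera frame:

```python
# Sketch: ORB features + brute-force Hamming matching + ratio test.
import cv2

orb = cv2.ORB_create(nfeatures=500)
img1 = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # assumed file
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # assumed file

kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming norm is the right distance for binary descriptors like ORB.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = [p for p in bf.knnMatch(des1, des2, k=2) if len(p) == 2]

# Ratio test: keep a match only if it is clearly better than the runner-up.
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(pairs)}")
```

A crude decision rule is to treat the part as verified once the number of good matches passes a threshold you tune on your own images; for more rigor, fit a homography on the good matches with cv2.findHomography(..., cv2.RANSAC) and count its inliers instead.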
Result of object recognition on two different scales.
Regarding your difficulty understanding object recognition: what exactly did you not understand (which specific topic)?
I recommend you go through these two books:
Learning OpenCV by Adrian Kaehler and Gary Bradski
OpenCV 2 Computer Vision Application Programming Cookbook by Robert Laganière (chapters 8 & 9).
Cheers!
From what I understand, Canny edge detection might not be an optimal solution. In my opinion, after some basic pre-processing of the test image, you should find its SIFT features and compare them with the SIFT features of the template. SIFT, being really versatile, should work here too.
You can also try OpenSURF features; they are faster than SIFT, but I haven't had the opportunity to work with them enough to comment on their accuracy.
I went through the Kinect SDK and Toolkit provided by Microsoft and tested the face detection sample; it worked successfully. But how do I recognize the faces? I know the basics of OpenCV (VS2010). Are there any Kinect libraries for face recognition? If not, what are the possible solutions? Are there any tutorials available for face recognition using the Kinect?
I've been working on this myself. At first I just used the Kinect as a webcam and passed the data into a recognizer modeled after this code (which uses Emgu CV to do PCA):
http://www.codeproject.com/Articles/239849/Multiple-face-detection-and-recognition-in-real-ti
While that worked OK, I thought I could do better since the Kinect has such awesome face tracking. I ended up using the Kinect to find the face boundaries, crop it, and pass it into that library for recognition. I've cleaned up the code and put it out on github, hopefully it'll help someone else:
https://github.com/mrosack/Sacknet.KinectFacialRecognition
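For context, here is a minimal eigenfaces (PCA) sketch in Python/NumPy, analogous to what the linked Emgu CV article does in C#; the `faces` and `labels` arrays (flattened, same-sized, cropped grayscale face images) are assumed inputs:

```python
# Sketch: eigenfaces recognition via PCA + nearest neighbor.
import numpy as np

def train_eigenfaces(faces, num_components=20):
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:num_components]
    projections = centered @ components.T  # each training face in eigenface space
    return mean, components, projections

def recognize(face, mean, components, projections, labels):
    proj = (face - mean) @ components.T
    # Nearest neighbor in eigenface space; the distance doubles as a
    # rough confidence measure (smaller = more confident).
    dists = np.linalg.norm(projections - proj, axis=1)
    best = int(np.argmin(dists))
    return labels[best], dists[best]
```

Cropping the faces first with the Kinect's face tracking, as described above, gives this kind of recognizer much cleaner input than raw webcam frames.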
I've found a project that could be a good source for you - http://code.google.com/p/i-recognize-you/ - but unfortunately (for you) its homepage is not in English. The most important parts:
- The project (with source code) is at http://code.google.com/p/i-recognize-you/downloads/list
- In the bibliography, the author mentions this site: http://www.shervinemami.info/faceRecognition.html. This seems to be a good starting point for you.
There is no built-in functionality for the Kinect that provides face recognition. I'm not aware of any tutorials out there that cover it, but I'm sure someone has tried. It is on my short list; hopefully time will allow soon.
I would try saving the face tracking information and doing a comparison with it for recognition. You would have a "setup" function that asks the user to stare at the Kinect and saves the points the face tracker returns. When you wish to recognize a face, the user would look at the screen and you would compare the face tracker points to a database of faces. This is roughly how the Xbox does it.
The big trick is confidence levels. The numbers will not come back exactly as they did previously, so you will need to allow a buffer of values for each feature; the code would then come back with something like "I'm 93% sure this is Bob".
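A hedged sketch of that buffer-and-confidence idea in Python/NumPy; the normalization step and the tolerance value are my assumptions for illustration, not anything from the Kinect SDK:

```python
# Sketch: compare tracked face points to a stored template with per-point
# tolerance, returning a confidence score in [0, 1].
import numpy as np

def normalize(points):
    # Make the point set translation- and scale-invariant so the user
    # doesn't have to sit at the exact distance used during setup.
    points = points - points.mean(axis=0)
    return points / np.linalg.norm(points)

def match_confidence(tracked, stored, tolerance=0.02):  # tolerance is assumed
    a, b = normalize(np.asarray(tracked)), normalize(np.asarray(stored))
    per_point = np.linalg.norm(a - b, axis=1)
    # Fraction of points within tolerance -> "I'm 93% sure this is Bob".
    return float((per_point < tolerance).mean())
```

You would run this against every stored face and pick the highest confidence, rejecting the match if even the best score falls below some threshold.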
I had an idea for which I need to be able to recognize certain objects or models in a rendered three-dimensional digital movie.
After limited research, I know now that what I need is called feature detection in the field of Computer Vision.
So, what I want to do is:
create a few screenshots of a certain character in the movie (e.g. front/back/left side/right side)
play the movie
while playing the movie, continuously create new screenshots of the movie
for each screenshot, perform feature detection (SIFT? with OpenCV?) to see whether any of our character's appearances are there (they must still be recognized if the character is further away and thus appears smaller, or if the character is, e.g., lying down)
give a notice whenever the character is found
This would be possible with OpenCV, right?
The "issue" is that I would have to learn c++ or python to develop this application. This is not a problem if my movie and screenshots are applicable for what I want to do.
So, I would like to first test my screenshots of the movie. Is there a GUI version of OpenCV that I can input my test data and then execute it's feature detection algorithms manually as a means of prototyping?
Any feedback is appreciated. Thanks.
There is no OpenCV GUI able to do what you want. You will be able to use OpenCV for some aspects of your problem, but there is no ready-made solution waiting there for you.
While it's definitely possible to solve your problem, the learning curve for this problem is quite long. If you're a professional, then an alternative to learning about it yourself would be to hire an expert to do it for you. It would cost money, but save you time.
EDIT
As far as template matching goes, you wouldn't normally use it to solve such a problem because the thing you're looking for is changing appearance and shape. There aren't really any "dynamic parameters to set". The closest thing you could try is have a massive template collection that would try to cover the expected forms that your target may take. But it would hardly be an elegant solution. Plus it wouldn't scale.
Next, to your point about face recognition. This is kind of related, but most facial recognition applications deal with a controlled environment: lighting, distance, pose, angle, etc. Outside of that controlled environment face detection effectiveness drops significantly. If you're detecting objects in a movie, then your environment isn't really controlled.
You may want to first try a simpler problem of accurately detecting where the characters are, without determining who they are (video surveillance, essentially). While it may sound simple, you'll find that it's actually non-trivial for arbitrary scenes. The result of solving that problem may be useful in identifying the characters.
There is Find-Object by Mathieu Labbé. It was very helpful for me when starting to get an understanding of the descriptors, since you can change them while your video is running to see what happens.
This is probably too late, but might help someone else looking for a solution.
Well, using OpenCV you would take a frame of a video file and do your computations on it.
You can use several different methods to detect a character in that image, but it's not easy to make it flexible enough to find that person even when they are, for example, lying on the floor, if you only provided reference images of that character standing.
Basically, you could try extracting all the important features from your set of reference pictures and have a (in your case supervised) learning algorithm produce a good feature vector of that character for classification.
You then need to write code that plays the video, takes a video frame, say, every 500 ms (or as you desire), gets a segmentation of the object you think might be that character, and compares it with the reference values you got from your learning algorithm. If there's a match, your code can yell "Yehaaawww!" or do other things...
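A rough sketch of that loop in Python/OpenCV, using ORB features for speed (SIFT, mentioned elsewhere in this thread, would be a drop-in alternative); the file names and the 20-match threshold are assumptions to tune:

```python
# Sketch: sample a frame every ~500 ms and match it against a reference view.
import cv2

orb = cv2.ORB_create()
ref = cv2.imread("character_front.png", cv2.IMREAD_GRAYSCALE)  # assumed reference
_, ref_des = orb.detectAndCompute(ref, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

cap = cv2.VideoCapture("movie.mp4")  # assumed movie file
fps = cap.get(cv2.CAP_PROP_FPS) or 25
step = max(1, int(fps * 0.5))  # one sample every ~500 ms
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, des = orb.detectAndCompute(gray, None)
        if des is not None:
            pairs = [p for p in bf.knnMatch(des, ref_des, k=2) if len(p) == 2]
            good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
            if len(good) > 20:  # arbitrary threshold, tune on your footage
                print(f"Character candidate at frame {frame_idx}: Yehaaawww!")
    frame_idx += 1

cap.release()
```

In practice you would match against all your reference views (front/back/sides) and take the best score, since any single view only covers part of the character's appearance.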
But all of this depends on how flexible you want it to be. You could also try template matching or cross-correlation, which basically shifts the reference image(s) over the frame and checks how similar the two parts are. Unfortunately, this is very sensitive to rotation, deformation, and other noise, so you wouldn't find the person if they are, e.g., lying down. And I doubt you can get all those calculations done in real time...
Basically: yes, OpenCV is good for your image processing/computer vision tasks. But it offers a lot of methods and approaches, and you'd need to find one that works for your images... it's not a trivial task, though...
Hope that helps...
Have you tried looking at some of the work of the Oxford visual geometry group?
Their Video Google system describes to a large extent what you want, instance detection.
Their work on Naming People in TV shows is also pretty relevant. A face detection and facial feature pipeline is included that can be run from Matlab. Are you familiar with Matlab?
Have you tried computer vision frameworks like Cassandra? There you can do exactly that with just a few mouse clicks.