I am working on a project in which I have to detect a hand in a video. A Kinect is being used to capture the video. I have already tried skin segmentation in the HSV colour space. It works well on video from my laptop's camera, but not with the Kinect. I have also tried colour segmentation and thresholding, but neither works well. I am using OpenCV in C. I would be grateful for any suggestions or steps to detect the hand.
You can use OpenNI to obtain a skeleton of the person, which tracks the hand robustly in your video. E.g. http://www.youtube.com/watch?v=pZfJn-h5h2k
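If you want to avoid full skeleton tracking, the depth stream the Kinect gives you is itself a strong cue: a hand held out towards the sensor is usually the closest object in the frame. Here is a minimal OpenCV sketch of that idea (not from the answer above; it assumes the depth frame has already been converted to an 8-bit single-channel cv::Mat where smaller values are closer, that invalid zero-depth pixels have been masked out, and that the band width of 20 is a value to tune):

    #include <algorithm>
    #include <opencv2/opencv.hpp>

    // Segment the closest blob (assumed to be the hand) from an 8-bit depth map.
    cv::Rect findHand(const cv::Mat& depth8u)
    {
        double nearest, farthest;
        cv::minMaxLoc(depth8u, &nearest, &farthest);

        // Keep pixels within a narrow band of the nearest depth value.
        cv::Mat mask;
        cv::inRange(depth8u, cv::Scalar(nearest), cv::Scalar(nearest + 20), mask);

        // Take the largest connected contour as the hand candidate.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty())
            return cv::Rect();

        auto largest = std::max_element(contours.begin(), contours.end(),
            [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                return cv::contourArea(a) < cv::contourArea(b);
            });
        return cv::boundingRect(*largest);
    }

This only works when the hand really is the nearest object in view, which is exactly why the skeleton-based tracking above is the more robust option.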
I want to send my images to the Kinect SDK, either OpenNI or the Microsoft Kinect SDK, and have it tell me the position of a user's hand or head and so on.
I don't want to use the Kinect's camera feed. The images come from a paper that I want to do some image processing on, and I need to work on exactly those images, so I cannot use my own body as input to the Kinect camera.
I don't mind whether it is Microsoft's Kinect SDK or the OpenNI one; it just needs to be able to take my RGB and depth images as input instead of the Kinect camera's.
Is it possible? If yes, how can I do it?
I have the same question. I want a Kinect face detection app to read images from the hard drive and return the Animation Units of the recognized face. I want to train a classifier for facial emotion recognition using the Animation Units as input features.
Thanks,
Daniel.
I am working with an Xtion Pro Live on Ubuntu 12.04 with OpenCV 2.4.10. I want to do object recognition in daylight.
So far I have achieved object recognition indoors by producing a depth map and a disparity map. When I go outdoors, the maps I mentioned above are black and I cannot perform object recognition.
I would like to ask whether the Asus Xtion Pro Live can work outdoors.
If it cannot, is there a way to fix it (through code) in order to do object detection outdoors?
I have searched around and found that I should get another stereoscopic camera. Could anyone help?
After some research I discovered that the Xtion Pro Live cannot be used outdoors because of its IR sensor. This sensor is responsible for producing the depth map and is disrupted by sunlight, so it gives no clear results, and without clear results it is impossible to create depth and disparity maps with proper values.
I am working on an iOS app that should trigger an event when the camera detects some changes in the image, or, we could say, motion in the image. Here I am not asking about face recognition or a particular coloured image motion, and I got all results for OpenCV when I searched. I also found that we can achieve this by using the gyroscope and accelerometer together, but how?
I am a beginner in iOS, so my question is: is there any framework or easy way to detect motion with the camera, and how do I do it?
For example, if I move my hand in front of the camera, the app should show a message or an alert.
Please also give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
I don't exactly understand what you mean here:
"Here I am not asking about face recognition or a particular coloured image motion, and I got all results for OpenCV when I searched"
But I would suggest going with OpenCV, as you can use OpenCV on iOS. Here is a good link which helps you set up OpenCV on iOS.
There are a lot of OpenCV motion detection examples online, and here is one of them, which you can make use of.
You need to convert the UIImage (the image type in iOS) to cv::Mat or IplImage and pass it to the OpenCV algorithms. You can convert using this link or this.
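As a rough sketch of what those motion detection examples typically do, here is a minimal frame-differencing loop in OpenCV C++; the blur kernel size, the per-pixel threshold of 25 and the 1% changed-pixel trigger are assumed values to tune, not taken from any of the linked code:

    #include <iostream>
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);                 // default camera
        cv::Mat frame, gray, prevGray, diff, mask;

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(9, 9), 0);   // low-pass to suppress noise

            if (!prevGray.empty()) {
                cv::absdiff(gray, prevGray, diff);             // frame-to-frame comparison
                cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);

                int changed = cv::countNonZero(mask);
                if (changed > (int)mask.total() / 100) {       // more than ~1% of pixels moved
                    cv::Moments m = cv::moments(mask, true);
                    cv::Point centroid((int)(m.m10 / m.m00), (int)(m.m01 / m.m00));
                    std::cout << "motion around " << centroid << "\n";  // e.g. raise your alert here
                }
            }
            prevGray = gray.clone();
        }
        return 0;
    }

This is the same idea the GPUImage answer above describes: compare consecutive frames and report how many pixels changed and where.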
I'm currently working on a project where I need to detect a face and then take a photo with the camera (after the camera has focused everything correctly).
Is something like this possible in iOS?
Are there any good tutorials on this?
I would suggest using OpenCV for this, as it has proven algorithms and is fast enough to work on images as well as video:
https://github.com/aptogo/FaceTracker
https://github.com/mjp/FaceRecognition
This solution will work for Android too, using the OpenCV port for Android.
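If you would rather call OpenCV directly instead of going through those wrapper projects, the core detection step is a pretrained Haar cascade. A minimal sketch, assuming the cascade XML file that ships with OpenCV has been copied next to your binary (the path and the detection parameters are assumptions to adjust):

    #include <opencv2/opencv.hpp>

    // Detect faces in one frame with OpenCV's pretrained frontal-face Haar cascade.
    std::vector<cv::Rect> detectFaces(const cv::Mat& frame)
    {
        static cv::CascadeClassifier cascade("haarcascade_frontalface_default.xml");

        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);            // improve contrast before detection

        std::vector<cv::Rect> faces;
        // 1.1 scale step and 3 minimum neighbours are the common defaults.
        cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(60, 60));
        return faces;
    }

Once you get a stable face rectangle over a few consecutive frames, you can trigger the photo capture after the camera reports that focusing has finished.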
Use GPUImage for face detection. A face detection example is available in GPUImage; see the last point in the FilterShowcase example project.
iOS 10 and Swift 3
You can check Apple's example, with which you can detect faces:
https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html
You can select the face metadata to make the camera track the face and show a yellow box on it. It has better performance than this example:
https://github.com/wayn/SquareCam-Swift
I am a new Kinect developer and am going to develop a face tracking application using Kinect v1.5 and the XNA Framework on the C# platform.
I can successfully get the face points and rectangle points displayed on the screen using the Kinect SDK and XNA's BasicEffect 3D drawing.
However, what I want is to get back exactly the colour pixels of the user's face, so that I can map the user's real face onto a model.
Can anybody help to answer my question?
Thank you very much!
One of the ways you can achieve this is to use the RGB (colour) video stream and capture a still. You can then use C# to enumerate the X/Y axes of this image to get the colour at each pixel if required.
The more efficient way, however, would be to use this still as a texture and "wrap" the 3D model you are creating with it. There is a sample provided with the Kinect SDK which does something similar, called Face Tracking 3D - WPF. I would encourage you to use this as your base, port it to XNA, and work from there.
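For the enumeration route, the work is just stride arithmetic over the colour frame's raw byte buffer: the Kinect v1 colour stream delivers 4 bytes per pixel in B, G, R order with an unused fourth byte. A small sketch of the indexing, shown in C++ for brevity even though the context here is C# (the buffer layout is an assumption to verify against your chosen colour format):

    #include <cstdint>

    struct Rgb { uint8_t r, g, b; };

    // Read the colour at (x, y) from a raw BGRX colour frame buffer.
    Rgb colourAt(const uint8_t* buffer, int width, int x, int y)
    {
        const int idx = (y * width + x) * 4;     // 4 bytes per pixel
        return Rgb{ buffer[idx + 2], buffer[idx + 1], buffer[idx] };
    }

The same (y * width + x) * 4 index is what your C# loop over the colour still would compute.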