Kinect SDK v1.5 face tracking using the XNA 4.0 Framework in C# - xna

I am a new Kinect developer and am going to develop an application related to face tracking using Kinect SDK v1.5 and the XNA Framework on the C# platform.
I can successfully get the face points and rectangle points to display on the screen using the Kinect SDK and XNA's BasicEffect 3D drawing.
However, what I want is to get back exactly the same colour pixels of the user's face, so that I can map the user's real face onto a model.
Is there anybody who can help answer my question?
Thank you very much!

One of the ways you can achieve this is by using the RGB (colour) video stream and capturing a still. You can then use C# to enumerate the X/Y axes of this image to get each pixel's colour if required.
The more efficient way, however, is to use this still as a texture and "wrap" the 3D model you are creating with it. There is a sample provided with the Kinect SDK that does something similar, called Face Tracking 3D - WPF. I would encourage you to use this as your base when porting to XNA and work from there.
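As a rough illustration of the pixel-mapping step (a Python sketch with hypothetical names, not Kinect SDK or XNA API calls), converting a tracked face point into normalized texture coordinates within the face rectangle might look like this:

```python
def face_point_to_uv(point, face_rect):
    """Map an image-space face point to normalized (u, v) texture
    coordinates inside the tracked face rectangle, so the colour
    pixels of the face region can be sampled as a texture."""
    x, y = point
    left, top, width, height = face_rect
    return (x - left) / width, (y - top) / height

# A point at the centre of a 100x120 face rectangle maps to (0.5, 0.5)
print(face_point_to_uv((150, 160), (100, 100, 100, 120)))
```

In XNA you would then assign these (u, v) values to the texture coordinates of the model's vertices, with the captured still set as the texture on the BasicEffect.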

Related

Find surfaces in 3d image

I'm working on a C++ project using a ToF camera. The camera is inside a room and has to detect walls, doors, and other big planar surfaces. I'm currently using OpenCV, but answers using other C++ libraries are also okay. What is a good algorithm for detecting the surfaces, even if they are rotated and aren't facing the camera directly? I've heard of approaches like building a point cloud and using RANSAC. If you suggest doing that, please explain it in detail or provide a resource for explanation, because I don't know much about this topic (I'm a beginner in computer vision).
Thanks for your responses.
Are you familiar with PCL?
This tutorial shows how to find planar segments in a point-cloud using PCL.
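To give a feel for what RANSAC plane detection does under the hood, here is a toy pure-Python sketch (the function names are my own, and PCL's segmentation classes implement this far more efficiently on real point clouds): repeatedly fit a plane to three random points and keep the plane with the most inliers.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (a, b, c, d) with unit normal through three 3D points,
    or None if the points are collinear."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0:
        return None
    a, b, c = (component / norm for component in n)
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return (a, b, c, d)

def ransac_plane(points, iterations=200, threshold=0.02, seed=0):
    """Find the plane supported by the most points within `threshold`."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iterations):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue  # degenerate (collinear) sample
        a, b, c, d = plane
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers
```

After extracting the largest plane, you would remove its inliers and re-run to find the next surface, which is essentially what the PCL tutorial's segmentation loop does.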

Detect an object and take photo

You might have seen the option on some Samsung phones where the camera takes the photo when a person smiles. So it somehow detects the smile and then takes the photo automatically. I'm trying to create something similar on iOS: let's say if the camera detects a chair, it takes the photo. I've searched around, and what I found is that there is a library called OpenCV, but I'm not sure whether it will work with iOS or not. There is also the concept of Core Image in iOS, which has something to do with deep understanding of an image. Any ideas about this?
OpenCV for iOS
For detection you can use the OpenCV framework in iOS, or the native detection methods. In my application I am using OpenCV rectangle detection, and the scenario is: after taking a picture, OpenCV detects a rectangle in the image and then draws lines on the detected shape; it can also crop the image with basic functionality and apply perspective correction.
Options: face detection, shape detection.
Native way:
iOS provides real-time detection; there are many tutorials on how to use it, which I will link at the end of this answer. The native way also provides face detection, shape detection, and perspective correction.
Conclusion:
The choice is up to you, but I prefer the native way. Remember that OpenCV is written in C++; if you are using the Swift language, you can import OpenCV into your project and then connect Swift to Objective-C to call OpenCV, using bridging headers.
Tutorials:
Medium Link 1
Medium Link 2
Toptal Tutorial
How to use OPENCV in iOS
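As a small illustration of one step in that rectangle-detection pipeline, the four detected corners must be put into a consistent order before a perspective warp. The helper below is a hypothetical pure-Python sketch (the name order_corners is my own, not part of OpenCV) of what such code typically does before computing the perspective transform:

```python
def order_corners(pts):
    """Order four corner points as top-left, top-right, bottom-right,
    bottom-left -- the order a perspective-correction warp expects.
    Top-left has the smallest x+y, bottom-right the largest;
    top-right has the largest x-y, bottom-left the smallest."""
    by_sum = sorted(pts, key=lambda p: p[0] + p[1])
    top_left, bottom_right = by_sum[0], by_sum[-1]
    by_diff = sorted(pts, key=lambda p: p[0] - p[1])
    bottom_left, top_right = by_diff[0], by_diff[-1]
    return [top_left, top_right, bottom_right, bottom_left]
```

With the corners ordered, OpenCV's getPerspectiveTransform and warpPerspective can map the detected quadrilateral to an upright rectangle, which is the perspective correction described above.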

Motion Sensing by Camera in iOS

I am working on an iOS app that should trigger an event if the camera detects some change in the image, or in other words, motion in the image. Here I am not asking about face recognition or a particular colored image motion; I got plenty of OpenCV results when I searched. I also found that we can achieve this by using both the gyroscope and accelerometer, but how?
I am a beginner in iOS, so my question is: is there any framework, or any easy way, to detect motion with the camera, and how do I achieve it?
For example, if I move my hand in front of the camera, it should show some message or alert.
Please also give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
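To make the frame-to-frame idea concrete, here is a toy Python sketch in that style (my own illustration, not the actual GPUImage code, which runs as GPU shaders): a running average of past frames acts as the low-pass filter, and pixels that differ from it beyond a threshold are counted and averaged into a centroid.

```python
class MotionDetector:
    """Toy frame-differencing motion detector: maintains a low-pass
    (running average) background estimate, then reports the number of
    changed pixels and the centroid of the changed region per frame."""

    def __init__(self, width, height, alpha=0.5, threshold=0.2):
        self.w, self.h = width, height
        self.alpha = alpha          # low-pass filter strength
        self.threshold = threshold  # per-pixel change threshold
        self.background = None

    def feed(self, frame):
        """frame: rows of grayscale intensities in [0, 1].
        Returns (changed_pixel_count, centroid or None)."""
        if self.background is None:
            self.background = [row[:] for row in frame]
            return 0, None
        changed, sum_x, sum_y = 0, 0, 0
        for y in range(self.h):
            for x in range(self.w):
                if abs(frame[y][x] - self.background[y][x]) > self.threshold:
                    changed += 1
                    sum_x += x
                    sum_y += y
                # low-pass update of the background estimate
                self.background[y][x] += self.alpha * (
                    frame[y][x] - self.background[y][x])
        centroid = (sum_x / changed, sum_y / changed) if changed else None
        return changed, centroid
```

An app would feed each camera frame in and fire its alert when the changed-pixel count crosses some fraction of the frame; the real framework does the same comparisons per-pixel on the GPU, which is what makes it fast enough for live video.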
I don't exactly understand what you mean here:
Here I am not asking about face recognition or a particular colored
image motion, because I got all result for OpenCV when I searched
But I would suggest going with OpenCV, since you can use OpenCV on iOS. Here is a good link that helps you set up OpenCV on iOS.
There are lots of OpenCV motion detection examples online, and here is one among them that you can make use of.
You need to convert the UIImage (the image type on iOS) to cv::Mat or IplImage and pass it to the OpenCV algorithms. You can convert using this link or this one.

How to detect movement of object on iPhone's camera screen? [duplicate]

I saw that someone has made an app that tracks your feet using the camera, so that you can kick a virtual football on your iPhone screen.
How could you do something like this? Does anyone know of any code examples or other information about using the iPhone camera for detecting objects and tracking them?
I just gave a talk at SecondConf where I demonstrated the use of the iPhone's camera to track a colored object using OpenGL ES 2.0 shaders. The post accompanying that talk, including my slides and sample code for all demos can be found here.
The sample application I wrote, whose code can be downloaded from here, is based on an example produced by Apple for demonstrating Core Image at WWDC 2007. That example is described in Chapter 27 of the GPU Gems 3 book.
The basic idea is that you can use custom GLSL shaders to process images from the iPhone camera in realtime, determining which pixels match a target color within a given threshold. Those pixels then have their normalized X,Y coordinates embedded in their red and green color components, while all other pixels are marked as black. The color of the whole frame is then averaged to obtain the centroid of the colored object, which you can track as it moves across the view of the camera.
While this doesn't address the case of tracking a more complex object like a foot, it should be possible to write shaders like this that could pick out such a moving object.
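To illustrate the thresholding-and-averaging idea on the CPU (a hypothetical Python sketch of the same computation, not the actual GLSL shaders), the per-frame work amounts to:

```python
def color_centroid(pixels, target, tol=0.1):
    """CPU sketch of the shader trick: pixels within `tol` of the
    target colour contribute their normalized (x, y) position, and
    averaging the matches yields the tracked object's centroid.
    pixels: rows of (r, g, b) tuples with components in [0, 1]."""
    h, w = len(pixels), len(pixels[0])
    sum_x = sum_y = matches = 0
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            if max(abs(r - target[0]),
                   abs(g - target[1]),
                   abs(b - target[2])) <= tol:
                sum_x += x / (w - 1)   # normalized coordinates,
                sum_y += y / (h - 1)   # as the shader stores in R/G
                matches += 1
    return (sum_x / matches, sum_y / matches) if matches else None
```

The GPU version reaches the same average by downsampling the coordinate-encoded frame, which is what makes it fast enough to run on every camera frame.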
As an update to the above, in the two years since I wrote this I've now developed an open source framework that encapsulates OpenGL ES 2.0 shader processing of images and video. One of the recent additions to that is a GPUImageMotionDetector class that processes a scene and detects any kind of motion within it. It will give you back the centroid and intensity of the overall motion it detects as part of a simple callback block. Using this framework to do this should be a lot easier than rolling your own solution.

Question related to Qualcomm's QCAR SDK

Can anyone suggest how to capture an image after 3D augmentation in augmented reality? That is, once we have detected the tracker (which is a requirement of the QCAR SDK) and placed a 3D texture over it, I need to capture this image with the 3D texture augmented. Any suggestions would be helpful for my research.
Thanks in advance.
I haven't done this, but apparently it's possible using glReadPixels
Here's a description
https://ar.qualcomm.at/arforums/showthread.php?t=666
and an example
https://ar.qualcomm.at/arforums/showthread.php?t=427
