Send images to Kinect as input instead of camera feed

I want to send my own images to either OpenNI or the Microsoft Kinect SDK and have it tell me the position of a user's hand or head, etc.
I don't want to use the Kinect's camera feed. The images come from a paper that I want to do some image processing on, and I need to work on exactly the same images, so I cannot use my own body as input to the Kinect camera.
I don't care whether it is Microsoft's Kinect SDK or OpenNI; it just needs to be able to take my RGB and depth images as input instead of the Kinect camera's.
Is it possible? If yes, how can I do it?
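
One route that gets suggested for OpenNI: it can open a recorded .oni file in place of a live sensor, so if you can get your RGB and depth images into a .oni recording, anything built on top of OpenNI will see the recording as if it were the camera. Below is a minimal sketch, assuming the primesense OpenNI2 Python bindings and an already-built recording.oni (the filename is a placeholder). For the Microsoft SDK, the closest equivalent I know of is playing back Kinect Studio recordings, not arbitrary image files.

from primesense import openni2

openni2.initialize()                              # loads the OpenNI2 runtime
dev = openni2.Device.open_file(b'recording.oni')  # placeholder .oni recording

depth_stream = dev.create_depth_stream()
color_stream = dev.create_color_stream()
depth_stream.start()
color_stream.start()

frame = depth_stream.read_frame()                 # one depth frame from the file
depth = frame.get_buffer_as_uint16()              # raw depth values

depth_stream.stop()
color_stream.stop()
openni2.unload()

Hand and head positions would then come from skeleton middleware (e.g., NiTE) running on top of these streams, not from OpenNI itself.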

I have the same question. I want a Kinect face-detection app to read images from the hard drive and return the Animation Units of the recognized face. I want to train a classifier for facial emotion recognition using Animation Units as the input features.
Thanks,
Daniel.

Related

How to convert a webcam image to RGB-D

I'm building an iPhone-like Face ID program using my PC's webcam. I'm following this notebook, which uses a Kinect to create RGB-D images. Can I use my webcam to capture several images for the same purpose?
Here's how the notebook predicts the person in a Kinect image; it uses a .dat file.
file1 = ('faceid_train/(2012-05-16)(154211)/011_1_d.dat')
inp1 = create_input_rgbd(file1)
# the second input would normally be built from a different .dat file
inp2 = create_input_rgbd(file1)
model_final.predict([inp1, inp2])
They use a Kinect to create RGB-D images, whereas you want to use only an RGB camera for something similar? The hardware is different, so there won't be a direct method.
You first have to estimate a depth map from the monocular image alone.
You can try Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries, as shown below. The depth it produces is fairly close to the real ground truth. For non-life-threatening use cases (e.g., controlling a UAV or a car), you can use it anytime.
The code and model are available at
https://github.com/JunjH/Revisiting_Single_Depth_Estimation
Edit the demo .py file to run depth estimation on a single image:
image = load_your_rgb_image()           # your input image
deep_learned_fake_depth = model(image)  # monocular depth estimate
# Add your additional classification routine after this.
Note that this method can't run in real time, so you can only apply it at keyframes. People usually use feature tracking in between to fake continuous detection, which is the common practice.
Also note that some phones have a small depth-estimation sensor you can make use of. I'm not sure of the details, as I work with Android and iOS at a very minimal level.
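
A rough sketch of that keyframe-plus-feature-tracking pattern in Python with OpenCV (the video source and the commented-out depth-model call are placeholders, not part of the repository above):

import cv2

cap = cv2.VideoCapture('input.mp4')   # placeholder video source
KEYFRAME_EVERY = 30                   # run the slow depth model every N frames

prev_gray, points, i = None, None, 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if i % KEYFRAME_EVERY == 0:
        # Slow path (keyframe): run the learned depth model here.
        # depth = model(frame)
        points = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                         qualityLevel=0.01, minDistance=7)
    elif points is not None and len(points) > 0:
        # Fast path: track the keyframe's features to fake continuity.
        points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    i += 1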

Extract hand from Kinect dataset using image processing

I downloaded a Kinect sensor dataset (depth as a text file, plus images) because a Kinect is expensive. I don't know how to proceed with the dataset. I have to extract the hand from the image, and I can't use the Kinect SDK because it works only when a Kinect sensor is connected. So I decided to extract the hand from the image using image processing. Can anyone suggest an algorithm for that, or another method to extract the hand?
Thanks in advance.
The color image and the depth information can be used together for hand detection.
I think you can take the skin region nearest to the camera as the hand, because in the dataset the hand is placed in front of the body.
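
A minimal sketch of that idea in Python with OpenCV, assuming the dataset's depth text file loads into a 2D array aligned with the color image (the file names and the HSV skin range are illustrative assumptions, not values from the dataset):

import cv2
import numpy as np

color = cv2.imread('frame.png')                       # placeholder color image
depth = np.loadtxt('frame_depth.txt')                 # placeholder depth text file

# 1. Skin segmentation in HSV.
hsv = cv2.cvtColor(color, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 48, 80), (20, 255, 255))  # rough skin range

# 2. Among the skin regions, keep the one nearest to the camera.
num, labels = cv2.connectedComponents(skin)
best_label, best_depth = 0, np.inf
for lbl in range(1, num):
    region = labels == lbl
    d = np.median(depth[region])
    if 0 < d < best_depth:                            # 0 = no depth reading
        best_label, best_depth = lbl, d

hand_mask = np.uint8(labels == best_label) * 255      # binary mask of the hand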

Motion Sensing by Camera in iOS

I am working on an iOS app that should trigger an event when the camera detects some change in the image, i.e., motion in the image. I am not asking about face recognition or motion of a particular colored object; I got plenty of OpenCV results when I searched. I also found that this can be achieved using the gyroscope and accelerometer together, but how?
I am a beginner in iOS. So my question is: is there any framework or easy way to do motion detection or motion sensing with the camera, and how do I achieve it?
For example, if I move my hand in front of the camera, it should show some message or alert.
Please also give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
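
For illustration outside iOS, here is a rough Python/OpenCV sketch of the same idea (a low-pass filtered background compared frame to frame, reporting the changed-pixel count and centroid). This approximates the approach; it is not GPUImage's actual shader code:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                    # default camera
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
background = gray.astype(np.float32)         # low-pass filtered background

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.accumulateWeighted(gray, background, 0.05)   # slow blend = low-pass filter
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    moving = diff > 25                               # changed pixels
    if moving.sum() > 500:                           # crude motion threshold
        ys, xs = np.nonzero(moving)
        print('motion around centroid', xs.mean(), ys.mean())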
I don't exactly understand what you mean here:
"I am not asking about face recognition or motion of a particular colored object; I got plenty of OpenCV results when I searched."
But I would suggest going with OpenCV, as you can use OpenCV on iOS. Here is a good link that helps you set up OpenCV on iOS.
There are lots of OpenCV motion detection examples online, and here is one of them, which you can make use of.
You need to convert the UIImage (the image type on iOS) to cv::Mat or IplImage and pass it to the OpenCV algorithms. You can convert using this link or this one.

Programming camera for ROI selection

I want to transfer an ROI instead of the full image from the camera. I am doing this to increase the useful data transfer rate from the camera to the PC (less data, less time), where I will be doing some image processing on the ROI. Basically, the user will define the ROI's coordinates, the camera will capture only that ROI, and it will send only the ROI to the PC over USB or Gigabit Ethernet.
Is it possible to do this programmatically, since my application's ROI will be changing dynamically? Do we have some APIs that let us define an ROI and program the camera accordingly?
I will be using C/C++ with OpenCV for the entire application.
"Do we have some APIs that let us define an ROI and program the camera accordingly?"
This all depends on your camera and its driver. Some cameras produced by Point Grey have this feature, and they come with an SDK that provides an API for setting an ROI.
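
More generally, GigE Vision and most machine-vision cameras expose hardware ROI through the standard GenICam features OffsetX, OffsetY, Width, and Height, which you can set from your vendor's SDK in C/C++. As a rough sketch using the Python harvesters library (the .cti path is a placeholder; exact node names, limits, and the API surface vary by camera and harvesters version):

from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/vendor_producer.cti')  # placeholder GenTL producer
h.update()

ia = h.create_image_acquirer(0)             # first camera found
nm = ia.remote_device.node_map

# Program the hardware ROI: the camera then transmits only this window,
# which is what reduces the USB/GigE bandwidth.
nm.OffsetX.value = 0
nm.OffsetY.value = 0
nm.Width.value = 640
nm.Height.value = 480

ia.start_acquisition()
with ia.fetch_buffer() as buf:
    comp = buf.payload.components[0]
    roi = comp.data.reshape(comp.height, comp.width)  # the ROI as a NumPy array
ia.stop_acquisition()
h.reset()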

Hand detection in C using OpenCV, capturing video from a Kinect device

I am working on a project in which I have to detect the hand in a video. A Kinect is being used to capture the video. I have already tried skin segmentation in the HSV color space; it works well when I get the video from my laptop's camera, but it does not work with the Kinect. I have also tried color segmentation and thresholding, but they are not working well either. I am using OpenCV in C. I will be grateful for any suggestions or steps to detect the hand.
You can use OpenNI to obtain a skeleton of the person, which robustly tracks the hand in your video. E.g., http://www.youtube.com/watch?v=pZfJn-h5h2k
