I am working with an Xtion Pro Live on Ubuntu 12.04 with OpenCV 2.4.10. I want to do object recognition in daylight.
So far I have achieved object recognition indoors by producing a depth map and a disparity map. When I go outdoors, both maps are black and I cannot perform object recognition.
I would like to ask whether the Asus Xtion Pro Live can work outdoors.
If it cannot, is there a way to fix it (through code) in order to do object detection outdoors?
I have searched around and found that I may need a different stereoscopic camera. Could anyone help?
After some research I discovered that the Xtion Pro Live cannot be used outdoors because of its IR sensor. The depth map is produced by projecting an infrared pattern onto the scene, and direct sunlight is bright enough in the infrared spectrum to wash that pattern out, so there are no clear readings. Without clear readings, the creation of depth and disparity maps (with proper values) is impossible.
Related
Apple provides a sample project for putting 3D content or face filters on people's faces. The 3D content tracks the face anchor and moves with it, but this functionality is only supported on devices with a TrueDepth camera; for example, we cannot use ARSCNFaceGeometry without TrueDepth. How do Facebook or third-party SDKs like Banuba make this work on devices without a depth camera?
As far as I know, using MediaPipe to get a face mesh is the only option without a TrueDepth camera.
Is it possible to get calibration data (AVCapturePhoto.cameraCalibrationData) for the ultra wide camera?
Documentation says:
Camera calibration data is present only if you specified the cameraCalibrationDataDeliveryEnabled and dualCameraDualPhotoDeliveryEnabled settings when requesting capture.
but dualCameraDualPhotoDeliveryEnabled was deprecated.
I tried to set cameraCalibrationDataDeliveryEnabled for builtInDualWideCamera and builtInUltraWideCamera without any success.
The calibration data is meant to give you information about the intrinsics of multiple cameras in a virtual camera capture scenario. This used to be the dual camera (introduced with the iPhone 7 Plus), but with the release of the iPhone 11 Pro, the API switched its naming. It's now called isVirtualDeviceConstituentPhotoDeliveryEnabled, and you can specify the set of cameras that should be involved in the capture with virtualDeviceConstituentPhotoDeliveryEnabledDevices.
Note that the calibration data only seems to be available for virtual devices with at least two constituent cameras (so builtInDualCamera, builtInDualWideCamera, and builtInTripleCamera).
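For reference, a hedged sketch of how that configuration might look in Swift (iOS 13+). It assumes the session is already set up with a virtual device such as .builtInDualWideCamera; error handling and session configuration are omitted.

```swift
import AVFoundation

func makePhotoSettings(for output: AVCapturePhotoOutput,
                       device: AVCaptureDevice) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()

    // Constituent photo delivery must be enabled on the output first.
    output.isVirtualDeviceConstituentPhotoDeliveryEnabled =
        output.isVirtualDeviceConstituentPhotoDeliverySupported

    if output.isVirtualDeviceConstituentPhotoDeliveryEnabled {
        // Request a photo from each physical camera behind the virtual
        // device (e.g. wide + ultra wide for .builtInDualWideCamera).
        settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices =
            device.constituentDevices
        // Calibration data rides along with constituent photo delivery.
        settings.isCameraCalibrationDataDeliveryEnabled =
            output.isCameraCalibrationDataDeliverySupported
    }
    return settings
}

// In the AVCapturePhotoCaptureDelegate callback, which fires once per
// constituent camera, each photo carries its own calibration:
// func photoOutput(_ output: AVCapturePhotoOutput,
//                  didFinishProcessingPhoto photo: AVCapturePhoto,
//                  error: Error?) {
//     if let calibration = photo.cameraCalibrationData {
//         print(calibration.intrinsicMatrix)
//     }
// }
```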
I want to send my own images to a Kinect SDK (either OpenNI or the Microsoft Kinect SDK) and have it tell me the position of a user's hand, head, and so on.
I don't want to use the Kinect's camera feed. The images come from a paper on which I want to do some image processing, and I need to work on exactly those images, so I cannot use my own body as input to the Kinect camera.
I don't mind whether it's Microsoft's Kinect SDK or OpenNI; it just needs to be able to take my RGB and depth images as input instead of the Kinect camera's feed.
Is it possible? If yes how can I do it?
I have the same question. I want a Kinect face detection app to read images from the hard drive and return the Animation Units of the recognized face, so that I can train a classifier for facial emotion recognition using the Animation Units as input features.
Thanks,
Daniel.
Hi, I am using an Asus Xtion Pro Live camera for object detection, and I am also new to OpenCV. I am trying to get the distance of an object from the camera. The detected object is in a 2D image. I am not sure what I should use to get that information, or what calculations follow to get the distance between the camera and the detected object. Could someone advise me, please?
In short: You can't.
A 2D image loses the depth information: every visible pixel essentially corresponds to a ray originating from your camera.
So once you've detected an object at pixel X, all you know is that the object lies somewhere along the ray cast through that pixel, as determined by the camera's intrinsic/extrinsic parameters.
You'll essentially need more information. One of the following should suffice:
Know at least one coordinate of the 3D point (e.g. everything detected is on the ground or in some known plane).
Know the relation between two projected points:
Either the same point from different positions (known camera movement/offset)
or two points with a known (and sufficiently large) distance between them (like the two ends of some staff or bar of known length).
Once you've got either, you can use simple trigonometry (the rule of three) to calculate the missing values.
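As an illustration of the "two points with a known distance" case, here is a minimal sketch in C++. The focal length fx would normally come from camera calibration; all numbers below are made-up placeholders.

```cpp
// Minimal sketch: distance from a single image when the object's real size
// is known, via similar triangles (pinhole model): x / f = X / Z  =>  Z = f * X / x.
#include <cstdio>

double distanceFromKnownSize(double focalPx,     // fx from calibration, in pixels
                             double realSizeM,   // true object length, in meters
                             double imageSizePx) // measured length in the image, in pixels
{
    return focalPx * realSizeM / imageSizePx; // rule of three
}

int main()
{
    // Example: a 1.0 m bar spanning 200 px with fx = 525 px -> Z = 2.625 m.
    std::printf("Z = %.3f m\n", distanceFromKnownSize(525.0, 1.0, 200.0));
    return 0;
}
```

The same proportion also works the other way around: once the depth is known, the real size of an object follows from its size in pixels.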
Since I initially missed that this is a camera with an OpenNI-compatible depth sensor: it's possible to build OpenCV with support for it by enabling the WITH_OPENNI option when building the library.
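With such a build, the depth stream can be grabbed straight from the Xtion through cv::VideoCapture, and the distance is simply the depth value under the detected pixel. A minimal sketch, assuming OpenCV 2.4.x compiled with WITH_OPENNI=ON and the OpenNI drivers installed:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // CV_CAP_OPENNI_ASUS selects the Xtion; CV_CAP_OPENNI works for Kinect.
    cv::VideoCapture capture(CV_CAP_OPENNI_ASUS);
    if (!capture.isOpened())
        return 1;

    cv::Mat depthMap, bgrImage;
    for (;;)
    {
        if (!capture.grab())
            break;
        // Depth in millimeters (CV_16UC1) plus the registered color image.
        capture.retrieve(depthMap, CV_CAP_OPENNI_DEPTH_MAP);
        capture.retrieve(bgrImage, CV_CAP_OPENNI_BGR_IMAGE);

        cv::imshow("depth", depthMap * 8); // scaled up for display only
        cv::imshow("rgb", bgrImage);
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}
```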
I hate to be the one to break this to you, but what you are trying to do is either impossible or extremely difficult with a single camera.
You would need to move the camera, record a video, and use a complex reconstruction technique such as structure from motion. Usually 3D information is built from at least two 2D images taken from two different positions, and you also need to know the distance and rotation between the two views quite precisely. The common technique is to have two cameras with a precisely measured distance between them.
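To make the two-camera version concrete, here is a rough sketch using OpenCV 2.4's block matcher on an already rectified stereo pair. The file names, focal length, and baseline are placeholders you would replace with your own calibration values.

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::Mat left  = cv::imread("left.png",  CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (left.empty() || right.empty())
        return 1;

    // 96 disparity levels, 21x21 matching window.
    cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 96, 21);
    cv::Mat disp16; // fixed-point disparity (CV_16S, 4 fractional bits)
    bm(left, right, disp16);

    cv::Mat disparity;
    disp16.convertTo(disparity, CV_32F, 1.0 / 16.0);

    // With focal length f (px) and baseline B (m): Z = f * B / disparity.
    double f = 525.0, B = 0.06; // placeholder calibration values
    float d = disparity.at<float>(disparity.rows / 2, disparity.cols / 2);
    if (d > 0)
        std::printf("depth at image center ~ %.2f m\n", f * B / d);
    return 0;
}
```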
The Xtion is not a basic webcam. It's a depth-sensing camera similar to the Kinect, built on PrimeSense technology. The main API for it is OpenNI; see http://structure.io/openni.
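Through OpenNI (version 2 at that link), the depth at any pixel can be read directly, which answers the distance question without any reconstruction. A minimal sketch, assuming OpenNI 2 is installed and the Xtion is the only attached device:

```cpp
#include <OpenNI.h>
#include <cstdio>

int main()
{
    openni::OpenNI::initialize();

    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
        return 1;

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    openni::VideoFrameRef frame;
    depth.readFrame(&frame);

    // Depth pixels are uint16 distances in millimeters; 0 means "no reading".
    const openni::DepthPixel* pixels =
        static_cast<const openni::DepthPixel*>(frame.getData());
    int cx = frame.getWidth() / 2, cy = frame.getHeight() / 2;
    std::printf("distance at image center: %d mm\n",
                (int)pixels[cy * frame.getWidth() + cx]);

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}
```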
I am working on a project in which I have to detect a hand in a video. A Kinect is being used to capture the video. I have already tried skin segmentation in the HSV colour space; it works well on video from my laptop's camera but does not work with the Kinect. I have also tried colour segmentation and thresholding, but those are not working well either. I am using OpenCV in C. I would be grateful for any suggestions or steps to detect the hand.
You can use OpenNI to obtain a skeleton of the person, which robustly tracks the hand in your video. See e.g. http://www.youtube.com/watch?v=pZfJn-h5h2k
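A rough sketch of that approach with the OpenNI 1.5 user/skeleton tracker, printing the right hand's 3D position. It assumes NiTE is installed and a NiTE build recent enough to calibrate without the psi pose (older builds also need pose-detection callbacks, as in the shipped samples); error checks are omitted.

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

static xn::UserGenerator g_user;

// When a user appears, ask the skeleton capability to calibrate it.
void XN_CALLBACK_TYPE onNewUser(xn::UserGenerator& gen, XnUserID id, void*)
{
    gen.GetSkeletonCap().RequestCalibration(id, TRUE);
}

// Once calibration succeeds, start tracking the skeleton.
void XN_CALLBACK_TYPE onCalibDone(xn::SkeletonCapability& cap, XnUserID id,
                                  XnCalibrationStatus status, void*)
{
    if (status == XN_CALIBRATION_STATUS_OK)
        cap.StartTracking(id);
}

int main()
{
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;
    depth.Create(context);
    g_user.Create(context);

    XnCallbackHandle h1, h2;
    g_user.RegisterUserCallbacks(onNewUser, NULL, NULL, h1);
    g_user.GetSkeletonCap().RegisterToCalibrationComplete(onCalibDone, NULL, h2);
    g_user.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_UPPER);

    context.StartGeneratingAll();
    for (int frame = 0; frame < 1000; ++frame)
    {
        context.WaitAndUpdateAll();

        XnUserID ids[8];
        XnUInt16 n = 8;
        g_user.GetUsers(ids, n);
        for (XnUInt16 i = 0; i < n; ++i)
        {
            if (!g_user.GetSkeletonCap().IsTracking(ids[i]))
                continue;
            XnSkeletonJointPosition hand;
            g_user.GetSkeletonCap().GetSkeletonJointPosition(
                ids[i], XN_SKEL_RIGHT_HAND, hand);
            if (hand.fConfidence > 0.5) // position is in millimeters
                std::printf("hand: (%.0f, %.0f, %.0f)\n",
                            hand.position.X, hand.position.Y, hand.position.Z);
        }
    }
    context.Release();
    return 0;
}
```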