I recorded some motions as an .xed file with Kinect Studio 1.8 and a Kinect for Windows sensor at my university. Now I want to use this .xed file instead of the Kinect sensor, because I don't have one at home and I want to keep improving my app.
When I followed the steps from a similar question, it didn't work: a message box appears saying I need to connect a Kinect sensor.
How can I run my app without a Kinect sensor and test it with this .xed file? I read about Fakenect, but I can't find any documentation on how to use it.
Related
I am trying to get the raw LiDAR data from a Helios2 time-of-flight camera. How do I disable the built-in features that sharpen the point cloud output?
I am trying to access the SDK source code so I can make some changes, but could not find it in the Windows version of the software.
I'm using an RPi 4 with a Pi Camera and OpenCV to get the video stream from the Pi camera, detect a face, and then track it using servo motors.
If I want to see the feed, I can use cv2.imshow("", frame) with the frame read from the stream.
I'm looking for a way to output the frames so the RPi can be used as a webcam: for example, using RTSP to make the RPi an IP camera, then using VLC to view the feed.
The problem is, I can't find a way to actually stream the frames from my code. I tried using ffmpeg, but the RTSP server part is missing; I need to start it somehow from my code, maybe with a package of some kind.
If anyone has a better suggestion for using the RPi as a webcam with my code, I would be happy to hear it.
Thanks
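For illustration, here is a minimal sketch of one way to do this: piping the OpenCV frames into an ffmpeg subprocess that publishes them to a separate RTSP server. This is untested on your setup and full of assumptions: it presumes ffmpeg is installed and an external RTSP server such as mediamtx is already listening on port 8554, and the URL path, resolution, and frame rate are placeholders.

```python
import subprocess
import cv2

# Frame geometry must match the ffmpeg arguments below.
WIDTH, HEIGHT, FPS = 640, 480, 30

cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0 (assumption)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)

# Spawn ffmpeg: it reads raw BGR frames from stdin, encodes them with x264,
# and publishes the stream to an RTSP server that must already be running
# (e.g. mediamtx); the /cam path is a placeholder.
ffmpeg = subprocess.Popen(
    [
        "ffmpeg",
        "-f", "rawvideo",          # input: raw frames, no container
        "-pix_fmt", "bgr24",       # OpenCV frames are 8-bit BGR
        "-s", f"{WIDTH}x{HEIGHT}",
        "-r", str(FPS),
        "-i", "-",                 # read frames from stdin
        "-c:v", "libx264",
        "-preset", "ultrafast",
        "-tune", "zerolatency",
        "-f", "rtsp",
        "rtsp://localhost:8554/cam",
    ],
    stdin=subprocess.PIPE,
)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... face detection / servo tracking on `frame` would go here ...
        ffmpeg.stdin.write(frame.tobytes())  # hand the frame to ffmpeg
finally:
    cap.release()
    ffmpeg.stdin.close()
    ffmpeg.wait()
```

VLC should then be able to open rtsp://<pi-address>:8554/cam to view the feed.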
I am currently working with Google Tango and Microsoft HoloLens. My idea is to scan a room or an object with Google Tango and then convert it and show it as a hologram on the HoloLens.
For that I need to get the ADF file onto my computer.
Does anyone know of a way to import ADF files onto a computer?
Do you know if it is possible to convert ADF files into usable 3D files?
An ADF is not a 3D scan of the room; it's a collection of feature descriptors from the computer vision algorithms, with associated positional data, and the format is not documented.
You will want to take the point cloud from the depth sensor, convert it to a mesh (there are existing apps that do this), and import the mesh into a render engine on the HoloLens.
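For the conversion step, a minimal sketch using Open3D is below. Open3D and Poisson reconstruction are my assumptions here, not something Tango or HoloLens provides, and the file names are placeholders:

```python
import open3d as o3d

# Load a point cloud exported from the depth sensor (file name is a placeholder).
pcd = o3d.io.read_point_cloud("room_scan.ply")

# Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(k=30)

# Turn the points into a triangle mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Export in a format a render engine (e.g. Unity for HoloLens) can import.
o3d.io.write_triangle_mesh("room_scan.obj", mesh)
```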
I'm trying to do a face tracking project, so I looked at Microsoft Kinect face tracking, but I don't have a Kinect camera. Is it possible to use two webcams instead of a Kinect?
As Bart mentions, the Kinect SDK doesn't support two webcams; it's aimed solely at the Kinect sensor itself.
You can use OpenCV for stereo calibration, but it might be worth looking at what you can do with a single camera too. I recommend having a look at Jason Saragih's Face Tracker and Kyle McDonald's ofxFaceTracker addon examples.
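As a rough illustration of the single-camera route, here is a minimal sketch using OpenCV's bundled Haar cascade for face detection; this is a stand-in for the trackers mentioned above, not Saragih's library:

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # a single ordinary webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors usually need tuning per camera.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```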
Emgu CV currently allows the use of the Kinect with the OpenNI drivers.
I've also seen that there is an mssdk-openni bridge application that allows Kinects running on the official Microsoft SDK to emulate OpenNI-driven Kinects.
Has anyone been successful in getting a Kinect running on the Microsoft SDK to work with Emgu CV, either with the mssdk-openni bridge or directly?
Are there any tips on getting it running smoothly, or pitfalls to avoid?
Yes. I simply installed the SDK and could capture and extract bitmaps from the video stream. The MSSDK for Kinect works fine and is easy to use. You can start by reading the samples, especially the Skeleton sample and the KinectColorViewer, KinectDepthViewer, and KinectDiagnosticViewer WPF samples provided by Microsoft. You can add the Emgu CV DLLs and use the two together to reach your goal.
Good luck!