I'm able to get both the open-source Kinect drivers and the Windows drivers working correctly for my Xbox Kinect, but I can't keep both on the same machine and switch between them. What is the reason for that? And what can I do so that I don't have to completely uninstall everything Kinect-related when I'm testing out a gesture library that might be open source or might require the Windows Kinect SDK?
On Windows only one driver can be bound to a device at a time, so I'm afraid you can't use both OpenKinect/libfreenect and Kinect for Windows (the MS Kinect SDK) at once.
However, you can use either OpenNI 1.5.x with the Kinect-OpenNI bridge, or OpenNI 2 with Kinect for Windows.
That said, it is probably simpler to stick with a single Kinect library.
NiTE (which ships with OpenNI) provides some gestures out of the box; I'm not sure whether the latest Kinect for Windows SDK does too.
You can still use skeleton tracking to implement your own gestures.
There are a number of algorithms you could use for that, for example Dynamic Time Warping (here's a Kinect for Windows library for it).
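To give you an idea of what that library is doing under the hood, here's a minimal, language-agnostic sketch of Dynamic Time Warping in Python (the linked library is for Kinect for Windows; this is just an illustration, and the `classify` helper and template names are mine, not part of any SDK):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two gesture traces.

    a, b: sequences of shape (n_frames, n_features), e.g. flattened
    joint positions per frame. Returns the accumulated cost of the
    cheapest alignment between the two sequences, so two gestures
    performed at different speeds can still match closely.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    # cost[i, j] = cheapest alignment of a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # stretch b
                                 cost[i, j - 1],      # stretch a
                                 cost[i - 1, j - 1])  # match both frames
    return cost[n, m]

def classify(trace, templates):
    """Nearest-template gesture classification: pick the recorded
    template with the smallest DTW distance to the live trace."""
    return min(templates, key=lambda name: dtw_distance(trace, templates[name]))
```

In practice you would record one template trace per gesture from the skeleton stream and call `classify` on a sliding window of live frames.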
I also recommend having a look at the Gesture Recognition Toolkit (GRT): it provides a number of algorithms, nicely explained in its wiki, and since it's generic you can use it with either OpenNI or Kinect for Windows (not to mention Wiimotes, IMUs, etc.).
Info seems to be scarce; I'm hoping someone can point me to an SDK, library, or sample code to get the infrared frame from the Windows Hello camera in the Surface Pro.
Does OpenCV support this?
More info: the camera is listed as "Intel AVStream Camera 2500" in the Device Manager of the Surface Pro.
To the best of my knowledge, the Media Foundation API has no support for infrared cameras. Microsoft never extended the API to such inputs, even though it appears technically possible through undocumented interfaces.
You can read infrared frames through a newer API offered for UWP development, "Process media frames with MediaFrameReader"; the key there is MediaFrameSourceKind.Infrared. This API is built on top of Media Foundation and the Sensor APIs, and it exposes infrared cameras even though Media Foundation alone has no equivalent public interface.
Given that this is a UWP API, you might have trouble fitting it all together with OpenCV if you need the latter. UWP/OpenCV bridging might be of help there: see "Create a helper Windows Runtime component for OpenCV interop".
Since OpenCV interfaces directly with the traditional Windows APIs, DirectShow and Media Foundation, it is highly unlikely that it can capture the infrared stream out of the box, unless, of course, the driver itself exposes it as a normal video stream. Because the sensor on the Surface Pro is properly marked as infrared, it is hidden from those APIs and, consequently, from OpenCV.
Can OpenCV seamlessly interact with all cameras that comply with these standards?
No, it cannot. You need something called a GenTL Producer in order to interact with your camera; normally your vendor's SDK comes with one. Alternatively, you can use the one from Baumer or the one from Stemmer Imaging.
Another option is to use harvesters, an open-source project that aims to do this, although it needs a GenTL Producer as well.
Is that possible, or do you need to connect the Kinect to a computer and stream the images in (almost) real time to an iPhone? Is it even possible to get ~30 fps via a stream on the iPhone?
The Kinect uses a USB connection, and even if you could make up some sort of cable to connect a Kinect to the Lightning or 30-pin connector, iOS would not recognise the Kinect because it has no driver for it. So the short answer is no: you cannot connect a Kinect directly to an iPhone.
For a simple solution/alternative, you might want to check out Occipital's Structure.io, who are selling a depth sensor for (some) iDevices for ca. 380 USD.
Apparently they are using PrimeSense Carmine sensors ("which is essentially an equivalent of ASUS Xtion Live under different brand name" according to [iPiSoft's sensor comparison](http://wiki.ipisoft.com/Depth_Sensors_Comparison)).
You can review the differences from the Kinect at the previous link, but basically it boils down to the Kinect being bigger and heavier, having a motorized tilt, and requiring external power.
To get back to your question: if you look around, you'll find working examples of OpenNI running on BeagleBone dev boards under Linux, so it is more than conceivable that you'll be able to compile and run it for/on iOS as well (possibly requiring a jailbreak).
You could also have a look at libfreenect, another open-source driver implementation for the original Kinect (the Kinect v2 is covered by its sibling project, libfreenect2).
I was wondering if it would be possible to capture live video from my integrated webcam using LabVIEW 2011 (National Instruments). All I need to do for now is put the camera view on the front panel. This is not a USB webcam; it is a Chicony USB 2.0 camera (it does not show up as USB on my PC). Can anyone help me?
LV2012? Is this beta?
The best way to do this is using the IMAQdx drivers plus the Vision Development Module. After installing IMAQdx, USB cams usually already show up in Measurement & Automation Explorer, and you can try out Snap/Grab there. (Tip: do install whatever driver is included with the hardware or on a CD.)
Then, in LabVIEW, just drop the "IMAQ Acquisition Express" VI into your block diagram and you'll be guided through a very quick and easy setup. I'm not much into Express VIs, but that one is good.
If you don't have the Vision Development Module, look into ADVision (http://vi-lib.com/). It does the same thing, just with OpenCV, but I don't think every driver is supported.
Also, remember that only USB cameras with a DirectShow filter are supported by the Vision Acquisition Software, which includes the IMAQdx drivers that Birgit P. mentioned.
For USB 2.0 cameras you need the IMAQdx toolkit, which is part of the Vision Acquisition Software. Also check NI MAX after installation to see whether LabVIEW can find your camera. LabVIEW can find and support any USB 2.0 camera if you install the camera driver correctly.
Is it possible to develop for the Kinect sensor without having an Xbox 360?
We would like to use the Kinect to develop an augmented reality application, but we're not sure if we need to get an Xbox for this. Do we have to, or can we develop using other platforms?
Yes, there are other APIs for interacting with the Kinect.
Microsoft has released its beta API:
http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/
The caveat with Microsoft's API is that it cannot be licensed for commercial use. It's also a beta, so functionality is not locked down and there may be bugs.
OpenKinect is an open-source alternative, but it requires a little more work to get up and running:
http://openkinect.org/wiki/Main_Page
Yes! It is very much possible to develop applications and devices for the Kinect without an Xbox. Particular instances are in robotics (http://turtlebot.com/) and 3D printing (http://www.makerbot.com/blog/2011/05/26/3d-printing-with-kinect/). You should be able to get your project done using the OpenNI drivers (http://www.openni.org/).