Is that possible, or do you need to connect the Kinect to a computer and stream the images in (almost) real time to an iPhone? Is it even possible to get ~30fps via a stream on the iPhone?
The Kinect uses a USB connection, and even if you could make up some sort of cable to connect a Kinect to the Lightning or 30-pin connector, iOS would not recognise the Kinect because there is no driver for it. So the short answer is no, you cannot connect a Kinect directly to the iPhone.
For a simple solution/alternative, you might want to check out Occipital/Structure.io, who are selling a depth sensor for (some) iDevices for ca. USD 380.
Apparently they are using PrimeSense Carmine sensors ("which is essentially an equivalent of ASUS Xtion Live under different brand name" according to [iPiSoft's sensor comparison](http://wiki.ipisoft.com/Depth_Sensors_Comparison)).
You can review the differences to the Kinect at the previous link, but basically it boils down to the Kinect being bigger and heavier, having a motorized tilt and requiring external power.
To get back to your question:
if you look around you'll find working examples of how to get OpenNI running on BeagleBone dev-boards under Linux, and thus it is more than conceivable that you'll be able to compile and run it for/on iOS as well (possibly requiring a jailbreak).
You could also have a look at libfreenect, an open driver implementation for the original Kinect (its sibling project libfreenect2 covers the Kinect v2); a minimal sketch of what using it looks like follows below.
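For a rough idea of what grabbing depth frames through that open driver stack looks like on a desktop machine (none of this is iOS-specific; the header location and mode constants are assumptions based on the stock libfreenect C API):

// Hedged sketch: grab depth frames from a Kinect v1 via libfreenect.
// Assumes the standard libfreenect C API; the include path may differ per install.
#include <libfreenect.h>
#include <cstdint>
#include <cstdio>

// libfreenect invokes this whenever a new depth frame arrives.
static void depth_cb(freenect_device* /*dev*/, void* depth, uint32_t timestamp)
{
    const auto* pixels = static_cast<const uint16_t*>(depth);
    std::printf("depth frame at %u, first pixel = %u\n", timestamp, pixels[0]);
}

int main()
{
    freenect_context* ctx = nullptr;
    if (freenect_init(&ctx, nullptr) < 0) return 1;

    freenect_device* dev = nullptr;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;   // first Kinect on the bus

    freenect_set_depth_mode(dev, freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM,
                                                          FREENECT_DEPTH_11BIT));
    freenect_set_depth_callback(dev, depth_cb);
    freenect_start_depth(dev);

    // Pump USB events; depth_cb fires from inside this loop.
    while (freenect_process_events(ctx) >= 0) { /* run until an error occurs */ }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}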
Info seems to be scarce; I'm hoping someone can point me to an SDK, library, or code to get the infrared frame from the Windows Hello camera in the Surface Pro.
Does OpenCV support this?
More info: the camera is listed as "Intel AVStream Camera 2500" in the Device Manager of the Surface Pro.
To the best of my knowledge, the Media Foundation API has no support for infrared cameras; Microsoft did not extend the API to such inputs, even though it appears to be technically possible through undocumented interfaces.
You can read infrared frames through a newer API offered for UWP development: Process media frames with MediaFrameReader; the keyword there is MediaFrameSourceKind.Infrared. This API is built on top of the Media Foundation and Sensor APIs and gets you infrared cameras even though the underlying Media Foundation alone has no equivalent public interface.
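To make this concrete, here is a minimal C++/WinRT sketch of that route (assuming a UWP app; the function name StartInfraredAsync is mine, and in real code you would keep the MediaCapture and MediaFrameReader alive as class members rather than locals): enumerate the frame source groups, pick a source whose kind is MediaFrameSourceKind::Infrared, and read frames with a MediaFrameReader.

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Graphics.Imaging.h>
#include <winrt/Windows.Media.Capture.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace Windows::Foundation;
using namespace Windows::Media::Capture;
using namespace Windows::Media::Capture::Frames;

IAsyncAction StartInfraredAsync()
{
    // Find a source group that exposes an infrared source.
    auto groups = co_await MediaFrameSourceGroup::FindAllAsync();
    for (auto const& group : groups)
    {
        for (auto const& info : group.SourceInfos())
        {
            if (info.SourceKind() != MediaFrameSourceKind::Infrared)
                continue;

            // NOTE: keep capture/reader alive beyond this coroutine in real code.
            MediaCapture capture;
            MediaCaptureInitializationSettings settings;
            settings.SourceGroup(group);
            settings.StreamingCaptureMode(StreamingCaptureMode::Video);
            settings.MemoryPreference(MediaCaptureMemoryPreference::Cpu); // get SoftwareBitmaps
            co_await capture.InitializeAsync(settings);

            auto source = capture.FrameSources().Lookup(info.Id());
            auto reader = co_await capture.CreateFrameReaderAsync(source);

            reader.FrameArrived([](MediaFrameReader const& sender,
                                   MediaFrameArrivedEventArgs const&)
            {
                // TryAcquireLatestFrame() returns the newest IR frame, or nullptr.
                if (auto frame = sender.TryAcquireLatestFrame())
                {
                    auto bitmap = frame.VideoMediaFrame().SoftwareBitmap();
                    // ... hand the SoftwareBitmap to your processing / OpenCV bridge.
                }
            });
            co_await reader.StartAsync();
            co_return;
        }
    }
}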
Given that this is a UWP API, you might have trouble fitting it all together with OpenCV if you need the latter. UWP/OpenCV bridging might be of help there: Create a helper Windows Runtime component for OpenCV interop.
Since OpenCV supposedly interfaces directly with the traditional Windows APIs, DirectShow and Media Foundation, it is highly unlikely to be capable of capturing an infrared stream out of the box, unless, of course, the driver itself presents it as normal video. The "proper" markup of the Surface Pro sensor as infrared thus hides it from the mentioned APIs and, consequently, from OpenCV.
Can you use a USB camera for the Spectator View rig and overwrite one of the scripts OpenCV uses to get the camera feed?
I think this is the first StackOverflow question about the Spectator View feature that the Microsoft HoloLens supports; I checked whether any other questions here talk about it, and it doesn't look like it.
Anyway, according to the documentation here, to enable Spectator View for a Unity-based UWP app that is deployed to more than one Microsoft HoloLens, I need to choose one out of four different ways to capture a live video feed from a camera:
OpenCV 3.2
DeckLink Capture Card
Elgato Capture Card
Canon SDK
In the Spectator View setup I have for a project that's under a non-disclosure agreement, I am using OpenCV 3.2, with a Lenovo ThinkPad laptop as the hub for Spectator View.
In detail, the laptop runs the Unity Editor holding the Spectator View Manager component I need in the Inspector in order to build, install, and launch the app that my two HoloLens headsets use to see a shared, spatially anchored hologram. The editor also has the Compositor interface I need to overlay what the physical camera sees with what a virtual camera in the Unity scene renders, producing a video feed that goes out to a projector or TV set. Lastly, I run an executable from Microsoft's Mixed Reality Toolkit called Sharing Service, which is basically a server program that exchanges the transforms of holograms on the fly, so they appear fixed in place in the real environment.
Now, the Lenovo ThinkPad cannot take in any capture cards, because there are no internal expansion ports. The laptop does not have an HDMI input port; only output. As such, when I start running the app on the Unity Editor, I do get video input and Unity view input in the Compositor interface, but the video feed is coming from the built-in camera the Lenovo ThinkPad provides. What I want to do is use a different camera instead, preferably a DSLR camera that can connect to my laptop using USB.
With OpenCV 3.2 as the main dependency among the libraries I need, can I modify one of the scripts so that it accepts a video stream from a USB camera?
#Dtb49 says in the StackOverflow chat above,
"I don't think you are limited to those four choices I think those are just the ones that they tested with. I do remember something about the USB port needed to be a 3.0 for it to work properly. I do remember coming across that problem when I was initially setting it up."
I don't know right now whether I need to change a script to make the Compositor interface take input from the external camera connected by USB, or just temporarily disable the webcam on my laptop so that whatever in the OpenCV assembly (or the motherboard) decides which camera to load picks the external one. But it looks like using a DSLR camera connected by USB for the Microsoft HoloLens Spectator View rig is possible.
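For the second option, here is a minimal OpenCV 3.x sketch (this is generic OpenCV code, not the actual Spectator View capture script, and the device index is an assumption you would need to adjust) that simply selects a different camera by index:

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    // Index 0 is usually the laptop's built-in webcam; a USB camera that enumerates
    // as a webcam is typically 1 (or higher). OpenCV 3.x also lets you force a
    // backend by adding it to the index, e.g. cv::CAP_DSHOW + 1 for DirectShow.
    cv::VideoCapture cap(1);
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // ... pass `frame` on to whatever does the compositing ...
    }
    return 0;
}

Also worth noting: a DSLR connected over USB typically does not show up as a webcam-style capture device unless the vendor provides such a driver, which is presumably why the Canon SDK is listed as a separate option.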
As a university intern, I can say that the documentation for Spectator View in its current state is quite confusing, as I am not familiar with UNET and some other Microsoft technologies.
I am importing source code for stereo vision. The author's code below works; it takes two camera sources. I currently have two different cameras and I receive images from each of them, so both work individually. The code crashes at capture2. The interesting part is that if I change the order of the webcams (unplugging them and inverting the order), the first camera becomes the second one. Why doesn't it work? I also tested with Windows XP SP3 and Windows 7 x64: the same problem.
//---------Starting WebCam----------
capture1= cvCaptureFromCAM(1);
assert(capture1!=NULL); cvWaitKey(100);
capture2= cvCaptureFromCAM(2);
assert(capture2!=NULL);
Also, if I use -1 for the parameter, it just gives me the first camera (all the time).
Is there any other method to capture two cameras using the cvCaptureFrom function?
Firstly, the cameras are generally numbered from 0 - is this just the problem?
Secondly, DirectShow with multiple USB webcams is notoriously bad on Windows. Sometimes it will work with two identical cameras, sometimes only if they are different.
You can also try a delay between initialising the cameras; sometimes one will lock the capture stream until it is sending data, preventing the other from being detected.
Often the drivers assume they are the only camera and make incorrect calls that lock up the entire capture graph. This isn't helped by it being extremely complicated to write correct drivers and DirectShow filters on Windows.
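Putting the first and third points together, a hedged sketch using the same legacy C API as the question (indices starting at 0 and a short pause between opening the two devices; error handling kept minimal, and the include path may vary with your OpenCV version):

#include <opencv/highgui.h>   // legacy OpenCV C interface (cvCaptureFromCAM etc.)

int main()
{
    CvCapture* capture1 = cvCaptureFromCAM(0);   // first camera is index 0, not 1
    if (!capture1) return 1;

    cvWaitKey(500);                              // give the first driver time to settle

    CvCapture* capture2 = cvCaptureFromCAM(1);   // second camera is index 1
    if (!capture2) return 1;

    IplImage* frame1 = cvQueryFrame(capture1);   // frames are owned by the capture
    IplImage* frame2 = cvQueryFrame(capture2);
    // ... process frame1 / frame2 ...

    cvReleaseCapture(&capture1);
    cvReleaseCapture(&capture2);
    return 0;
}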
Some motherboards cannot work with certain USB 2.0 cameras: one USB 2.0 camera can take 40-60% of a USB controller's bandwidth. The solution is to connect the second USB 2.0 camera through a separate PCI-to-USB controller.
Get 2 PS3 Eyes, around EUR 10 each, and the free codelaboratories.com SDK; this gets you support for up to 2 cameras using C, C#, Java, and AS3, incl. examples etc. You also get FIXED frame rates up to 75 fps @ 640*480. Their free driver-only version 5.1.1.0177 provides a decent DirectShow component, but for a single camera only.
Comment for the rest: multi-cam DirectShow drivers should be a default for any manufacturer; not providing this is a direct failure to implement THE VERY BASIC PURPOSE AND FEATURE OF USB as an interface. It is also VERY EASY to implement, compared to implementing the driver itself for a particular sensor/chipset.
Alternatives that are confirmed to work in identical pairs (via DirectShow):
Microsoft Lifecam HD Cinema (use general UVC driver if you can, less limited fps)
Logitech Webcam Pro 9000 (not to be confused with QuickCam Pro 9000, which DOES NOT work)
Creative VF0220
Creative VF0330
Canyon WCAMN-1N
If you're serious about your work, get a pair of machine vision cameras to get PERFORMANCE. Cheapest on the market, with German engineering quality: CCD, CMOS, mono, colour, GigE (Ethernet), USB, FireWire, and an excellent range of dedicated drivers:
http://www.theimagingsource.com
I was wondering if it would be possible to capture live video from my integrated webcam using LabVIEW 2011 (National Instruments). All I need to do for now is put the camera feed on the front panel. This is not a USB webcam; it is a Chicony USB 2.0 camera (it does not show up as USB on my PC). Can anyone help me?
LV2012? Is this beta?
The best way to do this is using the IMAQdx drivers + the Vision Development Module. After installing IMAQdx, USB cams usually already show up in Measurement and Automation Explorer and you can try out Snap/Grab... (Tip: do install whatever driver is included with the hardware/on a CD.)
Then, in LV, just drop the "IMAQ Acquisition Express" VI into your block diagram and you'll be guided through a very quick and easy setup.
I'm not much into Express VIs, but that one is good.
If you don't have Vision Dev Module, look into ADVision (http://vi-lib.com/). It does the same thing, just with OpenCV, but I don't think that every driver is supported.
Also, remember that only USB cameras with a DirectShow filter are supported by the Vision Acquisition Software, which includes the IMAQdx driver that Birgit P. mentioned.
For USB2 cameras you need the IMAQdx toolkit, which is part of the Vision Acquisition Software.
Also check NI MAX after installation to see whether LabVIEW can find your camera or not.
LabVIEW can find and support any USB2 camera if you install the camera driver correctly.
I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: this is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist, I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away through the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Use the IMAQdx driver (assuming you have it), and then insert the Vision Acquisition Express VI, and you can choose AVIs or even pics as a source.
Something like this: GigESim is camera emulation software. Unfortunately it is proprietary and too expensive (> $500) for my own needs, but perhaps others will find the link useful.
Anyone know of a viable Open Source alternative?
There's an IP Camera emulator project that emulates an IP camera with Python. I haven't used it myself, so I don't know if it can be used by IMAQ.
Let us know if it's good for you.
I know this question is really old, but hopefully this answer helps someone out.
IMAQdx also works with Windows DirectShow devices. While normally these are actual physical capture devices (think USB Webcams), there is no necessity that they have to be.
There are a few different pre-made options available on the web. I found using Open Broadcaster Software (OBS) Studio and this Virtual Cam plugin to be easy enough. Basically:
Download and install both.
Load your media sources in the sources list.
Enable the VirtualCam stream (Tools > VirtualCam). Press Start.