I am using PortAudio and trying to get input from two separate USB webcam microphones.
I am connecting to two separate devices, yet the data I get seems to be mixed (speaking into one microphone shows up in both outputs).
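(For context, opening two independent PortAudio input streams looks roughly like the sketch below; the device indices are placeholders for whatever Pa_GetDeviceInfo reports for the two mics.)

#include <portaudio.h>
#include <stdio.h>

// Open one mono input stream on the given device index.
static PaStream *open_input(PaDeviceIndex dev)
{
    PaStreamParameters in = {};
    in.device = dev;
    in.channelCount = 1;                      // one mono mic channel
    in.sampleFormat = paFloat32;
    in.suggestedLatency = Pa_GetDeviceInfo(dev)->defaultLowInputLatency;

    PaStream *stream = NULL;
    // NULL callback selects the blocking API (read with Pa_ReadStream).
    if (Pa_OpenStream(&stream, &in, NULL, 44100, 256,
                      paClipOff, NULL, NULL) != paNoError)
        return NULL;
    Pa_StartStream(stream);
    return stream;
}

int main()
{
    Pa_Initialize();
    PaStream *mic1 = open_input(2);  // placeholder index of first mic
    PaStream *mic2 = open_input(3);  // placeholder index of second mic
    if (!mic1 || !mic2) return 1;

    // Each stream should deliver only its own device's samples.
    float buf1[256], buf2[256];
    Pa_ReadStream(mic1, buf1, 256);
    Pa_ReadStream(mic2, buf2, 256);

    Pa_Terminate();
    return 0;
}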
My bad, my visualization code had a bug
My setup consists of 3 USB 2.0 cameras connected to a USB 3.0 hub, which is then connected to the computer. The cameras are independent of each other and the hardware is not synchronized. The goal is to capture an image from all of these cameras when an event occurs, in this case just pressing a button.
I came across the OpenCV documentation page for "VideoCapture::grab" & "VideoCapture::retrieve", but I was unable to find a simple example that I can understand and is easy to use. Any help is greatly appreciated. I also see that many people are using multithreading, multiprocessing, etc., and almost all of them are using "VideoCapture::read" rather than grab & retrieve.
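For what it's worth: read() is just grab() followed by retrieve(), so looping with read() decodes each camera's frame before the next camera is even triggered, which increases the time skew between captures. The usual pattern is to grab() all cameras back-to-back first (grab only latches a frame in the driver and is cheap) and then retrieve() each image. A minimal sketch, assuming the three cameras enumerate as indices 0-2 and a key press stands in for the button:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    std::vector<cv::VideoCapture> cams;
    for (int i = 0; i < 3; ++i)          // assumed camera indices 0..2
        cams.emplace_back(i);

    cv::Mat preview;
    cams[0].read(preview);               // something to show while waiting
    cv::imshow("press any key to capture", preview);
    cv::waitKey(0);                      // the "event": a key press

    for (auto &c : cams)
        c.grab();                        // latch all frames as close
                                         // together in time as possible
    std::vector<cv::Mat> frames(cams.size());
    for (size_t i = 0; i < cams.size(); ++i)
        cams[i].retrieve(frames[i]);     // now do the expensive decode

    for (size_t i = 0; i < frames.size(); ++i)
        cv::imwrite("cam" + std::to_string(i) + ".png", frames[i]);
    return 0;
}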
I'm using a custom Linux system built with Buildroot for an Allwinner A20 SoC. This SoC has 4 analog video inputs which I need to use. The problem is that there is no decent driver for these video inputs, so I'm fixing the only one I was able to find on the internet. It is a V4L2 driver for this device.
This device can capture video from more than one of the video inputs at the same time and combine them into one single image, splitting the image into 2 or 4 parts and displaying the video from each camera in a different part.
However, the driver is very basic and is not prepared to configure this yet. It only captures from video input #1. What I want to do is modify this driver so that it allows configuring how many inputs to enable (1, 2 or 4) and which ones (for example: enable inputs #2 and #4 and combine them into a video split into 2 parts).
The first thing I thought of was to do this using the VIDIOC_S_INPUT ioctl, because that is what it is supposed to do: select which input you want to use on a device with more than one input. However, this would work great if I just had to choose one of the 4 inputs; I don't know how to use it to enable 2 or 4 inputs, let alone which inputs must be enabled in that case, and in what order.
How can I achieve this in a V4L2-compliant way? I would like to use it with standard software like FFmpeg and GStreamer.
I guess that the V4L2-compliant way would be to create 4 devices, /dev/video0../dev/video3, each of them exposing one capture source, and then do the compositing in user space.
If this is not possible, and you indeed can only present a single combined video stream via a single device (/dev/video0) because the device does the stream merge in hardware, then I don't think that using VIDIOC_S_INPUT is off the table. Just come up with a good numbering scheme (see the sketch after this answer)...
However (to reiterate), if the stream merge is done not in hardware but in software, then you should never do it in kernel space but always in user space (so you should expose the 4 streams via 4 device files).
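If the single-device route is the one you take, here is a sketch of how the numbering scheme could look from user space; the index-to-channel-combination mapping is made up here and is simply whatever the driver chooses to advertise via VIDIOC_ENUMINPUT:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <stdio.h>

int main()
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    // List every "input" the driver advertises; with the scheme above,
    // each entry is one channel combination (e.g. "ch1", "ch2+ch4", ...).
    for (unsigned i = 0; ; ++i) {
        struct v4l2_input inp;
        memset(&inp, 0, sizeof inp);
        inp.index = i;
        if (ioctl(fd, VIDIOC_ENUMINPUT, &inp) < 0)
            break;                          // no more inputs
        printf("input %u: %s\n", inp.index, (const char *)inp.name);
    }

    // Select one combination; standard tools can do the same thing
    // (e.g. `v4l2-ctl --set-input=4`) before ffmpeg/gstreamer capture.
    int index = 4;  // hypothetical "inputs #2 and #4, split in 2" entry
    if (ioctl(fd, VIDIOC_S_INPUT, &index) < 0)
        perror("VIDIOC_S_INPUT");

    close(fd);
    return 0;
}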
I am quite new to LabVIEW. Does anyone know how to get images from 2 USB cameras simultaneously? There are some topics about it on the internet, but all of them are a little misleading and do not present the whole solution.
Use LabVIEW 2014 or higher. Also check whether your system's CPU and RAM can support two or more simultaneous cameras; you can check this by opening the two cameras at the same time in different software. If both run, your system can handle two cameras.
Use IMAQdx to open the cameras, but it is better to use a flat sequence for the configure-and-open part.
It is better to open the cameras one by one, but you can capture from both at the same time inside a while loop.
If your system cannot handle two cameras, tell me and I can suggest a method to solve it.
This isn't something supported using basic LabVIEW functions.
However, you can use hooks into the User32.dll to create windows and populate them with live images from USB webcams.
I've had the example posted here working on LabVIEW 2016 and 2018.
The linked example displays 2 webcams and doesn't need IMAQ drivers to run.
I would like to use the images of one webcam in two different systems that require exclusive access to the video device simultaneously. Therefore I created two virtual cameras with v4l2loopback, one for each of the systems, and now I am trying to stream the data from the actual webcam to both virtual cameras. I tried to use GStreamer, but it only allows me to stream the data to one single virtual camera. If I try to stream to the other one also, I end up with my original problem of the webcam being already busy. I can't figure out a way to solve this problem, help would be greatly appreciated!
Just a thought, I haven't actually tried it: suppose you have a webcam attached and it's /dev/video0. Now create two virtual devices using v4l2loopback, say /dev/video1 & /dev/video2, and create a GStreamer pipeline with a tee element which outputs to two v4l2sinks, /dev/video1 and /dev/video2.
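A minimal sketch of that pipeline via gst_parse_launch (untested; it assumes v4l2loopback has already created /dev/video1 and /dev/video2, e.g. with modprobe v4l2loopback devices=2):

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // One capture from the real webcam, split by `tee` into two
    // branches, each feeding one loopback device. The queues decouple
    // the branches so one slow consumer cannot stall the other.
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! tee name=t "
        "t. ! queue ! videoconvert ! v4l2sink device=/dev/video1 "
        "t. ! queue ! videoconvert ! v4l2sink device=/dev/video2",
        &err);
    if (!pipeline) {
        g_printerr("failed to build pipeline: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error or end-of-stream is posted on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    gst_message_unref(gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS)));

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}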
Hope that helps!
I am importing source code for stereo vision. The following code from the author takes two camera sources. I currently have two different cameras and I receive images; both work individually, but it crashes at capture2. The interesting part is that if I change the order of the webcams (unplugging them and inverting the order), the first camera becomes the second one. Why doesn't it work? I also tested with Windows XP SP3 and Windows 7 x64; same problem.
//---------Starting WebCam----------
capture1 = cvCaptureFromCAM(1);
assert(capture1 != NULL);
cvWaitKey(100);
capture2 = cvCaptureFromCAM(2);
assert(capture2 != NULL);
Also, if I use -1 for the parameters, it just gives me the first camera (all the time).
Is there any method to capture two cameras using cvCaptureFromCAM?
Firstly, the cameras are generally numbered from 0; is this just the problem?
Secondly, DirectShow and multiple USB webcams is notoriously bad on Windows. Sometimes it will work with two identical cameras, sometimes only if they are different.
You can also try a delay between initialising the cameras; sometimes one will lock the capture stream until it is sending data, preventing the other from being detected (see the sketch at the end of this answer).
Often the drivers assume they are the only camera and make incorrect calls that lock up the entire capture graph. This isn't helped by it being extremely complicated to write correct drivers + DirectShow filters in Windows.
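Putting the first and third points together, a sketch with 0-based indices and a longer settle delay between the two opens (the one-second value is a guess):

#include <assert.h>
#include <opencv/highgui.h>   // legacy OpenCV C API, as in the question

int main()
{
    //---------Starting WebCam (0-based, with a settle delay)----------
    CvCapture *capture1 = cvCaptureFromCAM(0);   // first camera is 0
    assert(capture1 != NULL);
    cvWaitKey(1000);   // let camera 1 start streaming before camera 2
                       // builds its DirectShow capture graph
    CvCapture *capture2 = cvCaptureFromCAM(1);   // second camera is 1
    assert(capture2 != NULL);

    /* ... grab frames from both, then release ... */
    cvReleaseCapture(&capture1);
    cvReleaseCapture(&capture2);
    return 0;
}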
Some motherboards cannot work with some USB 2.0 cameras: one USB 2.0 camera can take 40-60% of a USB controller's bandwidth. The solution is to connect the second USB 2.0 camera through a PCI-to-USB controller card.
Get 2 PS3 Eyes, around EUR 10 each, and the free codelaboratories.com SDK; this gets you support for up to 2 cameras using C, C#, Java, and AS3, incl. examples etc. You also get FIXED frame rates up to 75 fps @ 640*480. Their free driver-only version 5.1.1.0177 provides a decent DirectShow component, but for a single camera only.
Comment for the rest: multi-cam DirectShow drivers should be a default for any manufacturer; not providing this is a direct failure to implement THE VERY BASIC PURPOSE AND FEATURE OF USB as an interface. It is also VERY EASY to implement, compared to implementing the driver itself for a particular sensor/chipset.
Alternatives that are confirmed to work in identical pairs (via DirectShow):
Microsoft Lifecam HD Cinema (use general UVC driver if you can, less limited fps)
Logitech Webcam Pro 9000 (not to be confused with QuickCam Pro 9000, which DOES NOT work)
Creative VF0220
Creative VF0330
Canyon WCAMN-1N
If you're serious about your work, get a pair of machine vision cameras to get PERFORMANCE. Cheapest on the market, with German engineering quality: CCD, CMOS, mono, colour, GigE (Ethernet), USB, FireWire, and an excellent range of dedicated drivers:
http://www.theimagingsource.com