I'm using a custom Linux system built with Buildroot for an Allwinner A20 SoC. This SoC has 4 analog video inputs which I need to use. The problem is that there is no decent driver for these inputs, so I'm fixing the only one I was able to find on the internet. It is a V4L2 driver for this device.
This device can capture video from more than one of the video inputs at the same time and combine them into one single image, splitting the image into 2 or 4 parts and displaying the video from each camera in a different part.
However, the driver is very basic and is not prepared to configure this yet; it only captures from video input #1. What I want to do is modify this driver so it allows configuring how many inputs you want to enable (1, 2 or 4) and which ones (for example: enable inputs #2 and #4 and combine them into a video split in 2 parts).
The first thing I thought of was to do this using the VIDIOC_S_INPUT ioctl, because that is what it is supposed to do: select which input you want to use on a device with more than one input. This would work fine if I only had to choose one of the 4 inputs, but I don't know how to use it to enable 2 or 4 inputs at once, let alone which inputs must be enabled in that case, and in what order.
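For reference, this is roughly how VIDIOC_S_INPUT is normally used to select a single input (a minimal sketch; the device path and the minimal error handling are just for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* placeholder device node */
    if (fd < 0) { perror("open"); return 1; }

    /* List the inputs the driver exposes */
    struct v4l2_input in;
    memset(&in, 0, sizeof(in));
    for (in.index = 0; ioctl(fd, VIDIOC_ENUMINPUT, &in) == 0; in.index++)
        printf("input %u: %s\n", in.index, in.name);

    /* Select one of them (here: input #1) */
    int index = 1;
    if (ioctl(fd, VIDIOC_S_INPUT, &index) < 0)
        perror("VIDIOC_S_INPUT");

    close(fd);
    return 0;
}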
How can I achieve this in a V4L2-compliant way? I would like to use it with standard software like FFmpeg and GStreamer.
I guess that the V4L2-compliant way would be to create 4 devices, /dev/video0../dev/video3, each of them exposing one capture source, and then do the overlay in user space.
If this is not possible, and you indeed can only present a single combined video stream via a single device (/dev/video0) because the device does the stream merge in hardware, then I don't think that using VIDIOC_S_INPUT is out of the question. Just come up with a good numbering scheme (a sketch of one possible scheme follows)...
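One possible scheme (purely an illustration I'm assuming here, not something an existing driver does): besides the four physical inputs, enumerate extra V4L2 inputs that stand for the hardware-combined layouts, so plain VIDIOC_ENUMINPUT/VIDIOC_S_INPUT keep working from ffmpeg and gstreamer. On the driver side it could be little more than a table; the names and the mask/parts encoding below are invented:

/* Hypothetical driver-side input table: indices 0-3 are the physical
 * inputs, the remaining entries are hardware-combined layouts. */
struct a20_input_mode {
    const char *name;   /* reported via VIDIOC_ENUMINPUT */
    unsigned int mask;  /* bitmask of physical inputs to enable */
    unsigned int parts; /* 1, 2 or 4 image parts */
};

static const struct a20_input_mode a20_input_modes[] = {
    { "Camera 1",          0x1, 1 },
    { "Camera 2",          0x2, 1 },
    { "Camera 3",          0x4, 1 },
    { "Camera 4",          0x8, 1 },
    { "Split 2: cams 1+2", 0x3, 2 },
    { "Split 2: cams 2+4", 0xA, 2 },
    { "Split 4: all cams", 0xF, 4 },
};

/* VIDIOC_S_INPUT with index N would then program a20_input_modes[N].mask
 * and a20_input_modes[N].parts into the capture hardware. */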
However (to reiterate), if the stream merge is not done in hardware but in software, then you should never do that in kernel space but always in user space (so you should expose the 4 streams via 4 device files).
I'm in charge of technology at my local camera club, a not-for-profit charity in Malvern, UK. We have a database-centric competition management system which I home-brewed in Delphi 6, and now we wish to add a scoring system to it. This entails attaching 5 cheap, standard USB numeric keypads to a PC (using a USB hub) and being able to programmatically read the keystrokes from each keypad as they are entered by the 5 judges. Of course, they will hit their keys in a completely parallel and asynchronous way, so I need to identify which key has been struck by which judge, so as to assemble the scores (i.e. possibly multiple keystrokes each) they have entered individually.
From what I can gather, Windows grabs the attention of keyboard devices and looks after the character strings they produce, simply squirting the characters into the normal keyboard queue (and I have confirmed that by experiment!). This won't do for my needs, as I really must collect the 5 sets of (possibly multiple) key-presses and allocate the received characters to 5 separate variables for the scoring system to manipulate thereafter.
Can anyone (a) suggest a method for doing this in Delphi and (b) offer some guide to the code that might be needed? Whilst I am pretty Delphi-aware, I have no experience of accessing USB devices, or capturing their data.
Any help or guidance would be most gratefully received!
Windows provides a Raw Input API, which can be used for this purpose. In the reference at the link provided, one of the advantages is listed as:
An application can distinguish the source of the input even if it is from the same type of device. For example, two mouse devices.
While this is more work than regular Windows input messages, it is a lot easier than writing USB device drivers.
One example of its use (while not written in Delphi) demonstrates what it can do, and provides some information on using it:
Using Raw Input from C# to handle multiple keyboards.
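A minimal Win32 sketch of the same idea in plain C (the helper names here are mine, and in Delphi 6 you may have to declare these API functions yourself):

#include <windows.h>
#include <stdio.h>

/* Register for raw keyboard input; hwnd is the window that will
 * receive WM_INPUT messages. */
static BOOL register_keyboards(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;            /* generic desktop controls */
    rid.usUsage     = 0x06;            /* keyboard */
    rid.dwFlags     = RIDEV_INPUTSINK; /* deliver input even when unfocused */
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

/* Call this from the window procedure when it receives WM_INPUT. */
static void handle_wm_input(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL, &size,
                    sizeof(RAWINPUTHEADER));

    BYTE buf[256];
    if (size == 0 || size > sizeof(buf))
        return;
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buf, &size,
                        sizeof(RAWINPUTHEADER)) != size)
        return;

    const RAWINPUT *ri = (const RAWINPUT *)buf;
    if (ri->header.dwType == RIM_TYPEKEYBOARD &&
        ri->data.keyboard.Message == WM_KEYDOWN) {
        /* header.hDevice is unique per physical keypad, so it can be
         * mapped to judge 1..5 via a small lookup table. */
        printf("device %p pressed virtual key 0x%X\n",
               (void *)ri->header.hDevice, ri->data.keyboard.VKey);
    }
}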
I am quite new to LabVIEW. Does anyone know how to get images from 2 USB cameras simultaneously? There are some topics about it on the internet, but all of them are a little bit misleading and do not present the whole solution.
Use LabVIEW 2014 or higher. Also check whether your system's CPU and RAM can support two or more simultaneous cameras; you can check this by opening the two cameras at the same time in different software. If that works, your system can handle two cameras.
Use IMAQdx to open the cameras, but it is better to use a flat sequence structure for the configure and open part.
It is better to open the cameras one by one, but you can capture from both at the same time inside the while loop.
If your system cannot handle two cameras, tell me and I can suggest a method to solve it.
This isn't something supported using basic LabVIEW functions.
However, you can use hooks into the User32.dll to create windows and populate them with live images from USB webcams.
I've had the example posted here working on LabVIEW 2016 + 2018
The linked example displays 2 webcams and doesn't need IMAQ drivers to run.
I am using PortAudio and trying to get input from two separate USB webcam microphones.
I am connecting to two separate devices, yet the data I get seems to be mixed (speaking into one microphone shows up in both outputs).
My bad, my visualization code had a bug
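In case someone else hits the same symptom, here is a minimal sketch of opening two separate PortAudio input streams, one per microphone, using the blocking API (the device indices 1 and 2 are placeholders; enumerate with Pa_GetDeviceCount()/Pa_GetDeviceInfo() to find the two webcams):

#include <stdio.h>
#include <portaudio.h>

#define FRAMES 512

/* Open a mono, 16-bit, 44.1 kHz input stream on a specific device index. */
static PaStream *open_input(PaDeviceIndex device)
{
    PaStreamParameters p;
    p.device = device;
    p.channelCount = 1;
    p.sampleFormat = paInt16;
    p.suggestedLatency = Pa_GetDeviceInfo(device)->defaultLowInputLatency;
    p.hostApiSpecificStreamInfo = NULL;

    PaStream *s = NULL;
    if (Pa_OpenStream(&s, &p, NULL, 44100, FRAMES, paClipOff, NULL, NULL)
        != paNoError)
        return NULL;
    Pa_StartStream(s);
    return s;
}

int main(void)
{
    Pa_Initialize();

    PaStream *mic1 = open_input(1);   /* placeholder device index */
    PaStream *mic2 = open_input(2);   /* placeholder device index */
    if (!mic1 || !mic2) { Pa_Terminate(); return 1; }

    short buf1[FRAMES], buf2[FRAMES];
    for (int i = 0; i < 100; i++) {
        Pa_ReadStream(mic1, buf1, FRAMES);  /* samples from mic #1 only */
        Pa_ReadStream(mic2, buf2, FRAMES);  /* samples from mic #2 only */
    }

    Pa_CloseStream(mic1);
    Pa_CloseStream(mic2);
    Pa_Terminate();
    return 0;
}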
I am importing source code for stereo vision. The author's code below works for him; it takes two camera sources. I currently have two different cameras and I receive images from each of them on their own, but it crashes at capture2. The interesting part is that if I change the order of the webcams (unplugging them and inverting the order), the first camera becomes the second one. Why doesn't it work? I also tested on Windows XP SP3 and Windows 7 x64, with the same problem.
//---------Starting WebCam----------
capture1= cvCaptureFromCAM(1);
assert(capture1!=NULL); cvWaitKey(100);
capture2= cvCaptureFromCAM(2);
assert(capture2!=NULL);
Also, if I use -1 for the parameter, it just gives me the first camera (all the time).
Is there any method to capture two cameras using the cvCaptureFromCAM function?
Firstly the cameras are generally numbered from 0 - is this just the problem?
Secondly, DirectShow and multiple USB webcams is notoriously bad in Windows. Sometimes it will work with two identical cameras, sometimes only if they are different.
You can also try a delay between initialising the cameras; sometimes one will lock the capture stream until it is sending data, preventing the other from being detected.
Often the drivers assume they are the only camera and make incorrect calls that lock up the entire capture graph. This isn't helped by it being extremely complicated to write correct drivers and DirectShow filters in Windows.
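A quick sketch of both points (0-based indices plus a pause between the two cvCaptureFromCAM calls; the 500 ms delay is a guess):

#include <assert.h>
#include <opencv/highgui.h>

int main(void)
{
    cvNamedWindow("cam0", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("cam1", CV_WINDOW_AUTOSIZE);

    /* Cameras are indexed from 0, not 1 */
    CvCapture *capture1 = cvCaptureFromCAM(0);
    assert(capture1 != NULL);

    /* Give the first camera time to start streaming before probing
     * for the second one. */
    cvWaitKey(500);

    CvCapture *capture2 = cvCaptureFromCAM(1);
    assert(capture2 != NULL);

    for (;;) {
        IplImage *frame1 = cvQueryFrame(capture1);
        IplImage *frame2 = cvQueryFrame(capture2);
        if (!frame1 || !frame2)
            break;
        cvShowImage("cam0", frame1);
        cvShowImage("cam1", frame2);
        if (cvWaitKey(10) == 27)   /* Esc quits */
            break;
    }

    cvReleaseCapture(&capture1);
    cvReleaseCapture(&capture2);
    return 0;
}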
Some motherboards cannot work with some USB 2.0 cameras: one USB 2.0 camera can take 40-60% of a USB controller's bandwidth. The solution is to connect the second USB 2.0 camera through a separate PCI-to-USB controller card.
Get 2 PS3 Eyes, around EUR 10 each, and the free codelaboratories.com SDK; this gets you support for up to 2 cameras using C, C#, Java, and AS3, including examples etc. You also get fixed frame rates of up to 75 fps at 640x480. Their free driver-only version 5.1.1.0177 provides a decent DirectShow component, but for a single camera only.
Comment for the rest: multi-camera DirectShow drivers should be a default for any manufacturer; not providing them is a direct failure to implement the very basic purpose and feature of USB as an interface. It is also very easy to implement, compared to implementing the driver itself for a particular sensor/chipset.
Alternatives that are confirmed to work in identical pairs (via DirectShow):
Microsoft Lifecam HD Cinema (use general UVC driver if you can, less limited fps)
Logitech Webcam Pro 9000 (not to be confused with QuickCam Pro 9000, which DOES NOT work)
Creative VF0220
Creative VF0330
Canyon WCAMN-1N
If you're serious about your work, get a pair of machine vision cameras for performance. Cheapest on the market, with German engineering quality: CCD, CMOS, mono, colour, GigE (Ethernet), USB, FireWire, and an excellent range of dedicated drivers:
http://www.theimagingsource.com
I have a DirectShow application written in Delphi 6 using the DSPACK component library. I want to be able to mix together audio coming from the output pins of multiple Capture Filters that are set to the exact same media format. Is there an open source or "SDK sample" filter that does this?
I know that intelligent mixing is a big deal and that I'd most likely have to buy a commercial library to do that. But all I need is a DirectShow filter that can accept wave audio input from multiple output pins and does a straight addition of the samples received. I know there are Tee Filters for splitting a single stream into multiple streams (one-to-many), but I need something that does the opposite (many-to-one), preferably with format checking on each input connection attempt, so that any attempt to attach an output pin with a different media format than the ones already added is rejected with an error. Is there anything out there?
I'm not sure about anything available out of the box; however, it would definitely have to be a third-party component.
The complexity of creating this custom filter is not very high (it is not rocket science to build such a component yourself for a specific need). You basically need to have all the input audio converted to the same PCM format, match the timestamps, add the samples together, and then deliver the result via the output pin.
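Not a full DirectShow filter (all the base-class and pin plumbing is left out), but here is a sketch of the "add the samples" step for 16-bit PCM with simple clipping, which is what the filter would run on each timestamp-aligned set of media samples:

#include <stddef.h>
#include <stdint.h>

/* Mix n_inputs equal-length buffers of 16-bit PCM by straight addition,
 * clamping the sum so it does not wrap around. */
static void mix_pcm16(const int16_t **inputs, size_t n_inputs,
                      int16_t *output, size_t n_samples)
{
    for (size_t s = 0; s < n_samples; s++) {
        int32_t sum = 0;
        for (size_t i = 0; i < n_inputs; i++)
            sum += inputs[i][s];

        if (sum > INT16_MAX) sum = INT16_MAX;   /* clip instead of wrapping */
        if (sum < INT16_MIN) sum = INT16_MIN;
        output[s] = (int16_t)sum;
    }
}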