OpenCV can't access a camera connected through a video capture device

I have an analog camera connected to an EasyCap video capture device. When I run basic code that opens webcam video using OpenCV, I can access my built-in webcam but not the analog camera.
How would you connect any other camera (FPV, IR, etc.) to the PC such that OpenCV can access it?
Thanks.

I struggled with the same problem and hope this helps!
The original thread + ANSWER
Also relevant: XKCD
One more observation: from your description it looks like you already have a webcam running on the laptop (the built-in webcam, maybe?). You might want to disable it in the device manager so as to guarantee that your analog camera's cam_index is zero for certain. Otherwise, if you leave the webcam enabled as a device, your analog cam will most likely be incremented to cam_index=1, which, amusingly enough, seems to be confirmed by it crashing on cam_index=1.
Arguably not a great method of finding your camera's index, but there you have it!

You can choose which camera to open by changing deviceID to the desired device:
CvCapture* capture = cvCaptureFromCAM(deviceID);
or with the new API:
cv::VideoCapture cap(deviceID);
Check out the documentation for more info.

Use the deviceID of the analog camera instead of the built-in one.

Related

What is a possible solution to set custom exposure on a duo camera device?

My code was setting AVCaptureExposureModeCustom for AVCaptureDevicePositionBack. However, while adding support for the iPhone 7 Plus, I am able to set it for AVCaptureDeviceTypeBuiltInTelephotoCamera but not for AVCaptureDeviceTypeBuiltInDuoCamera.
What could be a possible solution to let the user set exposure for the duo camera?
AVCaptureDeviceTypeBuiltInDuoCamera doesn't support RAW capture or manual controls. If you want manual control, you have to select the wide-angle or telephoto camera:
When you use the dual camera capture device, RAW capture and most manual controls are not available. To use these features, specifically select either the wide-angle or telephoto capture device.
Apple documentation link
The real reason you cannot use manual controls is that with the dual camera device the system automatically chooses which camera to use during capture, and can combine data from both cameras to improve the capture output.
Check iOSDeviceCompatibility for more info.
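For illustration, a minimal Swift sketch of that fallback, assuming an iOS 10.2+ deployment target (the function name, duration, and ISO values here are arbitrary examples; the ISO is clamped to the device's supported range):

import AVFoundation

func setCustomExposure(seconds: Double, iso: Float) {
    // Fall back from the duo camera to the telephoto device,
    // which does support manual exposure controls.
    guard let device = AVCaptureDevice.default(.builtInTelephotoCamera,
                                               for: .video,
                                               position: .back),
          device.isExposureModeSupported(.custom) else { return }
    do {
        try device.lockForConfiguration()
        // Clamp the requested ISO to what the active format allows.
        let isoValue = min(max(iso, device.activeFormat.minISO),
                           device.activeFormat.maxISO)
        let duration = CMTime(seconds: seconds, preferredTimescale: 1_000_000)
        device.setExposureModeCustom(duration: duration,
                                     iso: isoValue,
                                     completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}

The exposure duration should similarly be kept within the active format's minExposureDuration and maxExposureDuration.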

Streaming from IP camera

I am working on a startup project, and I want to do the following: 1) detect the camera over Wi-Fi by its camera ID; 2) after detecting the camera, fetch the stream from the camera to the phone; 3) play that stream in a player.
I have googled and found that RTSP can achieve this, but I am not sure how, and I don't have code to detect the camera by its ID or over Wi-Fi and then fetch the stream from it. Can anyone guide me with some sample source code?
If your camera supports ONVIF, you can use it to discover the camera on the network (ONVIF discovery is WS-Discovery over UDP multicast) and then query it for its RTSP stream URI.
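Note that once you have the RTSP URI, AVPlayer will not help, since it does not support RTSP. A hedged sketch of the playback step using the third-party MobileVLCKit library (the URL is a placeholder you would replace with the URI reported by your camera):

import UIKit
import MobileVLCKit

final class StreamViewController: UIViewController {
    private let player = VLCMediaPlayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Placeholder: the RTSP URI obtained from the camera, e.g. via ONVIF GetStreamUri.
        let url = URL(string: "rtsp://192.168.1.10:554/stream1")!
        player.media = VLCMedia(url: url)
        player.drawable = view  // render the stream into this view controller's view
        player.play()
    }
}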

Camera setup logic iOS

Although I've searched SO and read the documentation on AVCaptureConnection, AVCaptureSession, AVCaptureVideoPreviewLayer, AVCaptureDevice, and AVCaptureInput/Output multiple times, I'm still confused about all this AV stuff. When it comes to this, it's one big pile of abstract words to me that don't make much sense. I'm asking you to shed some light on the subject for me here.
So, can anyone explain coherently, in plain English, the logic of properly setting up and using the media devices? What is AVCaptureVideoPreviewLayer? What is AVCaptureConnection? Input/output?
I want to grasp the basic idea the people who made this stuff had while making it.
Thanks
I wish I had more time to write a more thorough reply. Here are some simplified basics:
In order to work with audio and video coming from the hardware, destined for the screen or for files, you need to set up an AVCaptureSession, which coordinates the sources and the destinations using AVCaptureConnections. You use the session instance to start and stop the process, along with setting some output properties like bitrate and quality. You use the AVCaptureConnection instance(s) to control the connection between an AVCaptureInputPort and an AVCaptureOutput (or AVCaptureVideoPreviewLayer), for example to monitor audio input levels or to set the orientation of the video.
AVCaptureInputPorts are the individual inputs from an AVCaptureDevice, which is where your video or audio comes from, such as the camera or the microphone. You will normally look through all available devices and choose those that have the properties you are looking for, such as whether they are audio, or whether they are the front-facing camera.
AVCaptureOutput is where the AV is sent: it might be a file, or a routine that allows you to process the data in real time, etc.
AVCaptureVideoPreviewLayer is a Core Animation layer optimized for very fast rendering of the output of the selected video input device (front or back camera). You typically use it to show your user what input you are working with, sort of like a camera viewfinder.
If you are going to use this stuff, then you must read Apple's AV Foundation Programming Guide.
The above-mentioned guide also contains an overview diagram of these pieces, plus a more detailed view, which may help you some more.
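To make that concrete, here is a minimal Swift sketch of the pipeline described above (modern API names; error handling trimmed, and your own view would host the preview layer):

import AVFoundation

let session = AVCaptureSession()

// Inputs: capture devices wrapped in device inputs.
guard let camera = AVCaptureDevice.default(for: .video),
      let mic = AVCaptureDevice.default(for: .audio),
      let cameraInput = try? AVCaptureDeviceInput(device: camera),
      let micInput = try? AVCaptureDeviceInput(device: mic) else {
    fatalError("No capture devices available")
}
if session.canAddInput(cameraInput) { session.addInput(cameraInput) }
if session.canAddInput(micInput) { session.addInput(micInput) }

// Output: here a movie file; a data output would let you process frames instead.
let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }

// Preview layer: the "viewfinder", attached to the same session.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill

// Connections are created implicitly when inputs and outputs are added;
// you inspect them to tweak things like video orientation.
if let connection = movieOutput.connection(with: .video),
   connection.isVideoOrientationSupported {
    connection.videoOrientation = .portrait
}

session.startRunning()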

How do I fire a camera connected over USB programmatically?

I want to make something like they have at US DMVs, where you sit down and it takes your picture, maybe like Photo Booth.
I want to connect a high-end camera via USB, fire the camera, and get the picture.
There's the Picture Transfer Protocol http://en.wikipedia.org/wiki/Picture_Transfer_Protocol, a nasty little thing. All the cameras I've held in my hands so far that claimed proper PTP support failed it somewhere. But in theory one can use PTP to remote-control a camera, i.e. trigger the shutter, retrieve the picture, and so on.
Rather than reimplementing the whole thing, I recommend you get some readily usable PTP library. There are open-source ones listed at http://ptp.sourceforge.net
The easiest method is probably to use OpenCV: http://opencv.willowgarage.com/wiki/
If you need a high-end camera: most digital SLRs have a tethered mode where you can control the camera, fire the shutter, and retrieve the image data. Each camera maker has a proprietary (but normally free) SDK.
For a webcam-type camera: these normally run in video mode, so you simply grab an image out of the video stream; as PaulR says, use OpenCV.
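If the camera in question turns out to be a plain UVC (USB video class) webcam rather than a tethered DSLR, a hedged macOS Swift sketch of the "fire and save" step might look like this (the class name, device preference, and output path are all assumptions for illustration):

import AVFoundation

final class PhotoBooth: NSObject, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    func start() throws {
        // Prefer an external (USB/UVC) camera, fall back to the built-in one.
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.externalUnknown, .builtInWideAngleCamera],
            mediaType: .video,
            position: .unspecified)
        guard let camera = discovery.devices.first else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
        session.startRunning()
    }

    // "Fire" the camera: request a single still frame.
    func fire() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        try? data.write(to: URL(fileURLWithPath: "/tmp/booth.jpg"))  // assumed output path
    }
}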

AV Foundation camera preview layer gets zoomed in, how to zoom out?

The application I am currently working on has, as its main functionality, continuous QR/barcode scanning using the ZXing library (http://code.google.com/p/zxing/). For continuous frame capture I initialize the AVCaptureSession, AVCaptureVideoDataOutput, and AVCaptureVideoPreviewLayer as described in the Apple Q&A http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html.
My problem is that when I run the camera preview, the image I can see through the video device is much larger (1.5x) than the image seen through the still camera of the iPhone. Our customer needs to hold the iPhone around 5 cm from the barcode when scanning, but at that distance the whole QR code is not visible and decoding fails.
Why does the video camera on the iPhone 4 enlarge the image (as seen through the AVCaptureVideoPreviewLayer)?
This is a function of the AVCaptureSession video preset, accessible via the .sessionPreset property. For example, after configuring your captureSession, but before starting it, you would add:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
See the documentation here:
iOS Reference Document
The default preset for video is 1280x720 (I think), which is a lower resolution than the maximum supported by the camera. By using the "Photo" preset, you get the raw camera data.
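In Swift, with a check that the device actually supports the preset, that might look like (assuming the same captureSession as above):

captureSession.beginConfiguration()
if captureSession.canSetSessionPreset(.photo) {
    // Full-sensor stills; the video presets crop the sensor and look "zoomed in".
    captureSession.sessionPreset = .photo
}
captureSession.commitConfiguration()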
You see the same behaviour with the built-in iPhone Camera app. Switch between still and video capture modes and you'll notice that the default zoom level changes: you see a wider view in still mode, whereas video mode zooms in a bit.
My guess is that continuous video capture needs to use a smaller area of the camera sensor to work optimally. If it used the whole sensor, perhaps the system couldn't sustain 30 fps. Using a smaller area of the sensor gives the effect of "zooming in" on the scene.
I am answering my own question again. This was not answered even on the Apple developer forum, so I filed a technical support request directly with Apple, and they replied that this is a known issue that will be fixed in a future release. So there is nothing we can do but wait and see.
