How do I fire a camera connected on USB programmatically? - image-processing

I want to make something like they have at US DMVs, where you sit down and it takes your picture, maybe like a photo booth.
I want to connect a high-end camera via USB, fire the camera, and get the picture.

There's the Picture Transfer Protocol http://en.wikipedia.org/wiki/Picture_Transfer_Protocol - a nasty little thing. Every camera I've had in my hands so far that claimed proper PTP support failed it somewhere. But in theory you can use PTP to remote-control a camera, i.e. trigger the shutter, retrieve the picture, and so on.
Rather than reimplementing the whole thing, I recommend you get a readily usable PTP library. There are some open-source ones listed at http://ptp.sourceforge.net
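For illustration, here is a rough sketch of a tethered capture using libgphoto2, one widely used open-source library that speaks PTP to many cameras (the choice of library, the output filename, and the minimal error handling are my assumptions, not something from the question):

#include <gphoto2/gphoto2.h>
#include <stdio.h>

// Build with: gcc capture.c -o capture `pkg-config --cflags --libs libgphoto2`
int main(void) {
    GPContext *context = gp_context_new();
    Camera *camera;
    gp_camera_new(&camera);
    if (gp_camera_init(camera, context) < GP_OK) {   /* autodetects the first USB camera */
        fprintf(stderr, "No camera detected\n");
        return 1;
    }
    CameraFilePath path;
    gp_camera_capture(camera, GP_CAPTURE_IMAGE, &path, context);   /* trigger the shutter */
    CameraFile *file;
    gp_file_new(&file);
    gp_camera_file_get(camera, path.folder, path.name,
                       GP_FILE_TYPE_NORMAL, file, context);         /* download the shot */
    gp_file_save(file, "capture.jpg");                              /* write it to disk */
    gp_file_unref(file);
    gp_camera_exit(camera, context);
    return 0;
}

Whether this works in practice depends entirely on how well your particular camera implements PTP, as noted above.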

The easiest method is probably to use OpenCV: http://opencv.willowgarage.com/wiki/

If you need a high-end camera - most digital SLRs have a tethered mode where you can control the camera, fire the shutter, and retrieve the image data. Each camera maker has a proprietary (but normally free) SDK.
For a webcam-type camera - these normally run in video mode, so you simply grab an image out of the video stream - as PaulR says, use OpenCV.
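A minimal sketch of that webcam route, assuming OpenCV is installed and that the camera shows up as device 0 (both assumptions on my part):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // open the first camera OpenCV can find
    if (!cap.isOpened()) return 1;           // bail out if no camera is available
    cv::Mat frame;
    cap >> frame;                            // grab a single frame from the video stream
    if (!frame.empty())
        cv::imwrite("snapshot.jpg", frame);  // save the captured picture to disk
    return 0;
}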

Related

Removing low frequency (hiss) noise from video in iOS

I am recording videos and playing them back using AVFoundation. Everything is perfect except for the hissing that is present throughout the whole video. You can hear this hissing in every video captured from any iPad; even videos captured with Apple's built-in camera app have it.
To hear it clearly, record a video in a place that is as quiet as possible without speaking. It is very easy to detect through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. Low-frequency noise can be removed by implementing a low-pass filter and noise gates. There are applications and software like Adobe Audition which can perform this operation. This video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing which can achieve this directly. So I want to know whether there exists any library, API, or open-source project which can perform this operation. If not, how can I start going in the right direction? It does look like a complex task.
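I can't point to an iOS-specific library for this, but to illustrate the low-pass filtering idea itself, here is a minimal one-pole low-pass filter over raw float PCM samples. The cutoff and sample rate are placeholder values, and extracting the samples from the recording and writing them back is a separate problem not covered here:

#include <vector>

// Minimal one-pole low-pass filter: attenuates content above the cutoff frequency.
void lowPass(std::vector<float>& samples, float sampleRate, float cutoffHz) {
    const float kPi = 3.14159265f;
    const float dt = 1.0f / sampleRate;
    const float rc = 1.0f / (2.0f * kPi * cutoffHz);
    const float alpha = dt / (rc + dt);       // smoothing factor in (0, 1)
    float prev = 0.0f;
    for (float& s : samples) {
        prev = prev + alpha * (s - prev);     // y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        s = prev;
    }
}

// Example: filter a 44.1 kHz buffer, keeping roughly everything below 8 kHz.
// lowPass(buffer, 44100.0f, 8000.0f);

A noise gate, as mentioned above, would additionally mute samples whose level stays below some threshold; dedicated tools like Adobe Audition combine both.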

How to get shutter glasses to work with WebGL on Linux

Is there any generic way to enable frame-sequential stereo with WebGL content?
(I'm not talking about hard-coding this into the presentation/game.)
I can use shutter glasses with anything that can display two images in sync with the monitor's refresh rate (Bino, the Blender Game Engine). These applications use standard OpenGL quad buffering.
How would one go about getting stereo to work with stuff like let's say the fish tank demo?
I understand that on Windows one can emulate WebGL in DirectX and somehow force stereoscopy that way. I'm a casual Linux user, so I have no access to this. There shouldn't be a need to hack things like that to get stereo.
Would it be even possible to write let's say an add-on for Firefox that hijacks the camera in any WebGL scene to enable this functionality?
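(For reference only, and not an answer to the WebGL question: the quad-buffered stereo mentioned above looks roughly like this in desktop OpenGL with GLUT, assuming the driver exposes a stereo-capable visual:)

#include <GL/glut.h>

// Render one frame into the left and right back buffers (quad-buffered stereo).
void display() {
    glDrawBuffer(GL_BACK_LEFT);                 // target the left-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... set up the left-eye camera and draw the scene ...

    glDrawBuffer(GL_BACK_RIGHT);                // target the right-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... set up the right-eye camera and draw the scene ...

    glutSwapBuffers();                          // both buffers flip in sync with the refresh
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);  // request a quad-buffered visual
    glutCreateWindow("stereo");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

The question is essentially asking whether anything equivalent can be forced onto an arbitrary WebGL page from the outside.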

OpenCV can't access camera connected through video capture device

I have an analog camera connected to an EasyCap video capture device. When I run basic code that opens webcam video using OpenCV, I can access my built-in webcam but not the analog camera.
How would you connect any other camera (FPV, IR, etc.) to the PC so that OpenCV can access it?
Thanks.
I struggled with the same problem and hope this helps!
the original thread + ANSWER
also relevant XKCD
One more observation: from your description it sounds like you already have a webcam running on the laptop (the built-in webcam, maybe?). You might want to disable it in the system's device manager to guarantee that your analog camera's cam_index is zero. Otherwise, if you leave the webcam enabled as a device, your analog cam will most likely be bumped to cam_index=1, which, amusingly enough, seems to be confirmed by it crashing on cam_index=1.
Arguably not a great method to find your camera's index, but there you have it!
You can choose which camera to open by changing the following deviceID to the desired device:
CvCapture* capture = cvCaptureFromCAM(deviceID);
or with the new API:
VideoCapture cap(deviceID);
Check out the documentation for more info.
Use the deviceID of the analog camera instead of the in-built one.
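If you are not sure which index the analog camera ended up with, a quick (admittedly brute-force) sketch is to probe the first few device IDs and see which ones open:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    for (int id = 0; id < 5; ++id) {          // try the first few device IDs
        cv::VideoCapture cap(id);
        if (cap.isOpened())
            std::cout << "device " << id << " opened" << std::endl;
        // cap is released automatically when it goes out of scope
    }
    return 0;
}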

Camera setup logic iOS

Although I've searched SO and read the documentation multiple times on AVCaptureConnection, AVCaptureSession, AVCaptureVideoPreviewLayer, AVCaptureDevice, AVCaptureInput/Output … I'm still confused about all this AV stuff. When it comes to this, it's one big pile of abstract words to me that don't make much sense. I'm asking someone to shed some light on the subject for me here.
So, can anyone explain coherently in plain English the logic of proper setup and use of the media devices? What is AVCaptureVideoPreviewLayer? What is AVCaptureConnection? Input/Output?
I want to catch the basic idea the people who made this stuff had while making it.
Thanks
I wish I had more time to write a more thorough reply. Here are some simplified basics:
In order to work with audio and video coming from the hardware, destined for the screen or for files, you need to set up an AVCaptureSession that helps coordinate the sources and the destinations, using AVCaptureConnections. You use the session instance to start and stop the process, along with setting some output properties like bitrate and quality. You use the AVCaptureConnection instance(s) to control the connection between an AVCaptureInputPort and an AVCaptureOutputPort (or AVCaptureVideoPreviewLayer), such as monitoring input levels of sound or setting the orientation of the video.
AVCaptureInputPorts are the different inputs from an AVCaptureDevice - which is where your video or audio comes from, such as the camera or the microphone. You will normally look through all available devices and choose those that have the properties you are looking for, such as whether they are audio, or whether they are the front-facing camera.
AVCaptureOutput is where the AV is sent - it might be a file or a routine that allows you to process the data in real-time, etc.
AVCaptureVideoPreviewLayer is an OpenGL layer that is optimized for very fast rendering of the output of the selected video input device (front or back camera). You typically use this to show your user what input you are working with - sort of like a camera viewfinder.
If you are going to use this stuff, then you must read Apple's AV Foundation Programming Guide
(The above-mentioned guide includes an overview diagram of this pipeline, plus a more detailed view, which may help further.)

Can a video projector take input?

Can a video projector take input? I want to create an augmented reality application where the user will click on buttons projected onto the screen.
Is this possible with only the projector, or is a camera also needed?
I know that without a camera it's not possible, but I don't know much about projectors; that's why I am asking this question.
Projectors can only project... they work with a strong light-bulb that projects the image onto the screen/wall...
If you want some sort of input, you need a camera or another sensor. This could even be a Nintendo Wii Remote:
http://johnnylee.net/projects/wii/
(Have a look at the second project there)
But your projector itself is not capable of getting input... the most it can receive is infrared from remote controls, and that depends on the projector.
You need something like a camera (this could also be a Kinect or something similar) or something like a "touchable wallpaper" (I don't know if such a thing is on the market).
Ciao!
Stefan
http://smarttech.com/us/Solutions/Education+Solutions/Products+for+education/Interactive+whiteboards+and+displays/SMART+Board+interactive+whiteboards
This is pretty well solved.
