cvCaptureFromCAM() / cvQueryFrame(): disable automatic image correction? - opencv

I'm using the two OpenCV functions mentioned above to retrieve frames from my webcam. No additional properties are set, just running with default parameters.
While reading frames in a loop I can see that the image changes; brightness and contrast seem to be adjusted automatically. It definitely seems to be an operation of OpenCV, because the scene captured by the camera does not change and is lit constantly.
So how can I disable this automatic correction? I could not find a property that seems to do that job.

You should try to play around with these three parameters:
- CV_CAP_PROP_BRIGHTNESS: brightness of the image (only for cameras)
- CV_CAP_PROP_CONTRAST: contrast of the image (only for cameras)
- CV_CAP_PROP_SATURATION: saturation of the image (only for cameras)
Try setting them all to 50. If that doesn't help, try changing other camera capture parameters from the documentation.
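A minimal sketch of setting those properties through the legacy C API might look like the following; note that the accepted value range is backend-dependent (some drivers expect 0-100, others a normalized 0-1), so the 50 used here is only the suggestion above and may need scaling:

    #include <cstdio>
    #include <opencv/cv.h>       // legacy OpenCV C API
    #include <opencv/highgui.h>  // cvCaptureFromCAM, cvSetCaptureProperty, ...

    int main()
    {
        CvCapture* capture = cvCaptureFromCAM(0);   // default webcam
        if (!capture)
            return -1;

        // Value range depends on the capture backend/driver.
        cvSetCaptureProperty(capture, CV_CAP_PROP_BRIGHTNESS, 50);
        cvSetCaptureProperty(capture, CV_CAP_PROP_CONTRAST,   50);
        cvSetCaptureProperty(capture, CV_CAP_PROP_SATURATION, 50);

        // Read the values back to see whether the driver accepted them.
        printf("brightness=%f contrast=%f saturation=%f\n",
               cvGetCaptureProperty(capture, CV_CAP_PROP_BRIGHTNESS),
               cvGetCaptureProperty(capture, CV_CAP_PROP_CONTRAST),
               cvGetCaptureProperty(capture, CV_CAP_PROP_SATURATION));

        IplImage* frame = cvQueryFrame(capture);    // frame is owned by the capture, do not release it
        (void)frame;

        cvReleaseCapture(&capture);
        return 0;
    }

Reading the values back is useful because some drivers silently ignore properties they do not support.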

To answer this myself: OpenCV appears to be buggy or outdated here.
- It seems to be impossible to get images in the camera's native resolution; they're always 640x480. Forcing another value by setting the width and height properties does not change anything (see the sketch below).
- It seems to be impossible to disable the automatic image correction; the properties mentioned above do not seem to work.
- The brightness/contrast properties don't seem to work either, or at least I could not find any good values for them, or the automatic image correction always overrides them.
To sum it up: I would not recommend OpenCV for more advanced image capturing.
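For reference, this is roughly the resolution experiment described in the first point above, with a hypothetical 1280x720 request; whether the requested size sticks depends entirely on the camera driver and capture backend:

    #include <cstdio>
    #include <opencv/cv.h>
    #include <opencv/highgui.h>  // legacy OpenCV C API

    int main()
    {
        CvCapture* capture = cvCaptureFromCAM(0);
        if (!capture)
            return -1;

        // Request a resolution other than the 640x480 default.
        cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH,  1280);
        cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 720);

        IplImage* frame = cvQueryFrame(capture);
        if (frame)
        {
            // Compare what was requested with what the driver actually delivers.
            printf("requested 1280x720, got %dx%d\n", frame->width, frame->height);
        }

        cvReleaseCapture(&capture);
        return 0;
    }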

Related

Increasing the brightness level for .continuousAutoExposure Mode

I implemented a custom camera with the AVCaptureDevice.Preset.High preset and I'm using .continuousAutoExposure. Everything works as expected; however, the brightness of the picture is sometimes quite low.
I have researched the official documentation and found that I can set a custom ISO with setExposureModeCustomWithDuration. Unfortunately, doing it this way means losing the desired automatic exposure.
My question now is: is there a way to increase the overall brightness of the .continuousAutoExposure mode? I only need to increase the exposure by around 5%, but I also need to stick with the .continuousAutoExposure mode.
The trick is to work with the exposure target bias of the AVCaptureDevice instance: keep the device in .continuousAutoExposure, use KVO to observe changes in the value of captureDevice.exposureTargetOffset, and adjust the bias (setExposureTargetBias) until the offset reaches your required exposure level. For more details, check this answer.

iOS, Objective C auto image processing filters

I'm building a photo app and sometimes the lighting is off in certain areas and the picture isn't clear. I was wondering if there is a feature that can auto-adjust the brightness, contrast, exposure, and saturation of a picture, like in Photoshop.
I don't want to manually adjust images like in the sample code given by Apple:
https://developer.apple.com/library/ios/samplecode/GLImageProcessing/Introduction/Intro.html
I want something that will auto-adjust or correct the photo.
As an alternative, you could use AVFoundation to build your own camera implementation, set the image quality to high, and use the autofocus or tap-to-focus feature. Otherwise, I am almost certain you cannot set these properties; the UIImagePickerController included in the SDK is really expensive memory-wise and gives you an image instead of raw data (another benefit of using AVFoundation). This is a good tutorial in case you would like to check it out:
http://www.musicalgeometry.com/?p=1297
Apparently someone has created one on GitHub: https://github.com/proth/UIImage-PRAutoAdjust
Once imported, I used it as follows:
self.imageView.image = [self.imageView.image autoAdjustImage];

OpenCV Issue of Image Subtraction?

I am trying to subtract two images using the function cvAbsDiff(img1, img2, dest);
It works, but sometimes when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture; the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e. when the background is even, like a wall.
Please check out my image so that you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution/hint, please help me.
There's nothing wrong with your code. Background subtraction is not a preferred way for motion detection or silhouette detection because it's not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground to the back. You might try using
- optical flow for motion detection
- If your task is just detecting a silhouette or a hand, try training a HOG classifier over it
In case you do not want to try a new approach, you may try playing with the threshold value (in your case 30). When you subtract similarly coloured regions, their difference is less than 30, and when you later threshold at 30 it just blacks out. You may also try HSV or some other colourspace.
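For illustration, here is a rough sketch of that tweak using the legacy C API, with the threshold left as a parameter you can raise or lower (the function and variable names are made up for this example; a similar routine could be run on an HSV plane instead of grayscale):

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    // Subtract the current frame from a reference frame and binarize the result.
    // thresholdValue is the number to experiment with (30 in the question).
    IplImage* diffMask(IplImage* reference, IplImage* current, double thresholdValue)
    {
        CvSize size = cvGetSize(current);
        IplImage* grayRef = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* grayCur = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* diff    = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* mask    = cvCreateImage(size, IPL_DEPTH_8U, 1);

        // Work in grayscale here; converting to HSV and differencing the hue or
        // saturation plane is the alternative mentioned above.
        cvCvtColor(reference, grayRef, CV_BGR2GRAY);
        cvCvtColor(current,   grayCur, CV_BGR2GRAY);

        cvAbsDiff(grayRef, grayCur, diff);
        cvThreshold(diff, mask, thresholdValue, 255, CV_THRESH_BINARY);

        cvReleaseImage(&grayRef);
        cvReleaseImage(&grayCur);
        cvReleaseImage(&diff);
        return mask;   // caller releases with cvReleaseImage
    }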
Putting in the relevant code would help, as would knowing what you're actually trying to achieve.
Which two images are you subtracting? I've done subtracting subsequent images (so, images taken with a delay of a fraction of a second), and the background subtraction generally results in the edges of moving objects, for example the edges of a hand, and not the entire silhouette of a hand. I'm guessing you're taking the difference of the current frame and a static startup frame. It's possible that parts aren't different enough (skin+skin).
I've got some computer problems tonight; I'll test it out tomorrow (please put up at least the steps you actually carry through, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture-recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that comes from the subjects having similar color.
I reproduced your image by having my face in the static comparison image, then moving it. If I started with only the background, it was already much more robust and in any case didn't display any "overlaying".
The quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest would be comparing frame_n with frame_n-1. This will generally give you just the moving edges though, so if you want the entire silhouette you can either:
1) Use a different segmenting algorithm (which is what I recommend: background subtraction is fast, and you can use it to determine a much smaller ROI in which to search, then use a different algorithm for more robust segmentation).
2) Try to make a compromise between the static and dynamic comparison image, for example as an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement; worth a try :).
Also, try with CV_THRESH_OTSU instead of 30 for your threshold value, see if you like that better.
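As a sketch of option 2 combined with the Otsu suggestion (assuming the legacy C API; the 0.1 learning rate is an arbitrary value to tune), cvRunningAvg can keep an exponentially weighted average of recent frames as the comparison image:

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    int main()
    {
        CvCapture* capture = cvCaptureFromCAM(0);
        if (!capture)
            return -1;

        IplImage* frame = cvQueryFrame(capture);
        CvSize size = cvGetSize(frame);

        IplImage* gray       = cvCreateImage(size, IPL_DEPTH_8U,  1);
        IplImage* average    = cvCreateImage(size, IPL_DEPTH_32F, 1);  // running-average accumulator
        IplImage* background = cvCreateImage(size, IPL_DEPTH_8U,  1);
        IplImage* diff       = cvCreateImage(size, IPL_DEPTH_8U,  1);
        IplImage* mask       = cvCreateImage(size, IPL_DEPTH_8U,  1);

        // Initialize the average with the first frame.
        cvCvtColor(frame, gray, CV_BGR2GRAY);
        cvConvert(gray, average);

        cvNamedWindow("mask", CV_WINDOW_AUTOSIZE);
        while ((frame = cvQueryFrame(capture)) != NULL)
        {
            cvCvtColor(frame, gray, CV_BGR2GRAY);

            // Blend the current frame into the comparison image
            // (a 0.1 weight corresponds roughly to "the past ~10 frames").
            cvRunningAvg(gray, average, 0.1);
            cvConvert(average, background);

            cvAbsDiff(gray, background, diff);

            // Let Otsu pick the threshold instead of the fixed value 30.
            cvThreshold(diff, mask, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

            cvShowImage("mask", mask);
            if (cvWaitKey(10) == 27)   // Esc quits
                break;
        }

        cvReleaseCapture(&capture);
        return 0;
    }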
Also, I noticed the output often flares (regions which haven't changed switch from black to white). Checking with the live stream, I'm quite certain it's because of the webcam autofocusing/adjusting white balance etc. If you're getting that too, turning off the autofocus etc. should help (which, by the way, isn't done through OpenCV but depends on the camera. Possibly check this: How to programmatically disable the auto-focus of a webcam?)

OpenCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using OpenCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30Hz refresh of the projector. The result is the typical horizontal band familiar to those who have turned a video camera onto a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but was hoping this is a common enough problem that an elegant solution might exist. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
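To make the second idea concrete, a rough sketch of scanning a single pixel column for the largest vertical brightness jump is shown below (grayscale is used instead of full HSV for brevity; the column index and minimum delta are arbitrary assumptions):

    #include <stdlib.h>
    #include <opencv/cv.h>

    // Return the row where the largest vertical brightness jump occurs in column x,
    // or -1 if no jump exceeds minDelta. On a uniform fiducial strip, a large jump
    // suggests the projector's scan line crosses near that row.
    int findScanLineRow(IplImage* frameBgr, int x, int minDelta)
    {
        IplImage* gray = cvCreateImage(cvGetSize(frameBgr), IPL_DEPTH_8U, 1);
        cvCvtColor(frameBgr, gray, CV_BGR2GRAY);

        int bestRow = -1;
        int bestDelta = minDelta;
        for (int y = 1; y < gray->height; ++y)
        {
            int above = (unsigned char)gray->imageData[(y - 1) * gray->widthStep + x];
            int below = (unsigned char)gray->imageData[y * gray->widthStep + x];
            int delta = abs(below - above);
            if (delta > bestDelta)
            {
                bestDelta = delta;
                bestRow = y;
            }
        }

        cvReleaseImage(&gray);
        return bestRow;   // caller decides whether to discard or retake the frame
    }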
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or similar, depending on your driver implementation).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So as you suggested, hardware sync is really the only route, unless you have a special camera, like a point grey camera, which gives you explicit software control of the frame integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or intentionally "blending" two or more images into one?

Recognize moving objects and differentiate them from the background?

I am working on a project where I take a video with a camera and convert this video to frames (this part of the project is done).
What I am facing now is how to detect moving objects in these frames and differentiate them from the background so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; I recommend a smoothing kernel though) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be pretty static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll have a grayscale image containing the object that moved. The object has to be different from the background color or it will disappear.
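A minimal sketch of that consecutive-frame differencing, including the recommended smoothing, might look like this with the legacy C API (the 5x5 kernel and the threshold of 25 are guesses to tune):

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    int main()
    {
        CvCapture* capture = cvCaptureFromCAM(0);
        if (!capture)
            return -1;

        IplImage* frame = cvQueryFrame(capture);
        CvSize size = cvGetSize(frame);

        IplImage* current  = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* previous = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* motion   = cvCreateImage(size, IPL_DEPTH_8U, 1);

        cvCvtColor(frame, previous, CV_BGR2GRAY);
        cvSmooth(previous, previous, CV_GAUSSIAN, 5, 5);    // suppress sensor noise

        cvNamedWindow("motion", CV_WINDOW_AUTOSIZE);
        while ((frame = cvQueryFrame(capture)) != NULL)
        {
            cvCvtColor(frame, current, CV_BGR2GRAY);
            cvSmooth(current, current, CV_GAUSSIAN, 5, 5);

            cvAbsDiff(current, previous, motion);            // what changed between frames
            cvThreshold(motion, motion, 25, 255, CV_THRESH_BINARY);

            cvShowImage("motion", motion);
            cvCopy(current, previous);                       // current frame becomes the reference

            if (cvWaitKey(10) == 27)                         // Esc quits
                break;
        }

        cvReleaseCapture(&capture);
        return 0;
    }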
