Any way to tell which direction focus is off in image - opencv

I'm trying to make an autofocuser for a microscope USB cam.
I'm using the standard
return cv2.Laplacian(image, cv2.CV_64F).var()
to calculate the overall blurriness of a 64x64 pixel "focus cursor", which works fine.
What I'd like to be able to determine is which direction I need to move the focus knob to achieve ideal focus. I initially tried simply sampling a small movement in each direction and comparing the scores, but I find that for badly out-of-focus images the score is so low that the Laplacian measure is driven by image noise rather than true blur.
Is there a way to detect the direction the focus needs to go in based on image data?
Also, if anyone has a link to the vocabulary used when speaking of focusing in vs. out, it would be appreciated, as I know the way I'm describing it is not the correct terminology.
Thanks!
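
For reference, this is roughly the scoring loop I have in mind. A minimal sketch: the pre-blur step is just my guess at making the score less dominated by sensor noise, and grab_patch()/move_focus() are hypothetical stand-ins for the actual camera and motor interface.

import cv2

def focus_score(patch):
    # Suppress sensor noise before measuring sharpness, otherwise badly
    # defocused frames score on noise rather than on real edges
    # (assumption: a small Gaussian blur is enough; kernel size may need tuning).
    smoothed = cv2.GaussianBlur(patch, (3, 3), 0)
    return cv2.Laplacian(smoothed, cv2.CV_64F).var()

def best_direction(grab_patch, move_focus, step):
    # grab_patch() returns the current 64x64 grayscale "focus cursor",
    # move_focus(delta) nudges the focus motor by delta steps.
    base = focus_score(grab_patch())
    move_focus(+step)
    fwd = focus_score(grab_patch())
    move_focus(-2 * step)
    back = focus_score(grab_patch())
    move_focus(+step)  # return to the starting position
    if fwd >= base and fwd >= back:
        return +step   # turning "forward" improves focus
    if back >= base and back >= fwd:
        return -step   # turning "backward" improves focus
    return 0           # current position already scores best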

Related

Determine movement/motion (in pixels) between two frames

First of all I'm a total newbie in image processing, so please don't be too harsh on me.
That being said, I'm developing an application to analyse changes in blood flow in extremities using thermal images obtained by a camera. The user is able to define a region of interest by placing a shape (circle, rectangle, etc.) on the current image. The user should then be able to see how the average temperature changes from frame to frame inside the specified ROI.
The problem is that some of the images are not steady, due to (small) movement by the test subject. My question is how can I determine the movement between the frames, so that I can relocate the ROI accordingly?
I'm using the Emgu OpenCV .Net wrapper for image processing.
What I've tried so far is calculating the center of gravity using GetMoments() on the biggest contour found, and calculating the direction vector between this and the previous center of gravity. The ROI is then translated using this vector, but the results are not that promising yet.
Is this the right way to do it or am I totally barking up the wrong tree?
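
In Python/OpenCV terms (I'm actually using the Emgu equivalents), what I'm doing is roughly this; the threshold value is a placeholder:

import cv2

def centroid_of_biggest_contour(gray):
    # Placeholder threshold; the real value depends on the thermal image range.
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    # [-2] keeps this working across OpenCV 3.x/4.x return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    biggest = max(contours, key=cv2.contourArea)
    m = cv2.moments(biggest)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def roi_shift(prev_gray, curr_gray):
    # Displacement of the centre of gravity between two frames;
    # the ROI is then translated by this vector.
    cx0, cy0 = centroid_of_biggest_contour(prev_gray)
    cx1, cy1 = centroid_of_biggest_contour(curr_gray)
    return cx1 - cx0, cy1 - cy0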
------Edit------
Here are two sample images showing slight movement downwards to the right:
http://postimg.org/image/wznf2r27n/
Comparison between the contours:
http://postimg.org/image/4ldez2di1/
As you can see the shape of the contour is pretty much the same, although there are some small differences near the toes.
Seems like I was finally able to find a solution to my problem using optical flow based on the Lucas-Kanade method.
Just in case anyone else is wondering how to implement it in Emgu/C#, here's the link to an Emgu examples project where they use the Lucas-Kanade and Farneback algorithms:
http://sourceforge.net/projects/emguexample/files/Image/BuildBackgroundImage.zip/download
You may need to adapt a few things, e.g. the parameters for the corner detection (the frame.GoodFeaturesToTrack(..) method), but it's definitely something to start with.
Thanks for all the ideas!
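
The core of it, translated into Python/OpenCV, looks roughly like this (the corner-detection and LK parameters below are guesses and will need tuning, as mentioned above):

import cv2
import numpy as np

def estimate_shift(prev_gray, curr_gray):
    # Corners to track in the previous frame.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return 0.0, 0.0
    # Track them into the current frame with pyramidal Lucas-Kanade.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good_old = p0[status.flatten() == 1].reshape(-1, 2)
    good_new = p1[status.flatten() == 1].reshape(-1, 2)
    if len(good_new) == 0:
        return 0.0, 0.0
    # The median is more robust than the mean against badly tracked points.
    dx, dy = np.median(good_new - good_old, axis=0)
    return float(dx), float(dy)

# The ROI is then translated by (dx, dy) before averaging temperatures.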

People detection using HOG not finding anyone

I have a video of soccer in which the players are relatively far away from the camera and thus occupy small portions of the image. I'm using background subtraction to detect the players and the results are fine, but I have been asked to try detection using HOG.
I tried detectMultiScale with the default descriptors provided by OpenCV, but I can't get any detections. I don't really understand how to make it work in this case, because on other sequences where the people are closer to the camera the detector works fine.
Here is a sample image link
Thanks.
The descriptor you use with HOG determines the minimum size of person you can detect: with the DefaultPeopleDetector the detection window is 128 pixels high x 64 wide, so you can detect people around 90px high. With the Daimler descriptor the size you can detect is a bit smaller.
Your pedestrians are still too small for this, so you may need to magnify the whole image, or just the parts which show up as foreground using background segmentation.
Have a look at the function definition for detectMultiScale: http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#cascadeclassifier-detectmultiscale
It might be that you need to reduce the value of minSize so as to detect smaller people, or the people might just be too far away.
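
To make both suggestions concrete, a rough sketch: upscale the frame so the players exceed the detector's minimum size, run the default people detector, then map the boxes back. The scale factor and detection parameters are guesses that will need tuning.

import cv2

hog = cv2.HOGDescriptor()  # default 64x128 detection window
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("soccer_frame.png")  # hypothetical input frame
scale = 2.0                             # magnify so players reach the minimum height
big = cv2.resize(frame, None, fx=scale, fy=scale,
                 interpolation=cv2.INTER_CUBIC)

rects, weights = hog.detectMultiScale(big, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

# Map detections back to the original image coordinates.
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (int(x / scale), int(y / scale)),
                  (int((x + w) / scale), int((y + h) / scale)),
                  (0, 255, 0), 2)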

How to locate a car in image processing (computer vision)?

I would like to locate a car (the front centre point x,y) using a single high-resolution camera. The camera is fixed at 1-2 m high and tilted around 25 degrees, and it provides images in which the front side of the car is visible. The intrinsic and extrinsic parameters are known.
So far I have tried to detect the headlights and the number plate. Issues: the headlights are not always detected as blobs, their shape changes with distance, and the number plate is not visible in the dark.
Is there a robust algorithm to detect a car, or its headlights, or its number plate? How should I proceed?
Thanks in advance,
Are you detecting the same car every time? If yes, then presumably the appearance remains consistent. Rather than detecting and recognising blobs and shapes, you may be better off using scale- and rotation-invariant features combined with a machine learning algorithm. Look into the SIFT and SURF feature descriptors. For easy experimentation, use OpenCV's implementation of feature description and matching. Take a look at this example.
This is not an easy problem because of the change in scale and point of view. Ideally, you would need a collection of training images with the car seen from different points of view, to later match some of them against your input image. Then you need local features (SIFT, SURF) or some classifier to decide on the match.
On the other hand, if you are tracking the same car all the time, check out the MeanShift algorithm. The problem is you need an initial position to carry on with the tracking.
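
To make the feature-matching suggestion concrete, a minimal sketch (ORB stands in for SIFT/SURF here because it ships with stock OpenCV, and car_template.png is a hypothetical reference view of the car):

import cv2

orb = cv2.ORB_create(nfeatures=1000)

template = cv2.imread("car_template.png", cv2.IMREAD_GRAYSCALE)  # reference view
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)            # current camera image

kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Hamming distance for ORB's binary descriptors; cross-check filters weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# A crude location estimate: the average position of the best matches.
best = matches[:20]
xs = [kp2[m.trainIdx].pt[0] for m in best]
ys = [kp2[m.trainIdx].pt[1] for m in best]
print("approximate car position:", sum(xs) / len(xs), sum(ys) / len(ys))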

Image Rectification for Shake Correction on OpenCV

I have 2 pictures of the same scene from an uncalibrated camera. The pics are from a slightly different angle and scale (zoom), and I'd like to superimpose them, rejecting any kind of shake. In other words, I need to transform them so the shake becomes imperceptible, i.e. perform motion compensation.
I've already tried using a simple SURF (feature) detector along with a homography, but sometimes the result isn't satisfactory. So I am thinking about trying image rectification to compensate for the motion.
- Would it work with slight changes, such as user shake?
- Would it really work to reject shake for these 2 frames? And for a bigger buffer of pictures (10 maybe)?
- Does anyone know if it would fix the scale disparity (different zoom in the images)?
- What does the algorithm really do? Will it transform both pictures into a third orientation?
If there is a better solution, I would be glad to know =)
EDIT
I don't aim to compensate for motion blur but for the displacement itself. For example, in this file the author compensates for the angle difference between two cameras by image rectification. How does it actually work? Does it always create an intermediate picture orientation, or can I specify that one of the pictures shall remain still?
Also, would I be able to apply this to many frames, or would it always find an intermediate orientation for each pair of frames I put in?
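
For reference, the feature + homography alignment I tried is roughly this (ORB stands in for SURF here since SURF is non-free in some OpenCV builds; the second frame is warped onto the first, so one picture stays still):

import cv2
import numpy as np

def align(reference, moving):
    # Warp `moving` onto `reference` using a feature-based homography.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(moving, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards the mismatches that would otherwise ruin the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))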
Cheers,
I'm not sure how well superimposing the images would work. Another way to remove blur (including motion blur, which should dominate with handheld cameras) from an image is blind deconvolution. It is basically a method of finding the inverse of the blur filter that was physically applied (the camera shake) to the real image. There are plenty of techniques on the web. I've specifically had good results using a modified version of the algorithm in this paper: http://www.cse.cuhk.edu.hk/~leojia/all_final_papers/motion_deblur_cvpr07.pdf
It also comes with an executable file somewhere around the web so you can see if it's fit for your purpose.
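
To give a flavour of what deconvolution does, here is a toy non-blind Wiener deconvolution in plain NumPy with a hand-made horizontal motion PSF; the hard part that the paper addresses is estimating that PSF blindly from the image itself.

import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    # Non-blind Wiener deconvolution of a single-channel float image,
    # given a known blur kernel (PSF). K is the noise regularisation term.
    pad = np.zeros_like(blurred, dtype=np.float64)
    kh, kw = psf.shape
    cy, cx = blurred.shape[0] // 2, blurred.shape[1] // 2
    pad[cy - kh // 2: cy - kh // 2 + kh, cx - kw // 2: cx - kw // 2 + kw] = psf
    # Shift the centred PSF so its centre sits at the origin for the FFT.
    H = np.fft.fft2(np.fft.ifftshift(pad))
    G = np.fft.fft2(blurred.astype(np.float64))
    # conj(H) / (|H|^2 + K) acts as a regularised inverse of the blur.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(F_hat))

# Toy 9-pixel horizontal motion PSF; in practice the kernel is unknown
# and estimating it is the "blind" part of the problem.
motion_psf = np.zeros((9, 9))
motion_psf[4, :] = 1.0 / 9.0
# sharp_estimate = wiener_deconvolve(blurred_gray, motion_psf)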
Good luck out there!

Fiducial marker detection in the presence of camera shake

I'm trying to make my OpenCV-based fiducial marker detection more robust when the user moves the camera (phone) violently. Markers are ARTag-style, with a Hamming code embedded within a black border. Borders are detected by thresholding the image, then looking for quads based on the found contours, then checking the internals of the quads.
In general, decoding of the marker is fairly robust if the black border is recognized. I've tried the most obvious thing, which is downsampling the image twice and performing quad detection on those levels as well. This helps with camera defocus on extremely near markers, and also with very small amounts of image blur, but doesn't hugely help the general case of camera motion blur.
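
For context, the border detection is essentially this standard pipeline, simplified (the threshold and area parameters below are placeholders):

import cv2

def find_candidate_quads(gray):
    # Threshold, find contours, keep 4-sided convex shapes as marker candidates;
    # each candidate's interior is then sampled to decode the Hamming code.
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 7)
    # [-2] keeps this working across OpenCV 3.x/4.x return signatures.
    contours = cv2.findContours(thresh, cv2.RETR_LIST,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    quads = []
    for c in contours:
        if cv2.contourArea(c) < 100:  # ignore tiny speckles
            continue
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.03 * peri, True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2))
    return quads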
Is there available research on ways to make detection more robust? Ideas I'm wondering about include:
1. Can you do some sort of optical flow tracking to "guess" the position of the marker in the next frame, then some sort of corner detection in the region of that guess, rather than treating the rectangle search as a full-frame thresholding?
2. On PCs, is it possible to derive blur coefficients (perhaps by registration with recent video frames where the marker was detected) and deblur the image prior to processing?
3. On smartphones, is it possible to use the gyroscope and/or accelerometers to get deblurring coefficients and pre-process the image? (I'm assuming not, simply because if it were, the market would be flooded with shake-correcting camera apps.)
Links to failed ideas would also be appreciated if it saves me trying them.
1. Yes, you can use optical flow to estimate where the marker might be and localise your search, but it's just relocalisation; your tracking will have broken for the blurred frames.
2. I don't know enough about deblurring, except to say it's very computationally intensive, so real time might be difficult.
3. You can use the sensors to guess the sort of blur you're faced with, but I would guess deblurring is too computationally expensive for mobile devices in real time.
Then some other approaches:
There is some really smart stuff in here: http://www.robots.ox.ac.uk/~gk/publications/KleinDrummond2004IVC.pdf where they're doing edge detection (which could be used to find your marker borders, even though you're looking for quads right now), modelling the camera movements from the sensors, and using those values to estimate how an edge in the direction of blur should appear given the frame-rate, and searching for that. Very elegant.
Similarly here http://www.eecis.udel.edu/~jye/lab_research/11/BLUT_iccv_11.pdf they just pre-blur the tracking targets and try to match the blurred targets that are appropriate given the direction of blur. They use Gaussian filters to model blur, which are symmetrical, so you need half as many pre-blurred targets as you might initially expect.
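A crude sketch of that second idea, assuming you have a grayscale template of the marker: blur the template at a few Gaussian strengths and keep the best match across blur levels (the winning sigma also gives a rough estimate of how blurred the frame is).

import cv2

def match_blurred_marker(frame_gray, template_gray, sigmas=(0, 2, 4, 8)):
    # Pre-blur the marker template at several strengths and template-match each;
    # the best-scoring level approximates the blur present in the frame.
    best = (-1.0, None, None)  # (score, location, sigma)
    for sigma in sigmas:
        if sigma == 0:
            blurred = template_gray
        else:
            blurred = cv2.GaussianBlur(template_gray, (0, 0), sigma)
        res = cv2.matchTemplate(frame_gray, blurred, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            best = (max_val, max_loc, sigma)
    score, loc, sigma = best
    return loc, score, sigma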
If you do try implementing any of these, I'd be really interested to hear how you get on!
From some related work (attempting to use sensors/gyroscope to predict the likely location of features from one frame to another in video), I'd say that idea 3 is likely to be difficult, if not impossible. I think at best you could get an indication of the approximate direction and angle of motion, which may help you model blur using the approaches referenced by dabhaid, but I think it unlikely you'd get sufficient precision to be much more help.
