Is it possible to detect blur, exposure, orientation of an image programmatically? - image-processing

I need to sort a huge number of photos, remove the blurry images (due to camera shake) and the over/under-exposed ones, and detect whether each image was shot in landscape or portrait orientation. Can these things be done on an image using an image-processing library, or are they still beyond the realms of an algorithmic solution?

Let's look at your question as three separate questions.
Can I find blurry images?
There are a few methods for finding blurry images, for example:
Sharpening the image and comparing it to the original (a rough sketch of this idea follows the list)
Using wavelets to detect blurring ( Link1 )
Hough Transform ( Link )
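As a rough sketch of the first idea (OpenCV, grayscale input; the 9x9 blur size and the reading of the score are assumptions you would tune on your own photos): sharpen with an unsharp mask and measure how much the image changes; images with little high-frequency detail change little and are more likely to be blurry.

    // Sketch of the "sharpen and compare" idea. The blur size and the
    // interpretation (lower score = blurrier) are assumptions to tune.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    double sharpenDifferenceScore(const cv::Mat& gray)
    {
        cv::Mat blurred, sharpened, diff;
        cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 0);
        cv::addWeighted(gray, 2.0, blurred, -1.0, 0, sharpened); // unsharp mask
        cv::absdiff(sharpened, gray, diff);                      // what sharpening changed
        return cv::mean(diff)[0];                                // low score -> little detail -> likely blurry
    }

    int main(int argc, char** argv)
    {
        cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
        std::cout << "sharpness score: " << sharpenDifferenceScore(gray) << std::endl;
        return 0;
    }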
Can I find images that are under or over exposed?
The only way I can think of doing this is to look at the overall brightness and check whether it is either really high or really low. The problem is that you would have to know whether the picture was taken at night or during the day. You could create a histogram of your image and check whether it is heavily skewed one way or the other; that might be some indication of over/under exposure.
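A rough sketch of that histogram check (the bin ranges and the 0.4 cut-off below are arbitrary assumptions, not established values):

    // Sketch of a histogram-based exposure check with OpenCV.
    // The bin ranges and the 0.4 fraction are assumptions to tune.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main(int argc, char** argv)
    {
        cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);

        int histSize = 256;
        float range[] = {0.0f, 256.0f};
        const float* ranges[] = {range};
        cv::Mat hist;
        cv::calcHist(&gray, 1, 0, cv::Mat(), hist, 1, &histSize, ranges);
        hist /= static_cast<double>(gray.total()); // normalise to fractions

        // Fraction of pixels in the darkest / brightest ~5% of the range.
        double darkFrac   = cv::sum(hist.rowRange(0, 13))[0];
        double brightFrac = cv::sum(hist.rowRange(243, 256))[0];

        if (darkFrac > 0.4)
            std::cout << "possibly under-exposed" << std::endl;
        else if (brightFrac > 0.4)
            std::cout << "possibly over-exposed" << std::endl;
        else
            std::cout << "exposure looks plausible" << std::endl;
        return 0;
    }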
Can I determine the orientation of the image?
There are techniques that have been used for this, such as SVMs, Color Moments, Edge Direction Histograms, and Bayesian frameworks combining such cues.
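To give a flavour of one such cue, here is a rough sketch of an edge direction histogram (the 16-bin resolution is an arbitrary assumption). By itself it does not decide the orientation; it is the kind of feature vector you would feed to an SVM or Bayesian classifier trained on labelled photos.

    // Sketch: extract an edge direction histogram as a feature vector.
    // The 16-bin resolution is an arbitrary choice; a classifier such as
    // an SVM would then be trained on these features.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<float> edgeDirectionHistogram(const cv::Mat& gray, int bins = 16)
    {
        cv::Mat gx, gy, mag, angle;
        cv::Sobel(gray, gx, CV_32F, 1, 0);
        cv::Sobel(gray, gy, CV_32F, 0, 1);
        cv::cartToPolar(gx, gy, mag, angle, true); // angle in degrees, 0..360

        std::vector<float> hist(bins, 0.0f);
        for (int r = 0; r < gray.rows; ++r)
            for (int c = 0; c < gray.cols; ++c)
            {
                int bin = static_cast<int>(angle.at<float>(r, c) / 360.0f * bins) % bins;
                hist[bin] += mag.at<float>(r, c); // weight by edge strength
            }

        // Normalise so the feature does not depend on image size.
        float total = 0.0f;
        for (float v : hist) total += v;
        if (total > 0.0f)
            for (float& v : hist) v /= total;
        return hist;
    }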

Can I find images that are under or over exposed?
Here, histograms are recommended as well.

Related

How to detect images with a small depth of field

I am currently doing a project related to image processing. Before anything else is done, the blurry images should be dropped. However, I found that some of the images are really interesting: they have a shallow depth of field. I don't want my blur detection algorithm (variance of Laplacian) to drop all of these nice images. I have run an experiment with a training set to find a new threshold, and it recognizes part of the images I want. Is there any algorithm that can detect shallow depth-of-field images with higher accuracy? If any of you have an idea (preferably related to OpenCV), please share it with me. Thanks indeed.
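For reference, the variance-of-Laplacian focus measure the question refers to looks roughly like the sketch below; the cut-off itself is whatever threshold you calibrate on your own training set.

    // Rough sketch of the variance-of-Laplacian focus measure (OpenCV);
    // the decision threshold has to come from your own training set.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    double varianceOfLaplacian(const cv::Mat& gray)
    {
        cv::Mat lap;
        cv::Laplacian(gray, lap, CV_64F);
        cv::Scalar mu, sigma;
        cv::meanStdDev(lap, mu, sigma);
        return sigma[0] * sigma[0]; // high variance -> many sharp edges -> in focus
    }

    int main(int argc, char** argv)
    {
        cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
        std::cout << "focus measure: " << varianceOfLaplacian(gray) << std::endl;
        return 0;
    }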

Image Rectification for Shake Correction on OpenCV

I have 2 pictures of the same scene from an uncalibrated camera. The pictures are from a slightly different angle and scale (zoom), and I'd like to superpose them, rejecting any kind of shake. In other words, I should transform them so the shake becomes imperceptible, i.e. do motion compensation.
I've already tried using a simple SURF (feature) detector along with a homography, but sometimes the result isn't satisfactory. So I am thinking about trying image rectification to compensate for the motion.
- Would it work with slight changes, such as user shake?
- Would it really work to reject shake for these 2 frames? And for a bigger buffer of pictures (10 maybe)?
- Does anyone know if it would fix scale disparity (different zoom in the images)?
- What does the algorithm actually do? Will it transform both pictures into a third orientation?
If there is a better solution, I would be glad to know =)
EDIT
I don't aim to compensate for motion blur but for the displacement itself. For example, in this file the author compensates for the angle difference between two cameras by image rectification. How does it actually work? Does it always create an intermediate picture orientation, or can I specify that one of the pictures shall remain still?
Also, would I be able to apply this to many frames, or would it always find an intermediate orientation for each pair of frames I put in?
Cheers,
I'm not sure how well superimposing the images would work. Another way to remove blur (including motion blur, which should dominate in handheld camera devices) from an image is blind deconvolution. It is basically a method of finding the inverse of the blur filter that was physically applied (the camera shake) to the real image. There are plenty of techniques out on the web. I've specifically had good results using a modified version of the algorithm in this paper: http://www.cse.cuhk.edu.hk/~leojia/all_final_papers/motion_deblur_cvpr07.pdf
It also comes with an executable file somewhere around the web so you can see if it's fit for your purpose.
Good luck out there!
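For reference, the feature-plus-homography superposition mentioned in the question looks roughly like the sketch below. ORB is used here in place of SURF (which lives in OpenCV's non-free module), and the 0.75 ratio-test constant is an arbitrary assumption.

    // Rough sketch of feature-based alignment (ORB + homography) as described
    // in the question; ORB replaces SURF, and the 0.75 ratio is an assumption.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main(int argc, char** argv)
    {
        cv::Mat img1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE); // reference frame
        cv::Mat img2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE); // shaken frame

        cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

        // Match descriptors and keep reasonably unambiguous matches (ratio test).
        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(desc2, desc1, knn, 2);

        std::vector<cv::Point2f> pts1, pts2;
        for (const auto& m : knn)
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            {
                pts2.push_back(kp2[m[0].queryIdx].pt);
                pts1.push_back(kp1[m[0].trainIdx].pt);
            }

        // Estimate the homography with RANSAC and warp the shaken frame onto the reference.
        cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC);
        cv::Mat aligned;
        cv::warpPerspective(img2, aligned, H, img1.size());
        cv::imwrite("aligned.png", aligned);
        return 0;
    }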

Balancing contrast and brightness between stitched images

I'm working on an image stitching project, and I understand there are different approaches to dealing with the contrast and brightness of an image. I could of course deal with this issue before I even stitch the images, but the result is still not as consistent as I would hope. So my question is whether it's possible, by any chance, to "balance" or rather "equalize" the contrast and brightness in color pictures after the stitching has taken place?
You want to determine the histogram equalization function not from the entire images, but from the zone where they will touch or overlap. You obviously want the histograms in the overlap area to be identical, so this is where you calculate the functions. You then apply the equalization functions that accomplish this to the entire images. If you have more than two stitches, you still want to do a global equalization beforehand, and then use a weighted application of the overlap-equalizing functions that decreases their impact as you move away from the stitched edge.
Apologies if this is all obvious to you already, but your general question leads me to a general answer.
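A rough sketch of that idea for a single grayscale pair (the overlap rectangles and file names below are placeholders): build cumulative histograms of the two overlap strips, derive a lookup table that maps one onto the other, and apply that lookup table to the whole second image.

    // Sketch of overlap-based histogram matching for two grayscale images.
    // The overlap rectangles are placeholders; CDF matching builds a LUT that
    // makes img2's overlap histogram resemble img1's, applied to all of img2.
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    static std::vector<double> cdfOf(const cv::Mat& region)
    {
        std::vector<double> cdf(256, 0.0);
        for (int r = 0; r < region.rows; ++r)
            for (int c = 0; c < region.cols; ++c)
                cdf[region.at<uchar>(r, c)] += 1.0;
        double total = static_cast<double>(region.total()), running = 0.0;
        for (int i = 0; i < 256; ++i) { running += cdf[i]; cdf[i] = running / total; }
        return cdf;
    }

    int main(int argc, char** argv)
    {
        cv::Mat img1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
        cv::Mat img2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);

        // Placeholder overlap strips: right edge of img1, left edge of img2.
        cv::Mat ov1 = img1(cv::Rect(img1.cols - 100, 0, 100, img1.rows));
        cv::Mat ov2 = img2(cv::Rect(0, 0, 100, img2.rows));

        std::vector<double> cdf1 = cdfOf(ov1), cdf2 = cdfOf(ov2);

        // For each gray level in img2's overlap, find the level in img1's
        // overlap with the closest cumulative frequency.
        cv::Mat lut(1, 256, CV_8U);
        for (int g = 0; g < 256; ++g)
        {
            int best = 0;
            for (int h = 0; h < 256; ++h)
                if (std::abs(cdf1[h] - cdf2[g]) < std::abs(cdf1[best] - cdf2[g]))
                    best = h;
            lut.at<uchar>(0, g) = static_cast<uchar>(best);
        }

        cv::Mat matched;
        cv::LUT(img2, lut, matched); // apply to the whole image, not just the overlap
        cv::imwrite("img2_matched.png", matched);
        return 0;
    }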
You may want to have a look at the Exposure Compensator class provided by OpenCV.
Exposure compensation is done in 3 steps:
Create your exposure compensator
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
You input all of your images along with the top-left corner of each of them. You can leave the masks completely white by default unless you want to specify certain parts of the image to work on.
compensator->feed(corners, images, masks);
Now that it has all the information about how the images overlap, you can compensate each image individually:
compensator->apply(image_index, corners[image_index], image, mask);
The compensated image will be stored in image.
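Put together, a minimal sketch might look like this. The file names, corner positions and the GAIN compensator type (one possible value for expos_comp_type) are placeholders; ExposureCompensator lives in the cv::detail namespace of OpenCV's stitching module.

    // Minimal sketch combining the three steps; corners and file names are
    // placeholders, and ExposureCompensator::GAIN is just one compensator type.
    #include <opencv2/stitching/detail/exposure_compensate.hpp>
    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    int main()
    {
        using namespace cv;
        using namespace cv::detail;

        std::vector<UMat> images(2), masks(2);
        imread("left.jpg").copyTo(images[0]);
        imread("right.jpg").copyTo(images[1]);

        // All-white masks: compensate using every pixel.
        masks[0].create(images[0].size(), CV_8U); masks[0].setTo(Scalar::all(255));
        masks[1].create(images[1].size(), CV_8U); masks[1].setTo(Scalar::all(255));

        // Top-left corner of each image in the final panorama (placeholder values).
        std::vector<Point> corners = { Point(0, 0), Point(800, 0) };

        Ptr<ExposureCompensator> compensator =
            ExposureCompensator::createDefault(ExposureCompensator::GAIN);
        compensator->feed(corners, images, masks);

        // Compensate each image in place and save it.
        for (int i = 0; i < 2; ++i)
        {
            Mat img, mask;
            images[i].copyTo(img);
            masks[i].copyTo(mask);
            compensator->apply(i, corners[i], img, mask);
            imwrite("compensated_" + std::to_string(i) + ".jpg", img);
        }
        return 0;
    }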

Auto-Detecting blurry regions of an image

I am working on images that are partially blurred in some sections. This is noise that should be taken care of, but here is the problem:
Are there methods to detect whether an image is blurred, or partially blurred in some sections? For instance, take a look at the sample image below:
You can see in the image that there are 3 sections that are visually blurred: bottom-left, near the center, and top-right. Now, is it possible to detect that some portion of an image is blurred, programmatically or mathematically?
As lain_b pointed out, with an image like this you can use an edge detector and look for an absence of edges. I tried it on your image and it seems to work pretty well. First I used the kernel
[0,1,0,
1,-4,1,
0,1,0]
which is a simple edge detector. Then I applied a threshold to its result, and finally closed and opened the image morphologically.
This is obviously not a finished version; the top-right portion was not recognized well at all. Perhaps you could improve it by blurring before thresholding, or by choosing better values for the threshold and the radii of the opening and closing operations. A lot of the decisions you need to make depend on the constraints you can put on your problem. I think this technique will work for you, though.
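A rough OpenCV sketch of those steps (the threshold value and structuring-element size are arbitrary assumptions to tune):

    // Sketch of the edge-absence approach described above: Laplacian kernel,
    // threshold, then morphological close/open. All constants are assumptions.
    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv)
    {
        cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);

        // The simple edge-detector kernel from the answer.
        cv::Mat kernel = (cv::Mat_<float>(3, 3) << 0,  1, 0,
                                                   1, -4, 1,
                                                   0,  1, 0);
        cv::Mat edges;
        cv::filter2D(gray, edges, CV_32F, kernel);
        edges = cv::abs(edges);

        // Threshold the edge response, then close and open to clean it up.
        cv::Mat sharpMask;
        cv::threshold(edges, sharpMask, 10.0, 255.0, cv::THRESH_BINARY);
        sharpMask.convertTo(sharpMask, CV_8U);

        cv::Mat se = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
        cv::morphologyEx(sharpMask, sharpMask, cv::MORPH_CLOSE, se);
        cv::morphologyEx(sharpMask, sharpMask, cv::MORPH_OPEN, se);

        // White = regions with edges (sharp); black = candidate blurry regions.
        cv::imwrite("sharp_regions.png", sharpMask);
        return 0;
    }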
Edit
If you are looking for blur detection of arbitrary images you are going to have to investigate a wide variety of techniques. Things are much easier if you can make assumptions about your set of input images. Without any assumptions I don't know what will work best for you. Here is some reading on the topic
Image Blur Metrics
Research paper on using the Haar wavelet transform
Similar SO question (also look at the question it links to)
Blur detection is a very active research field; there is no single answer. You will just need to try all the methods you can find (these were found by googling "detect blur in image").
This paper may be of some help. It does blur estimation (mostly for out-of-focus blur, but I think it also handles motion blur) in order to recreate a similarly blurred object in the image.
I think you should be able to use it to detect the blurred areas and how blurred they are. It should be especially relevant to your problem, as it is designed to work with real-world images.

Upsampling an Image

I have a basic question.
What are the advantages of upsampling an Image?
Does it help me in edge detection?
I have not found much useful information on the internet.
It depends on the image. It can help if you have extremely jagged edges; at worst it does nothing. So you pay in processing time for a potential improvement.
Usually we need to convert an image to a size different than its original.
For this, there are two possible options:
Upsize the image (zoom in)
Downsize it (zoom out)
As an example, you might want to do your calculations (e.g. segmentation) on a downsized image and later work on the original image data again, so you upsize your output (e.g. the segmentation mask) again.
Finding better results on upsized images when applying edge detection can arise from the following:
Edge detectors (e.g. Canny, not just plain gradient computation) are usually coupled with a blurring step. If you use some sort of blurring mask in preprocessing, you may be able to obtain similar behavior by modifying it (decreasing or increasing the amount of blurring) as you would by resizing the image.
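As a rough sketch of the downsize-process-upsize workflow mentioned above (the Otsu threshold stands in for whatever real segmentation you do):

    // Sketch of the downsize-process-upsize workflow; the threshold-based
    // "segmentation" is only a placeholder step.
    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv)
    {
        cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);

        // Work at quarter resolution to keep the expensive step cheap.
        cv::Mat small;
        cv::resize(img, small, cv::Size(), 0.25, 0.25, cv::INTER_AREA);

        cv::Mat smallMask;
        cv::threshold(small, smallMask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        // Upsample the mask back to the original image size.
        cv::Mat mask;
        cv::resize(smallMask, mask, img.size(), 0, 0, cv::INTER_NEAREST);

        cv::imwrite("mask.png", mask);
        return 0;
    }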
