blurred image removal in an image database - image-processing

I have a dataset of about 2000 images. This database contains some blurred images.
How can I automatically remove the blurred images from this database?
I read about using the Fourier transform to find the blurred images. First I need to transform my images into the Fourier domain, and then by applying some threshold I should be able to identify the blurred ones. Could anybody give me some sample code in MATLAB for this? I don't know how to determine the threshold. Is there any way to determine this threshold?

This task is really not so simple: if you remove all the images that don't contain high frequencies, you will end up removing many images of smooth scenes even though they are not blurred.
There is no 100% in computer vision. The best thing for you (in my opinion) is to build human-aided software: your program suggests the images it thinks should be removed, but the final call is made by a human being.
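For example, a simple score the software could use to rank candidates is the variance of the Laplacian (a rough proxy for high-frequency content, used here instead of the Fourier-domain measure you mention): low-scoring images get flagged for a human to review. A minimal OpenCV sketch of that idea follows; the threshold and file name are placeholders, and the same approach translates directly to MATLAB.

#include <opencv2/opencv.hpp>
#include <iostream>

// Returns a sharpness score: variance of the Laplacian response.
// Low values suggest a blurred (or very smooth) image.
double sharpnessScore(const cv::Mat& img)
{
    cv::Mat gray, lap;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::Laplacian(gray, lap, CV_64F);

    cv::Scalar mean, stddev;
    cv::meanStdDev(lap, mean, stddev);
    return stddev[0] * stddev[0];   // variance of the Laplacian
}

int main()
{
    cv::Mat img = cv::imread("example.jpg");   // hypothetical file name
    if (img.empty()) return 1;

    const double threshold = 100.0;            // assumed value, tune on your own data
    double score = sharpnessScore(img);
    std::cout << (score < threshold ? "flag for review" : "keep")
              << " (score = " << score << ")\n";
    return 0;
}

Images flagged this way would then go into the "suggested for removal" list for a human to confirm.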

Background Subtraction in OpenCV

I am trying to subtract two images using the absdiff function to extract a moving object. It works well, but sometimes the background appears in front of the foreground.
This actually happens when the background and foreground colors are similar. Is there any solution to overcome this problem?
The description of the problem above may not be enough, so I attach images at the following link.
Thanks.
You can use some pre-processing techniques like edge detection and a contrast stretching algorithm, which will give you some extra information for subtracting the images. The colors may be the same, but a new object should have texture features such as edges; if the edges are preserved properly, then performing image subtraction will give you the object.
Process flow (a sketch of this flow follows the list):
Use an edge detection algorithm.
Apply a contrast stretching algorithm (like histogram stretching).
Overlay the detected edges on top of the contrast-stretched image.
Now use the image subtraction algorithm from OpenCV.
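A rough OpenCV sketch of this flow; my choice of Canny for edges and min-max normalization for stretching, plus all the numeric parameters and file names, are assumptions to tune rather than something prescribed above.

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical file names for the background and the current frame.
    cv::Mat bg = cv::imread("background.png", cv::IMREAD_GRAYSCALE);
    cv::Mat fg = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (bg.empty() || fg.empty()) return 1;

    // 1. Edge detection (Canny thresholds are assumed values).
    cv::Mat bgEdges, fgEdges;
    cv::Canny(bg, bgEdges, 50, 150);
    cv::Canny(fg, fgEdges, 50, 150);

    // 2. Contrast stretching (simple min-max normalization).
    cv::Mat bgStretched, fgStretched;
    cv::normalize(bg, bgStretched, 0, 255, cv::NORM_MINMAX);
    cv::normalize(fg, fgStretched, 0, 255, cv::NORM_MINMAX);

    // 3. Overlay the edges on the contrast-stretched images.
    cv::Mat bgCombined, fgCombined;
    cv::addWeighted(bgStretched, 0.7, bgEdges, 0.3, 0, bgCombined);
    cv::addWeighted(fgStretched, 0.7, fgEdges, 0.3, 0, fgCombined);

    // 4. Image subtraction.
    cv::Mat diff;
    cv::absdiff(fgCombined, bgCombined, diff);
    cv::imwrite("diff.png", diff);
    return 0;
}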
There isn't enough information to formulate a complete solution to your problem, but there are some tips I can offer:
First, prefilter the input and background images using a strong median (or Gaussian) filter. This will make your results much more robust to image noise and to confusion from minor, non-essential detail (like the horizontal lines of your background image). Unless you want to detect a single moving strand of hair, you don't need to process the raw pixels.
Next, take the advice offered in the comments and test all 3 color channels as opposed to going straight to grayscale.
Then create a grayscale image from the max of the 3 absdiffs done on each channel.
Then perform your closing and opening procedure. (A sketch putting these steps together follows below.)
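Here is a minimal sketch of those steps; the median kernel, the binarization threshold and the structuring-element size are assumed values you would tune.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Hypothetical input frame and pre-calibrated background image.
    cv::Mat input = cv::imread("frame.png");
    cv::Mat background = cv::imread("background.png");
    if (input.empty() || background.empty()) return 1;

    // 1. Strong median prefilter to suppress noise and fine detail.
    cv::medianBlur(input, input, 7);
    cv::medianBlur(background, background, 7);

    // 2. Split into channels so each one can be differenced separately.
    std::vector<cv::Mat> inCh, bgCh;
    cv::split(input, inCh);
    cv::split(background, bgCh);

    // 3. Grayscale difference = max of the three per-channel absdiffs.
    cv::Mat diff = cv::Mat::zeros(input.size(), CV_8UC1);
    for (int c = 0; c < 3; ++c) {
        cv::Mat d;
        cv::absdiff(inCh[c], bgCh[c], d);
        cv::max(diff, d, diff);
    }

    // 4. Binarize, then close and open to clean up the mask.
    cv::Mat mask;
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);   // assumed threshold
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);

    cv::imwrite("foreground_mask.png", mask);
    return 0;
}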
I don't know your requirements, so I can't take them into account. If accuracy is of the utmost importance, I'd use the median filter on the input image rather than a Gaussian. If speed is an issue, I'd scale the input images down by at least half for processing, then scale the result up again. If the camera is in a fixed position and you have a pre-calibrated background, then the current naive difference method should work. If the system has to detect movement in a real-world environment over an extended period of time (moving shadows, plants, vehicles, weather, etc.), then a rolling average (or Gaussian) background model will work better. If the camera is moving, you will need to do a lot more processing, probably some optical flow and/or Fourier transform tests. All of these things need to be considered to provide the best solution for the application.
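If you go with the rolling-average background model mentioned above, OpenCV's accumulateWeighted is the usual building block. A minimal sketch follows; the camera index, learning rate and thresholds are assumed values.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);            // assumed camera index
    if (!cap.isOpened()) return 1;

    cv::Mat frame, background, diff;
    const double alpha = 0.02;          // assumed learning rate for the running average

    while (cap.read(frame)) {
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_32F);

        if (background.empty())
            gray.copyTo(background);    // initialize the model with the first frame
        else
            cv::accumulateWeighted(gray, background, alpha);

        // Difference against the slowly adapting background.
        cv::absdiff(gray, background, diff);
        diff.convertTo(diff, CV_8U);
        cv::imshow("motion", diff > 25);   // assumed per-pixel threshold
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}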

is it possible to take low resolution image from street camera, increase it and see image details

I would like to know if it is possible to take a low-resolution image from a street camera, enlarge it and see image details (for example a face, or a car plate number). Is there any software able to do this?
Thank you.
example of image: http://imgur.com/9Jv7Wid
Possible? Yes. In existence? Not to my knowledge.
What you are referring to is called super-resolution. The way it works, in theory, is that you take multiple low-resolution images and combine them to create a high-resolution image.
The way this works is that you essentially map each image onto all the others to form a stack, where the target portion of the image is all the same. This gets extremely complicated extremely fast, as any distortion (e.g. movement of the target) will cause the images to differ dramatically at the pixel level.
But let's say you have the images stacked and have removed the non-relevant pixels from the stack. You are left, hopefully, with a movie/stack of images that all show the exact same scene, but with sub-pixel distortions. A sub-pixel distortion simply means that the target has moved somewhere inside the pixel, or has moved partially into the neighboring pixel.
You can't measure whether the target has moved within a pixel, but you can detect whether it has moved partially into a neighboring pixel. You can do this by knowing that the target is going to give off X amount of photons, so if you see 1/4 of the photons in one pixel and 3/4 of the photons in the neighboring pixel, you know its approximate location: 3/4 in one pixel and 1/4 in the other. You then construct an image that has a resolution of these sub-pixels and place the sub-pixels in their proper locations.
All of this gets very computationally intensive, and sometimes the images are just too low-resolution and have too much distortion from image to image to create a meaningful stack at all. I did read a paper about a university lab being able to create high-resolution images from low-resolution ones, but it was a very tightly controlled experiment, where they moved the target precisely X amount from image to image and used a very precise camera (probably scientific grade, which is far more sensitive than any commercial-grade security camera).
In essence, to do this reliably in the real world, you need to set up the cameras in a very precise way, and they need to be very accurate in a particular way, all of which is going to be expensive, so you are better off just putting in a better camera than relying on this very imprecise technique.
Actually it is possible to do super-resolution (SR) from even a single low-resolution (LR) image! So you don't have to go through the hassle of taking many LR images with sub-pixel shifts to achieve it. The intuition behind such techniques is that natural scenes are full of repetitive patterns that can be used to enhance the frequency content of similar patches (e.g. you can use dictionary learning in your SR reconstruction technique to generate the high-resolution version). Sure, the enhancement may not be as good as using many LR images, but such a technique is simpler and more practical.
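If you want to experiment with single-image SR, one practical option is OpenCV's dnn_superres module (it lives in the contrib repository and needs a pre-trained model such as EDSR that you download separately; the model path below is a placeholder):

#include <opencv2/opencv.hpp>
#include <opencv2/dnn_superres.hpp>

int main()
{
    cv::Mat lowRes = cv::imread("street_camera.jpg");   // hypothetical input
    if (lowRes.empty()) return 1;

    // Requires the opencv_contrib dnn_superres module and a pre-trained model file.
    cv::dnn_superres::DnnSuperResImpl sr;
    sr.readModel("EDSR_x4.pb");        // placeholder path to a downloaded model
    sr.setModel("edsr", 4);            // algorithm name and upscaling factor

    cv::Mat highRes;
    sr.upsample(lowRes, highRes);
    cv::imwrite("upscaled.jpg", highRes);
    return 0;
}

Keep in mind that this only synthesizes plausible detail; it will not reliably recover a face or a plate number that the sensor never captured.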
Photoshop would be your best bet. But know that you cannot reliably increase the size of an image without making the quality even worse.

Matlab image processing - replacing dark pixels with neighboring pixels

I am doing some image processing of retina images. I need to replace the blood vessels with background pixels so that I can focus on other aspects of the retina. I could not figure out a way to do this. I am using MATLAB. Any suggestions?
Having worked extensively with retinal images, I can tell you that what you're proposing is a complex problem in itself. Sure, if you just want a crude method, you can use imdilate. But that will affect your entire image, and other structures in the image will change appearance, which is not desirable.
However, if you want to do it properly, you will first need to segment all the blood vessels and create a binary mask. Once you have a binary mask, it's up to you how to fill up the vessel regions. You can either interpolate from the boundaries or calculate a background image and replace the vessel regions with pixels from the background image, etc.
Segmentation of the blood vessels is a challenging problem and you will find a lot of literature concerning that on the internet. Ultimately, you will have to choose how accurate a segmentation you want and build your algorithm accordingly.
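As an illustration of the second option (replacing vessel pixels once you have a mask), here is an OpenCV-style sketch using inpainting; it assumes your segmentation step has already produced a binary vessel mask, and the file names are placeholders. MATLAB's regionfill does something similar for grayscale images.

#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

int main()
{
    // Hypothetical inputs: the retina image and a binary vessel mask
    // produced by your segmentation step (vessels = 255, background = 0).
    cv::Mat retina = cv::imread("retina.png");
    cv::Mat vesselMask = cv::imread("vessel_mask.png", cv::IMREAD_GRAYSCALE);
    if (retina.empty() || vesselMask.empty()) return 1;

    // Slightly dilate the mask so vessel borders are covered as well.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::dilate(vesselMask, vesselMask, kernel);

    // Fill the masked regions from the surrounding background.
    cv::Mat filled;
    cv::inpaint(retina, vesselMask, filled, 5, cv::INPAINT_TELEA);
    cv::imwrite("retina_no_vessels.png", filled);
    return 0;
}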
imdilate should do what you want, since it replaces each pixel with the maximum of its neighbors. For more detailed suggestions, I'd need to see images.

Balancing contrast and brightness between stitched images

I'm working on an image stitching project, and I understand there are different approaches to dealing with the contrast and brightness of an image. I could of course deal with this issue before I even stitched the images, but the result is not as consistent as I would hope. So my question is whether it is possible to "balance" or rather "equalize" the contrast and brightness in color pictures after the stitching has taken place?
You want to determine the histogram equalization function not from the entire images, but from the zone where they touch or overlap. You obviously want identical histograms in the overlap area, so this is where you calculate the functions. You then apply the equalization functions that accomplish this to the entire images. If you have more than two stitches, you still want to do a global equalization beforehand, and then use a weighted application of the overlap-equalizing functions that decreases their impact as you move away from the stitched edge.
Apologies if this is all obvious to you already, but your general question leads me to a general answer.
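As a much-simplified sketch of that idea, here is a gain correction computed only from the overlap zone and then applied to the whole image (a full histogram-matching function would follow the same pattern; the overlap rectangles and file names below are assumed placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.jpg");    // hypothetical stitched inputs
    cv::Mat right = cv::imread("right.jpg");
    if (left.empty() || right.empty()) return 1;

    // Assumed overlap rectangles, each in its own image's coordinates.
    cv::Rect overlapLeft (left.cols - 100, 0, 100, left.rows);
    cv::Rect overlapRight(0, 0, 100, right.rows);

    // Mean brightness of the overlap zone in both images.
    cv::Scalar meanLeft  = cv::mean(left(overlapLeft));
    cv::Scalar meanRight = cv::mean(right(overlapRight));

    // Gain that makes the right image's overlap match the left one's,
    // applied to the entire right image.
    double gain = (meanLeft[0] + meanLeft[1] + meanLeft[2]) /
                  (meanRight[0] + meanRight[1] + meanRight[2] + 1e-6);
    cv::Mat rightBalanced;
    right.convertTo(rightBalanced, -1, gain, 0);

    cv::imwrite("right_balanced.jpg", rightBalanced);
    return 0;
}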
You may want to have a look at the Exposure Compensator class provided by OpenCV.
Exposure compensation is done in 3 steps:
Create your exposure compensator
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
You input all of your images along with the top left corners of each of them. You can leave the masks completely white by default unless you want to specify certain parts of the image to work on.
compensator->feed(corners, images, masks);
Now that it has all the information about how the images overlap, you can compensate each image individually:
compensator->apply(image_index, corners[image_index], image, mask);
The compensated image will be stored in image.
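Putting the three steps together, a minimal self-contained sketch might look like this (the file names, corner offsets and the GAIN compensator type are assumptions; in a real pipeline the corners come from your stitching/registration stage):

#include <opencv2/opencv.hpp>
#include <opencv2/stitching/detail/exposure_compensate.hpp>
#include <vector>

using namespace cv;
using namespace cv::detail;

int main()
{
    // Hypothetical pre-registered images and their top-left corners
    // in the panorama coordinate system.
    std::vector<UMat> images(2), masks(2);
    imread("left.jpg").copyTo(images[0]);
    imread("right.jpg").copyTo(images[1]);
    if (images[0].empty() || images[1].empty()) return 1;
    std::vector<Point> corners = { Point(0, 0), Point(400, 0) };   // assumed offsets

    // Fully white masks: compensate using the whole of each image.
    for (size_t i = 0; i < images.size(); ++i)
        masks[i] = UMat(images[i].size(), CV_8U, Scalar(255));

    // 1. Create the compensator (simple per-image gain model assumed here).
    Ptr<ExposureCompensator> compensator =
        ExposureCompensator::createDefault(ExposureCompensator::GAIN);

    // 2. Feed it the corners, images and masks.
    compensator->feed(corners, images, masks);

    // 3. Compensate each image in place.
    for (size_t i = 0; i < images.size(); ++i)
        compensator->apply(static_cast<int>(i), corners[i], images[i], masks[i]);

    imwrite("left_compensated.jpg", images[0]);
    return 0;
}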

Image Comparison

What is an efficient way to compare two images in Visual C?
Also, in which format should the images be stored (bmp, gif, jpeg, ...)?
Please provide some suggestions.
If the images you are trying to compare have distinctive characteristics that you are trying to differentiate, then PCA is an excellent way to go. The question of which file format you need is really irrelevant; you need to load the image into the program as an array of numbers and do the analysis on that.
Your question opens a can of worms in terms of complexity.
If you want to compare two images to check if they are the same file, then you can perform an MD5 on the file (after removing any metadata that could distort your result).
If you want to compare whether they look the same, then it's a completely different story altogether. "Look the same" is meant in a very loose sense (e.g. they are exactly the same image but stored in two different file formats). For this, you need advanced algorithms, which will give you a probability that two images are the same. Not being an expert in the field, I would try the following "invented out of my head" algorithm:
take an arbitrary set of pixel points from the image.
for each pixel "grow" a polygon out of the surrounding pixels which are near in color (according to HSV colorspace)
do the same for the other image
for each polygon of one image, check the geometrical similitude with all the other polygons in the other image, and pick the highest value. Divide this value by the area of the polygon (to normalize).
create a vector out of the highest values obtained
the higher the norm of this vector, the higher the chance that the two images are the same.
This algorithm should be insensitive to color drift and image rotation. Maybe also to scaling (you normalize against the area). But I restate: I'm not an expert, there's probably something much better out there, and it could make kittens cry. A rough sketch of the idea follows.
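Here is a loose approximation of that algorithm in OpenCV, using floodFill to grow the color-similar regions and matchShapes to measure geometric similarity. The seed count, the HSV tolerances and the use of the same seed coordinates in both images are all simplifying assumptions.

#include <opencv2/opencv.hpp>
#include <vector>
#include <random>
#include <iostream>

// Grow a color-similar region around a seed point and return its outer contour.
static std::vector<cv::Point> regionContour(cv::Mat hsv, cv::Point seed)
{
    cv::Mat mask = cv::Mat::zeros(hsv.rows + 2, hsv.cols + 2, CV_8U);
    cv::floodFill(hsv, mask, seed, cv::Scalar(), nullptr,
                  cv::Scalar(10, 40, 40), cv::Scalar(10, 40, 40),   // assumed HSV tolerances
                  4 | cv::FLOODFILL_MASK_ONLY | (255 << 8));
    mask = mask(cv::Rect(1, 1, hsv.cols, hsv.rows));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours.empty() ? std::vector<cv::Point>() : contours[0];
}

int main()
{
    cv::Mat a = cv::imread("a.jpg"), b = cv::imread("b.jpg");   // hypothetical inputs
    if (a.empty() || b.empty() || a.size() != b.size()) return 1;  // sketch assumes equal sizes

    cv::Mat hsvA, hsvB;
    cv::cvtColor(a, hsvA, cv::COLOR_BGR2HSV);
    cv::cvtColor(b, hsvB, cv::COLOR_BGR2HSV);

    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dx(0, a.cols - 1), dy(0, a.rows - 1);

    double total = 0;
    int used = 0;
    for (int i = 0; i < 20; ++i) {               // arbitrary number of seed points
        cv::Point seed(dx(rng), dy(rng));
        auto ca = regionContour(hsvA, seed);
        auto cb = regionContour(hsvB, seed);     // simplistic: same seed in both images
        if (ca.size() < 5 || cb.size() < 5) continue;

        // matchShapes: 0 means identical shapes, larger means more different.
        total += cv::matchShapes(ca, cb, cv::CONTOURS_MATCH_I1, 0);
        ++used;
    }
    std::cout << "average shape distance: " << (used ? total / used : -1) << "\n";
    return 0;
}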
I did something similar to detect movement from a MJPEG stream and record images only when movement occurs.
For each decoded image, I compared to the previous using the following method.
Resize the image to effectively thumbnail size (I resized fairly hi-res images down by a factor of ten).
Compare the brightness of each pixel to the previous image and flag if it is much lighter or darker (threshold value 1)
Once you've done that for each pixel, you can use the count of different pixels to determine whether the image is the same or different (threshold value 2)
Then it was just a matter of tuning the two threshold values.
I did the comparisons using System.Drawing.Bitmap, but as my source images were jpg, there was some artifacting.
It's a nice simple way to compare images for differences if you're going to roll it yourself.
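A rough OpenCV equivalent of the method described above might look like the sketch below; both threshold values are assumptions you would tune.

#include <opencv2/opencv.hpp>

// Returns true if the two frames differ enough to count as "movement".
bool framesDiffer(const cv::Mat& current, const cv::Mat& previous)
{
    const int pixelThreshold = 20;    // threshold value 1: per-pixel brightness change
    const int countThreshold = 50;    // threshold value 2: number of changed pixels

    // Shrink both frames to thumbnail size (down by a factor of ten, as above).
    cv::Mat a, b;
    cv::resize(current,  a, cv::Size(), 0.1, 0.1, cv::INTER_AREA);
    cv::resize(previous, b, cv::Size(), 0.1, 0.1, cv::INTER_AREA);
    cv::cvtColor(a, a, cv::COLOR_BGR2GRAY);
    cv::cvtColor(b, b, cv::COLOR_BGR2GRAY);

    // Flag pixels that are much lighter or darker than before, then count them.
    cv::Mat diff, changed;
    cv::absdiff(a, b, diff);
    cv::threshold(diff, changed, pixelThreshold, 255, cv::THRESH_BINARY);
    return cv::countNonZero(changed) > countThreshold;
}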
If you want to determine if 2 images are the same perceptually, I believe the best way to do it is using an Image Hashing algorithm. You'd compute the hash of both images and you'd be able to use the hashes to get a confidence rating of how much they match.
One that I've had some success with is pHash, though I don't know how easy it would be to use with Visual C. Searching for "Geometric Hashing" or "Image Hashing" might be helpful.
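pHash itself is a C/C++ library, so it should be callable from Visual C. As an illustration of the general idea (this is a simple average hash, not pHash's actual algorithm), consider the following OpenCV sketch; the Hamming-distance cutoff is an assumed value.

#include <opencv2/opencv.hpp>
#include <bitset>
#include <iostream>

// Simple 64-bit "average hash": shrink to 8x8, threshold each pixel against
// the mean. Similar images give hashes with a small Hamming distance.
static std::bitset<64> averageHash(const cv::Mat& img)
{
    cv::Mat gray, small;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::resize(gray, small, cv::Size(8, 8), 0, 0, cv::INTER_AREA);

    double mean = cv::mean(small)[0];
    std::bitset<64> hash;
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
            hash[y * 8 + x] = small.at<uchar>(y, x) > mean;
    return hash;
}

int main()
{
    cv::Mat a = cv::imread("a.jpg"), b = cv::imread("b.jpg");  // hypothetical inputs
    if (a.empty() || b.empty()) return 1;

    size_t distance = (averageHash(a) ^ averageHash(b)).count();
    std::cout << "Hamming distance: " << distance
              << (distance <= 5 ? " (probably the same image)" : " (probably different)")
              << "\n";   // cutoff of 5 is an assumed value
    return 0;
}

OpenCV's img_hash contrib module also ships ready-made AverageHash and PHash implementations if you'd rather not roll your own.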
Testing for strict identity is simple: Just compare every pixel in source image A to the corresponding pixel value in image B. If all pixels are identical, the images are identical.
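As a sketch, that strict check is only a few lines in OpenCV:

#include <opencv2/opencv.hpp>

// Strict identity: same size, same type, and not a single differing pixel.
bool strictlyIdentical(const cv::Mat& a, const cv::Mat& b)
{
    if (a.size() != b.size() || a.type() != b.type())
        return false;
    cv::Mat diff;
    cv::absdiff(a, b, diff);
    return cv::countNonZero(diff.reshape(1)) == 0;
}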
But I guess you don't want this kind of strict identity. You probably want images to count as "identical" even if certain transformations have been applied to image B. Examples of these transformations might be:
changing image brightness globally (for every pixel)
changing image brightness locally (for every pixel in a certain area)
changing image saturation globally or locally
gamma correction
applying some kind of filter to the image (e.g. blurring, sharpening)
changing the size of the image
rotation
Printing an image and scanning it again, for example, would probably include all of the above.
In a nutshell, you have to decide which transformations you want to treat as "identical" and then find image measures that are invariant to those transformations. (Alternatively, you could try to revert the transformations, but that's not possible if the transformation removes information from the image, like blurring or clipping does.)
