How to detect images with shallow depth of field - image-processing

I am currently working on an image-processing project. Before anything else, blurry images should be dropped. However, I found that some of the images are actually quite interesting: they have a shallow depth of field. I don't want my blur-detection algorithm (variance of Laplacian) to drop these nice images. I ran an observation over a training set to pick a new threshold, and it recognizes some of the images I want, but not all. Is there an algorithm that can detect shallow-depth-of-field images with higher accuracy? If any of you have an idea (preferably OpenCV-related), please share it. Thanks.
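One idea worth making concrete (my own sketch, not part of the original question): a shallow-depth-of-field image is sharp in a small region and blurry everywhere else, so a per-tile variance-of-Laplacian map can separate it from a globally blurry frame. The grid size and both thresholds below are assumptions to tune on a training set:

```python
import cv2
import numpy as np

def tile_sharpness_map(gray, grid=8):
    """Variance of the Laplacian per tile, as a grid x grid score map."""
    h, w = gray.shape
    th, tw = h // grid, w // grid
    scores = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            tile = gray[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            scores[i, j] = cv2.Laplacian(tile, cv2.CV_64F).var()
    return scores

def classify(path, sharp_thresh=100.0, min_sharp_tiles=4):
    # sharp_thresh and min_sharp_tiles are assumed values, not tuned ones.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sharp = int((tile_sharpness_map(gray) > sharp_thresh).sum())
    if sharp == 0:
        return "blurry"                  # nothing in focus anywhere
    if sharp < min_sharp_tiles:
        return "shallow depth of field"  # small in-focus region, rest blurred
    return "sharp"
```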

Related

Identify if an image is blurred (preferably without OpenCV)

I have just finished building a camera using AVCaptureSession for scanning documents on iPhone, and I am looking for a way to determine whether the captured image is of good quality and not blurred.
I have seen many solutions using OpenCV, and I am looking for other options.
Any help would be appreciated.
Thanks
First of all, interesting question; it made me do some research myself. In general, Analysis of focus measure operators for shape-from-focus is a great research paper that surveys methods (36, to be precise) for measuring the blurriness of an image, from simple and straightforward ones to more complex ones.
I have myself used a basic Laplacian operation on one channel of the image (essentially the second derivative of the pixels) to measure blurriness, and it worked quite well for me. Once you convolve the channel with the Laplacian operator, the variance of the resulting image is a good estimate of the blurriness. The assumption is that high variance reflects a wide spread of responses, both edge-like and non-edge-like, which is representative of a normal, in-focus image, while very low variance means a tiny spread of responses, indicating there are very few edges in the image; and as we know, the more an image is blurred, the fewer edges it has. The trick is to find an apt threshold for what counts as high or low variance, which you can ascertain by running the measure on your dataset.
Courtesy: Blog
PS. Although the blog I reference here mentions OpenCV, the methods can be implemented however you like once you understand the concept, which is why I started the answer with the research paper.
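To make the concept concrete, here is a minimal sketch of the variance-of-Laplacian measure without OpenCV, using SciPy and Pillow; the same logic ports readily to iOS. The threshold of 100 is only an assumed starting point to tune on real scans:

```python
import numpy as np
from PIL import Image
from scipy import ndimage

def blur_score(path):
    """Variance of the Laplacian: low values suggest a blurry image."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    lap = ndimage.laplace(gray)  # second-derivative (Laplacian) filter
    return lap.var()

# The threshold is dataset-dependent; 100 is just a common starting point.
if blur_score("scan.jpg") < 100:
    print("Likely blurred - ask the user to rescan.")
```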

Logo detection/recognition in natural images

Given a logo image as a reference image, how to detect/recognize it in a cluttered natural image?
The logo may be quite small in the image, and it can appear on clothes, hats, shoes, a background wall, etc. I have tried SIFT features for matching without any other preprocessing, and the results are good when the logo is large and clear in the image. However, it fails in cases where the scene is cluttered and the logo is small relative to the whole image. It also seems that SIFT features are sensitive to perspective distortion.
Does anyone know of better features or ideas for logo detection/recognition in natural images? For example, one could train a classifier to locate candidate regions first and then apply SIFT matching for recognition. However, training a model needs a lot of data, in particular manually annotated logo regions, and it needs re-training (collecting and annotating new images) whenever I want to support a new logo.
So, any suggestions? A detailed workflow/code/reference would be highly appreciated, thanks!
There are many algorithms, from shape matching to Haar classifiers. Which one is best depends heavily on the kind of logo.
If you want to continue with feature-based registration, I recommend:
For detection of small logos, use tiles: split the whole image into smaller (overlapping) tiles and perform the usual detection on each. This exploits the locality of the searched features (a sketch follows after this list).
Try ASIFT for affine-invariant detection.
Use many template images for reference feature extraction, with different lighting and different backgrounds (black, white, gray).
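A minimal sketch of the tiling idea, assuming OpenCV's SIFT; the tile size, overlap, ratio test, and match-count threshold are all illustrative values to tune:

```python
import cv2

def overlapping_tiles(img, tile=256, overlap=64):
    """Yield (x, y, crop) tiles covering the image with the given overlap."""
    h, w = img.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, img[y:y + tile, x:x + tile]

sift = cv2.SIFT_create()
logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
kp_l, des_l = sift.detectAndCompute(logo, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)

# Run the usual matching per tile, so a small logo fills a larger share
# of each detection window.
for x, y, t in overlapping_tiles(scene):
    kp_t, des_t = sift.detectAndCompute(t, None)
    if des_t is None:
        continue
    pairs = matcher.knnMatch(des_l, des_t, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]  # ratio test
    if len(good) > 10:  # assumed match threshold
        print(f"candidate logo region in tile at ({x}, {y})")
```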

Upsampling an Image

I have a basic question.
What are the advantages of upsampling an image?
Does it help with edge detection?
I have not found much useful information on the internet.
It depends on the image. It can help if you have extremely jagged edges; at worst, it does nothing. So you pay in processing time for a potential improvement.
Usually we need to convert an image to a size different from its original. For this, there are two possible options:
Upsize the image (zoom in)
Downsize it (zoom out)
As an example, you might want to do your calculations (e.g. segmentation) on a downsized image, and later work on the original image data again, so you upsize your output (e.g. the segmentation mask) to match.
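A short sketch of that round trip with OpenCV; the half-resolution factor and the Otsu threshold merely stand in for whatever real computation you run at the smaller size:

```python
import cv2

img = cv2.imread("input.png")
h, w = img.shape[:2]

# Work at half resolution to save time (the factor is an arbitrary choice).
small = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)

# Stand-in for the real work, e.g. a segmentation producing a binary mask.
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Upsize the mask so it aligns with the original image again.
# Nearest-neighbour keeps the mask binary instead of smearing its edges.
mask_full = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
```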
Better results on upsized images when applying edge detection can arise from the following:
Edge detectors (e.g. Canny, not just plain gradient computation) usually have a blurring step attached. If you use some sort of blurring mask in preprocessing, you may be able to obtain behaviour similar to resizing the image simply by modifying that mask (decreasing or increasing the strength of the blur).
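A small sketch contrasting the two knobs, assuming OpenCV; the kernel size, sigma, and Canny thresholds are illustrative, not tuned:

```python
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Option A: upsample first, then detect edges.
up = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
edges_up = cv2.Canny(up, 50, 150)

# Option B: keep the original size but weaken the preprocessing blur,
# which can mimic much of the effect of working at a larger scale.
soft = cv2.GaussianBlur(gray, (3, 3), 0.8)
edges_soft = cv2.Canny(soft, 50, 150)
```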

Is it possible to detect blur, exposure, orientation of an image programmatically?

I need to sort a huge number of photos: remove the blurry ones (due to camera shake) and the over/under-exposed ones, and detect whether each image was shot in landscape or portrait orientation. Can these things be done with an image-processing library, or are they still beyond the reach of an algorithmic solution?
Let's look at your question as three separate questions.
Can I find blurry images?
There are some methods for finding blurry images, such as:
Sharpening the image and comparing it to the original
Using wavelets to detect blurring ( Link1 )
The Hough transform ( Link )
Can I find images that are under or over exposed?
The only approach I can think of is to check whether the overall brightness is either very high or very low; the problem is that you would need to know whether the picture was taken at night or during the day. You could build a histogram of your image and check whether it is heavily skewed to one end or the other, which can be an indication of over- or under-exposure.
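A hedged sketch of that histogram-skew check with OpenCV; the bin split and skew fraction are assumptions to tune per dataset:

```python
import cv2

def exposure_hint(path, skew_frac=0.4):
    """Flag images whose brightness histogram piles up at either end."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist /= hist.sum()
    if hist[:64].sum() > skew_frac:    # mass in the darkest quarter
        return "possibly underexposed"
    if hist[192:].sum() > skew_frac:   # mass in the brightest quarter
        return "possibly overexposed"
    return "exposure looks plausible"
```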
Can I determine the orientation of the image?
There are techniques that have been used for this, such as SVMs, color moments, edge-direction histograms, and Bayesian frameworks using cues.
Can I find images that are under or over exposed?
Histograms are recommended here as well.

SIFT is not finding any features in reference image in OpenCV

I have an image of the Target logo that I am trying to use to find Target logos in other images. I currently run two different detection algorithms to help me detect logos in an image. The first is histogram-based: I search the image for a general area where the colors are very similar. From there I run SIFT to narrow down to the object I am looking for. This works on most logos; however, SIFT isn't picking up any keypoints at all in the Target logo I have.
I was wondering if there was anything I could do to help locate some keypoints in the image. Any advice is greatly appreciated.
Below is the image that isn't being picked up by SIFT:
Thanks in advance.
EDIT
I tried Julien's idea of template matching against scaled and rotated versions of the model, but still got poor results. I have included an image that I am trying to test against.
There is no keypoint in your image...
Why?
Because there are no keypoints in a plane of uniform color (why would there be? since it is uniform, nothing stands out)
Because everything in your image is symmetric, so keypoints wouldn't really help; with certain feature extractors they would all have the same feature vectors
Because there are no corners or high gradients in crossing directions, which is what yields keypoints for many feature detectors
What you could try is a template-matching method. If you are searching for this logo without big changes (rotation, translation, noise, etc.), a simple correlation is the easiest option.
If you want to go further, here is an idea of mine that I have never implemented but which could be fun: build a set of versions of this image that you scale, rotate, warp, desaturate, and add noise to, and then apply template matching with this set of images derived from your original template...
This idea comes from SIFT and the wavelet transform, where we apply functions that we vary in certain ways (rotation, noise, frequency, etc.) in order to make the transform robust against the basic changes that occur in any image you want to "inspect".
That could be an idea for you!
Here is an image summarizing the idea: you rotate and scale your template, which creates new rotated/scaled templates that you can try to match. It increases robustness (even though it can be very slow if you vary many parameters). I'm not saying this is an algorithm, but it could be a fun and very basic idea to try; a rough sketch follows below.
Julien,
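A minimal sketch of this scale-and-rotate template-matching idea with OpenCV; the scale/angle grids and file names are placeholders, and a denser grid costs proportionally more time:

```python
import cv2

scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("target_logo.png", cv2.IMREAD_GRAYSCALE)

best_score, best_hit = -1.0, None
for scale in (0.5, 0.75, 1.0, 1.25):
    scaled = cv2.resize(template, None, fx=scale, fy=scale)
    h, w = scaled.shape
    if h > scene.shape[0] or w > scene.shape[1]:
        continue  # the template must fit inside the scene
    for angle in range(0, 360, 30):
        # Rotation keeps the same canvas, so corners clip a little;
        # harmless for a round logo, worth fixing for rectangular ones.
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        warped = cv2.warpAffine(scaled, M, (w, h))
        res = cv2.matchTemplate(scene, warped, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best_score:
            best_score, best_hit = max_val, (max_loc, scale, angle)

print(f"best correlation {best_score:.2f} at {best_hit}")
```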
There is another reason this logo is problematic for feature matching: most features work quite badly on artificial images that lack any smoothness. All the derivatives are exactly one pixel wide, and feature detectors rely on derivatives, so you have to smooth the image a bit. Of course, for this specific logo that alone will not help, due to the high symmetry. You can instead use the Hough transform to detect circles inside circles; it should give you better results than template matching.
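A brief sketch of the smoothing-plus-Hough suggestion, assuming OpenCV; every HoughCircles parameter below is an assumption to tune. Concentric hits (similar centres, different radii) would be strong evidence of the bullseye:

```python
import cv2
import numpy as np

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
# Smooth first: HoughCircles is noise-sensitive, and the logo's
# one-pixel-hard edges benefit from it, as argued above.
gray = cv2.GaussianBlur(gray, (9, 9), 2)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=40, minRadius=5, maxRadius=200)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r}")
```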
I think you can try using MSER features: https://en.wikipedia.org/wiki/Maximally_stable_extremal_regions
See an example:
https://www.mathworks.com/examples/matlab-computer-vision/mw/vision_product-TextDetectionExample-automatically-detect-and-recognize-text-in-natural-images
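For completeness, a tiny sketch of running OpenCV's MSER detector (default parameters; a logo use case would likely need the delta and area limits tuned):

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(gray)  # stable regions + bounding boxes
print(f"{len(regions)} stable regions found")
```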
