Please consider the sample image shown below.
The cave paintings are only faintly visible here. Can you please suggest a probable image processing technique I could use to extract the regions of the painting?
I tried Otsu thresholding, as it is a form of automatic thresholding, but it did not work. Something as simple as color segmentation could also be looked into. Apart from that, any pointers, please?
You can use decorrelation stretching for this. Take a look at this; you'll find the pre-processing techniques they use in combination with decorrelation stretching to segment rock paintings. In my blog post you'll find an implementation of decorrelation stretching using OpenCV.
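If it helps, here is a minimal sketch of the core decorrelation-stretch computation in Python with NumPy/OpenCV; it is not the blog's exact code, and the filename is a placeholder:

```python
import cv2
import numpy as np

def decorrelation_stretch(img):
    """Decorrelate the color channels, then stretch each to full range."""
    pixels = img.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Whitening transform: rotate into the eigenbasis, equalize the
    # variances, rotate back (small epsilon guards against division by zero)
    transform = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-10)) @ eigvecs.T
    stretched = (pixels - mean) @ transform.T + mean
    # Linearly rescale each channel to the displayable 0-255 range
    lo, hi = stretched.min(axis=0), stretched.max(axis=0)
    stretched = (stretched - lo) / (hi - lo) * 255.0
    return stretched.reshape(img.shape).astype(np.uint8)

result = decorrelation_stretch(cv2.imread("cave_painting.jpg"))
cv2.imwrite("stretched.jpg", result)
```

The exaggerated color differences usually make the painted regions much easier to pick out with a simple color segmentation afterwards.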
I've just finished building a camera using AVCaptureSession for scanning documents on iPhone, and I am looking for a way to determine whether the captured image is of good quality and not blurred.
I have seen many solutions using OpenCV, but I am looking for other options.
Any help would be appreciated.
Thanks
First of all, interesting question; it made me do some research to figure things out myself. In general, "Analysis of focus measure operators for shape-from-focus" is a great research paper covering a number of methods (36, to be precise) for measuring the blurriness of an image, from simple and straightforward ones to more complex ones.
I have myself used a basic Laplacian operation on one channel of the image (essentially the 2nd derivative of the pixels) to measure blurriness, which worked quite well for me. Once you convolve the channel with the Laplacian operator, the variance of this Laplacian image is a good estimate of the blurriness. The assumption here is that if an image contains high variance, then there is a wide spread of responses, both edge-like and non-edge-like, representative of a normal, in-focus image. But if there is very low variance, then there is a tiny spread of responses, indicating there are very few edges in the image. As we know, the more an image is blurred, the fewer edges there are. The trick is to find an apt threshold between high and low variance, which I guess you can ascertain by running it on your dataset.
Courtesy: Blog
PS: Although the blog I reference here mentions "OpenCV", the methods can be implemented however you want once you understand the concept, which is why I started the answer with the research paper.
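To make the idea concrete, here is a minimal sketch of the variance-of-Laplacian measure in Python/OpenCV (the threshold of 100 is just an assumption you would tune on your own dataset):

```python
import cv2

def blurriness(image_path, threshold=100.0):
    """Return the focus measure and whether it falls below the threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Convolve with the Laplacian (2nd derivative) and take the variance:
    # low variance -> few edge responses -> the image is likely blurred
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return score, score < threshold

score, is_blurred = blurriness("scan.jpg")
print(f"focus measure: {score:.1f}, blurred: {is_blurred}")
```

The same few lines translate directly to other languages if you want to avoid OpenCV on iOS, since a 3x3 Laplacian convolution plus a variance is trivial to write by hand.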
I have a problem very similar to, but much simpler than, this one.
To begin with I have a small image:
Then I take a screenshot and I want to detect if my small house is in the screenshot.
The problem is that my house can be different in size and slightly different in color.
So far I've found the OpenCV library, but it seems quite oversized for my needs.
Do you know any simpler library to achieve this task?
Tx
Edit: I've found this about the SURF algorithm
Judging by your question, there will be no shear or skew in your image, as it will be on screen, whereas the problem you referenced is a much more difficult situation. Your image will not experience any distortion, only an increase/decrease in size.
To match regardless of color, I recommend computing the gradient image (using Sobel kernels) for both your template image and your screenshot. Now you're matching based on visible edges, taking color out of the mix.
To match regardless of size, create multiple versions of your template (the more versions you make, the more precise the match, but the longer the processing) and slide each across the image until you find an acceptable match; both steps are combined in the sketch below.
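A rough sketch of that combination in Python/OpenCV, matching Sobel gradient magnitudes at several template scales; the filenames and the 0.5-1.5 scale range are illustrative assumptions:

```python
import cv2
import numpy as np

def gradient_magnitude(gray):
    # Sobel derivatives in x and y, combined into an edge-strength image
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return cv2.magnitude(gx, gy)

screen = gradient_magnitude(cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE))
template = gradient_magnitude(cv2.imread("house.png", cv2.IMREAD_GRAYSCALE))

best = None
for scale in np.linspace(0.5, 1.5, 11):  # try several template sizes
    t = cv2.resize(template, None, fx=scale, fy=scale)
    if t.shape[0] > screen.shape[0] or t.shape[1] > screen.shape[1]:
        continue
    result = cv2.matchTemplate(screen, t, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if best is None or max_val > best[0]:
        best = (max_val, max_loc, scale)

print("best match score %.3f at %s (scale %.2f)" % best)
```

If the best score stays low everywhere, the house probably isn't on screen; a cutoff somewhere around 0.6-0.8 on the normalized score is a reasonable starting guess to tune.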
OpenCV is a beast with a steep learning curve. If my assumptions are correct, then you are right that OpenCV is oversized when simple image processing techniques can be applied :).
I'm trying to figure out how to do this programmatically, but despite all of my Googling I cannot figure out how it is done.
Lens blur is different from Gaussian blur, which looks very computer-generated.
Thanks for the help!
I found an interesting blog post on the subject. I haven't read through the whole thing, but it seems quite descriptive and might be of some help.
You don't state what language you're after, but Java can do a lot of image processing; check out this link:
jhlabs blurring examples
It even includes the lens blur effect you are after.
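If you end up rolling your own instead, the basic difference is the kernel shape: a first approximation of lens blur convolves with a disk-shaped (aperture) kernel rather than a Gaussian. Here is a sketch in Python/OpenCV for illustration; the radius and filenames are assumptions:

```python
import cv2
import numpy as np

def disk_kernel(radius):
    """Flat circular (aperture-shaped) kernel, normalized to sum to 1."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x * x + y * y <= radius * radius).astype(np.float32)
    return kernel / kernel.sum()  # preserve overall brightness

img = cv2.imread("photo.jpg")
blurred = cv2.filter2D(img, -1, disk_kernel(8))
cv2.imwrite("lens_blur.jpg", blurred)
```

Real lens blur also blooms bright highlights into visible bokeh shapes, which the jhlabs filter simulates, but even the plain disk kernel already looks far less computer-generated than a Gaussian.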
I'm thinking of starting a project for school where I'll use genetic algorithms to optimize digital sharpening of images. I've been playing around with unsharp masking (USM) techniques in Photoshop. Basically, I want to create software that optimizes the parameters (i.e. blur radius, type of blur, blending of the image) to produce the "best-fit" set of filters.
I'm quickly planning this project before starting it, and I can't think of a good fitness function for the 'selection' part. How would I determine the 'quality' of a filter set, or measure how sharp the image is?
Also, I will be programming in Python (with the Python Imaging Library), since it's the only language I'm proficient with. Should I learn a low-level language instead?
Any advice/tips on anything is greatly appreciated. Thanks in advance!
tl;dr How do I measure how 'sharp' an image is?
If it's for tuning parameters, you could take a known image and apply a known blurring/low-pass filter, then sharpen it with your GA+USM algorithm. Calculate your fitness function against the original image, e.g. something as simple as the mean absolute error (sketched below). You may need to create different datasets, e.g. landscape images (mostly sharp, in focus, with large depth of field) and portrait images (possibly with large areas deliberately out of focus and "soft"), along with low-noise and noisy images. Sharpening noisy images is actually quite a challenge.
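A sketch of that fitness evaluation using PIL/NumPy, since you mentioned the Python Imaging Library; the blur radius and the USM parameters here are illustrative, and in your project the GA would be supplying them:

```python
import numpy as np
from PIL import Image, ImageFilter

reference = Image.open("sharp_reference.jpg").convert("L")
degraded = reference.filter(ImageFilter.GaussianBlur(radius=2))  # known blur

def fitness(radius, percent, threshold):
    """Lower is better: mean absolute error between restored and original."""
    restored = degraded.filter(
        ImageFilter.UnsharpMask(radius=radius, percent=percent,
                                threshold=threshold))
    err = np.abs(np.asarray(restored, dtype=np.float64)
                 - np.asarray(reference, dtype=np.float64))
    return err.mean()

# One candidate from the GA population, scored against the ground truth
print(fitness(radius=2, percent=150, threshold=3))
```

Averaging the absolute error keeps the fitness scale independent of image size, so scores stay comparable across your different datasets.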
It would definitely be worth taking a look at Bruce Fraser's work on sharpening techniques for Photoshop etc.
It might also be worth checking out Imatest (www.imatest.com) to see if there is anything regarding sharpness/resolution, and you might consider resolution charts as well.
Finally, I seriously doubt that one set of ideal parameters exists for USM; the optimum parameters will be image-dependent and indeed a personal preference (that's why I suggest starting from a known sharp image and blurring it). Understanding the type of image is probably just as important, and is in itself a very interesting and challenging problem, although perhaps basic heuristics like image variance and an edge histogram would reveal suitable clues.
Anyway, just a thought; hopefully some of the above is useful.
I have an image of the Target logo that I am trying to use to find Target logos in other images. I am currently running two different detection algorithms to help me detect any logos in the image. The first detection I use is histogram-based, in which I search the image for a general area on screen where the colors are very similar. From there I run SIFT to further locate the object I am looking for. This works on most logos; however, SIFT isn't picking up any keypoints in this Target logo.
I was wondering if there was anything I could do to help locate some keypoints in the image. Any advice is greatly appreciated.
Below is the image that isn't being picked up by SIFT:
Thanks in advance.
EDIT
I tried using Julien's idea of template matching based on different scales and rotations of the model, but still got poor results. I have included an image that I am trying to test against.
There are no keypoints in your image...
Why?
Because there are no keypoints in a plane of uniform color (why would there be? since it is uniform, nothing stands out as a highlight).
Because everything in your image is symmetric, keypoints wouldn't really help; with certain feature extractors they would all have the same feature vectors.
Because there are no corners or high gradients in crossing directions, which is what produces keypoints for many feature detectors.
What you could try is a template matching method. If you are searching for this logo without big changes (rotation, translation, noise, etc.), a simple correlation is the easiest.
If you want to go further, here is an idea of mine that I have never implemented but which could be fun: build a set of variants of this image that you scale, rotate, warp, desaturate, and add noise to, and then apply template matching with this set of images derived from your original template...
This idea comes from SIFT and the wavelet transform, where a base function is varied in certain ways (rotation, noise, frequency, etc.) to make the transform robust against the basic changes that occur in any image you want to "inspect".
That could be an idea for you!
Here is an image summarizing my idea: you rotate and scale your template, which creates new rotated/scaled templates that you can try to match. This increases robustness (even though it can take very long if you vary a lot of parameters). I'm not saying this is an algorithm, but it could be a fun and very basic idea to try... A quick sketch follows below.
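A minimal sketch of that idea in Python/OpenCV: generate rotated and scaled variants of the template and keep the best normalized-correlation match. The angle step, scale range, and filenames are illustrative, and the loop gets slow quickly as you add parameters:

```python
import cv2
import numpy as np

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
logo = cv2.imread("target_logo.png", cv2.IMREAD_GRAYSCALE)
h, w = logo.shape

best = (-1.0, None, None, None)  # (score, location, angle, scale)
for angle in range(0, 360, 15):
    # Rotate the template about its center
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(logo, M, (w, h))
    for scale in np.linspace(0.5, 1.5, 6):
        t = cv2.resize(rotated, None, fx=scale, fy=scale)
        if t.shape[0] > scene.shape[0] or t.shape[1] > scene.shape[1]:
            continue
        res = cv2.matchTemplate(scene, t, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, angle, scale)

print("score %.3f at %s, angle %d, scale %.2f" % best)
```

Note that rotation leaves black corners in the warped template, which drags the correlation down a bit; masking the template or padding with the background color would be a refinement.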
Julien,
There is another reason this logo is problematic for feature matching. Most features work pretty badly on artificial images that don't have any smoothness: all the derivatives are exactly one pixel wide, and feature detectors rely on derivatives. You have to smooth the image a bit. Of course, for this specific logo that will not help, due to the high symmetry. You could instead use the Hough transform to detect circles inside circles; it would give you better results than template matching.
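For example, here is a sketch of the circle-detection route with OpenCV's Hough transform; the parameter values are assumptions you would tune for your images:

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (9, 9), 2)  # smooth first, as noted above
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=50, minRadius=10, maxRadius=200)

# Concentric circles (similar centers, different radii) suggest the logo
if circles is not None:
    for x, y, r in circles[0]:
        print(f"circle at ({x:.0f}, {y:.0f}), radius {r:.0f}")
```

One detail: minDist sets the minimum distance between detected centers, and concentric circles share a center, so to find them you would either keep minDist very small or run the detector once per radius band.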
I think you can try using MSER features: https://en.wikipedia.org/wiki/Maximally_stable_extremal_regions
See an example:
https://www.mathworks.com/examples/matlab-computer-vision/mw/vision_product-TextDetectionExample-automatically-detect-and-recognize-text-in-natural-images
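The equivalent in Python/OpenCV is only a few lines, in case you're not on MATLAB; parameters are left at their defaults here:

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()
# Detect maximally stable extremal regions and their bounding boxes
regions, bboxes = mser.detectRegions(gray)
print(f"found {len(regions)} stable regions")

for x, y, w, h in bboxes:
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 1)
cv2.imwrite("mser_regions.png", gray)
```

MSER suits this logo because the large, uniform, high-contrast blobs that defeat SIFT are exactly the stable regions MSER looks for.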