I've tried using histogram comparison for image comparison. However, it doesn't seem to give me good results. For your information:
- Application: visual inspection for any defects on a specific object.
- Test image (static): captured through a fixed camera, which may result in different contrast & brightness.
- Condition: check for defects, but not lighting issues.
As far as I know, histogram comparison is rather sensitive to contrast & brightness. I've also looked into feature detection such as SURF, but only in a very shallow way. SURF is rather robust, but it does not give me quantitative data, such as a percentage of similarity between two images. I need a threshold in order to know whether a "mismatch" is a contrast & brightness issue or a real defect.
Any suggestion or example?
Is it possible for me to stick with histogram comparison? Maybe performing histogram equalization would help?
It depends on the type of defects that you want to detect. Here, it seems that your defects can't be described by geometric features, but rather by some light-level (brightness? color?) change.
As you guessed, the first step is to get rid of the natural intensity change.
You can do it by histogram matching of the image under test onto the reference image rather than by histogram equalization. An even more robust algorithm for this task is called Midway equalization.
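For illustration, here is a minimal sketch of plain histogram matching in Python with OpenCV and NumPy, assuming 8-bit grayscale images and hypothetical file names; it builds a lookup table from the two cumulative histograms:

```python
import cv2
import numpy as np

def match_histogram(test_gray, ref_gray):
    """Map the gray levels of `test_gray` so its histogram approximates `ref_gray`'s."""
    # Cumulative distribution functions of both images
    test_hist = np.bincount(test_gray.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(ref_gray.ravel(), minlength=256).astype(np.float64)
    test_cdf = np.cumsum(test_hist) / test_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()

    # For each test gray level, find the reference level with the closest CDF value
    lut = np.searchsorted(ref_cdf, test_cdf).clip(0, 255).astype(np.uint8)
    return cv2.LUT(test_gray, lut)

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
matched = match_histogram(test, ref)
```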
After you've done that, you may need to register (i.e., overlay) your image under test to your reference image. There are many algorithms for that, and in the end it will depend on your images.
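As one example among many, OpenCV's ECC algorithm does intensity-based registration. This sketch assumes the motion between the two images is roughly a pure translation; note that the exact Python signature of findTransformECC varies slightly between OpenCV versions:

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)       # hypothetical file names
test = cv2.imread("test_matched.png", cv2.IMREAD_GRAYSCALE)

warp = np.eye(2, 3, dtype=np.float32)                         # initial guess: identity
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
cc, warp = cv2.findTransformECC(ref, test, warp, cv2.MOTION_TRANSLATION, criteria)

# Warp the test image onto the reference frame
registered = cv2.warpAffine(test, warp, (ref.shape[1], ref.shape[0]),
                            flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```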
Finally, you'll want to detect the changes.
Histogram mismatch can be used as a metric for that, but it seems to me to be a really coarse tool.
If you need finer precision, image difference followed by appropriate filtering could be useful, but it depends a lot on your images and application context.
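As a rough sketch of that last option, assuming OpenCV 4 and the aligned, histogram-matched grayscale images from the previous steps (every threshold here is a guess that needs tuning):

```python
import cv2

# `reference.png` and `registered.png` stand for the aligned, histogram-matched
# grayscale images produced by the previous steps (hypothetical names).
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
registered = cv2.imread("registered.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(ref, registered)
diff = cv2.GaussianBlur(diff, (5, 5), 0)                      # suppress pixel-level noise

# Threshold the residual; the value is image-dependent and needs tuning
_, defect_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Remove small speckles so only blobs large enough to be real defects remain
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
defect_mask = cv2.morphologyEx(defect_mask, cv2.MORPH_OPEN, kernel)

# OpenCV 4 returns (contours, hierarchy)
contours, _ = cv2.findContours(defect_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) > 50]    # area threshold is a guess
```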
Related
I was wondering if it's possible to match the exposure across a set of images.
For example, let's say you have 5 images that were taken at different angles. Images 1-3 and 5 are taken with the same exposure, whilst the 4th image has a slightly darker exposure. When I then try to combine these into a cylindrical panorama (using seamFinder with gc_color, SURF detection, MULTI_BAND blending, wave correction, etc.), the result turns out with a big shadow in the middle due to the darkness from image 4.
I've also tried using exposureCompensator without luck.
Since I'm taking the pictures on iOS, maybe I could increase the exposure manually when needed? But this doesn't seem optimal.
Has anyone else dealt with this problem?
This method is probably overkill (and not just a little), but the current state-of-the-art method for ensuring color consistency between different images is presented in this article by HaCohen et al.
Their algorithm can correct a wide range of errors in image sets. I have implemented and tested it on datasets with large errors and it performs very well.
But, once again, I suppose this is way overkill for panorama stitching.
Sunreef has provided a very good paper, but it does seem overkill because of the complexity of a possible implementation.
What you want to do is to equalize the exposure not on the entire images, but on the overlapping zones. If the histograms of the overlapped zones match, it is a good indicator that the images have similar brightness and exposure conditions. Since you are doing more than one stitch, you may require a global equalization in order to make all the images look similar, and then fine-tune the equalization using either a weighted equalization on the overlapped region or a quadratic optimiser (which is again overkill if you are not a professional photographer). OpenCV has a simple implementation of an exposure compensation algorithm.
The detail::ExposureCompensator class of OpenCV (a sample implementation of such a stitching is here) would be ideal for you to use.
Just create a compensator (try the 2 different types of compensation: GAIN and GAIN_BLOCKS)
Feed the images into the compensator, based on where their top-left corners lie (in the stitched image), along with a mask (which can be either completely white or white only in the overlapped region).
Apply compensation on each individual image and iteratively check the results.
I don't know any way to do this in iOS, just OpenCV.
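I have not run this exact snippet on a real panorama, but a minimal Python sketch along the lines of OpenCV's stitching_detailed.py sample might look like this (binding names assume OpenCV 4.x; the tiny synthetic tiles are just placeholders for warped panorama images):

```python
import cv2
import numpy as np

# Toy setup: two overlapping tiles cut from one synthetic gradient image, with the
# second tile artificially darkened to mimic a different exposure.
base = np.tile(np.linspace(64, 192, 300, dtype=np.uint8), (200, 1))
base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
img1 = base[:, :200].copy()                                           # left tile
img2 = (base[:, 100:].astype(np.float32) * 0.7).astype(np.uint8)      # right tile, darker

images = [img1, img2]
corners = [(0, 0), (100, 0)]          # top-left corners in panorama coordinates
masks = [np.full(img.shape[:2], 255, np.uint8) for img in images]     # all-white masks

# GAIN fits one gain per image; GAIN_BLOCKS estimates gains on a block grid
compensator = cv2.detail.ExposureCompensator_createDefault(
    cv2.detail.ExposureCompensator_GAIN)

compensator.feed(corners=corners, images=images, masks=masks)

for idx, (img, corner, mask) in enumerate(zip(images, corners, masks)):
    compensator.apply(idx, corner, img, mask)    # adjusts img in place
```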
How can I use one of the feature types to detect an object in an illumination/brightness-invariant way?
I'm interested in using features that are resistant to:
different lighting
half of the object in the shadow
glare/reflections
Does it make sense to use the hue (1st component of the HSV color space), or the average value between the hue and brightness?
And which feature is best for brightness-invariant detection: SIFT/SURF, ORB, BRISK/FREAK, or KAZE/AKAZE?
Sorry for the late response but this answer might be beneficial to someone else.
This is quite a problematic area that I am also facing. Unfortunately, I don't know of any feature detector-descriptor combination that is illumination invariant. Some suggestions that you might want to consider are the following:
You can pre-process the images using the Wallis filter so that both images have a balanced brightness level throughout (a rough sketch is given after the next point).
You can also normalize the descriptor values of the detected features, since the descriptors are built from image intensity values; if the images have different illumination conditions, corresponding features will otherwise end up with different descriptors (see the second sketch below).
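For illustration, here is a rough Wallis-style local normalization in Python. This is my own simplified rendering of the filter, not a reference implementation, and the window size and target statistics are guesses:

```python
import cv2
import numpy as np

def wallis_filter(gray, win=31, target_mean=127.0, target_std=50.0,
                  brightness=0.5, contrast=0.9):
    """Simplified Wallis filter: pull local mean/std towards target values."""
    img = gray.astype(np.float32)
    local_mean = cv2.boxFilter(img, -1, (win, win))
    local_sq = cv2.boxFilter(img * img, -1, (win, win))
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-6))

    gain = contrast * target_std / (contrast * local_std + (1 - contrast) * target_std)
    out = (img - local_mean) * gain + brightness * target_mean + (1 - brightness) * local_mean
    return np.clip(out, 0, 255).astype(np.uint8)

balanced = wallis_filter(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))  # hypothetical file
```

And a small sketch of the second point, normalizing float descriptors (KAZE here) to unit length before matching; binary descriptors such as ORB or BRISK can't be normalized this way:

```python
import cv2
import numpy as np

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
kaze = cv2.KAZE_create()
keypoints, descriptors = kaze.detectAndCompute(img, None)

# L2-normalize every descriptor row so a global intensity scaling matters less
norms = np.linalg.norm(descriptors, axis=1, keepdims=True)
descriptors = descriptors / np.maximum(norms, 1e-12)
```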
Is there a robust way to detect the water line, like the edge of a river in this image, in OpenCV?
(source: pequannockriver.org)
This task is challenging because a combination of techniques must be used. Furthermore, for each technique, the numerical parameters may only work correctly for a very narrow range. This means either a human expert must tune them by trial-and-error for each image, or that the technique must be executed many times with many different parameters, in order for the correct result to be selected.
The following outline is highly-specific to this sample image. It might not work with any other images.
One bit of advice: As usual, any multi-step image analysis should always begin with the most reliable step, and then proceed down to the less reliable steps. Whenever possible, the less reliable step should make use of the result of more-reliable steps to augment its own accuracy.
Detection of sky
Convert image to HSV colorspace, and find the cyan located at the upper-half of the image.
Keep this HSV image, because it could be handy for the next few steps as well.
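A rough sketch of this step in Python (the HSV bounds are guesses that would have to be tuned for this particular image):

```python
import cv2
import numpy as np

bgr = cv2.imread("river.jpg")                     # hypothetical file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# H in roughly [80, 130] covers cyan/blue in OpenCV's 0-179 hue range;
# low saturation and high value pick the pale sky rather than the water
lower = np.array([80, 0, 150])
upper = np.array([130, 120, 255])
sky = cv2.inRange(hsv, lower, upper)

sky[hsv.shape[0] // 2:, :] = 0                    # keep only the upper half of the image
```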
Detection of shrubs
Run Canny edge detection on the grayscale version of image, with suitably chosen sigma and thresholds. This will pick up the branches on the shrubs, which would look like a bunch of noise. Meanwhile, the water surface would be relatively smooth.
Grayscale is used in this technique in order to reduce the influence of reflections on the water surface (the green and yellow reflections from the shrubs). There might be other colorspaces (or preprocessing techniques) more capable of removing that reflection.
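As a sketch (again with guessed parameters): run Canny on the blurred grayscale image and dilate the edges so that the dense branch "noise" merges into solid shrub regions, while the smooth water produces few edges:

```python
import cv2

gray = cv2.imread("river.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical file name
gray = cv2.GaussianBlur(gray, (5, 5), 1.5)                # sigma ~1.5 as a starting point
edges = cv2.Canny(gray, 50, 150)                          # thresholds need tuning

# Dense edge responses -> shrub regions; smooth water stays mostly empty
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
shrubs = cv2.morphologyEx(cv2.dilate(edges, kernel), cv2.MORPH_CLOSE, kernel)
```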
Detection of water ripples from a lower elevation angle viewpoint
Firstly, mark off any image parts that are already classified as shrubs or sky. Since shrub detection would be more reliable than water detection, shrub detection's result should be used to inform the less-reliable water detection.
Observation
Because of the low elevation angle viewpoint, the water ripples appear horizontally elongated. In fact, every image feature appears stretched horizontally. This is called Anisotropy. We could make use of this tendency to detect them.
Note: I am not experienced in anisotropy detection. Perhaps you can get better ideas from other people.
Idea 1:
Use maximally-stable extremal regions (MSER) as a blob detector.
The Wikipedia introduction appears intimidating, but it is really related to connected-component algorithms. A naive implementation can be done similarly to Dijkstra's algorithm.
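A minimal sketch of that idea with OpenCV's MSER, keeping only regions that are noticeably wider than they are tall (the elongation factor is a guess):

```python
import cv2

gray = cv2.imread("river.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical file name
mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(gray)

# Horizontally elongated blobs are candidate water ripples
ripple_boxes = [(x, y, w, h) for (x, y, w, h) in boxes if w > 3 * h]
```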
Idea 2:
Since the image features are horizontally stretched, a simpler approach is to just sum up the absolute values of the horizontal gradients and compare that to the sum of the absolute values of the vertical gradients.
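A sketch of that comparison: note that horizontally stretched features vary mostly in the vertical direction, so over the water the summed |dI/dy| should dominate the summed |dI/dx| (the window size and ratio threshold are guesses):

```python
import cv2
import numpy as np

gray = cv2.imread("river.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
dx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))   # gradient along x
dy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))   # gradient along y

win = (31, 31)
dx_sum = cv2.boxFilter(dx, -1, win)
dy_sum = cv2.boxFilter(dy, -1, win)

# Horizontal stripes (ripples) change mostly vertically: dy dominates dx there
anisotropy = dy_sum / (dx_sum + 1e-6)
water_candidates = (anisotropy > 1.5).astype(np.uint8) * 255   # ratio threshold is a guess
```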
Does OpenCV have an implementation of shape context matching? I've only found the matchShapes() function, which does not work for me. I want to get a set of corresponding features from shape context matching. Is it a good idea to use it to compare and find the rotation and displacement of a detected contour in two different images?
Some example code would also be very helpful for me.
I want to detect, for example, a pink square, and in the second case a pen. Other examples could be squares with some holes, stars, etc.
The basic steps of image processing are
Image Acquisition > Preprocessing > Segmentation > Representation > Recognition
And what you are asking for seems to lie within the representation part of this general pipeline. You want some features that describe the objects you are interested in, right? Before sharing what I've done for simple hand-gesture recognition, I would like you to consider what you actually need. A lot of the time, simplicity will make it a lot easier. Consider a fixed color on your objects, and consider background subtraction (these two mainly tie into preprocessing and segmentation). As for representation, what features are you interested in, and can you do without some of them?
My project group and I have taken a simple approach to preprocessing and segmentation, choosing a green glove for our hand. Here's an example of the glove, camera and detection on the screen:
We have used a threshold on convexity defects, specified to find the defects between fingers, and we have calculated the side ratio of a rotated rectangular bounding box to see how square our blob is. With only four different hand gestures chosen, we are able to distinguish them using just these two features.
The functions and measurements we have used are all documented in OpenCV's structural analysis documentation, and accessing values in vectors (which we've used a lot) is covered in the C++ documentation for vectors.
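For illustration, a rough Python sketch of those two measurements (assuming OpenCV 4 and that glove_mask.png is the already-segmented glove blob; the depth and valley thresholds are guesses):

```python
import cv2
import numpy as np

mask = cv2.imread("glove_mask.png", cv2.IMREAD_GRAYSCALE)           # hypothetical file
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)

# Feature 1: number of convexity defects deep enough to be valleys between fingers
hull_idx = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull_idx)
finger_valleys = 0
if defects is not None:
    for start, end, far, depth in defects[:, 0]:
        if depth / 256.0 > 20:              # depth is in 1/256 pixel units; 20 px is a guess
            finger_valleys += 1

# Feature 2: how "square" the blob is, from the rotated bounding box side ratio
(_, _), (w, h), _ = cv2.minAreaRect(hand)
squareness = min(w, h) / max(w, h)
```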
I hope you can use the train of thought put into this; if you want more specific info, I'll be happy to comment. Enjoy.
For my final year project I'll be taking photographs with a mobile phone and then computing the image processing steps. I will be taking the images under various illumination conditions (natural light, poor lighting conditions and so on). Does anyone know of any algorithm that I can use for this?
Thanks a lot
Good white balancing is still an active field of research, I guess. From your question, it is hard to tell how "advanced" the sought solution is supposed to be and what exactly you need.
In another context, I recently encountered this paper. They have a quite complicated approach to white balancing and produce good results:
Hsu, Mertens, Paris, Avidan, Durand. "Light mixture estimation for spatially varying white balance". In: ACM Transactions on Graphics, 2008
Check the related work section for more hints, as usual.
If you are less interested in white balancing but rather need to process the images further (it sounds a bit like that in your comment), you should possibly aim for techniques that are invariant to illumination, or at least robust against changes in illumination. For example, transforming your image into a colorspace that separates out the brightness/luminance (e.g. YUV, HSV) might help, depending on your actual problem. From my experience and intuition, I would suggest that in most cases it is better to make your "recognition" algorithm robust against changes in illumination, rather than correcting the illumination first.
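A minimal sketch of the colorspace idea (here HSV; the same applies to YUV with the Y channel, and the file name is hypothetical):

```python
import cv2

bgr = cv2.imread("photo.jpg")                             # hypothetical file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
hue, saturation, value = cv2.split(hsv)

# `value` carries most of the illumination; run recognition on hue/saturation instead,
# or equalize `value` if a luminance channel is still needed downstream
value_eq = cv2.equalizeHist(value)
normalized = cv2.cvtColor(cv2.merge([hue, saturation, value_eq]), cv2.COLOR_HSV2BGR)
```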
One very simple method is to take the mean pixel value of an image, adjust the exposure, take another picture and compute the mean again, continuing until the mean reaches some arbitrary value.
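As a sketch of that loop (capture_image and the exposure scale are purely hypothetical stand-ins for whatever camera API you have; on iOS this would go through AVFoundation):

```python
import cv2

def capture_image(exposure):
    """Hypothetical stand-in for a real camera capture call at the given exposure."""
    raise NotImplementedError("replace with your camera API")

TARGET_MEAN = 120.0              # the "arbitrary value" the mean should reach
exposure = 0.5                   # hypothetical normalized exposure setting

for _ in range(10):              # cap the number of attempts
    img = capture_image(exposure)
    mean = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).mean()
    if abs(mean - TARGET_MEAN) < 5:
        break
    exposure *= TARGET_MEAN / mean     # raise exposure if too dark, lower it if too bright
```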
Try the simplest method: histogram equalization first.