OpenCV: Matching exposure across images - iOS

I was wondering if it's possible to match the exposure across a set of images.
For example, let's say you have 5 images that were taken at different angles. Images 1-3 and 5 are taken with the same exposure, whilst the 4th image has a slightly darker exposure. When I then try to combine these into a cylindrical panorama (using seamFinder with gc_color, SURF detection, MULTI_BAND blending, wave correction, etc.), the result turns out with a big shadow in the middle due to the darkness of image 4.
I've also tried using exposureCompensator without luck.
Since I'm taking the pictures on iOS, maybe I could increase the exposure manually when needed? But this doesn't seem optimal.
Has anyone else dealt with this problem?

This method is probably overkill (and not just a little), but the current state-of-the-art method for ensuring color consistency between different images is presented in this article from HaCohen et al.
Their algorithm can correct a wide range of errors in image sets. I have implemented and tested it on datasets with large errors and it performs very well.
But, once again, I suppose this is way overkill for panorama stitching.

Sunreef has provided a very good paper, but it does seem overkill because of the complexity of a possible implementation.
What you want to do is to equalize the exposure not over the entire images, but over the overlapping zones. If the histograms of the overlapped zones match, it is a good indicator that the images were taken under similar brightness and exposure conditions. Since you are doing more than one stitch, you may require a global equalization first in order to make all the images look similar, and then equalize them using either a weighted equalization on the overlapped region or a quadratic optimiser (which is again overkill if you are not a professional photographer). OpenCV has a simple implementation of a basic exposure compensation algorithm.
The detail::ExposureCompensator class of OpenCV (a sample implementation of such a stitching is here) would be ideal for you to use.
Just create a compensator (try the two different types of compensation: GAIN and GAIN_BLOCKS).
Feed the images into the compensator, based on where their top-left corners lie (in the stitched image), along with a mask (which can be either completely white or white only in the overlapped region).
Apply compensation on each individual image and iteratively check the results.
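For reference, a minimal sketch of those three steps might look like the following (assuming OpenCV 3.x in C++; the image, corner and mask vectors are placeholders for your own warped images):
#include <opencv2/core.hpp>
#include <opencv2/stitching/detail/exposure_compensate.hpp>
#include <vector>

using namespace cv;
using namespace cv::detail;

// Sketch: gain-based exposure compensation on already-warped images.
// 'warped' are the warped images, 'corners' their top-left positions in the
// panorama, 'masks' all-white (or valid-region) masks -- all placeholder data.
void compensateExposure(std::vector<Mat>& warped,
                        const std::vector<Point>& corners,
                        const std::vector<Mat>& masks)
{
    // The OpenCV 3.x feed() overload expects UMat
    std::vector<UMat> images_u(warped.size()), masks_u(masks.size());
    for (size_t i = 0; i < warped.size(); ++i)
    {
        warped[i].copyTo(images_u[i]);
        masks[i].copyTo(masks_u[i]);
    }

    // Step 1: create the compensator (try GAIN first, then GAIN_BLOCKS)
    Ptr<ExposureCompensator> compensator =
        ExposureCompensator::createDefault(ExposureCompensator::GAIN);

    // Step 2: feed the images, their top-left corners and the masks
    compensator->feed(corners, images_u, masks_u);

    // Step 3: apply the estimated gains to each image in place
    for (size_t i = 0; i < warped.size(); ++i)
        compensator->apply((int)i, corners[i], warped[i], masks[i]);
}
GAIN estimates a single multiplicative gain per image, while GAIN_BLOCKS estimates per-block gains (which can also hide vignetting), so it is worth comparing both on your data.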
I don't know any way to do this in iOS, just OpenCV.

Related

Ideas to process challenging image

I'm working with an infrared image that is the output of a 3D sensor. This sensor projects an infrared pattern in order to build a depth map, and, because of this, the IR image has a lot of white spots that reduce its quality. So, I want to process this image to make it smoother, in order to make it possible to detect objects lying on the surface.
The original image looks like this:
My objective is to have something like this (which I obtained by blocking the IR projector with my hand):
An "open" morphological operation does remove some noise, but I think there should first be some noise-removal operation that addresses the white dots.
Any ideas?
I should mention that the algorithm to reduce the noise has to run in real time.
A median filter would be my first attempt, possibly followed by a Gaussian blur. It really depends on what you want to do with it afterwards.
For example, here's your original image after a 5x5 median filter and 5x5 Gaussian blur:
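For completeness, that filtering chain is only two calls in OpenCV (a C++ sketch, assuming OpenCV 3+; the file names are placeholders):
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat ir = cv::imread("ir_frame.png", cv::IMREAD_GRAYSCALE); // placeholder file name

    cv::Mat smoothed;
    cv::medianBlur(ir, smoothed, 5);                          // 5x5 median removes small speckles
    cv::GaussianBlur(smoothed, smoothed, cv::Size(5, 5), 0);  // 5x5 Gaussian softens what remains

    cv::imwrite("ir_smoothed.png", smoothed);
    return 0;
}
Both calls are cheap, so this should not be a problem for the real-time requirement.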
The main difficulty in your images is the large radius of the white dots.
Median and morphological filters should be of little help here.
Usually I'm not a big fan of these algorithms, but you seem to have a perfect use case for a decomposition of your images on a functional space with a sketch and an oscillatory component.
Basically, these algorithms aim at solving for a cartoon-like image X that approximates the observed image Y, and that differs from Y only through the removal of some oscillatory texture.
You can find a list of related papers and algorithms here.
(Disclaimer: I'm not Jérôme Gilles, but I know him, and I know that most of his algorithms were implemented in plain C, so I think most of them are practical to implement with OpenCV.)
What you can try otherwise, if you want to try simpler implementations first:
taking the difference between the input image and a blurred version to see if it emphasizes the dots, in which case you have an easy way to find and mark them. The output of this part may be enough, but you may also want to fill in the former locations of the dots using inpainting (see the sketch after this list),
or applying anisotropic diffusion (like the Rudin-Osher-Fatemi equation) to see if the dots disappear. Despite its apparent complexity, this diffusion can be implemented easily and efficiently in OpenCV by applying the algorithms in this paper. TV diffusion can also be used for the inpainting step of the previous item.
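A rough C++/OpenCV sketch of the first, simpler idea (difference against a blurred copy, then inpainting the detected dots); the kernel size and the threshold value are guesses to tune on the real frames:
#include <opencv2/imgproc.hpp>
#include <opencv2/photo.hpp>      // cv::inpaint
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat ir = cv::imread("ir_frame.png", cv::IMREAD_GRAYSCALE); // placeholder file name

    cv::Mat blurred, diff, dotMask, cleaned;
    cv::GaussianBlur(ir, blurred, cv::Size(15, 15), 0);  // rough background estimate
    cv::subtract(ir, blurred, diff);                     // bright dots stand out in the residual
    cv::threshold(diff, dotMask, 20, 255, cv::THRESH_BINARY);
    cv::dilate(dotMask, dotMask, cv::Mat());             // grow the mask to cover the dot borders
    cv::inpaint(ir, dotMask, cleaned, 3, cv::INPAINT_TELEA);

    cv::imwrite("ir_inpainted.png", cleaned);
    return 0;
}
Note that cv::inpaint is relatively expensive; if the real-time constraint is tight, the dot mask alone (or a cheap local fill) may already be enough.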
My main point on the noise removal was to have a cleaner image so it would be easier to detect objects. However, as I tried to find a solution to the problem, I realized that it was unrealistic to remove all noise from the image using on-the-fly noise removal algorithms, since most of the image is actually noise. So I had to find the objects despite those conditions. Here is my approach:
1 - Initial image
2 - Background subtraction followed by opening operation to smooth noise
3 - Binary threshold
4 - Morphological close operation to make sure the object has no edge discontinuities (necessary for thin objects)
5 - Fill holes + opening morphological operations to remove small noise blobs
6 - Detection
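In case it helps anyone, here is a rough C++/OpenCV sketch of steps 2-6; the threshold, kernel size and area limit are placeholders picked for illustration, and "ir_background.png" stands for a reference frame of the empty surface:
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat frame      = cv::imread("ir_frame.png",      cv::IMREAD_GRAYSCALE); // 1 - initial image
    cv::Mat background = cv::imread("ir_background.png", cv::IMREAD_GRAYSCALE); // empty-surface reference

    cv::Mat diff, bin;
    cv::Mat k = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));

    cv::absdiff(frame, background, diff);                  // 2 - background subtraction
    cv::morphologyEx(diff, diff, cv::MORPH_OPEN, k);       // 2 - opening to smooth noise
    cv::threshold(diff, bin, 30, 255, cv::THRESH_BINARY);  // 3 - binary threshold
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, k);        // 4 - close edge discontinuities

    // 5 - fill holes: flood-fill the background from a corner, invert, OR with the mask
    cv::Mat filled = bin.clone();
    cv::floodFill(filled, cv::Point(0, 0), 255);           // assumes the top-left corner is background
    cv::Mat holes;
    cv::bitwise_not(filled, holes);
    cv::bitwise_or(bin, holes, bin);
    cv::morphologyEx(bin, bin, cv::MORPH_OPEN, k);         // 5 - remove small noise blobs

    // 6 - detection: keep contours above a minimum area
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > 100)
            cv::rectangle(frame, cv::boundingRect(contours[i]), cv::Scalar(255), 2);

    cv::imwrite("detections.png", frame);
    return 0;
}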
Is the projected IR pattern fixed, or does it change over time?
In the second case, you could try to take advantage of the movement of the dots.
For instance, you could acquire a sequence of images and assign each pixel of the result image to the minimum (or a very low percentile) value of the sequence.
Edit: here is a Python script you might want to try

Comparing similar images as photographs -- detecting difference, image diff

The situation is somewhat different from anything I have been able to find asked already, and is as follows: if I took a photo of two similar images, I'd like to be able to highlight the differing features in the two images. For example, the following two halves of a children's spot-the-difference game:
The differences in the images will be bits missing/added and/or colour changes, the type of differences which would be easily detectable from the original image files by doing nothing cleverer than a pixel-by-pixel comparison. However, given that they're subject to the fluctuations of light and the imprecision of photography, I'll need a far more lenient/clever algorithm.
As you can see, the images won't necessarily line up perfectly if overlaid.
This question is tagged language-agnostic as I expect answers that point me towards relevant algorithms, however I'd also be interested in current implementations if they exist, particularly in Java, Ruby, or C.
The following approach should work. All of these functionalities are available in OpenCV. Take a look at this example for computing homographies.
Detect keypoints in the two images using a corner detector.
Extract descriptors (SIFT/SURF) for the keypoints.
Match the keypoints and compute a homography using RANSAC that aligns the second image to the first.
Apply the homography to the second image, so that it is aligned with the first.
Now simply compute the pixel-wise difference between the two images, and the difference image will highlight everything that has changed from the first to the second.
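A minimal C++ sketch of these five steps, assuming OpenCV 3+ and images loaded from disk; ORB is used here only because it ships with the default build, and you can swap in SIFT or SURF where the steps mention them:
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);  // placeholder file names
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // 1-2. Detect keypoints and extract descriptors
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, des1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, des2);

    // 3. Match and estimate a homography with RANSAC (assumes enough good matches)
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(des2, des1, matches);

    std::vector<cv::Point2f> pts2, pts1;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        pts2.push_back(kp2[matches[i].queryIdx].pt);
        pts1.push_back(kp1[matches[i].trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // 4. Warp the second image onto the first
    cv::Mat aligned;
    cv::warpPerspective(img2, aligned, H, img1.size());

    // 5. Pixel-wise difference highlights what changed
    cv::Mat diff;
    cv::absdiff(img1, aligned, diff);
    cv::imwrite("diff.png", diff);
    return 0;
}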
My general approach would be to use optical flow to align both images and perform a pixel-by-pixel comparison once they are aligned.
However, for the specifics, standard optical flows (OpenCV etc.) are likely to fail if the two images differ significantly like in your case. If that indeed fails, there are recent optical flow techniques that are supposed to work even if the images are drastically different. For instance, you might want to look at the paper about SIFT flows by Ce Liu et al. that addresses this problem with sparse correspondences.

find mosquitos' head in the image

I have images of mosquitos similar to these ones and I would like to automatically circle the head of each mosquito in the images. They are obviously in different orientations, and there is a random number of them in different images. Some error is fine. Any ideas for algorithms to do this?
This problem resembles a face detection problem, so you could try a naïve approach first and refine it if necessary.
First you would need to create your training set. For this you would extract small images with examples of what is a mosquito head and what is not.
Then you can use those images to train a classification algorithm. Be careful to have a balanced training set, since if your data is skewed towards one class it will hurt the performance of the algorithm. Since images are 2D and algorithms usually just take 1D arrays as input, you will need to arrange your images in that format as well (for instance: http://en.wikipedia.org/wiki/Row-major_order).
I normally use support vector machines, but other algorithms such as logistic regression could do the trick too. If you decide to use support vector machines I strongly recommend you check libsvm (http://www.csie.ntu.edu.tw/~cjlin/libsvm/), since it's a very mature library with bindings to several programming languages. They also have a very easy-to-follow guide targeted at beginners (http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf).
If you have enough data, you should not need to worry about orientation. If you don't have enough data, then you could create more training rows with some samples rotated, so you would have a more representative training set.
As for prediction, what you could do is, given an image, cut it using a grid where each cell has the same dimensions as the ones you used in your training set. Then you pass each of these cells to the classifier and mark those squares where the classifier gave you a positive output. If you really need circles, then take the center of the given square, and the radius would be half of the square's side length (sorry for stating the obvious).
After you do this you might have problems with sizes (some mosquitos might appear closer to the camera than others), since we have not trained the algorithm to be tolerant to scale. Moreover, even with all mosquitos at the same scale, we might still miss some of them just because they didn't fit in our grid perfectly. To address this, we will need to repeat this procedure (grid cut and predict), rescaling the given image to different sizes. How many sizes? Well, here you would have to determine that through experimentation.
This approach is sensitive to the size of the "window" that you are using; that is also something I would recommend you experiment with.
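To make the grid-and-scale scan concrete, here is a rough C++ sketch (OpenCV 3.3+): it assumes a classifier has already been trained on flattened grayscale patches and saved, here as a cv::ml::SVM in a hypothetical "mosquito_head_svm.xml" (standing in for libsvm); the window size, scales and positive-label convention are all placeholders to adjust.
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/ml.hpp>

int main()
{
    const int win = 32;                                        // training patch size (placeholder)
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::load("mosquito_head_svm.xml"); // hypothetical model file
    cv::Mat img = cv::imread("mosquitos.jpg", cv::IMREAD_GRAYSCALE);

    const double scales[] = { 1.0, 0.75, 0.5 };                // how many scales: experiment
    for (int s = 0; s < 3; ++s)
    {
        cv::Mat scaled;
        cv::resize(img, scaled, cv::Size(), scales[s], scales[s]);

        for (int y = 0; y + win <= scaled.rows; y += win)
            for (int x = 0; x + win <= scaled.cols; x += win)
            {
                // Flatten the cell row-major, as described above, and classify it
                cv::Mat row;
                scaled(cv::Rect(x, y, win, win)).clone().reshape(1, 1).convertTo(row, CV_32F);
                if (svm->predict(row) > 0)                     // assumes label +1 means "head"
                {
                    // Circle center and radius mapped back to the original image scale
                    cv::Point center(cvRound((x + win / 2) / scales[s]),
                                     cvRound((y + win / 2) / scales[s]));
                    cv::circle(img, center, cvRound(win / (2 * scales[s])), cv::Scalar(255), 2);
                }
            }
    }
    cv::imwrite("detections.jpg", img);
    return 0;
}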
There is some research that may be useful:
A Multistep Approach for Shape Similarity Search in Image Databases
Representation and Detection of Shapes in Images
From the pictures you provided this seems to be an extremely hard image recognition problem, and I doubt you will get anywhere near acceptable recognition rates.
I would recommend a simpler approach:
First, if you have any control over the images, separate the mosquitoes before taking the picture, and use a white, unmarked background, perhaps even something illuminated from below. This will make separating the mosquitoes much easier.
Then threshold the image. For example, here I did a quick try: taking the red channel, then subtracting the blue channel multiplied by 5, then applying a threshold of 80:
Use morphological dilation and erosion to get rid of the small leg structures.
Identify blobs of the right size to be mosquitoes by connected component labeling. If a blob is large enough to be two mosquitoes, cut it out, and apply some more dilation/erosion to it.
Once you have a single blob like this
you can find the direction of the body using Principal Component Analysis. The head should be the part of the body where the cross-section is the thickest.
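A hedged C++/OpenCV sketch of that pipeline (the red minus 5x blue trick and the threshold of 80 come from the answer; the kernel size, area limits and file name are placeholders):
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("mosquitos.jpg", cv::IMREAD_COLOR); // placeholder file name

    std::vector<cv::Mat> bgr;
    cv::split(img, bgr);                                         // bgr[0] = blue, bgr[2] = red

    cv::Mat score, mask;
    cv::addWeighted(bgr[2], 1.0, bgr[0], -5.0, 0.0, score);      // red - 5*blue, saturated to [0,255]
    cv::threshold(score, mask, 80, 255, cv::THRESH_BINARY);

    // Erosion followed by dilation (opening) removes the thin leg structures
    cv::Mat k = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, k);

    // Connected component labeling; keep blobs in a plausible mosquito-size range
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
    for (int i = 1; i < n; ++i)
    {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area < 200 || area > 5000) continue;                 // size limits are guesses

        // PCA on the blob pixels gives the body axis; walk along it and look for
        // the thickest cross-section, which should be the head
        std::vector<cv::Point> pts;
        cv::findNonZero(labels == i, pts);
        cv::Mat data;
        cv::Mat(pts).reshape(1).convertTo(data, CV_32F);         // N x 2 matrix of (x, y)
        cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
        cv::Point2f axis(pca.eigenvectors.at<float>(0, 0), pca.eigenvectors.at<float>(0, 1));
        // ... scan cross-sections perpendicular to 'axis' to locate and circle the head
    }
    return 0;
}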

What is the correct method to auto-crop objects from light background?

I'm trying to extract objects from scanned images. There could be a few documents on a white background, and I need to crop and rotate them automatically. This seems like a rather simple task, but I've got stuck at some point and get bad results all the time.
I've tried to:
Binarise the image and get connected components by performing morphological operations.
Perform watershed segmentation by using dilated and eroded binary images as mask components.
Apply Canny detector and fill the contours.
None of this gets me good results. If the object doesn't have contrasting edges (i.e. a piece of paper on a white background), it splits into a lot of separate components. If I connect these components by applying excessive dilation, background noise also expands and everything becomes a mess.
For example, I have an image:
After applying Canny detector and filling the contours I get something like this:
As you can see, the components are not connected. They are even too far from each other to be connected by a reasonable amount of dilation. And when I apply watershed to this mask combined with some background points, it yields very bad results.
Some images are noisy:
In this particular case I was able to obtain the contour of the whole passport with the Canny detector because of its contrasting edges. But the threshold method doesn't work here.
If the images are always on a very light background, then you can binarize with a threshold close to the maximum possible value. After that it is a matter of correcting the binary image to get the objects, but this step will vary depending on what your other images look like.
For instance, the following image at left is what we get with a threshold at 99% of the maximum value after Gaussian filtering on the input. After removing components connected to the border and other small components, and also combining with some basic morphological tools, we get the image at right.
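Something along these lines, as a rough C++ sketch (the 99% figure is from above; the blur size, closing kernel and minimum area are guesses, and it assumes the documents come out darker than the background):
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat gray = cv::imread("scan.png", cv::IMREAD_GRAYSCALE); // placeholder file name

    cv::Mat blurred, bin;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);

    double maxVal;
    cv::minMaxLoc(blurred, 0, &maxVal);
    cv::threshold(blurred, bin, 0.99 * maxVal, 255, cv::THRESH_BINARY_INV); // objects darker than background

    // Remove components touching the border by flood-filling from every border pixel
    for (int x = 0; x < bin.cols; ++x)
    {
        if (bin.at<uchar>(0, x))            cv::floodFill(bin, cv::Point(x, 0), 0);
        if (bin.at<uchar>(bin.rows - 1, x)) cv::floodFill(bin, cv::Point(x, bin.rows - 1), 0);
    }
    for (int y = 0; y < bin.rows; ++y)
    {
        if (bin.at<uchar>(y, 0))            cv::floodFill(bin, cv::Point(0, y), 0);
        if (bin.at<uchar>(y, bin.cols - 1)) cv::floodFill(bin, cv::Point(bin.cols - 1, y), 0);
    }

    // Basic morphology to consolidate the object masks, then drop small components
    cv::Mat k = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 15));
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, k);

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);
    cv::Mat clean = cv::Mat::zeros(bin.size(), CV_8U);
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 2000)
            clean.setTo(cv::Scalar(255), labels == i);

    cv::imwrite("objects_mask.png", clean);
    return 0;
}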
This may seem a bit wishy-washy but bear with me:
This looks like quite a challenging case for image processing recipes involving only edge detection, morphological operations and segmentation.
What you are not exploiting here is that you (I believe) know what your document should look like. You are currently looking at completely general solutions which do not take into account this prior knowledge. If you can get some training data then you can go all the way from simple template/patch-based matching (SSD, Normalized Cross-Correlation) to more sophisticated object detection techniques to find the position and rotation of your documents.
My guess is that if your objects are always more or less the same and at the same scale (e.g. passports scanned at a fixed resolution/similar machines) then you can get away with a fairly crude approach. There won't be any one correct method. It's also likely that the technique you end up using will not work until you have done a significant amount of parameter tweaking, so don't give up on anything too quickly.

Balancing contrast and brightness between stitched images

I'm working on an image stitching project, and I understand there are different approaches to dealing with the contrast and brightness of an image. I could of course deal with this issue before I even stitched the images, but the result is not as consistent as I would hope. So my question is whether it's possible to "balance" or rather "equalize" the contrast and brightness in color pictures after the stitching has taken place.
You want to determine the histogram equalization function not from the entire images, but from the zone where they will touch or overlap. You obviously want to have identical histograms in the overlap area, so this is where you calculate the functions. You then apply the equalization functions that accomplish this to the entire images. If you have more than two stitches, you still want to have global equalization beforehand, and then use a weighted application of the overlap-equalizing functions that decreases their impact as you move away from the stitched edge.
Apologies if this is all obvious to you already, but your general question leads me to a general answer.
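As an illustration, here is a hedged C++ sketch of the simplest two-image, grayscale version of this idea: estimate the mapping on the overlap (histogram specification through the two CDFs) and then apply it to the whole second image. The rectangles describing the overlap are placeholders for whatever your stitcher knows about the geometry; for colour images you could apply this per channel or on a luminance channel.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Cumulative distribution of an 8-bit single-channel region
static std::vector<float> computeCdf(const cv::Mat& region)
{
    cv::Mat hist;
    int channels[] = { 0 };
    int histSize = 256;
    float range[] = { 0.f, 256.f };
    const float* ranges[] = { range };
    cv::calcHist(&region, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    std::vector<float> c(256);
    float total = (float)cv::sum(hist)[0], acc = 0.f;
    for (int i = 0; i < 256; ++i) { acc += hist.at<float>(i); c[i] = acc / total; }
    return c;
}

// Remap 'img' so the histogram of img(overlapB) approximates that of ref(overlapA)
void matchOverlapHistogram(const cv::Mat& ref, cv::Mat& img,
                           const cv::Rect& overlapA, const cv::Rect& overlapB)
{
    std::vector<float> cRef = computeCdf(ref(overlapA));
    std::vector<float> cImg = computeCdf(img(overlapB));

    // Classic histogram specification: map each level to the smallest
    // reference level whose CDF is at least as large
    cv::Mat lut(1, 256, CV_8U);
    for (int v = 0, j = 0; v < 256; ++v)
    {
        while (j < 255 && cRef[j] < cImg[v]) ++j;
        lut.at<uchar>(0, v) = (uchar)j;
    }

    cv::Mat out;
    cv::LUT(img, lut, out);   // apply to the entire image, not just the overlap
    out.copyTo(img);
}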
You may want to have a look at the Exposure Compensator class provided by OpenCV.
Exposure compensation is done in 3 steps:
Create your exposure compensator
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type); // e.g. ExposureCompensator::GAIN or ExposureCompensator::GAIN_BLOCKS
You input all of your images along with the top left corners of each of them. You can leave the masks completely white by default unless you want to specify certain parts of the image to work on.
compensator->feed(corners, images, masks);
Now that it has all the information about how the images overlap, you can compensate each image individually:
compensator->apply(image_index, corners[image_index], image, mask);
The compensated image will be stored in image.
