How to solve lighting differences when stitching images? - opencv

I am trying to copy a small image onto a bigger image. Both images are from the same scene and they are aligned very well. I am using Laplacian blending, which makes the result look seamless. There is one problem I couldn't solve yet: illumination. Both photos are from the same scene and were taken with a very small time difference, but there are still some color changes because of lighting differences. I tried to solve this with the ExposureCompensator class from the OpenCV stitching module, but I couldn't make it work; it is poorly documented, and when I search for it I only find similar, unanswered questions on Stack Overflow. So it seems I need to develop my own solution for this illumination problem, and I don't know where to start. Please tell me where to start.
[Images: source image, destination image, and the resulting image showing the problem.]

Exclude the region that has changed (the stamp) and do histogram matching to match the histogram of the source image to that of the destination. The histogram matching will make the colors in the two images match.
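OpenCV has no ready-made histogram matching call, so here is a minimal CDF-lookup sketch in C++, assuming 8-bit BGR images. The helper names (matchChannel, matchHistograms) and the mask parameter for excluding the stamp region are my own illustration, not an OpenCV API:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Match one 8-bit channel of `src` to the tonal distribution of `ref`.
// Pixels where `mask` is zero are excluded from both histograms (e.g. the
// stamp region); pass an empty Mat to use every pixel.
static cv::Mat matchChannel(const cv::Mat& src, const cv::Mat& ref,
                            const cv::Mat& mask)
{
    int histSize = 256;
    float range[] = {0, 256};
    const float* ranges[] = {range};
    int channels[] = {0};

    cv::Mat srcHist, refHist;
    cv::calcHist(&src, 1, channels, mask, srcHist, 1, &histSize, ranges);
    cv::calcHist(&ref, 1, channels, mask, refHist, 1, &histSize, ranges);

    // Build normalized cumulative distribution functions.
    std::vector<double> srcCdf(256), refCdf(256);
    double s = 0, r = 0;
    for (int i = 0; i < 256; ++i) {
        s += srcHist.at<float>(i); srcCdf[i] = s;
        r += refHist.at<float>(i); refCdf[i] = r;
    }
    for (int i = 0; i < 256; ++i) { srcCdf[i] /= s; refCdf[i] /= r; }

    // For each source level, find the reference level with the closest CDF.
    cv::Mat lut(1, 256, CV_8U);
    int j = 0;
    for (int i = 0; i < 256; ++i) {
        while (j < 255 && refCdf[j] < srcCdf[i]) ++j;
        lut.at<uchar>(i) = static_cast<uchar>(j);
    }
    cv::Mat out;
    cv::LUT(src, lut, out);
    return out;
}

// Apply the per-channel matching to a BGR image pair.
cv::Mat matchHistograms(const cv::Mat& src, const cv::Mat& ref,
                        const cv::Mat& mask)
{
    std::vector<cv::Mat> srcCh, refCh, outCh;
    cv::split(src, srcCh);
    cv::split(ref, refCh);
    for (size_t c = 0; c < srcCh.size(); ++c)
        outCh.push_back(matchChannel(srcCh[c], refCh[c], mask));
    cv::Mat out;
    cv::merge(outCh, out);
    return out;
}
```

Build the mask once (zero over the stamp, non-zero elsewhere), call matchHistograms(source, destination, mask), and then run your Laplacian blending on the corrected source.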

Related

Opencv Haar-like eye detection

I have run this OpenCV Haar-like eye detection from this link, with C++ in Visual Studio 2010:
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html
My camera isn't running smoothly. When I delete the for-loop from this code and run only the camera, it runs smoothly.
I want to modify this code to detect the eyes and the face.
How can I modify this code so that it still runs smoothly?
Please show an example of how to modify it.
Best thanks, and sorry for the bad language.
Chairat (Thailand)
Generally this is not a trivial question, but the basic idea (which I used for my BSc thesis) is quite simple. It's not the entire solution I used, but it should be enough for now; if not, let me know and I will write more about it. A short code sketch follows after the list below.
For the first frame:
Find the face (I used the haarcascade_frontalface_default.xml cascade, but you may try different ones) and remember its position.
Within the face rectangle, find the eyes (use the Haar cascade for an eye pair, haarcascade_mcs_eyepair_big.xml, not for a single eye - it's a much faster and simpler solution) and remember their position.
For the following frames:
Expand (by about 20-50%) the rectangle in which you most recently found the face.
Find the face within the expanded rectangle.
Find the eyes within the face. If you didn't find the face in the previous step, you may try to search for the eyes in an expanded rectangle around the previous eye position.
A few important things:
While searching, use the CV_HAAR_FIND_BIGGEST_OBJECT flag.
Convert the frame to grayscale before searching - OpenCV uses only grayscale images while searching, so it is faster to convert the whole image once than to convert the whole image for the first search (face) and then convert the rectangle containing the face again for the second search (eyes).
Some people say that equalizing the histogram before searching may improve results. I'm not sure about that, but if you want you may try it - use the equalizeHist function. Note that it works only on grayscale images.
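Here is a minimal sketch of this scheme, assuming the C++ API (where CV_HAAR_FIND_BIGGEST_OBJECT is spelled cv::CASCADE_FIND_BIGGEST_OBJECT) and the stock cascade file names; the ~33% ROI expansion and the loop structure are illustrative choices, not the exact thesis code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Stock OpenCV cascades; adjust the paths for your installation.
    cv::CascadeClassifier face("haarcascade_frontalface_default.xml");
    cv::CascadeClassifier eyes("haarcascade_mcs_eyepair_big.xml");
    cv::VideoCapture cap(0);

    cv::Rect lastFace;                          // empty until first detection
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);  // convert once per frame
        cv::equalizeHist(gray, gray);                   // optional, grayscale only

        // Search the whole frame the first time, otherwise an expanded ROI.
        cv::Rect roi(0, 0, gray.cols, gray.rows);
        if (lastFace.area() > 0) {
            int dx = lastFace.width / 3, dy = lastFace.height / 3;  // ~33% growth
            roi = cv::Rect(lastFace.x - dx, lastFace.y - dy,
                           lastFace.width + 2 * dx, lastFace.height + 2 * dy)
                  & cv::Rect(0, 0, gray.cols, gray.rows);
        }

        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray(roi), faces, 1.1, 3,
                              cv::CASCADE_FIND_BIGGEST_OBJECT);
        if (!faces.empty()) {
            lastFace = faces[0] + roi.tl();     // back to frame coordinates
            std::vector<cv::Rect> pair;
            eyes.detectMultiScale(gray(lastFace), pair, 1.1, 3,
                                  cv::CASCADE_FIND_BIGGEST_OBJECT);
            for (const auto& e : pair)
                cv::rectangle(frame, e + lastFace.tl(), cv::Scalar(0, 255, 0), 2);
            cv::rectangle(frame, lastFace, cv::Scalar(255, 0, 0), 2);
        } else {
            lastFace = cv::Rect();              // lost track: full search next frame
        }

        cv::imshow("eyes", frame);
        if (cv::waitKey(1) == 27) break;        // Esc quits
    }
    return 0;
}
```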

How to write a simple image recognition program

I have a problem very similar to, but much simpler than, this one.
To begin with I have a small image:
Then I take a screenshot and I want to detect if my small house is in the screenshot.
The problem is that my house can be different in size and slightly different in color.
So far I've found the OpenCV library, but it seems quite oversized for my needs.
Do you know any simpler library to achieve this task?
Tx
Edit: I've found this about the SURF algorithm.
Judging by your question, there will be no shear or skew in your image since it will be on screen, whereas the problem you referenced is a much more difficult situation. Your image will not experience any distortion, only an increase/decrease in size.
To match regardless of color, I recommend computing the gradient image (using Sobel kernels) for both your template image and your screenshot. Then you're matching based on visible edges and taking color out of the mix.
To match regardless of size, create multiple versions of your template at different scales (the more versions you make, the more precise the result, but the longer the processing) and slide each one across the image until you find an acceptable match.
OpenCV is a beast with a steep learning curve. If my assumptions are correct, then you are right that OpenCV is oversized when simple image processing techniques can be applied :).
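A sketch of both ideas combined (gradient matching plus a scale sweep), using the OpenCV C++ API; the file names and the 0.5-1.5 scale range are placeholder assumptions to adapt:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

// Gradient magnitude via Sobel, so matching runs on edges, not colors.
static cv::Mat gradientMagnitude(const cv::Mat& img)
{
    cv::Mat gray, gx, gy, mag;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::Sobel(gray, gx, CV_32F, 1, 0);
    cv::Sobel(gray, gy, CV_32F, 0, 1);
    cv::magnitude(gx, gy, mag);
    return mag;
}

int main()
{
    // Placeholder file names for your screenshot and the small house image.
    cv::Mat scene = gradientMagnitude(cv::imread("screenshot.png"));
    cv::Mat tmpl  = gradientMagnitude(cv::imread("house.png"));

    double bestScore = -1;
    cv::Rect bestRect;
    for (double s = 0.5; s <= 1.51; s += 0.1) {   // try several template sizes
        cv::Mat scaled;
        cv::resize(tmpl, scaled, cv::Size(), s, s);
        if (scaled.cols > scene.cols || scaled.rows > scene.rows) continue;

        cv::Mat result;
        cv::matchTemplate(scene, scaled, result, cv::TM_CCOEFF_NORMED);
        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal > bestScore) {
            bestScore = maxVal;
            bestRect = cv::Rect(maxLoc, scaled.size());
        }
    }
    // Treat the house as found only above some tuned threshold, e.g. 0.5.
    std::cout << "best score " << bestScore << " at x=" << bestRect.x
              << " y=" << bestRect.y << " w=" << bestRect.width
              << " h=" << bestRect.height << std::endl;
    return 0;
}
```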

Approach for finding some patterns in an image captured from a camera using opencv [duplicate]

Possible duplicate: matchTemplate opencv not working as shown in opencv document
I have posted some questions earlier as well, but I still cannot find the solution.
My requirement is to create a paper-scanning app: the camera takes a picture, and I have to detect whether certain predefined patterns appear in the captured image.
I tried matchTemplate (OpenCV) but could not get it to work.
Since the image is captured from a camera, the pattern in the captured image can be smaller or bigger than the pattern image. Will matchTemplate work properly in that case? If not, what other approach should I try?
Template matching won't work across different scales (sizes). To handle that you can do a multiscale search: basically, run the template matching on the input image at several scales. Another option is to train an OpenCV Haar cascade to detect the template; it has built-in multiscale detection. A sketch of the multiscale search follows below.
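Here is a minimal sketch of that multiscale search, downscaling the input image with pyrDown and matching at each level; the file names and the power-of-two scale steps are illustrative assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Placeholder file names for the captured photo and the pattern.
    cv::Mat img  = cv::imread("captured.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat tmpl = cv::imread("pattern.png", cv::IMREAD_GRAYSCALE);

    double bestVal = -1, bestScale = 1.0;
    cv::Point bestLoc;
    for (double scale = 1.0;
         img.cols >= tmpl.cols && img.rows >= tmpl.rows;
         scale *= 2.0) {
        cv::Mat result;
        cv::matchTemplate(img, tmpl, result, cv::TM_CCOEFF_NORMED);
        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal > bestVal) {
            bestVal = maxVal;
            // Map the location back to the original image's coordinates.
            bestLoc = cv::Point(int(maxLoc.x * scale), int(maxLoc.y * scale));
            bestScale = scale;
        }
        cv::pyrDown(img, img);               // halve the image for the next pass
    }
    std::cout << "best score " << bestVal << " at (" << bestLoc.x << ", "
              << bestLoc.y << "), image downscaled " << bestScale << "x\n";
    return 0;
}
```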
The wrong size of the template is a standard problem of template matching. Since I don't see any example code, it is not easy to tell where the real problem in your question lies. Did you try different thresholds in the algorithm?
On the theoretical side, there are two big problems for feature extraction: size (distance) and rotation (object orientation). The generalized Hough transform could be a solution.

opencv template matching -> get exact location?

I have opencv installed and working on my iphone (big thanks to this community). I'm doing template matching with it. It does find the object in the captured image. However, the exact location seems to be hard to tell.
Please take a look at the following video (18 seconds):
http://www.youtube.com/watch?v=PQnXNZMqpsU
As you can see in the video, it does find the template in the image. But when I move the camera a bit further away, the found template is positioned somewhere inside that square. That makes it hard to tell the exact location of the found object.
The square that you see is basically the found x,y location of the template plus the width,height of the actual template image.
So basically my question is: is there a way to find the exact location of the found template image? Currently it can be at any location inside that square, with no real way to tell the exact position.
It seems that you're not well-pleased with your template matching algorithm :)
In short, there are some ways to improve it, but I would recommend trying something else. If your images are always as simple as in the video, you can use thresholding, contour finding, blob detection, etc. They are simple and fast.
For a more demanding environment, you may try feature matching. Look for SIFT, SURF, ORB, or other ways to describe your objects with features. Actually, ORB was specifically designed to be fast enough for the limited power of mobile phones.
Try the matching_to_many_images.cpp sample in the OpenCV samples/cpp/ folder.
And check this detailed answer on how to use feature detectors:
Detecting if an object from one image is in another image with OpenCV
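For reference, here is a rough sketch of the feature matching route with ORB, assuming the OpenCV 3+ C++ API and placeholder file names; projecting the template corners through the homography is what gives the exact location instead of a fixed-size square:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Placeholder file names; use your template and captured frame.
    cv::Mat tmpl  = cv::imread("template.png", cv::IMREAD_GRAYSCALE);
    cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    // Detect keypoints and compute binary descriptors in both images.
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpT, kpS;
    cv::Mat desT, desS;
    orb->detectAndCompute(tmpl, cv::noArray(), kpT, desT);
    orb->detectAndCompute(scene, cv::noArray(), kpS, desS);

    // Hamming distance suits ORB's binary descriptors.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true /*crossCheck*/);
    std::vector<cv::DMatch> matches;
    matcher.match(desT, desS, matches);
    if (matches.size() < 4) return 1;   // need >= 4 pairs for a homography

    // Collect matched point pairs and fit a homography with RANSAC.
    std::vector<cv::Point2f> ptsT, ptsS;
    for (const auto& m : matches) {
        ptsT.push_back(kpT[m.queryIdx].pt);
        ptsS.push_back(kpS[m.trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(ptsT, ptsS, cv::RANSAC);

    // Project the template corners into the scene: this outline is the
    // exact location of the found object.
    std::vector<cv::Point2f> corners = {
        {0, 0}, {(float)tmpl.cols, 0},
        {(float)tmpl.cols, (float)tmpl.rows}, {0, (float)tmpl.rows}};
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(corners, projected, H);
    return 0;
}
```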
Template matching (cvMatchTemplate()) is not invariant to scale or rotation. When you move the phone back, the image appears smaller, and the template "match" is just the place with the best match score, even though it is not a true match.
If you want scale and/or rotation invariance you will have to try non-template matching methods such as those using 2D-feature descriptors.
Check out the OpenCV samples for examples of how to do this.

How do I recognize squares in this image?

So I'm using OpenCV to do square recognition on this image. I compiled the squares.c sample and ran it on a photo I took; here are the results:
http://www.learntobe.org/urs/index1.php
The image on the left is the original and on the right is the image that is a result of running the square detection.
The results aren't bad, but I really need this to detect ALL of the squares, and I'm new to OpenCV and image processing. Does anyone know how I can edit the squares.c file to make the detection more inclusive, so that all of the squares are highlighted?
Thanks a lot ahead of time.
All the whitish colors are tough to detect; nothing separates them from the page itself. Try doing some kind of edge detection (check cvCanny or cvSobel).
You should also pre-process the image: increase the contrast, make the colors more saturated, etc.
Also check this article: http://www.aishack.in/2010/01/an-introduction-to-contours/ It explains how the squares.c sample works, so you'll understand a bit about how to improve the detection in your case.
Hope this helps!
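To make that advice concrete, here is a condensed sketch of the squares.c approach with the suggested pre-processing (contrast boost plus Canny), written against the C++ API rather than the old C one; the Canny thresholds and minimum area are guesses to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("page.jpg");      // placeholder file name
    cv::Mat gray, edges;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);              // raise the contrast
    cv::Canny(gray, edges, 30, 90);            // thresholds to tune
    cv::dilate(edges, edges, cv::Mat());       // close small gaps in edges

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        // Keep convex quadrilaterals with a reasonable minimum area.
        if (approx.size() == 4 && cv::isContourConvex(approx) &&
            std::fabs(cv::contourArea(approx)) > 500) {
            cv::drawContours(img, std::vector<std::vector<cv::Point>>{approx},
                             -1, cv::Scalar(0, 0, 255), 2);
        }
    }
    cv::imshow("squares", img);
    cv::waitKey();
    return 0;
}
```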
