stitching microscope images of a microchip - opencv

So, I'm trying to stitch images of a microchip taken by a microscope, but it's very hard to get all the features aligned. I already have a 50% overlap between two adjacent images, but even with that, it's not always a good fit.
I'm using SURF with OpenCV to extract the keypoints and find the homography matrix. But the result is still far from acceptable.
My objective is to stitch 2x2 images perfectly, so that I can repeat the process recursively until I have the final image.
Do you have any suggestions? A good algorithm to approach this problem, or maybe a way to transform the images so I can extract better keypoints from them? Should I play with the detector threshold (a smaller one to get more keypoints, or a larger one)?
Right now, my approach is to first stitch two 2x1 strips and then stitch those two together. It's close to what we want, but still not acceptable. The problem might also be that the image used as the "source" (while the second image is transformed with the matrix to overlap it) is itself a bit misaligned, or has a small rotation that affects the whole result.
Any help or suggestion is appreciated, especially any solution that would let me keep using OpenCV and SURF (I'm not totally against other libraries; it's just that most of the project has been developed with them).
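For reference, here is roughly what my current pipeline looks like (a minimal Python sketch; the filenames, Hessian threshold, and ratio-test value are placeholders, and SURF requires the nonfree xfeatures2d module from an opencv-contrib build):

    import cv2
    import numpy as np

    # SURF lives in the nonfree xfeatures2d module (opencv-contrib build).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    img1 = cv2.imread("tile_left.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("tile_right.png", cv2.IMREAD_GRAYSCALE)

    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Match descriptors and keep only matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    # Estimate the homography mapping image 2 into image 1's frame (RANSAC).
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Naive composite: warp image 2 onto a canvas aligned with image 1.
    warped = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
    warped[:, :img1.shape[1]] = img1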
Thanks!

I found TurboReg to be a helpful comparison tool while developing image registration. It is a free ImageJ plugin and has many different fitting types.
Have you taken a look at the new OpenCV stitching samples: stitching.cpp and stitching_detailed.cpp?
EDIT: I forgot this was cutting-edge OpenCV because I'm using the trunk at home :) To get access to these new samples, you'll need to check out the OpenCV trunk from SVN like this:
svn co https://code.ros.org/svn/opencv/trunk/opencv opencv-trunk
Unfortunately, you'll need to recompile it, but you should be able to use the new stitching code :) If you haven't built OpenCV from source before, here is a good little tutorial to get you started. I will mention that OpenCV has a lot more options that can be enabled/disabled than are mentioned in the tutorial, so you might want to use the cmake-gui to get a look at all of the options. You can apt-get it with this command:
> sudo apt-get install cmake-qt-gui
Also, if you're more concerned with quality and don't mind slower performance, you might consider using the Lucas-Kanade method for image registration. Here is a lecture, and here is a paper on the topic that might be helpful to you.
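For a sense of what that looks like in OpenCV, here is a rough Python sketch using pyramidal Lucas-Kanade feature tracking plus a robust transform fit (the goodFeaturesToTrack parameters and the use of estimateAffinePartial2D are my assumptions, not from the lecture/paper):

    import cv2
    import numpy as np

    img1 = cv2.imread("tile_left.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("tile_right.png", cv2.IMREAD_GRAYSCALE)

    # Pick corners in the first image and track them into the second
    # with pyramidal Lucas-Kanade optical flow.
    p0 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                                 minDistance=7)
    p1, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, p0, None)

    ok = status.flatten() == 1
    moved_from, moved_to = p0[ok], p1[ok]

    # For a translation stage, a similarity transform is usually enough.
    M, inliers = cv2.estimateAffinePartial2D(moved_from, moved_to,
                                             method=cv2.RANSAC)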

Fiji's stitching plugin handles exactly this situation of alignment-error propagation in 2D mosaicking. We use it daily for microscopy stitching, and I must say it works perfectly.

Related

Image segmentation for iOS

I need a way to transform an image containing a human into an image containing only the body silhouette in one color. First I took a look at the Canny edge detector (the OpenCV implementation), but this may lead to problems with the background of the image.
I've tried the GrabCut OpenCV implementation. It works fine in most cases, but it has extremely bad time performance; for example, a 480x320 image takes up to 1 minute to process. Another problem with GrabCut is that the user needs to interact with it to set the background and foreground areas, which in my case is not allowed.
So maybe you can give me ideas about another approach using something other than GrabCut, or suggest how to improve GrabCut's time performance (maybe a GPU implementation). I also need a suggestion for an algorithm that will locate the human body and help the GrabCut algorithm with positioning of the body/background areas.
I can suggest two things to investigate which may help:
1) CIDetector class
2) OpenCV library for iOS. This project doesn't look active, but you can find some forks or related projects here.
Downscale the image to half the resolution (use pyrDown()), run GrabCut, use contour detection to vectorize the cutout, and upscale as necessary.
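A minimal sketch of that recipe (the initialization rectangle is a placeholder; with no user interaction allowed, it would have to come from a detector; assumes OpenCV 4.x for the findContours return values):

    import cv2
    import numpy as np

    img = cv2.imread("person.jpg")

    # Work at half resolution to cut GrabCut's running time.
    small = cv2.pyrDown(img)

    # Placeholder rectangle around the subject; in a no-interaction setting
    # this would come from a person/face detector instead of the user.
    rect = (10, 10, small.shape[1] - 20, small.shape[0] - 20)

    mask = np.zeros(small.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(small, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

    # Keep definite/probable foreground, vectorize it as contours,
    # then scale the contours back up to the full resolution.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                  255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c * 2 for c in contours]  # undo the pyrDown scaling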
OpenCV would be a good option for something like that. You can get a pre-built library for iOS on the official site:
http://opencv.org/
And there is also a tutorial app using OpenCV that may have functionality similar to what you're looking for:
http://computer-vision-talks.com/
You can use Canny's edge detection for this.
http://iosgpuar.blogspot.com/2011/12/canny-edge-detection-using-fragment.html

Image Processing open source program? [duplicate]

My current project involves transcribing texts in PDFs into text files. I first tried putting the image file directly into an OCR program (Tesseract), and it didn't do that well.
The original image files are basically old newspapers, and they have some background noise, which I am sure Tesseract has problems with. So I am trying to apply some image preprocessing before feeding them into Tesseract. Is there any suggestion for an open source image preprocessing engine that fits this situation well? Instructions on how to use it would be even more appreciated!
I've never heard of an "image preprocessing engine" for that purpose, but you can take a look at OpenCV (Open Source Computer Vision Library) and implement your own "preprocessing engine". OpenCV is a computer vision library that offers many features for image processing.
One interesting thing you might want to test as a preprocessing step is applying a threshold to the image to remove noise. Anyway, I've talked about this kind of stuff in this thread.
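As a concrete example, here is a minimal sketch of such a thresholding pass, using Otsu's method to pick the threshold automatically (the filenames are placeholders):

    import cv2

    img = cv2.imread("newspaper_scan.png", cv2.IMREAD_GRAYSCALE)

    # Otsu's method picks a global threshold automatically, separating
    # dark text from the lighter page background.
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    cv2.imwrite("cleaned_for_ocr.png", binary)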
Like @karlphillip mentioned, I highly doubt there's a readily available preprocessing engine for your purposes, as preprocessing techniques vary greatly with the desired result.
Some common approaches to clearing up the text in noisy images include:
1. Adaptive thresholding (Sauvola or Niblack binarization)
2. Applying a median filter of a size slightly larger than the text to get a background image, then subtracting the background from the original image (to remove larger noise like creases, stains, handwritten notes, etc.).
OpenCV has implementations of these filters/binarization methods. If you have access to published literature, there's quite a bit of work on binarization of noisy documents.
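For illustration, a rough sketch of both approaches with stock OpenCV; adaptiveThreshold stands in for Niblack/Sauvola here (the true Sauvola/Niblack binarization lives in the ximgproc contrib module), and the kernel/block sizes are guesses to tune:

    import cv2

    img = cv2.imread("newspaper_scan.png", cv2.IMREAD_GRAYSCALE)

    # 1) Local (adaptive) thresholding; the block size should exceed
    #    the text stroke width.
    binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)

    # 2) Background subtraction: a median filter wider than the text
    #    estimates the page background; subtracting it removes the larger
    #    noise (creases, stains, shading).
    background = cv2.medianBlur(img, 31)
    diff = cv2.subtract(background, img)   # text becomes bright on black
    flattened = cv2.bitwise_not(diff)      # back to dark text on white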
Check out ScanTailor. It has pretty impressive pre-processing functionality and it is open source.

Face Authentication

My project is Face Authentication.
System Description: My input is only one image (taken when the user logs in for the first time), and using that image the system should authenticate the user whenever they log in to the application. The authentication images may differ from the first input image in, for example, illumination conditions, distance from the camera, and pose variation of -10 to 10 degrees. The camera used is the same (e.g., an iPad) in all cases.
1) Authentication images are stored each time the user logs in. How can I make use of these images to enhance the accuracy of the system?
2) When a new image comes in, I need to select the closest image(s) from the image repository (not all stored images) to authenticate against, in order to reduce the time. How can I automatically label an image based on illumination/distance from the camera?
3) How should I make my system perform decently under changes in illumination and distance from the camera?
Please, can anyone suggest good algorithms/papers/open-source code for the questions above?
Though it sounds like a research project, I would be extremely grateful for any response.
For this task I think you should take a look at OpenCV's Face Recognition API. The API is basically able to identify the structure of a face (within certain limitations, of course) and provide you with the coordinates of the region of the image within which the face is found.
Having to deal with just the face, in my opinion, reduces the need to deal with different background colours, which I think is something you do not really need.
Once you have the image of the face, you could scale it up/down to a uniform size and convert it to grayscale. Lastly, I would consider feeding all this information to an artificial neural network, since these are able to deal with inconsistencies in the input. This will allow you to increase your knowledge base each time a user logs in.
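A minimal sketch of that detect-and-normalize step, using OpenCV's stock frontal-face Haar cascade (the 128x128 crop size is an arbitrary choice):

    import cv2

    # Detect the face with OpenCV's stock frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("login_photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Crop each detected face and scale it to a uniform grayscale size
    # before feeding it to whatever classifier you use.
    normalized = [cv2.resize(gray[y:y + h, x:x + w], (128, 128))
                  for (x, y, w, h) in faces]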
I'm pretty sure there are other ways to go about this. I would recommend taking a look at Google Scholar to try and find papers which deal with this matter, for more information and quite possibly other ways to achieve what you are after. Also, keep in mind that with some luck you might find an open source project which already does most of what you are after.
If you really have a database of photographs of faces, you could probably use that to enhance the features of OpenCV face detection. The way faces are recognized is by comparing the principal components of the picture with those of the face examples in the OpenCV database.
Check out:
How to create Haar Cascade (xml) for using with OpenCV?
Seeing that, you could also try to do your own principal component analysis on every picture of a recognized face (use OpenCV face detection for that: black out everything except the face; OpenCV gives you the position and size of the face). Compare the PCA to the ones in your database and match it to the closest one. Of course, this would work best with a fairly big database, so maybe at the beginning there could be wrong matches.
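As a concrete starting point, here is a minimal sketch of that PCA idea using the ready-made Eigenfaces recognizer from the opencv-contrib face module (the random training data is only there to make the snippet run; the real input would be the blacked-out/cropped faces described above):

    import cv2
    import numpy as np

    # Stand-in training data: in practice these would be equal-sized
    # grayscale face crops from the detector, one label per user.
    rng = np.random.default_rng(0)
    faces = [rng.integers(0, 256, (128, 128), dtype=np.uint8)
             for _ in range(4)]
    labels = np.array([0, 0, 1, 1], dtype=np.int32)

    # Eigenfaces = PCA on the face crops (requires opencv-contrib-python).
    model = cv2.face.EigenFaceRecognizer_create()
    model.train(faces, labels)

    # predict() returns the closest label and a distance; smaller is closer.
    label, distance = model.predict(faces[0])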
I think creating your own OpenCV Haar cascade would be the best way to go.
Good Luck!

Speed up stitching of 2 images?

I am working with 2 fly cameras and trying to stitch their images together.
I am working with OpenCV and C++ here.
Since I am trying to cover a large region using both cameras (and to run contour detection later on), I am wondering: is there a fast way to stitch the images from both cameras together?
Currently here's what I am doing:
Subtracting a previously stored background image from each camera's image (to speed up contour detection later on)
Undistorting each image using the cvRemap function
Finally, setting the ROI of the images and stitching them together
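In rough code, the pipeline looks like this (a Python sketch of the same steps; all names, sizes, and maps are placeholders, and the undistortion maps would come from cv2.initUndistortRectifyMap computed once at startup):

    import cv2
    import numpy as np

    def process(frame, background, map1, map2):
        # 1) Background subtraction (speeds up contour detection later).
        fg = cv2.absdiff(frame, background)
        # 2) Undistort; remap with precomputed maps is the fast path.
        return cv2.remap(fg, map1, map2, cv2.INTER_LINEAR)

    # 3) Stitch by copying each undistorted image into its ROI of one canvas.
    canvas = np.zeros((480, 1280, 3), np.uint8)  # placeholder 2x 640x480
    # canvas[:, :640] = process(left_frame, left_bg, m1_l, m2_l)
    # canvas[:, 640:] = process(right_frame, right_bg, m1_r, m2_r)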
My question is: is it possible to speed this up even more? Currently these steps take around 60 ms, and with additional functionality it slows down to 0.1 seconds.
Have I been using the slower functions of OpenCV ? Or are there any tricks to gain more speed ?
Take the latest OpenCV snapshot from here and try the stitching module implemented here. They have been working on stitching performance lately, so it's possible to get some good improvements.
By the way, which step takes the longest? Did you profile your app? Take a look at the profiling results, and you'll be able to understand exactly where to optimize, and maybe how to do it.

How can I compare images of the same origin that were cropped?

Suppose I have an image file/URL, and I want my software to search for it within a set of up to 100 images (or at least that order of magnitude). The target image that the software should find should be the "same" image as the given image, but it should still be able to "forgive" slight processing on either of them (the two images may have been cropped differently, or they were compressed differently).
The question is: is this a feasible task, given that I won't have any of the images before the search takes place (i.e., there won't be any indexing prior to the search)? Is it likely to work in subsecond time (remember that the comparison set is quite small)? And if it is feasible, which tools can I use for this task? These could be software components or even an online service (I can live with that for a proof of concept). Can OpenSURF help me here?
To focus my question further - I'm not asking which algorithms to use, at this point I would rather use an existing tool/API/service.
The target image that the software should find should be the "same" image as the given image, but it should still be able to "forgive" slight processing on either of them.
If "slight processing" doesn't involve rotation, but only "cropping", then simple cross-correlation should work, if there could be perspective correction, rotation, lens distortion correction, then things are more complicated.
I think this method is quite forgiving to slight color corrections. Anyway, you can always convert both images to grayscale and compare grayscale versions if you want.
To focus my question further - I'm not asking which algorithms to use, at this point I would rather use an existing tool/API/service.
You can start with cvMatchTemplate from the OpenCV library (the link points to the C version of the API, but it's also available for C++ and Python). Use the cropped image as a template, and look for it in all your images.
If the images you compare have dark features on light backgrounds, you may benefit from using CV_TM_CCOEFF or CV_TM_CCOEFF_NORMED methods. They both subtract the average over the template area from both images. Normalized methods (CV_TM_*_NORMED) generally work better but are slower than their non-normalized counterparts.
You may consider doing some preprocessing on the images before the cross-correlation. If you normalize them first, the cross-correlation will be less sensitive to slight brightness/contrast modifications. If you detect edges first, as suggested by @misha, you'll lose color/lightness information, but the results for contour overlapping will be much better.
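A minimal sketch of that (the filenames and the 0.8 acceptance threshold are placeholders to tune per dataset):

    import cv2

    image = cv2.imread("candidate.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("cropped.png", cv2.IMREAD_GRAYSCALE)

    # Slide the template over the image; TM_CCOEFF_NORMED scores lie
    # in [-1, 1], with values near 1 indicating a strong match.
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    if max_val > 0.8:  # acceptance threshold: an assumption, tune it
        print("match at", max_loc, "score", max_val)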
jetxee set you off on the right track. However, if you simply use template matching, you can run into problems where the background interferes with your template matching result. For example, if your template is a building and your background is primarily light (e.g. desert sand), then the template matching will fail because the lighter background will always return a higher cross-correlation than the darker template. Here is an example of this problem.
The way you solve it is the same as what is in the link:
Perform edge detection on both your template and the target image.
Throw the original template and image away.
Perform template matching using the edge-detected template and the edge-detected target image.
As far as forgiving slight processing goes, the edge detection step takes care of that. As long as the edges in the two images are not modified significantly (blurred, optically distorted), the approach will work.
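A sketch of those three steps (the Canny thresholds are placeholders):

    import cv2

    image = cv2.imread("candidate.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("cropped.png", cv2.IMREAD_GRAYSCALE)

    # Steps 1-2: edge-detect both, and use only the edge maps from here on.
    edges_image = cv2.Canny(image, 50, 150)
    edges_template = cv2.Canny(template, 50, 150)

    # Step 3: template matching on the edge maps.
    result = cv2.matchTemplate(edges_image, edges_template,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)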
I know you are not looking specifically for algorithms, but nonetheless, let me suggest the following which can accomplish exactly what you are trying to do, very efficiently...
For cropped versions of the same image, including rotation, the Fourier-Mellin transform or a log-polar transform (watch out for the artsy semi-nude drawing; it's a good source, however) will give you the translation, rotation, and scale coefficients between the two images, allowing you to determine what operations were needed to go from one to the other.
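For the curious, here is a rough OpenCV sketch of the log-polar route (warpPolar and phaseCorrelate; the 360x360 log-polar size is arbitrary). Rotation and scale between the originals become plain translations in log-polar space, which phase correlation can then measure:

    import cv2
    import numpy as np

    a = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    b = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    def logpolar(img):
        # Resample on a log-polar grid around the center: rotation becomes
        # a shift along the angle axis, scale a shift along log-radius.
        center = (img.shape[1] / 2.0, img.shape[0] / 2.0)
        radius = min(center)
        return cv2.warpPolar(img, (360, 360), center, radius,
                             cv2.WARP_POLAR_LOG)

    lp_a, lp_b = logpolar(a), logpolar(b)

    # Phase correlation between the log-polar images recovers that shift,
    # i.e. the scale and rotation between the originals.
    (shift_logr, shift_theta), response = cv2.phaseCorrelate(lp_a, lp_b)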
