Speed up stitching of 2 images? - opencv

I am working with 2 fly cameras and trying to stitch their images together.
I am working with OpenCV and C++ here.
Since I am trying to cover a large region using both cameras (and to do contour detection later on), I am wondering if there's a fast way to stitch the images from both cameras together?
Currently here's what I am doing:
Subtracting a previously stored background image from each camera's image (to speed up contour detection later on)
Undistorting each image using the cvRemap function
And finally setting the ROIs of the images and stitching them together (a rough sketch of this pipeline is below)
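A minimal sketch of such a pipeline, assuming the background image and the undistortion maps (map1/map2, as produced by cv::initUndistortRectifyMap) are precomputed; it uses the C++ API, where cv::remap replaces the legacy cvRemap, and the overlap value is a placeholder:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Per-frame work for one camera: background subtraction, then undistortion.
    cv::Mat processFrame(const cv::Mat& frame, const cv::Mat& background,
                         const cv::Mat& map1, const cv::Mat& map2)
    {
        cv::Mat diff, undistorted;
        cv::absdiff(frame, background, diff);                        // remove background
        cv::remap(diff, undistorted, map1, map2, cv::INTER_LINEAR);  // undistort
        return undistorted;
    }

    // Stitch by copying each undistorted image into its ROI of a shared canvas.
    cv::Mat stitchByROI(const cv::Mat& left, const cv::Mat& right, int overlap)
    {
        cv::Mat canvas(left.rows, left.cols + right.cols - overlap, left.type());
        left.copyTo(canvas(cv::Rect(0, 0, left.cols, left.rows)));
        right.copyTo(canvas(cv::Rect(left.cols - overlap, 0, right.cols, right.rows)));
        return canvas;
    }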
My question is, is it possible to speed this up even more? Currently these steps take around 60 ms, and with additional functionality it slows down to 0.1 seconds.
Have I been using the slower functions of OpenCV? Or are there any tricks to gain more speed?

Take the latest OpenCV snapshot from here and try the stitching module implemented here. They have been working on stitching performance lately, so it's possible to get some good improvements.
By the way, which step takes the most time? Did you profile your app? Take a look at the profiling results, and you'll be able to see exactly where to optimize, and maybe how to do it.
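In case you haven't tried it yet, a minimal use of the high-level stitching API looks like this (a sketch against the current C++ API; newer OpenCV versions use cv::Stitcher::create, older ones used Stitcher::createDefault, and the function name here is mine):

    #include <opencv2/stitching.hpp>
    #include <vector>

    // Let the Stitcher handle registration and compositing for two frames.
    cv::Mat stitchPair(const cv::Mat& left, const cv::Mat& right)
    {
        std::vector<cv::Mat> inputs{left, right};
        cv::Mat pano;
        cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
        if (stitcher->stitch(inputs, pano) != cv::Stitcher::OK)
            pano.release();  // stitching failed; caller should check for an empty result
        return pano;
    }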

Related

Live 360° Panorama Image Stitching implementation

I am planning to implement a live 360° panorama stitcher having 6 cameras of the same model.
I came across the stitching_detailed.cpp implementation from OpenCV. The problem is that it takes around 1 second to stitch only 2 images together using my desired parameters, which is fairly slow.
As my application should run in real time, I need to be able to stitch 6 images together in around 100 ms for it to be "acceptable". The output resolution should be around 0.2 megapixels. Therefore, I am starting to do my own implementation in C++, based pretty much on what is done in stitching_detailed. I am aiming to use the CUDA functions in OpenCV as much as possible (some of them are not even used in stitching_detailed).
I have been carefully studying the stitching pipeline on which the previous algorithm is based, as described in Images stitching by OpenCV and in the paper Automatic Panoramic Image Stitching using Invariant Features.
As the stitching pipeline is too general, there are several assumptions I have made in order to simplify it and speed it up, and I would like to get some feedback on whether they are valid:
All the images I will provide to the algorithm are for sure part of the panorama image, so I do not have to do any extra checks on that.
The 6 cameras will be fixed in position and orientation, so I know beforehand the order in which the cameras need to be stitched into the panorama picture, and I can avoid trying to match images from cameras that are not contiguous.
As the cameras are going to remain static, it would be valid to perform the registration step only once (as a kind of initialization) to get each camera's orientation matrix R. Afterwards, I could perform only the compositing block for subsequent frames (again, all this assuming the cameras remain completely static). A sketch of this split is below.
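A sketch of that registration-once/compositing-per-frame split, assuming precalibrated K and R per camera (the spherical projection, the warper scale, and all names here are my choices, not from the pipeline itself):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/stitching/detail/warpers.hpp>

    struct CameraMaps { cv::Mat xmap, ymap; cv::Rect roi; };

    // One-time registration: precompute the warp maps for one camera from
    // its known intrinsics K and rotation R (both CV_32F, from calibration).
    CameraMaps buildMapsOnce(cv::Size frameSize, const cv::Mat& K, const cv::Mat& R)
    {
        cv::detail::SphericalWarper warper(/*scale=*/500.0f);  // assumed projection scale
        CameraMaps m;
        m.roi = warper.buildMaps(frameSize, K, R, m.xmap, m.ymap);
        return m;
    }

    // Per-frame compositing: only a cheap remap per camera, then blend the
    // warped images into the panorama canvas.
    void warpFrame(const cv::Mat& frame, const CameraMaps& m, cv::Mat& warped)
    {
        cv::remap(frame, warped, m.xmap, m.ymap, cv::INTER_LINEAR);
    }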
I also have the following questions...
I can indeed calibrate the cameras prior to my application and obtain each of the intrinsic camera parameters Matrix K and its respective distortion parameters. Could I plug K into the stitching pipeline and therefore avoid the K calculation in the registration step?
What other thing (if any) could camera calibration bring into the pipeline? Distortion correction?
If my previous assumption about executing only the compositing block is correct, could I still take out some parts of it? My guess is that maybe the seam finder should be run only once (in the initialization of the algorithm).
Is exposure compensation needed at all for my application case? (As the cameras are literally the same).
Any lead would be deeply appreciated, thanks!
The first thing you can do to reduce your processing time is to calibrate your cameras, so that you don't need to process images to find homography matrices based on features. Find them beforehand, so that they are constant matrices; see the sketch below.
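A minimal sketch of that idea (findHomography and warpPerspective are standard OpenCV calls; the function names, the calibration point sets, and the canvas size are assumptions):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Offline, once: estimate H from matched keypoints of a calibration pair
    // (ptsA/ptsB come from any feature matcher run on overlapping views).
    cv::Mat computeHomographyOnce(const std::vector<cv::Point2f>& ptsA,
                                  const std::vector<cv::Point2f>& ptsB)
    {
        return cv::findHomography(ptsA, ptsB, cv::RANSAC);
    }

    // Online, every frame: H is now a constant matrix, so a single warp is
    // all that remains of the registration work.
    void warpWithFixedH(const cv::Mat& frame, const cv::Mat& H,
                        cv::Size canvasSize, cv::Mat& warped)
    {
        cv::warpPerspective(frame, warped, H, canvasSize);
    }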

I'm using stitcher class to create a panorama from multiple images. How to reduce calculation time?

Is there a possibility to reduce calculation time when stitching more than two images with the OpenCV Stitcher class? I noticed that it grows rather exponentially the more images I want to stitch (why?). Is it possible that the OpenCV stitcher tries to stitch every single image with every other image? I have a defined order for stitching my images, so maybe restricting the matching to that order would be a way to reduce calculation time. I hope you understand what I mean, and maybe you can give me some advice to solve my problem.
Do you have a gpu available?
An easy way to dramatically reduce your processing time is to use the GPU when possible.
OpenCV makes it easier every day, and if you look at the doc you'll see that there is a GPU flag for the stitcher.
Here is the doc
You want to play with that element:
--try_gpu (yes|no) Try to use GPU. The default value is 'no'. All default values are for CPU mode.
Be careful: you need OpenCV to be compiled with GPU support for this to work. You can find more information about GPU support in OpenCV here.
And if you cannot use GPUs, as @perfanoff said, downsampling your images is a good idea 99% of the time in image processing. A sketch of that follows.
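Something like this, with standard cv::resize (the 0.5 factor is just an example to tune):

    #include <opencv2/imgproc.hpp>

    // Halve each dimension before stitching; feature detection and matching
    // costs drop roughly with the pixel count.
    cv::Mat downsampleForStitching(const cv::Mat& input)
    {
        cv::Mat small;
        cv::resize(input, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
        return small;
    }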
The OpenCV Stitcher class matches every input image against every other image, hence a runtime that grows roughly quadratically with the number of images. It is indeed possible to modify the example code provided by OpenCV to match only the first with the second image, the second with the third image, and so on. This will result in a more or less linear runtime.
To reduce the runtime even further you can reduce the input image size, change the kind of features (SIFT and SURF are slow but more robust than others), and tune the threshold for bundle adjustment. A sketch of the neighbour-only matching is below.
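One way to get neighbour-only matching without rewriting the sample is the range matcher that ships with the stitching module (a sketch; the exact semantics of range_width are worth checking against your OpenCV version, and this assumes your input images are already in stitching order):

    #include <opencv2/stitching.hpp>

    // Restrict pairwise matching to neighbouring images in the input order,
    // instead of the default all-pairs matching.
    cv::Ptr<cv::Stitcher> makeOrderedStitcher()
    {
        cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
        stitcher->setFeaturesMatcher(
            cv::makePtr<cv::detail::BestOf2NearestRangeMatcher>(/*range_width=*/2));
        return stitcher;
    }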

Face Authentication

My project is Face Authentication.
System description: My input is only one image (which was taken when the user logged in for the first time), and using that image the system should authenticate the user whenever they log in to the application. The authentication images may differ from the first input image: different illumination conditions, different distance from the camera, and -10 to 10 degrees variation in pose. The camera used is the same (e.g. iPad) for all cases.
1) Authentication images are stored each time the user logs in. How can I make use of these images to enhance the accuracy of the system?
2) When a new image comes, I need to select the closest image(s) (and not all stored images) from the image repository and use them for authentication, to reduce the time. How can I label an image based on illumination/distance from camera automatically?
3) How should I make my system perform decently under changes in illumination and distance from the camera?
Please, can anyone suggest good algorithms/papers/open-source code for the questions above?
Though it sounds like a research project, I would be extremely grateful for any response.
For this task I think you should take a look at OpenCV's face recognition API. The API is basically able to identify the structure of a face (within certain limitations, of course) and provide you with the coordinates of the region within which the face is found.
Having to deal with just the face, in my opinion, removes the need to deal with different backgrounds, which I think is something you do not really want to worry about.
Once you have the image of the face, you could scale it up/down to a uniform size and also convert the image to greyscale (a sketch of this normalisation is below). Lastly, I would consider feeding all this information to an artificial neural network, since these are able to deal with inconsistencies in the input. This will allow you to increase your knowledge base each time a user logs in.
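A minimal sketch of that normalisation step (cv::CascadeClassifier is OpenCV's stock face detector; the cascade file and the 100x100 target size are assumptions):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/objdetect.hpp>
    #include <vector>

    // Detect a face, crop it, convert to grey and resize to a uniform size
    // so every login produces a comparable sample.
    cv::Mat normalizeFace(const cv::Mat& image, cv::CascadeClassifier& detector)
    {
        cv::Mat grey;
        cv::cvtColor(image, grey, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        detector.detectMultiScale(grey, faces, 1.1, 3);
        if (faces.empty())
            return cv::Mat();                   // no face found
        cv::Mat face = grey(faces[0]).clone();  // take the first detection
        cv::resize(face, face, cv::Size(100, 100));
        return face;
    }

    // Usage (cascade file ships with OpenCV):
    // cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");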
I'm pretty sure there are other ways to go around this. I would recommend taking a look at Google Scholar to try and find papers which deal with this matter for more information and quite possible other ways to achieve what you are after. Also, keep in mind that with some luck you might also find some open source project which already does most of what you are after.
If you really have a database of photographs of faces, you could probably use it to enhance OpenCV's face detection. The way faces are recognized is by comparing the principal components of the picture with those of the face examples in the OpenCV database.
Check out:
How to create Haar Cascade (xml) for using with OpenCV?
Seeing that, you could also try to do your own principal component analysis on every picture of a recognized face (use OpenCV face detection for that: black out everything except the face; OpenCV gives you the position and size of the face). Compare the PCA projection to the ones in your database and match it to the closest one (a sketch is below). Of course, this would work best with a fairly big database, so at the beginning there could be wrong matches.
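A rough sketch of that comparison with cv::PCA (flattening each face to one row, the number of components, and the L2 distance are my assumptions):

    #include <opencv2/core.hpp>

    // Build a PCA subspace from the stored faces (each flattened to one
    // CV_32F row of "data"), then compare faces by distance in that space.
    cv::PCA buildFaceSpace(const cv::Mat& data)
    {
        return cv::PCA(data, cv::Mat(), cv::PCA::DATA_AS_ROW, /*maxComponents=*/20);
    }

    double faceDistance(const cv::PCA& pca, const cv::Mat& faceRow,
                        const cv::Mat& storedRow)
    {
        cv::Mat a = pca.project(faceRow);    // coordinates in eigenface space
        cv::Mat b = pca.project(storedRow);
        return cv::norm(a, b, cv::NORM_L2);  // smaller = more similar
    }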
I think creating your own OpenCV haarcascade would be the best way to go.
Good Luck!

Webcam image capture issue

I am trying to do some image tracking by capturing images from a webcam and comparing them with a reference image. The problem I face is that two images of the exact same spot differ in their bitmaps. I am using OpenCV. I need to know a way to capture images so that this kind of jitter is avoided.
Thanks in advance.
Well, I would say that you can't.
Two images will never be the same, due to illumination changes and thousands of other effects (including electronic noise).
What you want to do is find a way to even them out, for example by applying some kind of Gaussian filter; a background model like a mixture of Gaussians also helps:
http://mmlab.disi.unitn.it/wiki/index.php/Mixture_of_Gaussians_using_OpenCV
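For the simple filtering idea, a sketch (assumes single-channel/grey inputs; the kernel size and threshold are arbitrary values to tune):

    #include <opencv2/imgproc.hpp>

    // Blur both images before differencing, so pixel-level sensor noise
    // does not dominate the comparison.
    cv::Mat robustDiff(const cv::Mat& frame, const cv::Mat& reference)
    {
        cv::Mat a, b, diff;
        cv::GaussianBlur(frame, a, cv::Size(5, 5), 0);
        cv::GaussianBlur(reference, b, cv::Size(5, 5), 0);
        cv::absdiff(a, b, diff);
        cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);  // drop small residual noise
        return diff;
    }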
There are also some good links in this post :
Natural feature tracking with openCV- evaluating the options

stitching microscope images of a microchip

So, I'm trying to stitch images taken by a microscope of a microchip, but it's very hard to have all the features aligned. I already have a 50% overlap between two adjacent images, but even with that, it's not always a good fit.
I'm using SURF with OpenCV to extract the keypoints and find the homography matrix. But still, it's far from being an acceptable result.
My objective is to be able to stitch 2x2 images perfectly, so that I can repeat that process recursively until I have the final image.
Do you have any suggestions? A nice algorithm to approach this problem, or maybe a way to transform the images to be able to extract better keypoints from them? Or should I play with the SURF threshold (a smaller one to get more keypoints, or a larger one)? A sketch of that step with the threshold exposed is below.
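For reference, a sketch of the SURF-plus-homography step (SURF lives in the xfeatures2d contrib module; the hessian value, the brute-force matcher, and the RANSAC settings are assumptions to tune):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/xfeatures2d.hpp>
    #include <vector>

    // Estimate the homography mapping tile b onto tile a. Lowering
    // hessianThreshold yields more (but noisier) keypoints.
    cv::Mat tileHomography(const cv::Mat& a, const cv::Mat& b,
                           double hessianThreshold = 300.0)
    {
        auto surf = cv::xfeatures2d::SURF::create(hessianThreshold);
        std::vector<cv::KeyPoint> kpA, kpB;
        cv::Mat descA, descB;
        surf->detectAndCompute(a, cv::noArray(), kpA, descA);
        surf->detectAndCompute(b, cv::noArray(), kpB, descB);

        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<cv::DMatch> matches;
        matcher.match(descB, descA, matches);  // query = b, train = a

        std::vector<cv::Point2f> ptsA, ptsB;
        for (const auto& m : matches) {
            ptsB.push_back(kpB[m.queryIdx].pt);
            ptsA.push_back(kpA[m.trainIdx].pt);
        }
        // RANSAC discards the outlier matches that would ruin the fit.
        return cv::findHomography(ptsB, ptsA, cv::RANSAC, 3.0);
    }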
Right now, my approach is to first stitch two 2x1 images and then stitch those two together. It's close to what we want, but still not acceptable. Also, the problem might be that the image used as the "source" (while the second image is transformed with the matrix to overlap it) might be a bit misaligned, or that there's a small angle in that image which affects the whole result.
Any help or suggestion is appreciated, especially any solution that would allow using OpenCV and SURF (not that I'm totally against other libraries... it's just that most of the project has been developed with them).
Thanks!
I found TurboReg to be a helpful comparison tool during image registration development. It is a free ImageJ plugin and has many different fitting types.
Have you taken a look at the new OpenCV stitching samples: stitching.cpp and stitching_detailed.cpp?
EDIT: I forgot this was cutting-edge OpenCV, because I'm using the trunk at home :) To get access to these new samples, you'll need to check out the OpenCV trunk from SVN like this:
svn co https://code.ros.org/svn/opencv/trunk/opencv opencv-trunk
Unfortunately, you'll need to recompile it, but then you should be able to use the new stitching code :) If you haven't built OpenCV from source before, here is a good little tutorial to get you started. I will mention that OpenCV has a lot more options that can be enabled/disabled than are mentioned in the tutorial, so you might want to use the cmake-gui to look at all of the options. You can apt-get it with this command:
> sudo apt-get install cmake-qt-gui
Also, if you're more concerned with quality and you don't mind slower performance, you might consider using the Lucas-Kanade method for image registration. Here is a lecture, and here is a paper on the topic that might be helpful to you.
Fiji's stitching plugin handles this situation of alignment-error propagation with 2D mosaicing. We use it daily for microscopic stitching, and I must say it is perfect.
