Image stitching to generate a panorama with OpenCV

I'm experimenting with OpenCV to stitch several images into a panorama, but the pictures are taken at different angles. What I want to do is project all the images onto a cylindrical surface and then use SIFT to match features and obtain the transform matrices. How should I do this? Is there an OpenCV interface for projecting the images onto a cylindrical surface, given that I don't know any of the camera parameters?

In the OpenCV samples folder there is a program called stitching_detailed.cpp. It implements the whole panorama pipeline, including feature extraction, matching, warping and blending.
You should have a look at it:
https://github.com/Itseez/opencv/blob/master/samples/cpp/stitching_detailed.cpp
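For the cylindrical projection itself, the stitching module also provides warper classes (e.g. cv::detail::CylindricalWarper), which stitching_detailed.cpp selects through its warp-type option. If you want to do the projection yourself without any calibration data, below is a minimal sketch that warps a single image onto a cylinder using a guessed focal length; the input file name and the focal-length guess are assumptions for illustration, not values from the question.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Warp 'src' onto a cylindrical surface with focal length 'f' (in pixels).
// Illustrative helper, not an official OpenCV function.
cv::Mat cylindricalWarp(const cv::Mat& src, double f)
{
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    const double cx = src.cols / 2.0, cy = src.rows / 2.0;
    for (int y = 0; y < src.rows; ++y)
    {
        for (int x = 0; x < src.cols; ++x)
        {
            // (theta, h) are the cylindrical coordinates of the destination pixel.
            double theta = (x - cx) / f;
            double h     = (y - cy) / f;
            // Back-project onto the original image plane (inverse mapping for remap).
            mapX.at<float>(y, x) = static_cast<float>(f * std::tan(theta) + cx);
            mapY.at<float>(y, x) = static_cast<float>(f * h / std::cos(theta) + cy);
        }
    }
    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    return dst;
}

int main()
{
    cv::Mat img = cv::imread("input.jpg");    // hypothetical input image
    double f = img.cols * 0.8;                // guessed focal length in pixels (no calibration available)
    cv::Mat warped = cylindricalWarp(img, f);
    cv::imwrite("cylindrical.jpg", warped);
    return 0;
}
```

After warping each image this way you can run SIFT matching on the warped images; the guessed focal length only needs to be roughly consistent across images for the overlap regions to line up.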

Related

Applying lens distortion correction during panorama image stitching

I am trying to do live panorama stitching using 6 camera streams (same camera model). Currently I am adapting the stitching_detailed.cpp file from OpenCV to my needs.
My camera lenses have a great amount of barrel distortion. Therefore, I calibrated my cameras with the checkerboard functions provided by OpenCV and got the respective intrinsic and distortion parameters. By applying getOptimalNewCameraMatrix and initUndistortRectifyMap I get an undistorted image that fulfills my needs.
I have read in several sources that lens distortion correction should benefit image stitching. So far I have used the previously undistorted images as input to stitching_detailed.cpp, and the resulting stitched image looks fine.
However, my question is if I could somehow include the undistort step in the stitching pipeline.
[Stitching pipeline diagram from the OpenCV documentation]
I am doing a cylindrical warping of the images in the stitching process. My guess is that maybe during this warping I could somehow include the undistortion maps calculated beforehand by initUndistortRectifyMap.
I also do not know if the camera intrinsic matrix K from getOptimalNewCameraMatrix could somehow help me in the whole process.
I am kind of lost and any help would be appreciated, thanks in advance.
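If it helps, one straightforward way to keep the undistortion inside the pipeline is to compute the rectification maps once per camera, remap every incoming frame before it enters the stitching flow, and then use the new camera matrix as that camera's intrinsics. Below is a minimal sketch under those assumptions; the calibration file name and the single still frame standing in for a live stream are hypothetical.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Load intrinsics and distortion coefficients from a hypothetical calibration file.
    cv::Mat K, distCoeffs;
    cv::FileStorage fs("calib.yml", cv::FileStorage::READ);
    fs["camera_matrix"] >> K;
    fs["dist_coeffs"]   >> distCoeffs;

    cv::Mat frame = cv::imread("cam0.jpg");   // hypothetical frame from one of the 6 streams
    cv::Size size = frame.size();

    // Compute the undistortion maps once per camera and reuse them for every frame.
    cv::Mat newK = cv::getOptimalNewCameraMatrix(K, distCoeffs, size, /*alpha=*/0.0);
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(K, distCoeffs, cv::Mat(), newK, size, CV_16SC2, map1, map2);

    cv::Mat undistorted;
    cv::remap(frame, undistorted, map1, map2, cv::INTER_LINEAR);

    // 'undistorted' (together with newK as the camera matrix) is what the
    // stitching pipeline sees instead of the raw, distorted frame.
    cv::imwrite("cam0_undistorted.jpg", undistorted);
    return 0;
}
```

As far as I know, the stock warpers in the stitching module only take a pinhole camera matrix and a rotation, so removing the barrel distortion beforehand (as above) is the simpler approach; folding your undistortion maps directly into the warper's maps is possible in principle but would require a custom warper.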

Homography when camera translation (for stitching)

I have a camera with which I take 2 captures. I want to combine the 2 images into one image.
I only translate the camera while taking images of a flat TV screen. I have heard that homography only works when the camera undergoes a pure rotation.
What should I do when I only have a translation?
Because you are imaging a planar surface (in your case a TV screen), all images of it taken with a perspective camera are related by homographies. This holds whether your camera is translating and/or rotating. Therefore, to stitch different images of the surface, you don't need to do any 3D geometry processing (essential matrix computation, triangulation, etc.).
To solve your problem you need to do the following (a code sketch of these steps is given after the notes below):
1. Determine the homography between your images. Because you only have two images, you can select the first one as the 'source' and the second one as the 'target', and compute the homography from target to source. This is classically done with feature detection and robust homography fitting. Let's denote this homography by the 3x3 matrix H.
2. Warp your target image onto the source using H. You can do this in OpenCV with the warpPerspective method.
3. Merge the source and warped target using a blending function.
An open source project for doing exactly these steps is here.
If your TV lacks distinct features or there is a lot of background clutter, the method for estimating H might not be highly robust. If this is the case, you could manually click four or more correspondences on the TV in the target and source images, and compute H using OpenCV's findHomography method. Note that your correspondences cannot be completely arbitrary: no three of them may be collinear (otherwise H cannot be computed). They should also be clicked as accurately as possible, because errors will propagate to the final stitch and cause ghosting artefacts.
An important caveat is if your camera has significant lens distortion. In this case your images will not be related by homographies. You can deal with this by calibrating your camera using OpenCV and then pre-processing your images to undo the lens distortion (using OpenCV's undistort method).
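Here is a minimal sketch of the three steps listed above, using ORB features and brute-force matching; the file names, feature choice, and output canvas size are assumptions for illustration, and the final paste stands in for a proper blending function.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("source.jpg");   // reference image
    cv::Mat dst = cv::imread("target.jpg");   // image to be warped onto the source

    // 1. Feature detection and matching.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpSrc, kpDst;
    cv::Mat descSrc, descDst;
    orb->detectAndCompute(src, cv::noArray(), kpSrc, descSrc);
    orb->detectAndCompute(dst, cv::noArray(), kpDst, descDst);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descDst, descSrc, matches);

    std::vector<cv::Point2f> ptsDst, ptsSrc;
    for (const auto& m : matches)
    {
        ptsDst.push_back(kpDst[m.queryIdx].pt);
        ptsSrc.push_back(kpSrc[m.trainIdx].pt);
    }

    // 2. Robust homography from target to source (RANSAC rejects bad matches).
    cv::Mat H = cv::findHomography(ptsDst, ptsSrc, cv::RANSAC, 3.0);

    // 3. Warp the target into the source frame and paste the source on top.
    //    A real blender would feather or multi-band blend the seam instead.
    cv::Mat pano;
    cv::warpPerspective(dst, pano, H, cv::Size(src.cols * 2, src.rows));
    src.copyTo(pano(cv::Rect(0, 0, src.cols, src.rows)));
    cv::imwrite("stitched.jpg", pano);
    return 0;
}
```

If you go the manual route instead, replace step 1 with your clicked correspondences and pass them straight to findHomography.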

Image stitching straightening

I am trying to implement my own image stitcher for a more robust result. This is what I got so far.
The result of the panorama stitcher that OpenCV provides is as follows.
Apart from the obvious blending issues, I am wondering how they distribute the warping between the two images. It seems to me they project the images onto some cylinder before the actual stitching. Is this part of the calculated homography, or do they warp the images before the feature matching?
I had a look at the high-level description of the stitching pipeline, the actual code, as well as the landmark paper on the pipeline, but I couldn't figure out where exactly this warping happens and what kind of warping it is.

OpenCV stitching with georeferencing

Is it possible to create a stitched image without losing image position and geo-referencing using OpenCV?
For example, I have 2 images taken from a plane and 2 polygons that describe where on the ground they are located. I am using the OpenCV stitching example. The OpenCV stitching process rotates and changes the position of the images. How can I preserve my geographic information after stitching? Is it possible?
Thanks in advance!

Is stitching module of OpenCV able to stitch images taken from a parallel motion camera?

I was wondering if the stitching module of OpenCV (http://docs.opencv.org/modules/stitching/doc/stitching.html) is able to stitch images taken from a camera that moves parallel to the plane being photographed.
I know that generally all panoramic stitching tools assume that the center of the camera is fixed and that the camera only experiences motion such as pan or pitch.
I was wondering whether I can use this module to stitch images taken from a camera that moves parallel to the plane. The idea is to create a panoramic map of the ground.
Regards
Just for the record.
The current stitching utility in OpenCV does not consider translation of the camera and assumes that the camera only rotates around its optical center. So, basically, it tries to project the images onto a cylindrical or spherical canvas.
But in my case, I needed to take the translational motion into account when estimating the camera transformation, and this is not possible with the existing stitching utility of OpenCV.
All these observations are based on a walk-through of the OpenCV code and on trials.
But you are welcome to correct this information or add to it so that this can be a useful future reference.
Regards
