I am trying to implement my own image stitcher for a more robust result. This is what I have so far.
The result of the panorama stitcher that OpenCV provides is as follows.
Apart from the obvious blending issues, I am wondering how they distribute the warping between both images. It seems to me they project the images onto some cylinder before the actual stitching. Is this part of the calculated homography, or do they warp the images before the feature matching?
I had a look at the high-level overview of the stitching pipeline, the actual code, as well as the landmark paper for the pipeline, but I couldn't figure out where exactly this warping happens and what kind of warping it is.
Related
I am trying to do live panorama stitching using six camera streams (same camera model). Currently I am adapting the stitching_detailed.cpp file from OpenCV according to my needs.
My camera lenses have a large amount of barrel distortion. Therefore, I calibrated my cameras with the checkerboard functions provided by OpenCV and got the respective intrinsic and distortion parameters. By applying getOptimalNewCameraMatrix and initUndistortRectifyMap I get an undistorted image which fulfills my needs.
I have read in several sources that lens distortion correction should benefit image stitching. So far I have used the previously undistorted images as input to stitching_detailed.cpp, and the resulting stitched image looks fine.
However, my question is whether I could somehow include the undistort step in the stitching pipeline itself.
[Stitching Pipeline taken from OpenCV documentation 1]
I am doing a cylindrical warping of the images in the stitching
process. My guess is that maybe during this warping I could somehow
include the undistortion maps calculated beforehand by
initUndistortRectifyMap.
I also do not know if the camera intrinsic matrix K from
getOptimalNewCameraMatrix could somehow help me in the whole process.
I am kind of lost and any help would be appreciated, thanks in advance.
In the OpenCV implementation, the intrinsic parameters of the camera are used to correct geometric distortion.
So camera calibration is performed to obtain the intrinsic parameters using multiple chessboard images.
Recently I learned that geometric distortion can be corrected using only one chessboard image.
I am trying to figure out how this is done, but I still can't find a way to do it.
http://www.imatest.com/docs/distortion-methods-and-modules/
https://www.edmundoptics.com/resources/application-notes/imaging/distortion/
I found the two links above. They describe radial distortion. However, we can't guarantee that the camera is parallel to the chessboard when capturing it.
I can detect the corners of the chessboard, but some corners are distorted, so I can't simply fit straight lines, because line fitting can only handle noise, not systematic distortion.
Any help is appreciated.
Please take a look at this paper and this paper. Moreover, this paper shows that you can correct distortion from a single image, without a calibration target, based on identifying straight lines in the image such as the edges of buildings.
I don't know whether this functionality is implemented in OpenCV, but the math in those papers should be relatively easy to implement using OpenCV.
I have a collection of face images, with one or sometimes two faces in each image. What I want to do is find the face in each image and then crop it.
I've tested a couple of methods implemented in Python using OpenCV, but the results weren't that good. These methods are:
1- Implementation 1
2- Implementation 2
There's one more model that I've tested, but I'm not allowed to post more than two links.
The problem is that these Haar-feature-based algorithms are not robust to face size, and when I tried them on images taken close to the face, they couldn't find any faces.
Someone mentioned trying deep-learning-based algorithms, but I couldn't find one corresponding to what I want to do. Basically, I guess I need a pre-trained model which can give me the coordinates of the face bounding box in the image, or, better, a pre-trained model which outputs the cropped face image directly.
You don't need machine learning algorithms; graph algorithms are enough. For example, Snapchat's face recognition algorithm works as follows:
Create a graph with nodes and edges from a most common face (a "standard face").
Deform that graph / re-coordinate the nodes to the fitted pixels in the input image.
Voilà, you have the face recognized in the input image.
Easily said, but harder to code. At our university we implemented the Dijkstra algorithm, for example, and I can hand you my "Graph" class if you need it, but I wrote it in C++.
With these graph algorithms you can crop out the faces more efficiently.
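If it helps as a starting point, here is a compact Dijkstra sketch in Python (rather than my C++ Graph class); the graph is a plain dict mapping each node to its weighted neighbors:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph: node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```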
Now I'm doing an experiment with OpenCV to stitch several images into a panorama, but these pictures are taken at different angles. What I want to do is project all the images onto a cylindrical surface, then use SIFT to match the features and get the transform matrix. How should I do it? Is there an OpenCV interface to do that (to project all the images onto a cylindrical surface, given that I don't know any of the camera parameters)?
In the OpenCV samples folder there is a file called stitching_detailed.cpp. It implements the whole pipeline for creating panoramas, including feature extraction, matching, warping, blending, etc.
You should have a look at it:
https://github.com/Itseez/opencv/blob/master/samples/cpp/stitching_detailed.cpp
I'm currently implementing stereo vision with OpenCV. Now I'm using the Stereo_Calib sample to remove the distortion and rectify the images. Removing the distortion works fine.
But when I apply rectification, the image is very warped.
This is the code to rectify the images. The rmap parameters are calculated in the same way as in the Stereo_Calib example (see here):
void StereoCalibration::StereoRectify(Mat &imageLeft, Mat &imageRight)
{
    Mat imLeft, imRight;
    remap(imageLeft,  imLeft,  DistLeft.rmap[0],  DistLeft.rmap[1],  CV_INTER_CUBIC);
    remap(imageRight, imRight, DistRight.rmap[0], DistRight.rmap[1], CV_INTER_CUBIC);
    imageLeft  = imLeft;
    imageRight = imRight;
}
I realise this question is a few years old; however, I have recently had a similar issue. Building on morynicz's answer about a "bad chessboard" pattern used to calibrate stereo images, I found that even a slight deformation in your chessboard pattern, for example if it isn't flat, can produce large warping in the stereo image pair on rectification. The algorithms in OpenCV assume a flat chessboard pattern is being presented, so any physical deformation in that pattern will be wrongly attributed to distortions in the camera optics (or to the relative orientations of the two camera sensors). The algorithms will then try really hard to remove this false distortion, leading to very warped images.
To avoid this problem, where possible, use a tablet (or other electronic screen) to display the chessboard pattern, as it is then guaranteed to be flat.
Additionally, you should check that the images you are using to calibrate the stereo pair are in focus and have no motion blur or image tearing.
If using OpenCV to do the rectification, do some experimentation with the flags passed to the stereoCalibrate function, as this may lead to a more "optimised" rectification for your particular application.
For anyone looking for help on this: I was dealing with very high-resolution images and was getting a very low reprojection error with good calibration images, yet I was still getting very warped stereo pairs after rectification and a really bad depth map.
One thing to try if your images are warped is to down-sample them.
Another thing to try is to combine the flags in stereoCalibrate instead of just choosing one.
Something like this worked for me :
cv2.stereoCalibrate(
    object_points, image_points_left, image_points_right,
    camera_matrix_left, dist_left,
    camera_matrix_right, dist_right,
    (5472, 3648), None, None, None, None,
    cv2.CALIB_FIX_ASPECT_RATIO + cv2.CALIB_ZERO_TANGENT_DIST +
    cv2.CALIB_USE_INTRINSIC_GUESS + cv2.CALIB_SAME_FOCAL_LENGTH +
    cv2.CALIB_RATIONAL_MODEL,
    criteria)
I had the same problem, and I think the issue was a bad chessboard used for calibration, or mixing up the maps.
I started working on OpenCV stereo image calibration and rectification recently and was getting similar images. While it is true that you should make sure the board is flat, and that you need to take multiple images at the corners and in the middle of the frame at different x, y, z and skew positions, what did the trick for me was the flags in stereoCalibrate. I used all the flags specified in the OpenCV docs except for CALIB_USE_INTRINSIC_GUESS, and it started producing very nicely undistorted and rectified images.