How can I stitch a spherical panorama from six images in OpenCV when I have a list of correspondences between frames (taken from Hugin, for example)?
Thank you.
I am trying to do live panorama stitching from six camera streams (same camera model). Currently I am adapting the stitching_detailed.cpp sample from OpenCV to my needs.
My camera lenses have a great amount of barrel distortion. Therefore, I calibrated my cameras with the checkerboard calibration provided by OpenCV and obtained the respective intrinsic and distortion parameters. By applying getOptimalNewCameraMatrix and initUndistortRectifyMap I get an undistorted image which fulfills my needs.
I have read in several sources that image lens correction should benefit image stitching. So far I have used the previously undistorted images as input to stitching_detailed.cpp, and the resulting stitched image looks fine.
However, my question is if I could somehow include the undistort step in the stitching pipeline.
[Stitching pipeline diagram from the OpenCV documentation]
I am doing a cylindrical warping of the images in the stitching process. My guess is that during this warping I could somehow include the undistortion maps calculated beforehand by initUndistortRectifyMap.
I also do not know whether the camera intrinsic matrix K from getOptimalNewCameraMatrix could help me in the whole process.
I am kind of lost and any help would be appreciated, thanks in advance.
I have two calibrated cameras with known intrinsic and extrinsic parameters. I also have about 30 points in one image and their correspondences in the other image plane.
How can I obtain the depth of only these points? Any code or resource will be really helpful.
I'm using Python and OpenCV 3.4 to implement it.
So I am trying to make a panorama in the browser out of six videos taken from six cameras. The stitching is done with OpenCV in Python, which gives me back the homographies. How can I apply the homographies to six Three.js planes so that I get the same result as OpenCV? And is it possible to extract the translation/rotation from a homography so that I can apply it to the plane?
Thanks a lot!
I am projecting an image on the wall using a DLP projector and then capturing the scene with a pin-hole camera. Both the camera and projector have a radial distortion.
I calibrated both of them simultaneously and obtained the distortion coefficients for each.
How should I undistort the captured image in order to cancel both distortions (the camera's and the projector's), so that I get an image that theoretically matches exactly the one I sent to the projector in the first place?
I am using OpenCV but any theoretical hint is appreciated.
If you calibrated them, then presumably you can just undistort using those coefficients.
Also, if you calibrate the camera separately, then you can undistort the projected images and use these undistorted images to calibrate the projector.
Is it possible to create a stitched image without losing image position and geo-referencing using OpenCV?
For example, I have two images taken from a plane and two polygons that describe where on the ground they are located. I am using the OpenCV stitching example. The OpenCV stitching process will rotate and reposition the images. How can I preserve my geographic information after stitching? Is it possible?
Thanks in advance!