I am trying to build a panorama in the browser from 6 videos taken by 6 cameras. The stitching is done with OpenCV in Python, which gives me back the homographies. How can I apply those homographies to 6 Three.js planes so that I get the same result as OpenCV? And is it possible to extract the translation/rotation from a homography so that I can apply it to a plane?
Thanks a lot!
Related
I want to try augmented reality applications with OpenCV and OpenGL.
I have a camera with a 170-degree-FOV fisheye lens, and I want to draw 3D polygons with these libraries.
I am trying to apply the distortion coefficients in OpenGL, and I also want to convert the OpenCV camera matrix into an OpenGL perspective projection matrix.
I am rendering 3D mesh polygons with OpenCV on the distorted images (my camera model is Scaramuzza's omnidirectional model). Rendering freezes when the polygons start to go out of frame, and the frame rate is very low.
I have not managed to get this working.
What approach should I follow?
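For the camera-matrix part of the question, there is a standard (but convention-sensitive) mapping from OpenCV pinhole intrinsics to an OpenGL projection matrix. The sketch below uses one common form; the signs in rows 0–1 depend on where your image origin sits (OpenCV uses top-left, OpenGL's NDC is y-up), so it should be verified against your own rendering setup. It also only covers the pinhole part: OpenGL's fixed-function projection cannot consume fisheye distortion coefficients directly, so a 170° lens needs the image undistorted first (or a custom shader), and a single pinhole projection cannot represent a FOV that wide without heavy cropping.

```python
import numpy as np

def opencv_to_opengl_projection(fx, fy, cx, cy, w, h, znear, zfar):
    """Build a column-vector OpenGL projection matrix from OpenCV
    intrinsics. Assumes the OpenCV image origin is the top-left pixel;
    sign conventions differ between references, so check row 1 against
    your renderer."""
    return np.array([
        [2*fx/w, 0.0,     (w - 2*cx)/w,                0.0],
        [0.0,    2*fy/h,  (2*cy - h)/h,                0.0],
        [0.0,    0.0,    -(zfar+znear)/(zfar-znear),  -2*zfar*znear/(zfar-znear)],
        [0.0,    0.0,    -1.0,                         0.0],
    ])

# Illustrative intrinsics, not from a real calibration.
P = opencv_to_opengl_projection(fx=500, fy=500, cx=320, cy=240,
                                w=640, h=480, znear=0.1, zfar=100.0)

# Sanity check: a point on the optical axis (camera looks down -Z in
# OpenGL) should project to the center of normalized device coordinates.
p = P @ np.array([0.0, 0.0, -1.0, 1.0])
ndc = p[:3] / p[3]
print(ndc[:2])  # should be very close to (0, 0)
```

The freezing when polygons leave the frame is usually a clipping problem rather than a projection problem: points behind the camera or far outside the frustum must be clipped before rasterizing, which OpenGL does for you but a manual OpenCV `polylines`/`fillPoly` draw does not.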
I am trying to do a live panorama stitching while using 6 cameras streams (same camera model). Currently I am adapting the stitching_detailed.cpp file from OpenCV according to my needs.
My cameras' lenses have a great amount of barrel distortion. Therefore, I calibrated my cameras with the checkerboard calibration functions provided by OpenCV and obtained the respective intrinsic and distortion parameters. By applying getOptimalNewCameraMatrix and initUndistortRectifyMap I get an undistorted image that fulfills my needs.
I have read in several sources that image lens correction should benefit image stitching. So far I have used the previously undistorted images as input to stitching_detailed.cpp, and the resulting stitched image looks fine.
However, my question is if I could somehow include the undistort step in the stitching pipeline.
[Stitching pipeline diagram taken from the OpenCV documentation]
I am doing a cylindrical warping of the images in the stitching process. My guess is that during this warping I could somehow include the undistortion maps calculated beforehand by initUndistortRectifyMap.

I also do not know whether the camera intrinsic matrix K from getOptimalNewCameraMatrix could somehow help me in the whole process.
I am kind of lost and any help would be appreciated, thanks in advance.
I have seen some 3D facial-scanning devices that use 3 cameras to produce a 3D picture of a face.
Is there a specific angle at which these cameras should be fixed for this calculation?
Is there any SDK or tool in this domain that could simplify producing a 3D image from these fixed cameras?
The smaller the angle between the cameras, the less depth information you will get from them. So the angle is important, but I cannot say it needs to be exactly x degrees.
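The trade-off in the answer above can be made concrete with the standard stereo depth formula: for focal length f (pixels), baseline B (meters), and disparity d (pixels), depth is Z = f·B/d, and a one-pixel disparity error causes a depth error of roughly Z²/(f·B). The numbers below are purely illustrative, not from any specific device, but they show why a wider baseline (larger angle) gives better depth resolution at face-scanning distances.

```python
# Stereo depth sensitivity: Z = f*B/d, so dZ/dd ≈ Z**2 / (f*B) for a
# one-pixel disparity error. Illustrative values only.
f = 800.0   # focal length in pixels
B = 0.10    # 10 cm baseline between cameras

for Z in (0.3, 0.6, 1.0):  # face at 30 / 60 / 100 cm
    dZ = Z**2 / (f * B)
    print(f"Z = {Z:.1f} m -> depth error per pixel of disparity "
          f"≈ {dZ * 1000:.1f} mm")
```

Too wide a baseline, on the other hand, reduces the overlap between views and makes matching facial features across cameras harder, which is why commercial rigs pick a compromise rather than a single "correct" angle.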
How can I stitch a spherical panorama from 6 images in OpenCV when I have a list of correspondences between frames (taken from Hugin, for example)?
Thank you.
Is it possible to create a stitched image without losing image position and geo-referencing using OpenCV?
For example, I have 2 images taken from a plane, and I have 2 polygons that describe where on the ground they are located. I am using the OpenCV stitching example. The OpenCV stitching process rotates the images and changes their positions. How can I preserve my geographic information after stitching? Is it possible?
Thanks in advance!