I'm using this function to undistort images from a fisheye camera, and the result is very good, but I cannot find the coefficient that controls the amount of undistortion.
With non-fisheye cameras I use
getOptimalNewCameraMatrix
where the alpha parameter controls the scaling of the result, from 0 to 1.
But in
fisheye::estimateNewCameraMatrixForUndistortRectify
I cannot see how to do the same thing.
Can anyone suggest how?
OpenCV (for fisheye and non-fisheye cameras alike) uses a model based on the pinhole camera model.
In the case of a non-fisheye camera you can undistort 100% of the initial image.
But for a fisheye camera with a FOV of ~180 degrees, the undistorted image would have infinite size. So fisheye::estimateNewCameraMatrixForUndistortRectify just calculates some "reasonable" zoom factor and doesn't let you cover 100% of the undistorted image surface.
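That said, the Python binding cv2.fisheye.estimateNewCameraMatrixForUndistortRectify does expose a balance parameter in the range [0, 1] that is roughly analogous to alpha. A minimal sketch of the usual call sequence, assuming the camera was calibrated with cv2.fisheye.calibrate; the K, D values and file name below are illustrative placeholders:

```python
import cv2
import numpy as np

# Minimal sketch; K and D would normally come from cv2.fisheye.calibrate.
# The values and the file name here are illustrative placeholders.
K = np.array([[400.0, 0.0, 640.0],
              [0.0, 400.0, 480.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # fisheye model uses 4 coefficients

img = cv2.imread("frame.png")
h, w = img.shape[:2]

# balance=0 keeps only the central, fully valid region;
# balance=1 keeps as much of the original field of view as possible.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.5)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
```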
Consider the following problem:
I have the original image A saved as "A.png".
Moreover, I also have a camera video feed that shows (possibly under some perspective transformation) an image of A, denoted Va, with some level of radial distortion.
I also have a homography from A to Va, and its inverse.
How could I undistort Va? Note that I do not want to undo the perspective transformation, just remove the radial distortion from Va.
Example:
I have a fully mapped and undistorted reference image (including real world size)
an image from a video frame (left image)
and a homography (and its inverse) between the two.
In our use case, the left image would have radial distortion, but we would like to remove it without applying a simple backprojection (which would create artifacts).
Undistortion is the process of transforming a distorted image (e.g. an image with fisheye distortion) into its undistorted version.
In this case your video frames do not suffer from distortion, and since you have already determined the homography matrix, you only need to apply a perspective transformation.
You might need to invert the homography matrix if you need to reverse the direction of the transformation.
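For illustration, a minimal Python sketch of applying a known homography with cv2.warpPerspective; the matrix values, file name, and output size below are placeholders:

```python
import cv2
import numpy as np

# Minimal sketch, assuming H (3x3) maps frame coordinates to reference-image
# coordinates; the matrix values and sizes are illustrative placeholders.
frame = cv2.imread("frame.png")
H = np.array([[1.02, 0.01, -15.0],
              [0.00, 0.98,   8.0],
              [0.00, 0.00,   1.0]])

ref_size = (800, 600)                       # (width, height) of the reference plane
warped = cv2.warpPerspective(frame, H, ref_size)

# For the opposite direction, either pass np.linalg.inv(H) or set WARP_INVERSE_MAP.
restored = cv2.warpPerspective(warped, H,
                               (frame.shape[1], frame.shape[0]),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```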
My setup has checkerboard charts with known world coordinates present in each image, which I use to stitch the images together (in a 2D plane) and to find my P-matrix. However, I am stuck on finding a general approach to combining all my images into a spherical image.
Known:
Ground truth correspondence points in each image
camera calibration parameters (camera matrix, distortion coefficients)
homography between images
world-to-image projection matrix P = K[R | t] for each image (although I think the estimation of this matrix isn't that great)
real-world coordinates of the ground-truth points
the camera undergoes almost pure rotation, with minimal translation (see the sketch below the Unknown list)
I know OpenGL well enough to do the spherical/texture mapping once I can stitch the images into a cubemap format
Unknown:
Spherical image
cubemap image
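Since the camera is assumed to be nearly rotation-only, each cubemap-face pixel can be mapped to a source-image pixel through K·R. The following is a hypothetical sketch under that assumption; the face convention, sizes, and inputs are placeholders, not the actual setup:

```python
import cv2
import numpy as np

# Hypothetical sketch: render one cubemap face (+Z) from a single source image,
# assuming a rotation-only camera with intrinsics K (3x3) and rotation R
# (world -> camera). All conventions and values are placeholders.
def cubemap_face_plus_z(img, K, R, face_size=512):
    # Unit-cube directions for the +Z face, expressed in world coordinates.
    u, v = np.meshgrid(np.linspace(-1, 1, face_size),
                       np.linspace(-1, 1, face_size))
    dirs = np.stack([u, -v, np.ones_like(u)], axis=-1)        # (H, W, 3)

    cam = dirs @ R.T                                          # rotate into camera frame
    pix = cam @ K.T                                           # project through K
    map_x = (pix[..., 0] / pix[..., 2]).astype(np.float32)
    map_y = (pix[..., 1] / pix[..., 2]).astype(np.float32)

    # Directions behind the camera are invalid; push them outside the image.
    behind = cam[..., 2] <= 0
    map_x[behind] = -1
    map_y[behind] = -1
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```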
I am attempting camera calibration from a single RGB image (a panorama), given a 3D point cloud.
The methods that I have considered all require an intrinsic parameters matrix, which I do not have access to.
The intrinsic matrix can be estimated using Bouguet's camera calibration toolbox, but as I said, I have only a single image and a single point cloud for that image.
So, knowing 2D image coordinates, extrinsic properties, and 3D world coordinates, how can the intrinsic properties be estimated?
It would seem that the initCameraMatrix2D function from OpenCV (https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) works in the same way as Bouguet's camera calibration toolbox and requires multiple images of the same object.
I am also looking into the Direct Linear Transformation (DLT) and the Levenberg–Marquardt algorithm, with implementations at https://drive.google.com/file/d/1gDW9zRmd0jF_7tHPqM0RgChBWz-dwPe1,
but it would seem that both use the pinhole camera model and therefore find a linear transformation between 3D and 2D points.
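For reference, a small sketch of the DLT approach mentioned above, assuming at least six non-coplanar 2D-3D correspondences; the correspondences here are synthetic placeholders, and the intrinsics are pulled out of the estimated projection matrix with cv2.decomposeProjectionMatrix:

```python
import cv2
import numpy as np

# Sketch of the DLT: estimate the 3x4 projection matrix P from N >= 6
# non-coplanar 3D-2D correspondences, then split out the intrinsics.
def dlt_projection_matrix(pts3d, pts2d):
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # P is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    return Vt[-1].reshape(3, 4)

# Synthetic example: project random 3D points with a known P, then recover it.
rng = np.random.default_rng(0)
K_true = np.array([[1000.0, 0, 960], [0, 1000.0, 600], [0, 0, 1]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
P_true = K_true @ Rt
pts3d = rng.uniform(-1, 1, (20, 3))
proj = np.hstack([pts3d, np.ones((20, 1))]) @ P_true.T
pts2d = proj[:, :2] / proj[:, 2:3]

P = dlt_projection_matrix(pts3d, pts2d)
K, R, t_hom = cv2.decomposeProjectionMatrix(P)[:3]
print(K / K[2, 2])   # should closely match K_true
```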
I can't find my half-year-old source code, but off the top of my head:
cx, cy is the optical centre, which is width/2, height/2 in pixels.
fx = fy is the focal length in pixels (the distance from the camera to the image plane, or to the axis of rotation).
If you know that the distance from the camera to the imaged plane is, for example, 30 cm and it captures an area of 16x10 cm at 1920x1200 pixels, then the pixel size is 100 mm / 1200 = 1/12 mm, the camera distance (fx, fy) would be 300 mm * 12 px/mm = 3600 px, and the image centre is cx = 1920/2 = 960, cy = 1200/2 = 600. I assume that the pixels are square and the camera sensor is centred on the optical axis.
You can get the focal length from the image size in pixels and the measured angle of view.
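As a quick sketch of that last point, assuming a pinhole model and square pixels; the resolution and field of view below are illustrative, not taken from the question:

```python
import numpy as np

# Focal length in pixels from a measured horizontal field of view.
# Resolution and FOV are illustrative placeholders.
width_px, height_px = 1920, 1200
hfov_deg = 60.0

fx = (width_px / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
cx, cy = width_px / 2.0, height_px / 2.0   # optical centre approximated by image centre

K = np.array([[fx, 0.0, cx],
              [0.0, fx, cy],
              [0.0, 0.0, 1.0]])
print(K)
```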
I have a stereo image pair of, say, 100x100 resolution. I did the calibration, and I am able to rectify the pair properly and compute the disparity. Now I have a cropped image of size 50x50, with the ROI centred in the image. If I have to use the same calibration matrices, what should I do? Is adjusting the principal point in the camera matrix enough, or do we need to do anything else?
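For illustration only, a sketch of the principal-point adjustment the question refers to, with placeholder intrinsics: cropping (without resizing) leaves the focal lengths unchanged and shifts the principal point by the ROI's top-left offset.

```python
import numpy as np

# Placeholder intrinsics for the full 100x100 image (illustrative values).
fx, fy, cx, cy = 120.0, 120.0, 50.0, 50.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

x0, y0 = 25, 25                 # top-left corner of the centred 50x50 ROI
K_crop = K.copy()
K_crop[0, 2] -= x0              # cx' = cx - x0
K_crop[1, 2] -= y0              # cy' = cy - y0
```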
In my case, I use four sets of points to do the bird's-eye projection, but I forgot to do the camera calibration first!
So I want to know: is the result the same when camera calibration is done before the bird's-eye projection as when it is done after the bird's-eye projection in OpenCV?
Can you give me some advice? Thank you very much.
Can you specify which calibration you refer to? There are generally two kinds of camera parameters you can estimate during calibration: intrinsic and extrinsic.
Intrinsic parameters can, for simplicity, be assumed 'fixed' for a particular camera (lens plus sensor). They typically include the focal length, the sensor's dimensions, and the distortion coefficients.
Extrinsic parameters are 'dynamic' and typically refer to the camera's position and orientation.
Now, if you represent those as abstract transformations, they don't commute, which means you can't change their order. So if you want to apply a homography to an image, you have to undistort it first, because a homography maps one plane to another plane, and distortion bends those planes so they are no longer planar.
On the other hand, once you apply one transform, you can estimate how much of the other transform you have 'left to do'. This is fine for linear operations, but it turns ugly if you warp a distorted image with a homography and THEN try to undistort it.
TL;DR: perform intrinsic calibration and undistortion first, since it is easier and those parameters are fixed for the camera, then apply your transformations.
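A minimal Python sketch of that order, assuming the intrinsics and distortion coefficients come from an earlier cv2.calibrateCamera run and the four point pairs are picked on the undistorted image; all numeric values below are placeholders:

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients (in practice: cv2.calibrateCamera).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

frame = cv2.imread("frame.png")
undistorted = cv2.undistort(frame, K, dist)

# Pick the four point pairs on the *undistorted* image, not on the raw frame.
src = np.float32([[480, 400], [800, 400], [1180, 700], [100, 700]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])
H = cv2.getPerspectiveTransform(src, dst)

birds_eye = cv2.warpPerspective(undistorted, H, (1280, 720))
```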