Need to remove Camera Lens Distortion - OpenCV

We have an ELP 180 Degree Super Wide Angle Distortion Correction camera and need to normalise the images captured from it. It behaves neither like a fisheye camera nor like a standard one.
As far as I understand it is barrel distortion, but notice that straight lines are curved horizontally while vertical lines are not. The manufacturer says the camera is 'Distortion Corrected', so presumably they mean the vertical distortion is corrected but the horizontal is not.
We tried the following two ways to remove the distortion, but neither result is perfect. Please help us remove it. Thanks a lot.
We tried OpenCV camera calibration to get the camera's intrinsic parameters and distortion coefficients.
Intrinsic Parameters
[673.9683892, 0., 343.68638231]
[0., 676.08466459, 245.31865398]
[0., 0., 1.]
Distortion
[5.44787247e-02, 1.23043244e-01, -4.52559581e-04, 5.47011732e-03, -6.83110234e-01]
MATLAB Computer Vision Toolbox to get intrinsic parameters and distortion coefficients
Intrinsic Parameters
[291.11314081, 0.0, 289.772432415],
[0.0, 274.219315391, 223.73258747],
[0., 0., 1.0]
Distortion
[-3.0108207175179114e-01, 1.0803633903579697e-01, 4.3487318865386296e-03, -5.9566348399883859e-04, -1.8867490263403317e-02]
Result
Original image:
After Removing Distortion:
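For reference, this is roughly how such parameters are applied in OpenCV (a minimal sketch using the OpenCV values above, not necessarily our exact script; the input file name is a placeholder):
import numpy as np
import cv2

K = np.array([[673.9683892, 0., 343.68638231],
              [0., 676.08466459, 245.31865398],
              [0., 0., 1.]])
dist = np.array([5.44787247e-02, 1.23043244e-01, -4.52559581e-04,
                 5.47011732e-03, -6.83110234e-01])

img = cv2.imread('input.jpg')
h, w = img.shape[:2]
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, K, dist, None, new_K)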

To me it looks like there may be a prism (or a digital prism-equivalent remapping filter) "squeezing" the image horizontally, which has the effect of visually accentuating the barrel in the horizontal direction.
If I am right, I don't think the standard OpenCV Heikkila-Silven model can fit it. You'll need to fit two separate higher-order polynomials in (x, y), one for the horizontal component of the distortion and one for the vertical one.
Look up "anamorphic lens distortion".
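If you want to try that route, here is a rough sketch: fit two independent 2D polynomials from detected chessboard corners (one giving the horizontal sample position, one the vertical), then resample with cv2.remap. The file name, board size, grid spacing and polynomial order are placeholder assumptions, not values from the question.
import numpy as np
import cv2

def poly_terms(x, y, order=3):
    # All monomials x^i * y^j with i + j <= order, stacked along the last axis.
    return np.stack([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=-1)

img = cv2.imread('chessboard.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
board = (9, 6)                                   # inner corners per row/column
found, corners = cv2.findChessboardCorners(gray, board)
assert found
distorted = corners.reshape(-1, 2)

# Ideal (undistorted) corner locations: a regular grid, crudely scaled and
# centred on the detected corners to stand in for ground truth.
gx, gy = np.meshgrid(np.arange(board[0]), np.arange(board[1]))
ideal = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float64) * 60.0
ideal += distorted.mean(axis=0) - ideal.mean(axis=0)

# For every ideal output pixel, learn where to sample from in the source image
# (normalising coordinates first would improve conditioning).
A = poly_terms(ideal[:, 0], ideal[:, 1])
coeff_x, *_ = np.linalg.lstsq(A, distorted[:, 0], rcond=None)
coeff_y, *_ = np.linalg.lstsq(A, distorted[:, 1], rcond=None)

h, w = gray.shape
yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
T = poly_terms(xx, yy)
map_x = (T @ coeff_x).astype(np.float32)
map_y = (T @ coeff_y).astype(np.float32)
undistorted = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)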

Try using the fisheye camera model in the Camera Calibrator app in MATLAB.
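If you would rather stay in OpenCV, its cv2.fisheye module implements a comparable model; here is a rough sketch of calibrating and undistorting with it (board size, image paths, and flags are placeholder assumptions):
import glob
import numpy as np
import cv2

board = (6, 9)
objp = np.zeros((1, board[0] * board[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob('calib/*.jpg'):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners.reshape(1, -1, 2))

K = np.zeros((3, 3))
D = np.zeros((4, 1))
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW
rms, K, D, _, _ = cv2.fisheye.calibrate(objpoints, imgpoints, gray.shape[::-1], K, D, flags=flags)

img = cv2.imread('test.jpg')
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)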

Related

Align two images with extrinsic and intrinsic matrices

I have a simple problem, but still it confuses me somehow.
I have two cameras. One is closer to an object than the other. I have the intrinsic matrices of both cameras, as well as the distortion vectors (which are 0). I have already calculated the extrinsic matrix between the two cameras.
Now I want to transform the image from the camera that is further away into the same coordinates as the closer image, so that the images are aligned and have the same size.
Does anyone know how I can do this using the intrinsic and extrinsic matrices?
Thanks in advance!

Radial distortion correction, camera parameters and openCV

I am trying to undistort barrel/radial distortion in an image. The equations I have seen do not require the focal length of the camera, but the OpenCV API initUndistortRectifyMap requires it as part of the camera intrinsic matrix. Why is that? Is there any way to do it without it? As I understand it, this undistortion step is common to various distortion corrections.
The focal length is essential in distortion removal, since it is part of the camera's intrinsic parameters, and it is fairly simple to add it to the camera matrix. Just remember that you have to convert it from millimeters to pixels; using separate horizontal and vertical values accounts for pixels that may not be square. For the conversion you need the sensor's width and height in millimeters, the horizontal (Sh) and vertical (Sv) number of pixels of the sensor, and the focal length in millimeters. The conversion is done using the following equations:
fx = f(mm) * Sh(px) / sensor_width(mm)
fy = f(mm) * Sv(px) / sensor_height(mm)
More on the camera matrix elements can be found here.
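As a rough sketch of that conversion in code (the sensor specification, distortion coefficients, and file name below are made-up placeholders, not values from the question):
import numpy as np
import cv2

f_mm = 3.6                       # focal length in millimetres
sensor_w_mm, sensor_h_mm = 4.8, 3.6
Sh, Sv = 640, 480                # sensor resolution in pixels

fx = f_mm * Sh / sensor_w_mm     # focal length in pixels, horizontal
fy = f_mm * Sv / sensor_h_mm     # focal length in pixels, vertical
K = np.array([[fx, 0., Sh / 2],
              [0., fy, Sv / 2],
              [0., 0., 1.]])
dist = np.array([-0.3, 0.1, 0., 0., 0.])   # example distortion coefficients

map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (Sh, Sv), cv2.CV_32FC1)
img = cv2.imread('input.jpg')
undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)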

Camera calibration after doing Bird's Eye Projection in OpenCV

In my case, I use four sets of points to do the bird's eye projection, but I forgot to do the camera calibration first!
So I want to know: is the result the same whether the camera calibration is done before the bird's eye projection or after it in OpenCV?
Can you give me some advice? Thank you very much.
Can you specify which calibration you are referring to? There are generally two kinds of camera parameters you can estimate during calibration: intrinsic and extrinsic.
Intrinsic parameters can, for simplicity, be assumed 'fixed' for a particular camera (including its lens and sensor). They typically include the focal length, the sensor's dimensions, and the distortion coefficients.
Extrinsic parameters are 'dynamic', and typically refer to camera position and orientation.
Now, if you represent those as some abstract transformations - they don't commute, which means you can't change their order. So, if you want to apply homography to an image - you have to undistort it first, because generally homography maps plane to another plane, and after distortion your planes will be messed up.
But on the other hand, once you apply one transform, you can estimate how much of other transform you have 'left to do'. This is OK for linear stuff, but turns ugly if you warp distorted image using homography and THEN try to undistort it.
TL;DR: perform intrinsic calibration and undistortion first, since it is easier and those parameters are fixed for the camera, then apply your transformations.
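A minimal sketch of that order, with placeholder intrinsics, distortion coefficients, and point sets:
import numpy as np
import cv2

K = np.array([[600., 0., 320.],
              [0., 600., 240.],
              [0., 0., 1.]])
dist = np.array([-0.2, 0.05, 0., 0., 0.])

img = cv2.imread('road.jpg')
undistorted = cv2.undistort(img, K, dist)

# Four image points picked on the *undistorted* image, and where they should
# land in the bird's-eye view (placeholder coordinates).
src = np.float32([[200, 300], [440, 300], [620, 470], [20, 470]])
dst = np.float32([[100, 0], [540, 0], [540, 480], [100, 480]])
H = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(undistorted, H, (640, 480))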

camera calibration for single plane

My problem statement is very simple, but I am unable to get the OpenCV calibration to work for me. I am using the code from here: source code.
I have to take images parallel to the camera at a fixed distance. I tried taking test images (about 20 of them), both only parallel to the camera and at different planes. I also changed the size and the number of squares.
What would be the best way to calibrate in this scenario?
The undistorted image is cropped later, that's why it looks smaller.
After going through the images closely, the pincushion distortion seems to have been corrected. But the "trapezoidal" distortion still remains. Since the camera is mounted in a closed box, the planes at which I can take images is limited.
To simplify what Vlad already said: It is theoretically impossible to calibrate your camera with test images only parallel to the camera. You have to change your calibration board's orientation. In fact, you should have different orientation in each test image.
Check out the first two images in the link below to see how the calibration board should be slanted (or tilted):
http://www.vision.caltech.edu/bouguetj/calib_doc/
Think about the calibration problem as finding a projection matrix P:
image_points = P * 3d_points, where P = intrinsic * extrinsic
Now just bear with me:
You are basically interested in the intrinsic part, but the calibration algorithm has to find both intrinsic and extrinsic. Now, each column of the projection matrix can be obtained if you select a 3D point at infinity, for example xInf = [1, 0, 0, 0]. This point is at infinity because when you convert it from homogeneous to Cartesian coordinates you get
[1/0, 0, 0]. If you multiply the projection matrix by a point at infinity you get the corresponding column (the 1st for xInf, the 2nd for yInf, the 3rd for zInf, and the 4th for the camera center).
Thus the conclusion is simple - to get a projection matrix (that is a successful calibration) you have to clearly see points at infinity or vanishing points from the converging extensions of lines in your chessboard rig (aka end of the railroad tracks at the horizon). Your images don’t make it easy to detect vanishing points since you don’t slant your chessboard, nor rotate nor scale it by stepping back. Thus your calibration will always fail.
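A tiny numeric check of that column property (the intrinsics and extrinsics below are arbitrary example values):
import numpy as np

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
R = np.eye(3)
t = np.array([[0.1], [0.2], [2.0]])
P = K @ np.hstack([R, t])          # 3x4 projection, P = intrinsic * extrinsic

xInf = np.array([1., 0., 0., 0.])  # point at infinity along X
print(P @ xInf)                    # equals the first column of P
print(P[:, 0])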

Warping Perspective using arbitrary rotation angle

I have an image of a chessboard taken at some angle. Now I want to warp the perspective so the chessboard image looks as if it had been taken directly from above.
I know that I could try to use 'findHomography' between matched points, but I wanted to avoid that and instead use, e.g., rotation data from the mobile device's sensors to build the homography matrix on my own. I calibrated my camera to get the intrinsic parameters. Then, let's say the following image was taken at an angle of ~60 degrees around the x-axis. I thought all I had to do was multiply the camera matrix by the rotation matrix to obtain the homography matrix. I tried the following code, but it looks like I am not understanding something correctly, because it does not work as expected (the result image is completely black or white).
import cv2
import numpy as np
import math
camera_matrix = np.array([[5.7415988502105745e+02, 0., 2.3986181527877352e+02],
                          [0., 5.7473682183375217e+02, 3.1723734404756237e+02],
                          [0., 0., 1.]])
distortion_coefficients = np.array([1.8662919398453856e-01, -7.9649812697463640e-01,
                                    1.8178068172317731e-03, -2.4296638847737923e-03,
                                    7.0519002388825025e-01])
theta = math.radians(60)
rotx = np.array([[1, 0, 0],
                 [0, math.cos(theta), -math.sin(theta)],
                 [0, math.sin(theta), math.cos(theta)]])
homography = np.dot(camera_matrix, rotx)
im = cv2.imread('data/chess1.jpg')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
im_warped = cv2.warpPerspective(gray, homography, (480, 640), flags=cv2.WARP_INVERSE_MAP)
cv2.imshow('image', im_warped)
cv2.waitKey()
I also have distortion_coefficients after calibration. How can those be incorporated into the code to improve results?
This answer is awfully late by several years, but here it is ...
(Disclaimer: my use of terminology in this answer may be imprecise or incorrect. Please do look up on this topic from other more credible sources.)
Remember:
Because you only have one image (view), you can only compute 2D homography (perspective correspondence between one 2D view and another 2D view), not the full 3D homography.
Because of that, the nice intuitive understanding of the 3D homography (rotation matrix, translation matrix, focal distance, etc.) is not available to you.
What we say is that with 2D homography you cannot factorize the 3x3 matrix into those nice intuitive components like 3D homography does.
You have one matrix - (which is the product of several matrices unknown to you) - and that is it.
However,
OpenCV provides a getPerspectiveTransform function which solves for the 3x3 perspective matrix (using homogeneous coordinates) of a 2D homography between two planar quadrilaterals.
Link to documentation
To use this function,
Find the four corners of the chessboard on the image. These will be your source coordinates.
Supply four rectangle corners of your choice. These will be your destination coordinates.
Pass the source coordinates and destination coordinates into getPerspectiveTransform to generate a 3x3 matrix that is able to dewarp your chessboard to an upright rectangle; a minimal sketch follows after the notes below.
Notes to remember:
Mind the ordering of the four corners.
If the source coordinates are picked in clockwise order, the destination also needs to be picked in clockwise order.
Likewise, if counter-clockwise order is used, do it consistently.
Likewise, if z-order (top left, top right, bottom left, bottom right) is used, do it consistently.
Failure to order the corners consistently will generate a matrix that executes the point-to-point correspondence exactly (mathematically speaking), but will not generate a usable output image.
The aspect ratio of the destination rectangle can be chosen arbitrarily. In fact, it is not possible to deduce the "original aspect ratio" of the object in world coordinates, because "this is 2D homography, not 3D".
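A minimal sketch of those steps (the corner coordinates are made-up placeholders standing in for the chessboard's outer corners on the image):
import numpy as np
import cv2

img = cv2.imread('data/chess1.jpg')

# Source: four chessboard corners picked on the image, in clockwise order.
src = np.float32([[105, 180], [430, 160], [470, 410], [80, 440]])
# Destination: an upright rectangle, listed in the same clockwise order.
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

M = cv2.getPerspectiveTransform(src, dst)
dewarped = cv2.warpPerspective(img, M, (400, 400))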
One problem is that to multiply by a camera matrix you need some concept of a z coordinate. You should start by getting basic image warping given Euler angles to work before you think about distortion coefficients. Have a look at this answer for a slightly more detailed explanation and try to duplicate my result. The idea of moving your image down the z axis and then projecting it with your camera matrix can be confusing; let me know if any part of it does not make sense.
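For what it is worth, a rough sketch of that "move the image along the z axis, then project" idea, using the image size from the question and a placeholder focal length; this is one common construction, not necessarily the linked answer's exact code:
import math
import numpy as np
import cv2

w, h = 480, 640            # portrait image size from the question
f = 574.0                  # placeholder, roughly the fx from the calibration
theta = math.radians(60)

# Lift image points (x, y, 1) to 3D points (x - w/2, y - h/2, 0, 1).
A1 = np.array([[1, 0, -w / 2],
               [0, 1, -h / 2],
               [0, 0, 0],
               [0, 0, 1]])
# Rotate about the x axis in 3D.
R = np.array([[1, 0, 0, 0],
              [0, math.cos(theta), -math.sin(theta), 0],
              [0, math.sin(theta), math.cos(theta), 0],
              [0, 0, 0, 1]])
# Push the plane back along z so it sits in front of the camera.
T = np.eye(4)
T[2, 3] = f
# Project back to 2D with the intrinsics.
K = np.array([[f, 0, w / 2, 0],
              [0, f, h / 2, 0],
              [0, 0, 1, 0]])
H = K @ T @ R @ A1         # full 3x3 homography

im = cv2.imread('data/chess1.jpg')
warped = cv2.warpPerspective(im, H, (w, h))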
You do not need to calibrate the camera nor estimate the camera orientation (the latter, however, in this case would be very easy: just find the vanishing points of those orthogonal bundles of lines, and take their cross product to find the normal to the plane, see Hartley & Zisserman's bible for details).
The only thing you need to do is estimate the homography that maps the checkers to squares, then apply it to the image.
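A rough sketch of that suggestion, assuming the inner chessboard corners can be detected, with placeholder board and output square sizes:
import numpy as np
import cv2

img = cv2.imread('data/chess1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
board = (7, 7)                      # inner corners per row/column (placeholder)
found, corners = cv2.findChessboardCorners(gray, board)

# Target: a regular grid of squares; assumes the detected corner ordering
# matches the grid (row by row).
square = 50
gx, gy = np.meshgrid(np.arange(board[0]), np.arange(board[1]))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32) * square + square

H, _ = cv2.findHomography(corners.reshape(-1, 2), grid, cv2.RANSAC)
size = (square * (board[0] + 1), square * (board[1] + 1))
topdown = cv2.warpPerspective(img, H, size)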
