OpenCV: correcting radially distorted images when chessboard images are not available

How do I recover the correct image from a radially distorted image using OpenCV?
Please point me to useful links.
Edit
The biggest problem is that I have neither the camera used to take the picture nor any chessboard images.
Is that even possible?

Well, there is not much you can do if you don't have the camera, or at least a model of it. As you may know, the usual camera model is the pinhole model, which describes how 3D world coordinates are transformed (mapped) to 2D coordinates on the camera's image plane.
Camera Resectioning
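The pinhole mapping the answer refers to is x ~ K(RX + t): intrinsics K, extrinsics (R, t), followed by division by depth. A minimal numpy sketch, with all matrix values made up for illustration:

```python
import numpy as np

# Intrinsic matrix K (focal lengths fx, fy and principal point cx, cy
# are made-up values for illustration).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Extrinsics: identity rotation, camera at the world origin.
R = np.eye(3)
t = np.zeros((3, 1))

# A 3D world point one metre in front of the camera, slightly off-axis.
X_world = np.array([[0.1], [0.05], [1.0]])

# Project: x ~ K (R X + t), then divide by the third (depth) coordinate.
x_hom = K @ (R @ X_world + t)
u, v = (x_hom[:2] / x_hom[2]).ravel()
print(u, v)  # -> 400.0 280.0
```

Calibration is the problem of estimating K (and the lens distortion, which this linear model omits) from images, which is why it cannot be skipped if those numbers are unknown.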
If you don't have access to the camera or at least two chessboard images, you can't estimate the focal length, principal point, and distortion coefficients, at least not in the traditional way. If you have more images than the one you showed, or a video from that camera, you could try auto- (self-) calibration.
Camera auto-calibration
Another auto-calibration
yet another
Opencv auto-calibration

Related

Advantage of fish-eyes lens and Camera Calibration

The purpose of calibration is to correct the distortion in the image.
What is the main source of this distortion when a lens such as a fish-eye lens is used?
Q1: Suppose we want to identify objects and use a fish-eye lens to cover a wide view of the environment. Do we need to calibrate the camera, i.e. correct the image distortions before identifying the objects? Does the corrected image still cover the same objects? If it doesn't cover everything the distorted image does, what is the point of using a wide-angle lens? Wouldn't it be better to use an ordinary rectilinear lens and avoid calibrating the camera?
Q2: When computing the distortion parameters (intrinsic, extrinsic, etc.), do they need to be computed independently for every camera with the same specifications? That is, will the distortion parameters found for one camera work correctly for another camera of the same model?
Q1 answer: You need to dewarp the image/video that comes out of the camera. There are libraries that do this for you, and you can calibrate the dewarping according to your needs.
When dewarping the fisheye input, a little of the corners of the video feed is lost. This won't be a huge loss.
Q2 answer: Usually you don't need a different dewarping configuration per camera, but there are parameters to fine-tune if you want to.
FFmpeg has a lens correction filter; the parameters to fine-tune are also documented in the link.

How to rectify my own image to the cameras of the KITTI dataset using OpenCV

Based on the OpenCV documentation for stereoRectify, one can rectify an image given two camera matrices, their distortion coefficients, and the rotation-translation from one camera to the other.
I would like to rectify an image I took using my own camera to the stereo setup from the KITTI dataset. From their calibration files, I know the camera matrix and size of images before rectification of all the cameras. All their data is calibrated to their camera_0.
From this PNG, I know the position of each of their cameras relative to the front wheels of the car and relative to ground.
I can also do a monocular calibration on my camera and get a camera matrix and distortion coefficients.
I am having trouble coming up with the rotation and translation matrix/vector between the coordinate systems of the first and the second cameras, i.e. from their camera to mine or vice-versa.
I positioned my camera on top of my car at almost exactly the same height and almost exactly the same distance from the center of the front wheels, as shown in the PNG.
However, now I am at a loss as to how to create the joint rotation-translation matrix. In a normal stereo calibration, these are returned by the stereoCalibrate function.
I looked at some references about coordinate transformation but I don't have sufficient practice in them to figure it out on my own.
Any suggestions or references are appreciated!

Stereo Camera calibration using different camera types

I'm trying to perform stereo camera calibration, rectification and disparity map generation. It's working fine with normal sample data. However, I'm trying to use the dual cameras on an iPhone 7+, which have different zoom. The telephoto lens has 2X zoom compared to the wide angle camera. I ran the images through the algorithm, and it is succeeding, although with a high error rate. However, when I open up the rectified images, they have a weird spherical look to the edges. The center looks fine. I'm assuming this is due to the cameras having different zoom levels. Is there anything special I need to do to deal with this? Or do I just need to crop any output to the usable undistorted area? Here is what I'm seeing:
EDIT:
I tried using the calibration result from these checkerboard images to rectify an image of some objects, and the rectification was way off, not even close. If I rectify one of my checkerboard images, they are spot on. Any ideas why that happens?
EDIT2:
These are what my input images look like that result in the spherical-looking output image. Both are taken from the exact same position: the iPhone was mounted on a tripod, and I used a Bluetooth device to trigger the shutter so the image wouldn't be shaken; my code automatically takes one image with each lens. I took 19 such image pairs from different angles, and all images show the full checkerboard. The more zoomed-in image is the one that rectified to the top spherical-looking image.
EDIT3:
Here is the disparity map using the calibration I got.

Calibration of stationary video camera

I have a stationary video camera in a room and several videos from it, and I need to transform image coordinates into world coordinates.
What I know:
1. All the measurements of the room.
2. 16 image coordinates and their corresponding world coordinates.
The problem I encountered:
At first I thought I just needed to create a geometric transformation (following http://xenia.media.mit.edu/~cwren/interpolator/), but the edges of the room are distorted in the image, and I can't calibrate the camera because I can't get access to the room or the camera.
Is there any way I can overcome those difficulties and measure distances in the room with some accuracy?
Thanks
You can calibrate the distortion of the camera by first extracting the edges of your room and then finding the set of distortion parameters that minimizes edge distortion.
A few works implement this approach:
you can find a skeleton of the distortion-estimation procedure in R. Szeliski's book, but without an implementation;
alternatively, you can find a method + implementation (+ an online demo where you can upload your images) on IPOL.
Regarding the perspective distortion: after removing the lens distortion, just proceed with the link that you found, applying the method to the image of the four corners of the room floor.
This will give you the mapping between an image pixel and a ground pixel (and thus the object's world coordinates, assuming you only want the X-Y coordinates). If you need height measurements, you also need an object of known height in your images to calibrate for that.

two images with camera position and angle to 3d data?

Suppose I've got two images taken by the same camera. I know the 3D position of the camera and the 3D angle of the camera when each picture was taken. I want to extract some 3D data from the portion of the images that overlaps. It seems that OpenCV could help me solve this problem, but I can't find where my camera position and angle would be used in its API. Help? Is there some other C library that would be more helpful? I don't even know what keywords to search for on the web. What's the technical term for working with overlapping image content?
You need to learn a little more about camera geometry and stereo rig geometry. Unless your camera was mounted on a special rig, it's rather doubtful that its pose at each image can be specified with just one angle and a point; rather, you'd need three angles (e.g. roll, pitch, yaw). Also, if you want your reconstruction to be metrically accurate, you need to accurately calibrate the focal length of the camera (at a minimum).