Distorted depth map when calibrating stereo cameras - image-processing

I have two identical cameras with identical manual fixed-focus lenses, and I've been trying to calibrate them so my depth map looks good. However, the main issue is that the "floor" in the depth map comes out angled and distorted, as can be seen from the epilines. I've tried many code samples online and I still get these distortions.
I have taken about 91 images each from the left and right cameras. I 3D-printed a case so the cameras are as parallel as possible.
I slid my checkerboard (18x24") around the frame and rotated it slightly, but I only ever moved it around on my garage floor.
Could anyone please help me?
Thank you
I used the code described in this tutorial: https://python.plainenglish.io/the-depth-i-stereo-calibration-and-rectification-24da7b0fb1e0
to generate the stereoMap.xml file, and used that file to rectify my images before trying to generate the depth map.
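For context, here is a minimal sketch of that rectify-then-match step, assuming the stereoMap.xml node names used by the tutorial (stereoMapL_x and friends) and placeholder image filenames:

```python
import cv2

# Load the remap tables produced by the tutorial's calibration script.
# Node names follow the tutorial; adjust them if yours differ.
fs = cv2.FileStorage("stereoMap.xml", cv2.FILE_STORAGE_READ)
map_lx = fs.getNode("stereoMapL_x").mat()
map_ly = fs.getNode("stereoMapL_y").mat()
map_rx = fs.getNode("stereoMapR_x").mat()
map_ry = fs.getNode("stereoMapR_y").mat()
fs.release()

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Rectify both views so that epipolar lines become horizontal.
left_r = cv2.remap(left, map_lx, map_ly, cv2.INTER_LINEAR)
right_r = cv2.remap(right, map_rx, map_ry, cv2.INTER_LINEAR)

# Semi-global block matching; numDisparities must be a multiple of 16.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                               blockSize=5)
disparity = stereo.compute(left_r, right_r).astype("float32") / 16.0
```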
Here are my results (images): calibration, epilines, test image, depth map.
Edit: inserted the images instead of links

Related

Remove deformation in distorted image?

I'm fairly new to computer vision and OpenCV. I'm working on a project in which we are building a robot that plays snooker. We installed a camera on top of the table with the aim of detecting the balls. I originally assumed that getting rid of the barrel distortion would be quite straightforward. However, I'm not very satisfied with the results I'm getting. In the second image I attached, one can clearly see that, after applying the undistortion transformation, the sides of the table are not parallel to each other. Moreover, compared with the first image, the balls are deformed into a sort of egg shape.
Does anyone have an idea of how I could fix these issues? I tried to take more pictures of the chessboard pattern in as many different positions as possible, without any visible change. Using more parameters to model the distortion didn't seem to yield any improvement either.
Distorted Image
Undistorted Image

Stereo Camera calibration using different camera types

I'm trying to perform stereo camera calibration, rectification, and disparity map generation. It works fine with the normal sample data. However, I'm trying to use the dual cameras on an iPhone 7+, which have different zoom levels: the telephoto lens has 2x zoom compared to the wide-angle camera. I ran the images through the algorithm, and it succeeds, although with a high error rate. However, when I open the rectified images, they have a weird spherical look at the edges; the center looks fine. I'm assuming this is due to the cameras having different zoom levels. Is there anything special I need to do to deal with this? Or do I just need to crop any output to the usable undistorted area? Here is what I'm seeing:
EDIT:
I tried using the calibration result from these checkerboard images to rectify an image of some objects, and the rectification was way off, not even close. If I rectify one of my checkerboard images, they are spot on. Any ideas why that happens?
EDIT2:
These are what my input images look like that result in the spherical-looking output image. They were both taken from the exact same position: the iPhone was mounted on a tripod and I used a Bluetooth device to trigger the shutter so the images wouldn't be blurred by shake, and my code automatically takes one image with each lens. I took 19 such images from different angles, all showing the full checkerboard. The more zoomed-in image is the one that rectified to the top spherical-looking image.
EDIT3:
Here is the disparity map using the calibration I got.
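One common way to handle cameras with different intrinsics (a hedged sketch, not necessarily what the asker's code does) is to calibrate each camera separately and then hold those intrinsics fixed during the stereo step. The variable names imgpoints_wide and imgpoints_tele are placeholders for per-camera chessboard corner detections:

```python
import cv2

def calibrate_stereo(objpoints, imgpoints_wide, imgpoints_tele, image_size):
    # Calibrate each camera on its own: different zoom means different
    # focal lengths and distortion coefficients.
    _, K1, d1, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints_wide, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints_tele, image_size, None, None)

    # Fix the per-camera intrinsics; estimate only the relative pose R, T
    # between the two cameras.
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        objpoints, imgpoints_wide, imgpoints_tele,
        K1, d1, K2, d2, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo RMS reprojection error:", rms)

    # Rectification transforms; alpha=0 crops to valid pixels only,
    # which avoids the stretched-looking borders.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K1, d1, K2, d2, image_size, R, T, alpha=0)
    return K1, d1, K2, d2, R1, R2, P1, P2, Q
```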

opencv: Correcting these distorted images

What would be the procedure to correct the following distorted images? It looks like the images are bulging out from the center. They are of the same QR code, so a combination of such images could be used to arrive at a single correct, straight image.
Please advise.
The distortion you are experiencing is called "barrel distortion"; more technically, it is a combination of radial and tangential distortion.
The solution to your problem is OpenCV's camera calibration module. Just google it and you will find documentation in the OpenCV wiki. Moreover, OpenCV already has built-in source code examples of how to calibrate a camera.
Basically, you need to print an image of a chessboard, take a few pictures of it, run the calibration module (a built-in method), and get the camera matrix and distortion coefficients as output. To each video frame you then apply these (the method is called cvUndistort()) and it will straighten the curved lines in the image.
Note: this will not work if you change the zoom or focal length of the camera.
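As a hedged sketch of that workflow in the modern Python API (cv2.undistort() is the current equivalent of cvUndistort(); the pattern size and filenames here are placeholders):

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed chessboard; adjust to your pattern.
PATTERN = (9, 6)

# 3D coordinates of the corners in the board's own plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob("calib_*.jpg"):          # your chessboard photos
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# One-time calibration: camera matrix K and distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)

# Apply to each frame to straighten the curved lines.
frame = cv2.imread("frame.jpg")
straight = cv2.undistort(frame, K, dist)
cv2.imwrite("frame_undistorted.jpg", straight)
```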
If the camera details are not available and cannot be controlled, then your problem is much harder. There is a way to correct the distortion, but I don't know if OpenCV has built-in modules for it. I am afraid you will need to write a lot of code.
Basically, you need to detect as many long lines as possible. From those lines (vertical and horizontal) you build a grid of intersection points. Finally, you feed the grid of those points to the OpenCV calibration module.
If you have enough intersection points (say 20 or more) you will be able to calculate the distortion matrix and undistort the image.
You will not be able to fully calibrate the camera. In other words, you will not be able to run a one-time process that calculates the expected distortion. Rather, in each and every video frame you will calculate the distortion matrix directly, invert it, and undistort the image.
If you are not familiar with image processing techniques, or unable to find reliable open source code that directly solves your problem, then I am afraid you will not be able to remove the distortion. Sorry.
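To make the line-based idea concrete, here is a rough, assumption-laden sketch of only the detection step: find long segments with a Hough transform, split them into near-horizontal and near-vertical groups, and intersect them to get grid points (the Hough parameters are guesses and would need tuning):

```python
import cv2
import numpy as np

def grid_intersections(img):
    # Detect edges, then long straight-ish line segments.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return []

    # Split segments into roughly horizontal and roughly vertical sets.
    horiz, vert = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 30 or angle > 150:
            horiz.append((x1, y1, x2, y2))
        elif 60 < angle < 120:
            vert.append((x1, y1, x2, y2))

    # Intersect every horizontal with every vertical segment,
    # treating both as infinite lines.
    points = []
    for hx1, hy1, hx2, hy2 in horiz:
        for vx1, vy1, vx2, vy2 in vert:
            a = np.array([[hx2 - hx1, -(vx2 - vx1)],
                          [hy2 - hy1, -(vy2 - vy1)]], dtype=float)
            b = np.array([vx1 - hx1, vy1 - hy1], dtype=float)
            if abs(np.linalg.det(a)) < 1e-6:
                continue                       # near-parallel, skip
            s, _ = np.linalg.solve(a, b)
            points.append((hx1 + s * (hx2 - hx1),
                           hy1 + s * (hy2 - hy1)))
    return points
```

Fitting a distortion model to those points is the hard part the answer alludes to, and it is not shown here.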

Opencv: correcting radially distorted images when chessboard images are not available

How do I recover the correct image from a radially distorted image using OpenCV? For example:
Please provide some useful links.
Edit
The biggest problem is that I have neither the camera used for taking the picture nor a chessboard image.
Is that even possible?
Well, there is not much you can do if you don't have the camera, or at least its model. As you may know, the usual camera model is the pinhole model, which basically maps 3D world coordinates to 2D coordinates on the camera's image plane.
Camera Resectioning
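In symbols, a world point X projects to homogeneous pixel coordinates x ~ K [R | t] X. A tiny numeric illustration with made-up intrinsics:

```python
import numpy as np

# Pinhole model: a 3D world point X maps to homogeneous pixel
# coordinates x ~ K [R | t] X.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx (assumed values)
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera aligned with the world axes
t = np.zeros((3, 1))                   # camera at the world origin

X = np.array([[0.1], [0.2], [2.0]])    # a point 2 m in front of the camera
x = K @ (R @ X + t)                    # project into the image
u, v = (x[:2] / x[2]).ravel()          # divide by depth to get pixels
print(u, v)                            # -> 360.0 320.0
```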
If you don't have access to the camera, or at least two chessboard images, you can't estimate the focal length, principal point, and distortion coefficients. At least not in the traditional way; if you have more images than the one you showed, or a video from that camera, you could try auto- or self-calibration:
Camera auto-calibration
Another auto-calibration
yet another
Opencv auto-calibration

Undistorting/rectify images with OpenCV

I took the example code for calibrating a camera and undistorting images from this book: shop.oreilly.com/product/9780596516130.do
As far as I understand, the usual camera calibration methods of OpenCV work perfectly for "normal" cameras.
When it comes to fisheye lenses, though, we have to use a vector of 8 calibration parameters instead of 5, and also the flag CV_CALIB_RATIONAL_MODEL in the method cvCalibrateCamera2.
At least, that's what it says in the OpenCV documentation.
So, when I use this on an array of images like this (Sample images from OCamCalib) I get the following results using cvInitUndistortMap: abload.de/img/rastere4u2w.jpg
Since the resulting images are cut out of the whole undistorted image, I went ahead and used cvInitUndistortRectifyMap (like it's described here stackoverflow.com/questions/8837478/opencv-cvremap-cropping-image). So I got the following results: abload.de/img/rasterxisps.jpg
And now my question is: why is the whole image not undistorted? In some of my later results you can see that the laptop, for example, is still totally distorted. How can I accomplish even better results using the standard OpenCV methods?
I'm new to stackoverflow and I'm new to OpenCV as well, so please excuse any of my shortcomings when it comes to expressing my problems.
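For reference, a hedged sketch of the whole-image undistortion approach referenced above, using the modern Python API (K and dist are assumed to come from an earlier calibrateCamera run; the filename is a placeholder):

```python
import cv2

# K, dist: camera matrix and distortion coefficients from calibrateCamera.
img = cv2.imread("input.jpg")
h, w = img.shape[:2]

# alpha=1 keeps every source pixel (black borders appear instead of
# cropping); alpha=0 would crop to the all-valid region instead.
newK, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=1)
map1, map2 = cv2.initUndistortRectifyMap(
    K, dist, None, newK, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
```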
All chessboard corners should be visible to be found. The algorithm expects a chessboard of a known size, such as 4x3 or 7x6. The white border around the chessboard should be visible too, or the dark squares may not be detected precisely.
You still have high distortions at the image periphery after undistort() because distortion is radial (it increases with the radius) and the coefficients you found are wrong. They are wrong because the calibration process minimizes the sum of squared errors in pixel coordinates, and you did not represent the periphery with enough samples.
What to do: you should have 20-40 chessboard pattern images if you use 8 distortion coefficients. Slant your boards at different angles, put them at different distances, and spread them around the frame, especially at the periphery. Remember, the success of calibration depends on sampling, and also on seeing vanishing points clearly from your chessboard (hence the slanting and tilting).
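If you do use the 8-coefficient rational model, it can also help to look at the per-view reprojection error, so that badly sampled periphery views stand out. A sketch, assuming objpoints/imgpoints are collected as in the calibration snippet earlier on this page:

```python
import cv2
import numpy as np

def calibrate_and_report(objpoints, imgpoints, image_size):
    # CALIB_RATIONAL_MODEL enables 8 distortion coefficients instead of 5.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, image_size, None, None,
        flags=cv2.CALIB_RATIONAL_MODEL)
    print("overall RMS reprojection error:", rms)

    # Per-view RMS error: views covering the periphery should not stand
    # out as outliers if the sampling is adequate.
    for i, (rv, tv) in enumerate(zip(rvecs, tvecs)):
        proj, _ = cv2.projectPoints(objpoints[i], rv, tv, K, dist)
        diff = imgpoints[i].reshape(-1, 2) - proj.reshape(-1, 2)
        print(f"view {i}: {np.sqrt((diff ** 2).sum(axis=1).mean()):.3f} px")
    return K, dist
```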
