Decrease noise in Disparity map - emgucv

I calculated a disparity map in C# (Emgu). The attached file shows the left and right images and the disparity map. The noise in the disparity map is high. How can I decrease the noise in the disparity map?
Thanks.

If you want a better disparity map you should have good camera calibration. The more accurate the camera calibration, the more accurate the disparity map.

As tiziran pointed out, good (stereo) calibration is important. Since you usually calibrate each camera by retrieving its whole projection matrix, it's difficult to say which of its parameters is the most important.
Stereo calibration also involves determining the rotation and translation of the second camera with respect to the first camera.
In your case, some other things have to be considered:
A) In general, noise depends on the correlation window size and on the correlation method; several methods exist. The bigger the correlation window, the lower the noise, but also the lower the precision.
B) To have disparity, points have to be seen by both cameras. Half of each image is outside the field of view of the other camera, so it is useless (and typically noise occurs in areas where disparity cannot be computed). I think in this case there is too much distance / rotation between the cameras, which does not help.
C) It is hard to get good disparity where there is no texture at all, or where the texture scale is much larger than the correlation window size. In your images there are zones with uniform white color and no texture.
D) I think it is hard to get good disparity where the carpet is out of focus. This is a personal observation, but it certainly does not help.
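As an illustration of point (A), here is a minimal sketch using OpenCV's Python bindings; the equivalent parameters should be available through Emgu CV's wrapper of the same matcher. A larger block size and the speckle parameters trade precision for a less noisy map. File names and parameter values are placeholders, and the pair is assumed to be rectified already.

```python
import cv2

# Left/right images of a rectified pair; the file names are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 9  # larger correlation window -> less noise, less precision (point A)
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be a multiple of 16
    blockSize=block_size,
    P1=8 * block_size ** 2,   # common smoothness-penalty heuristics
    P2=32 * block_size ** 2,
    uniquenessRatio=10,       # reject ambiguous matches
    speckleWindowSize=100,    # suppress small isolated disparity blobs
    speckleRange=2,
)
# StereoSGBM returns disparities scaled by 16 as 16-bit integers.
disparity = matcher.compute(left, right).astype("float32") / 16.0
```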

Related

How to relate detected keypoints after auto-focus

I'm working with a stereo camera setup that has auto-focus (which I cannot turn off) and a really low baseline of less than 1 cm.
The auto-focus process can change any intrinsic parameter of both cameras (focal length and principal point, for example), and without a fixed relation (the left camera may increase focus while the right one decreases it). Luckily, the cameras always report the current state of the intrinsics with great precision.
On every frame an object of interest is detected and disparities between the camera images are calculated. As the baseline is quite low and the resolution is not the greatest, stereo triangulation gives quite poor results, so several downstream computer vision algorithms rely only on image keypoints and disparities.
Now, disparities calculated between stereo frames cannot be directly related across time. If the principal point changes, disparities will have very different magnitudes after the auto-focus process.
Is there any way to relate keypoint corners and/or disparities between frames after the auto-focus process? For example, to calculate where the object would lie in the image with the previous intrinsics?
Maybe by using a bearing vector towards the object and then finding its intersection with the image plane defined by the previous intrinsics?
Your project is quite challenging; perhaps these patents could help you in some way:
Stereo yaw correction using autofocus feedback
Autofocus for stereo images
Depth information for auto focus using two pictures and two-dimensional Gaussian scale space theory
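To make the bearing-vector idea from the question concrete: assuming only the intrinsics change between frames and lens distortion has already been compensated, a pixel can be unprojected to a ray with the current intrinsics and reprojected with the previous ones. A small sketch with hypothetical matrices K_prev and K_curr:

```python
import numpy as np

def remap_pixel(u, v, K_curr, K_prev):
    # Unproject the pixel to a bearing vector with the current intrinsics,
    # then project that vector with the previous intrinsics.
    ray = np.linalg.inv(K_curr) @ np.array([u, v, 1.0])
    p = K_prev @ ray
    return p[0] / p[2], p[1] / p[2]

# Hypothetical intrinsics reported before and after an auto-focus step.
K_prev = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
K_curr = np.array([[815.0, 0.0, 322.0], [0.0, 815.0, 238.0], [0.0, 0.0, 1.0]])
print(remap_pixel(100.0, 150.0, K_curr, K_prev))
```

Remapping the keypoints of both cameras this way should put disparities from different frames on a comparable scale, as long as the extrinsics really do stay fixed.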

3D reconstruction using stereo camera

I am trying to construct a 3D point cloud and measure real sizes or distances of objects using a stereo camera. The cameras are stereo-calibrated, and I find 3D points using the reprojection matrix Q and the disparity.
My problem is that the calculated sizes change depending on the distance from the cameras. I calculate the distance between two 3D points, which should be constant, but when the object gets closer to the camera the distance increases.
Am I missing something? The 3D coordinates should be in camera coordinates, not in pixel coordinates, so this seems inaccurate to me. Any ideas?
You didn't mention how far apart your cameras are - the baseline. If they are very close together compared with the distance to the point you are measuring, a slight inaccuracy in your measurement can lead to a big difference in the computed distance.
One way you can check if this is the problem is by testing with only lateral movement of the camera.
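A rough way to see why the baseline matters: differentiating z = f·b/d gives a depth error of about z²·Δd/(f·b) for a disparity error Δd, so with a short baseline the error grows quickly with distance. A small example with assumed numbers:

```python
# Depth from disparity: z = f * b / d, so a disparity error dd gives a depth
# error of roughly z**2 * dd / (f * b). All numbers below are assumed examples.
f_px = 700.0         # focal length in pixels
baseline_mm = 10.0   # short baseline
z_mm = 1000.0        # distance to the point
dd_px = 0.5          # half a pixel of matching error

dz_mm = z_mm ** 2 * dd_px / (f_px * baseline_mm)
print(f"~{dz_mm:.0f} mm depth error at {z_mm:.0f} mm")  # roughly 71 mm here
```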

3D stereo, bad 3D coordinates

I'm using stereo vision to obtain 3D reconstruction. I'm using opencv library.
I've implemented my code this way:
1) Stereo Calibration
2) Undistortion and rectification of the image pair
3) Disparity map - using SGBM
4) 3D coordinates / depth map - using reprojectImageTo3D() (see the sketch after this list)
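A condensed sketch of steps 2-4 with OpenCV's Python API, assuming the stereo calibration / stereoRectify output (K1, d1, K2, d2, R1, R2, P1, P2, Q, image_size - placeholder names) is already available:

```python
import cv2
import numpy as np

# K1, d1, K2, d2 (intrinsics/distortion), R1, R2, P1, P2, Q and image_size are
# placeholders for the output of the stereo calibration / stereoRectify step.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
# SGBM returns fixed-point disparities scaled by 16; divide before reprojecting.
disparity = matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0

points_3d = cv2.reprojectImageTo3D(disparity, Q)  # same units as the T used to build Q
```

Two common sources of distances that are off and get worse with range are forgetting the 1/16 scaling of the SGBM output and mixing units (for example, T in mm but the chessboard square size given in cm) when Q is built.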
Results:
- Good disparity map and good 3D reconstruction.
- Bad 3D coordinate values; the distances don't correspond to reality.
The 3D distances (the distance between camera and object) have a 10 mm error that increases with distance. I've used various baselines and I always get this error.
When I compare the extrinsic parameters, the vector T output by "stereoRectify", the baseline matches.
So I don't know where the problem is.
Can someone help me, please? Thanks in advance.
Calibration:
http://textuploader.com/ocxl
http://textuploader.com/ocxm
A 10 mm error can be reasonable for stereo vision solutions, depending of course on the sensor sensitivity, resolution, baseline and the distance to the object.
The error increasing with the object's distance is also typical of this problem: stereo correspondence essentially triangulates from the two video sensors to the object, and the farther the object, the more a small error in the estimated angle between the sensors and the object translates into a large error along the depth axis. A good example is when that angle is almost a right angle; any small positive error in estimating it will throw the estimated depth towards infinity.
The architecture you selected looks good. You can try increasing the sensor resolution, or dig into the calibration process, which has a lot of room for tuning in the OpenCV library: making sure only images where the chessboard is static are selected, choosing a higher variety of chessboard poses, adding images until the registration error between the two images drops below the maximum error you can allow, etc.
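One way to act on the last suggestion is a per-view reprojection-error check; a sketch, assuming obj_points/img_points are the chessboard correspondences fed to cv2.calibrateCamera and K, dist, rvecs, tvecs are its outputs (placeholder names):

```python
import cv2
import numpy as np

# obj_points / img_points are the per-view chessboard correspondences that were
# fed to cv2.calibrateCamera, and K, dist, rvecs, tvecs its outputs (placeholders).
def per_view_errors(obj_points, img_points, rvecs, tvecs, K, dist):
    errors = []
    for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        errors.append(cv2.norm(imgp, proj, cv2.NORM_L2) / np.sqrt(len(proj)))
    return errors  # drop views whose error is far above the mean, then recalibrate
```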

How to take stereo images using single camera?

I want to find the depth map for stereo images. At present I am working on images from the internet, but I want to take stereo images myself so that I can work on my own data. How do I take good stereo images without much noise? I have a single camera. Is it necessary to do rectification? How much distance must be kept between the cameras?
Not sure I've understood your problem correctly - will try anyway.
I guess you're currently working with images from Middlebury or something similar. If you want to use similar algorithms you have to rectify your images, because those algorithms assume that corresponding pixels lie on the same row in both images. If you actually want depth images (!= disparity images) you also need the camera extrinsics.
Your setup should have two cameras, and you have to make sure that they don't change their relative position/orientation - otherwise your rectification will break down. First you have to calibrate your system to get the intrinsic and extrinsic camera parameters. For that you can either use some tool or roll your own with (for example) OpenCV's calib3d module. Print out a calibration board to calibrate your system. Afterwards you can take images and use the calibration to rectify them.
Regarding color-noise:
You could make your aperture very small and use long exposure times. In my opinion this is of limited use, because real-world situations have to deal with such noise anyway.
In short, there are plenty of stereo images on the internet that are already rectified. If you want to take your own stereo images you have to follow these three steps:
The relationship between the distance to the object z (mm) and the disparity in pixels D is inverse: z = fb/D, where f is the focal length in pixels and b is the camera separation in mm. Select b such that you have at least several pixels of disparity (see the short example after this list);
If you know the camera intrinsic matrix and have compensated for radial distortion, you still have to rectify your images to ensure that matches are located in the same row. For this you need to find a fundamental matrix, recover the essential matrix, apply rectifying homographies and update your intrinsic camera parameters... or use stereo pairs from the Internet.
A low level of noise in the camera image is helped by brightly illuminated scenes, a large aperture, a large pixel size, etc.; however, depending on your setup you can still end up with a very noisy disparity map. One way to reduce this noise is to trade accuracy for smoothness and use larger correlation windows. Another way to clean up a disparity map is to use various validation techniques such as:
error validation;
uniqueness validation or back-and-forth (left-right) validation;
blob-noise suppression, etc.
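A small worked example for point 1 above, with assumed numbers: given the focal length in pixels and the farthest distance of interest, the minimum baseline follows directly from z = fb/D.

```python
# Point 1 above: pick the baseline from z = f * b / D. All numbers are assumed.
f_px = 1200.0        # focal length in pixels
z_far_mm = 3000.0    # farthest distance that should still be resolvable
min_disp_px = 5.0    # "at least several pixels of disparity"

b_mm = min_disp_px * z_far_mm / f_px   # D = f*b/z  =>  b = D*z/f
print(f"baseline of about {b_mm:.1f} mm")  # 12.5 mm for these numbers
```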
In my experience:
-I did the rectification, so I had to obtain the fundamental matrix, and this may not be correct with some image pairs.
- A higher camera resolution is better for the matching. I use OpenCV, which has an implementation of the BRISK descriptor; it was useful for me.
-Try to cover the same area and try not to do unnecessary rotations.
- Once you understand the theory, OpenCV is a good friend. Here are some results, but I am still working on it:
Depth map:
Rectified images:

Volume of the camera calibration

I am dealing with a problem that concerns camera calibration. I need calibrated cameras to take measurements of 3D objects. I am using OpenCV to carry out the calibration, and I am wondering how I can predict or calculate the volume in which the camera is well calibrated. Is there a way to increase this volume, especially in the direction of the optical axis? Does increasing the movement range of the calibration target in the 'z' direction make a sufficient difference?
I think you confuse a few key things in your question:
Camera calibration - this means finding out the matrices (intrinsic and extrinsic) that describe the camera position, rotation, up vector, distortion, optical center etc. etc.
Epipolar Rectification - this means virtually "rotating" the image planes so that they become coplanar (parallel). This simplifies the stereo reconstruction algorithms.
For camera calibration you do not need to care about any volumes - there aren't volumes where the camera is well or badly calibrated. If you use the chessboard pattern calibration, your cameras are either calibrated or not.
When dealing with rectification, you want to know which areas of the rectified images correspond, and also to maximize these areas. OpenCV allows you to choose between two extremes: either keep only the pixels that are valid in both rectified images and crop away the rest, or include all pixels, even invalid ones.
OpenCV documentation has some nice, more detailed descriptions here: http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
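In OpenCV this trade-off is controlled by the alpha parameter of stereoRectify; a minimal sketch, where K1, d1, K2, d2, image_size, R and T stand for your stereo calibration results:

```python
import cv2

# alpha=0 crops the rectified images to pixels valid in both views;
# alpha=1 keeps every source pixel, including invalid black borders.
# K1, d1, K2, d2, image_size, R, T stand for your stereo calibration results.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, image_size, R, T, alpha=0)
```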
