How does the checkerboard size affect camera calibration accuracy? - image-processing

Right now, I am calibrating a monocular camera so that afterwards I can calculate the distance of planar objects in the image (Z=0).
However, I'd like to know how important it is to know the board's physical size. The apparent size of the board's squares changes a lot in the image depending on how far away you are, so I am not sure it is a robust parameter; wouldn't you always end up with only a relative scale?
Moreover, for my camera, which will be mounted on a ceiling (around 10 meters high), how can I estimate suitable checkerboard and square sizes for accurate calibration?

The basic rule of thumb is that the checkerboard should be in focus, approximately fill the field of view in some of the images, and be significantly slanted in some other images. It also helps if you have a finite volume of space you work in and you span most of it with measurements. See this other answer for more.
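For concreteness, here is a minimal sketch of such a calibration run in Python/OpenCV (the board dimensions, square size, and file names are assumptions, not from the question):

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)      # inner corners per row/column (assumed board)
    square_mm = 25.0      # printed square size; this only fixes the world scale

    # World coordinates of the corners on the board plane (Z = 0).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points = [], []
    for fname in glob.glob("calib_*.png"):      # hypothetical image set
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error (px):", rms)

A sub-pixel RMS reprojection error is a reasonable sanity check; if it comes out much larger, revisit the image set (focus, fill, slant) before trusting the intrinsics.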

Related

Human eye image compression

I want to find a computationally efficient way to compress a picture the way the human eye works. Given a normal high-resolution picture, we keep the original high resolution at the center of focus. This could simply be the center of the original picture; the greater the L2 distance from the center, the lower the pixel resolution used. So we have maximal resolution at the center and minimal resolution at the borders. We are not just blurring the image toward the border; we are skipping pixels, perhaps using some standard interpolation, so the image will be significantly smaller than the original. I can imagine how to implement such an algorithm with a pixel-by-pixel for loop, but how do I make it computationally efficient? Is there a ready-to-use approach I can use? Any ideas are welcome.
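One ready-to-use approximation of this is the log-polar transform, which samples densely near a fixation point and sparsely in the periphery, with no per-pixel Python loop. A sketch, assuming OpenCV is acceptable (the file name and output size are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")            # hypothetical input
    h, w = img.shape[:2]
    center = (w / 2, h / 2)                  # fixation point (image center here)
    max_radius = np.hypot(w, h) / 2

    # Forward transform: the result is much smaller than the original, with
    # the area near `center` kept at full resolution and the periphery
    # compressed. Store or transmit this representation.
    fovea = cv2.warpPolar(img, (256, 256), center, max_radius,
                          cv2.WARP_POLAR_LOG)

    # Inverse transform for display: the periphery comes back visibly
    # undersampled while the center stays sharp.
    restored = cv2.warpPolar(fovea, (w, h), center, max_radius,
                             cv2.WARP_POLAR_LOG | cv2.WARP_INVERSE_MAP)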

Is camera calibration required if I change the height of the camera?

I use single-camera calibration with a checkerboard, and I calibrated from one fixed position of the camera. Now my question is: if I use the same position but change the height of the camera, do I need to calibrate again? If not, will I get the same result using a different camera height?
In my case, I changed the height of the camera but kept its position the same, and I got a different result. So I was wondering whether I need to calibrate the camera again or not.
Please help me out.
Generally speaking, and to achieve the greatest accuracy, you will need to recalibrate the camera whenever it is moved. However, if the lens mount is rigid enough with respect to the sensor, you may get away with only updating the extrinsic calibration, especially if your accuracy requirements are modest.
To see why this is the case, notice that, unless you have a laboratory-grade rig holding and moving the camera, you can't change the height only. With a standard tripod, for example, there will in general be motion in all three axes amounting to a significant fraction of the sensor's size, which will show up as visible motion of several pixels with respect to the scene.
Things get worse / more complicated when you also add rotation to re-orient the field of view, since a mechanical mount will not, in general, rotate the camera around its optical center (i.e. the exit pupil of the lens), and therefore every rotation necessarily comes with an additional translation.
In your specific case, since you are only interested in measurements on a plane, and therefore can compute everything using homographies, refining the extrinsic calibration amounts to just recomputing the world-to-image scale. This can easily be achieved by taking one or more images of objects of known size on the plane - a calibration checkerboard is just such an object.
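As a sketch of that rescaling step (Python/OpenCV assumed; the board geometry and file name are placeholders), one image of a checkerboard of known square size lying on the measurement plane is enough to recompute the plane-to-image mapping:

    import cv2
    import numpy as np

    pattern, square_mm = (9, 6), 25.0     # assumed board geometry
    # Image of the board lying on the measurement plane, already undistorted
    # with the intrinsics you are keeping.
    gray = cv2.imread("board_on_plane.png", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    assert found, "board not detected"

    # Corner positions in world units (mm) on the plane.
    world = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    H, _ = cv2.findHomography(corners.reshape(-1, 2), world.astype(np.float32))

    def pixel_to_world(u, v):
        """Map an undistorted image point to plane coordinates in mm."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]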

Decrease noise in Disparity map

I calculated a disparity map in C# (Emgu). The attached file1 shows the left and right images and the disparity map. The noise in the disparity map is high. How can I decrease the noise in the disparity map?
Thanks.
If you want a better disparity map you need good camera calibration: more accurate calibration leads to a more accurate disparity map.
As tiziran pointed out, good (stereo) calibration is important. Since you usually calibrate each camera by retrieving its whole projection matrix, it's difficult to say which of its parameters is the most important.
Stereo calibration also involves determining the rotation and translation of the second camera with respect to the first.
In your case, some other things have to be considered:
A) In general, noise depends on the correlation window size and on the correlation method; several methods exist. The bigger the correlation window, the lower the noise, but also the lower the precision (see the sketch after this list).
B) To compute disparity, points have to be seen by both cameras. Half of each image is outside the field of view of the other camera, so it is useless (and typically noise occurs in areas where disparity couldn't be computed). I think that in this case there is too much distance/rotation between the cameras, which does not help.
C) It is hard to get good disparity where there is no texture at all, or where the texture scale is much bigger than the correlation window size. In your images there are zones of uniform white color with no texture.
D) I think it is also hard to get good disparity where the carpet is out of focus. This is my personal opinion, but it certainly does not help you.
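To make points A) and C) concrete, here is a sketch of the matcher parameters that control these trade-offs (shown in Python/OpenCV for brevity; Emgu wraps the same OpenCV implementation, and the file names are placeholders):

    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,      # search range; must be divisible by 16
        blockSize=9,            # bigger window: less noise, less precision (A)
        uniquenessRatio=10,     # reject ambiguous matches in flat areas (C)
        speckleWindowSize=100,  # suppress small isolated disparity blobs
        speckleRange=2,
    )
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(float) / 16.0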

OpenCV - calibrate camera using static images in water

I have a camera mounted vertically under water in a tank, looking downwards.
There is a flat grid on the bottom of the tank (approx. 2 m away from the camera).
I want to be able to place markers on the bottom and use computer vision to determine their exact real-world positions.
So, I need to map from pixels to mm.
If I am not mistaken, cv::calibrateCamera(...) does just this, but is dependent on moving a pattern in front of the camera.
I have just static pictures of the scene, and the camera never moves in relation to the grid. Thus, I have only a "single" image to find the parameters.
How can I do this using the grid?
Thank you.
Interesting problem! The "cute" part is the effect on the intrinsic parameters of the refraction at the water-glass interface, namely to increase the focal length (or, conversely, to reduce the field of view) compared to the same lens in air. In theory, you could calibrate in air and then correct for the difference in refraction index, but calibrating directly in water is likely to give you more accurate results.
Do you know your accuracy requirements? And have you verified that your lens/sensor combination is adequate to meet them (with an adequate margin)? To answer that question you need to estimate, either by calculation from the lens and sensor specifications or experimentally using a resolution chart, whether you can resolve in an image the minimal distances required by your application.
From the wording of your question I think that you are interested only in measurements on a single plane. So you only need to (a) remove the nonlinear (barrel or pincushion) lens distortion and (b) estimate the homography between the plane of interest and the image. Once you have the latter, you can directly convert from undistorted image coordinates to world ones by matrix multiplication. Additionally if (as I imagine) the plane of interest is roughly parallel to the image plane, you should not have any problem keeping the entire field-of-view in focus.
Of course, for all of this to work as expected, you should make sure that the tank bottom is really flat, within the measurement tolerances of your application. Otherwise you are really dealing with a 3D problem, and need to modify your procedures accordingly.
The actual procedure depends a lot on the size of the tank, which you don't indicate clearly. If it's small enough that it is practical to manufacture a chessboard-like movable calibration target, by all means go for it. You may want to take a look at this other answer for suggestions. In the following I'll discuss the more interesting case in which your tank is large, e.g. the size of a swimming pool.
I'd proceed by sticking calibration markers in a regular grid at the pool bottom. I'd probably choose checker-like markers like these, maybe printing them myself with a good laser printer on plastic with an adhesive backing (assuming you can leave them in place forever). You should plan on having quite a few of them, say, an 8x8 or 10x10 grid, covering as much as possible of the field of view of the camera in its operating position and pose. To help with lining up the grid nicely you might use a laser line projector of suitable fan angle, or a laser pointer attached to a rotating support.
Note carefully that it is not necessary that they be affixed in a precise X-Y grid (which may be complicated, depending on the size of your pool), only that their positions with respect to any arbitrarily chosen (but fixed) three of them be known. In other words, you can attach them to the bottom approximately in a grid, then measure the distances of three extreme corners from each other as accurately as you can, thus building a base triangle, then measure the distances of all the other corners from the vertices of the triangle, and finally reconstruct their true positions with a bit of trigonometry.
It's basically a surveying problem and, depending on your accuracy requirements and budget, you may want to enroll a local friendly professional surveyor (and their tools) to get it done as precisely as necessary.
Once you have your grid, you can fill the pool, get your camera, and focus and f-stop the lens as needed for the application. From now on you may not touch the focus and f-stop ever again, under penalty of miscalibrating: exposure can only be controlled by the exposure time, so make sure to have enough light. Disable any and all auto-focus and auto-iris functions, if present. If the camera has a non-rigid lens mount (e.g. a DSLR), you'll need some kind of mechanical rig to ensure that the lens-body pair stays rigid. Stop the lens down as far as the available lighting and sensor allow, so as to have a fair bit of depth of field.
Then take several photos (~10) of the grid, moving and rotating the camera, and going a bit closer and farther than your expected operating distance from the plane. You'll want to "see" in some images significant perspective foreshortening of the grid; this is needed to accurately calibrate the focal length. Avoid JPG and any other lossy compression format when storing the images: use lossless PNG or TIFF.
Once you have the images, you can manually mark and identify the checker markers in them. For a one-off project like this I would not bother with automatic identification; just do it manually (e.g. in Matlab, or even in Photoshop or Gimp). To help identify the markers, you could, for example, print a number next to each one. Once you have the manual marks, you can refine them automatically to subpixel accuracy, e.g. using cv::cornerSubPix.
You're almost done. Feed the measured "reference" positions of the real corners, and the observed ones in all images, to your favorite camera calibration routine, e.g. cv::calibrateCamera. Use the nominal focal length of the camera (converted to pixels) as an initial estimate, along with zero distortion. If all goes well, you will obtain the camera intrinsic parameters, which you will keep, and the camera poses at all images, which you'll throw away.
Now you can mount the camera in your final setup, as needed by your application, and take one further image of the grid. Mark and refine the corner positions as before. Undistort their image positions using the distortion parameters returned by the calibration. Finally compute the homography between the reference positions of the real markers (in meters) and their undistorted positions, and you're done.
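A sketch of these last two steps in Python/OpenCV (K and dist stand for the intrinsics and distortion from the calibration above; img_pts are the refined marker positions in the final image and world_pts their surveyed positions in meters, both assumed given):

    import cv2
    import numpy as np

    # Undistort the marker positions; P=K re-projects them into an ideal,
    # distortion-free pixel frame.
    undist = cv2.undistortPoints(img_pts.reshape(-1, 1, 2), K, dist, P=K)

    # Homography from ideal pixels to metric coordinates on the tank bottom.
    H, _ = cv2.findHomography(undist.reshape(-1, 2), world_pts)

    def image_to_world(u, v):
        """Undistorted pixel coordinates -> meters on the bottom plane."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]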
HTH
To calibrate the camera you do need multiple images of the checkerboard (or one of the other patterns found here). What you can do is calibrate the camera outside of the water, or do a calibration sequence once.
Once you have that information (focal length, lens center, distortion, etc.), you can use the solvePnP function to estimate the pose of a single board. This estimate gives you the distance from the camera to the board.
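A sketch of that step (Python/OpenCV; the board geometry and file name are assumptions, and K, dist are the intrinsics obtained beforehand):

    import cv2
    import numpy as np

    pattern, square_mm = (9, 6), 25.0    # assumed board geometry
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    gray = cv2.imread("grid.png", cv2.IMREAD_GRAYSCALE)    # hypothetical image
    found, corners = cv2.findChessboardCorners(gray, pattern)

    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    # tvec is the board origin in camera coordinates; its norm is the distance.
    print("camera-to-board distance (mm):", np.linalg.norm(tvec))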
A completely different alternative could be to find what kind of lens the camera uses and manually fill in the data. I've not tried this, so I'm uncertain how well this would work.

How would you find the height of objects given an image?

This isn't exactly a programming question. I just want to know what your approach would be to a common problem in digital image processing.
Let's say you have an image of a few trees in, say, JPG format. How would you go about finding the heights of each of these trees? The photo is the only input you have.
I want to know your approaches, not code, so it doesn't matter if your answers are vague or non-DIP-ish.
Small correction:
The height need not be the actual height of the tree. The height can be to any scale, but it should be consistent for all objects in the picture.
Yes, it is possible. What you are describing has an entire industry around it, called Photogrammetry.
There is a fair amount of computer vision research in this area. Assuming you don't know the camera constraints, you'll have to make assumptions about the scene and camera to determine the heights up to a scale factor. Note that without camera constraints or a reference height in the image it is impossible to tell the difference between a tall tree photographed from a distance or a short tree photographed up close. A great start is the Single View Metrology work by Criminisi.
It is simple to find the size of an object from images using Photogrammetry.
Photogrammetry is the science of making measurements from photographs.
For this we need to know two things:
the distance between the camera and the object,
the focal length (in mm, and in pixels per mm) or the physical size of the image sensor.
Following are the steps:
Calibrate the Camera
Use OpenCV to calibrate the camera. You can use the OpenCV calibrate.py tool and the chessboard pattern PNG provided in the source code to generate a calibration matrix. Camera calibration is done to find the camera parameters. I took about a dozen photos of the chessboard from as many angles as I could with my webcam (to calibrate it). For more details, check the OpenCV camera calibration documentation.
We will get f_x, f_y, c_x, c_y from the calibration matrix.
Checking the details of the photos you took, you will find their native resolution (height x width) and, in their EXIF headers, the focal length value (f). These items may vary depending on your camera.
Pixels per millimeter
We need to know the pixels per millimeter (px/mm) on the image sensor.
f_x = f * m_x
f_y = f * m_y
Since we know two of the three variables in each formula, we can solve for m_x and m_y. I just averaged f_x and f_y to get f_xy:
m = f_xy / focal_length_of_camera
Load the image
Load the image from which you need to find the actual size of the object. You should know the distance between the object and the camera. Find the dimensions of this image (height1 x width1).
Find the Object size in pixels
Determine the size of the object in pixels. I simply use the distance formula to find the length of a selected line; you can adopt any other method.
Convert px/mm to the lower resolution
pxpermm_in_lower_resolution = (width1*m)/width
Size of object in the image sensor
size_of_object_in_image_sensor = object_size_in_pixels/(pxpermm_in_lower_resolution)
Actual size of object
The actual size of the object can be found from the above data as:
real_size = (dist*size_of_object_in_image_sensor)/focal_length
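Collected into one function, the chain of formulas above looks like this (a sketch; the argument names mirror the quantities in the steps and are all assumed known):

    def object_size_mm(f_x, f_y, focal_length_mm,
                       native_width_px, photo_width_px,
                       object_size_px, dist_mm):
        f_xy = (f_x + f_y) / 2.0            # average focal length in pixels
        m = f_xy / focal_length_mm          # pixels per mm on the sensor
        # Rescale px/mm to the resolution of the measurement photo.
        px_per_mm = (photo_width_px * m) / native_width_px
        size_on_sensor_mm = object_size_px / px_per_mm
        # Pinhole model: real size / distance = size on sensor / focal length.
        return dist_mm * size_on_sensor_mm / focal_length_mm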
Assuming they're all the same distance away and all at the same scale, you'd want to find a single unit of measurement you can guarantee. For example, if there's a person in the photo (again, at the same scale) and you know they're exactly 6 feet tall, you use that as your measure. You then count how many of them, stacked, make up the tree's height. For example, if you need 3.5 of this person, then:
3.5 * 6 = 21
gives you a 21 foot tall tree.
Without a single point of reference for everything, or if they're all on different scales, you would need a lot more information than you could easily get without having been there.
I would rely on an object of known dimensions to be present in the picture. For instance, a man.
Or perhaps we could use the EXIF data to reverse-engineer the size of the object from the camera's sensor dimensions, the lens, and the focal length used. This again depends on the angle; we should get the most accurate results when the camera is held perpendicular to the subject.
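A sketch of pulling those values out of the EXIF headers (Python/Pillow assumed; whether the tags are present depends entirely on the camera, and the file name is a placeholder):

    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("photo.jpg")                  # hypothetical photo
    raw = img._getexif() or {}                     # widely used (private) helper
    exif = {TAGS.get(tag_id, tag_id): v for tag_id, v in raw.items()}

    print("focal length (mm):", exif.get("FocalLength"))
    print("camera model:", exif.get("Model"))      # look up its sensor size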
If your image is 3x3 pixels, it contains 3x3 = 9 pixels (indexed 0 through 8). At 1 bit per pixel, that is 9/8 = (___) bytes; divide by 1024 to get KB.
If you want the size of the image in MB, divide by 1024 once more: ((9/8)/1024)/1024 = (----) MB.
So you will get the result in MB.
