OpenCV: Find focal length in mm for an analog camera

I have successfully calibrated an analog camera using OpenCV. The output focal length and principal point are in pixels.
I know that with digital cameras you can simply multiply the focal length in pixels by the size of a pixel on the sensor to get the focal length in mm (or whatever unit).
How can I get the focal length in mm for this analog camera?

Lens manufacturers usually write the focal length on the lens. Often the name of the lens itself contains it, e.g. "Canon lens 1.8 50mm".
If not, you can try to measure it manually.
Detach the lens from the camera. Take a small, well-illuminated object, place it 1-3 meters in front of the lens, and hold a sheet of paper behind the lens. Move the paper until you get a sharp, focused image of the object on it.
Now measure the following:
a - distance from the lens to the object;
y - object size;
y' - size of the object's image on the paper;
f = a/(1 + y/y') - the focal length.
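As a quick sanity check, here is a minimal Python sketch of that calculation (the sample measurements are invented for illustration):

# Thin-lens estimate of the focal length from the manual measurement above.
a = 2000.0  # distance from lens to object in mm (example: 2 m)
y = 100.0   # object size in mm (example: 10 cm)
y2 = 2.6    # size of the object's image on the paper in mm (example)
f = a / (1 + y / y2)  # the formula above; y2 plays the role of y'
print(f"estimated focal length: {f:.1f} mm")  # ~50.7 mm for these numbers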

If your output is in pixels, you must be digitizing the analog image at some point, so you just need to figure out the size of the pixels you are creating.
For example, if you are scanning the film in, then use the pixel size of the scanner.
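For instance, a small sketch assuming the film was scanned on a flatbed scanner with a known resolution (both the 3200 dpi and the calibrated focal length below are illustrative values, not from the question):

# Convert a focal length calibrated in pixels to millimetres, assuming the
# film was digitized on a flatbed scanner with a known resolution.
scan_dpi = 3200.0  # assumed scanner resolution (illustrative)
pixel_size_mm = 25.4 / scan_dpi  # size of one scanned pixel in mm
f_px = 4200.0  # focal length in pixels from the calibration (illustrative)
f_mm = f_px * pixel_size_mm
print(f"f = {f_mm:.1f} mm")  # 4200px * (25.4/3200)mm/px = 33.3 mm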

Related

How to estimate intrinsic properties of a camera from data?

I am attempting camera calibration from a single RGB image (a panorama) given a 3D point cloud.
The methods that I have considered all require an intrinsic matrix, which I have no access to.
The intrinsic matrix can be estimated using Bouguet's camera calibration Toolbox, but as I have said, I have only a single image and a single point cloud for that image.
So, knowing the 2D image coordinates, the extrinsic parameters, and the 3D world coordinates, how can the intrinsic parameters be estimated?
It would seem that the initCameraMatrix2D function from OpenCV (https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) works in the same way as Bouguet's camera calibration Toolbox and requires multiple images of the same object.
I am looking into the direct linear transformation (DLT) and the Levenberg–Marquardt algorithm, with implementations at https://drive.google.com/file/d/1gDW9zRmd0jF_7tHPqM0RgChBWz-dwPe1,
but it would seem that both use the pinhole camera model and therefore find a linear transformation between 3D and 2D points.
I can't find my half-year-old source code, but off the top of my head:
cx, cy is the optical centre, which is width/2, height/2 in pixels;
fx = fy is the focal length in pixels (the distance from the camera to the image plane, or to the axis of rotation).
For example, if the distance from the camera to the image is 30cm and it captures an image of 16x10cm at 1920x1200 pixels, the pixel size is 100mm/1200 = 1/12mm, the camera distance (fx, fy) is 300mm * 12px/mm = 3600px, and the image centre is cx = 1920/2 = 960, cy = 1200/2 = 600. I assume that the pixels are square and the camera sensor is centered on the optical axis.
You can also get the focal length from the image size in pixels and a measured angle of view, as in the sketch below.
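A minimal sketch of that last point, assuming a measured horizontal angle of view (the 60 degree value and the 1920px width are illustrative):

import math
# f_px = (width / 2) / tan(fov / 2), from the pinhole geometry.
width_px = 1920  # image width in pixels (illustrative)
fov_deg = 60.0   # measured horizontal angle of view (illustrative)
f_px = (width_px / 2) / math.tan(math.radians(fov_deg) / 2)
print(f"f = {f_px:.0f} px")  # 960 / tan(30 deg) = 1663 px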

Calculate focal length to map a world point onto the image plane

I am trying to calculate the focal length value needed to map a world point onto the image plane.
I use the Raspberry Pi camera v2. I got the camera matrix from OpenCV; it gives about 204 for fx and fy, and I got nearly the same value by measuring an object of known size at a known distance.
But when I use a formula, I get wrong values.
My formula is
F_px = sensor_size_px * f_mm / sensor_size_mm = f_mm / pixel_size_mm
I'm using these values:
320x240 image.
The image is taken at 640x480 resolution and then binned 2x2 in software.
Because the image is already binned 2x2 by the driver, I have a total binning of 4x4.
The original pixel size is 1.4µm and the focal length is 3.00mm,
which gives a binned pixel size of 5.6µm.
So I would calculate
F_px = 3.0mm / 0.0056mm ≈ 536px,
which is a huge difference from the 204px.
The specification for the sensor can be found here.
Since I consider OpenCV and my measurements to be correct, something must be wrong with my formula.
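For reference, the question's own calculation written out as a short Python sketch (all numbers come from the question; this only reproduces the 536px figure, it does not resolve the discrepancy):

# The question's formula: F_px = f_mm / pixel_size_mm, with 4x4 binning.
f_mm = 3.00             # focal length from the sensor spec (per the question)
pixel_size_mm = 0.0014  # original pixel size, 1.4 µm
binning = 4             # 2x2 by the driver, then 2x2 in software
binned_pixel_mm = pixel_size_mm * binning  # 5.6 µm
F_px = f_mm / binned_pixel_mm
print(f"F = {F_px:.0f} px")  # 536 px, versus ~204 px from calibration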

The position of the image plane in optical imaging

In the book Learning OpenCV, there is a figure showing that the image plane is always the focal plane.
The description of the figure from the book is as follows.
We begin by looking at the simplest model of a camera, the pinhole camera model. In this simple model, light is envisioned as entering from the scene or a distant object, but only a single ray enters from any particular point. In a physical pinhole camera, this point is then "projected" onto an imaging surface. As a result, the image on this image plane (also called the projective plane) is always in focus, and the size of the image relative to the distant object is given by a single parameter of the camera: its focal length. For our idealized pinhole camera, the distance from the pinhole aperture to the screen is precisely the focal length. This is shown in Figure 11-1, where f is the focal length of the camera, Z is the distance from the camera to the object, X is the length of the object, and x is the object's image on the imaging plane. In the figure, we can see by similar triangles that -x/f = X/Z.
From this link https://en.wikipedia.org/wiki/Lens_(optics)
there is the thin lens equation, according to which only an object at infinity is imaged at the focal plane. Which one is correct?
Both are correct, but they describe different models. The Wikipedia page is not talking about the pinhole camera model. In a pinhole camera you can put the image plane at any distance you want (limited by the size of the image plane) and the image will be formed on it anyway. In the lens model from Wikipedia, however, the image plane has to be at a distance that satisfies the thin lens formula, because of the lens; that is why you get a blurred image when you take a picture outside the camera's focus.
In other words, with a specific lens there is a predefined focal length, so you have to place your image plane at the distance that gives you a sharp image (in a digital camera this distance is fixed, but there are multiple lens elements, and the camera changes the focal length by moving them). In the pinhole model there is no lens, so there is no predefined focal length, and the image is formed correctly wherever you put the image plane. (There are many more details to consider, but this is the simplest sketch.)
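A small numeric sketch of the thin lens equation makes this concrete (the 50mm focal length is an arbitrary example): the image distance approaches the focal length only as the object distance goes to infinity.

# Thin-lens equation: 1/f = 1/s_obj + 1/s_img  =>  s_img = 1/(1/f - 1/s_obj)
f = 50.0  # assumed focal length in mm (arbitrary example)
for s_obj in (200.0, 1000.0, 10000.0, 1e9):  # object distances in mm
    s_img = 1.0 / (1.0 / f - 1.0 / s_obj)
    print(f"object at {s_obj:12.0f} mm -> sharp image at {s_img:.2f} mm")
# s_img tends to f (50 mm) only as the object recedes to infinity;
# a pinhole camera has no such constraint on the image plane position.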

Radial distortion correction, camera parameters and OpenCV

I am trying to undistort barrel/radial distortion in an image. The equations I have seen do not require the focal length of the camera, but the OpenCV API initUndistortRectifyMap requires it as part of the camera intrinsic matrix. Why? Is there any way to do it without it? I understand that undistortion is common to various distortion corrections.
The focal length is essential in distortion removal, since it provides information on the intrinsic parameters of the camera, and it is fairly simple to add it to the camera matrix. Just remember that you have to convert it from millimeters to pixels; separate fx and fy values account for pixels that are not square. For the conversion you need the sensor's width and height in millimeters, the sensor's horizontal (Sh) and vertical (Sv) pixel counts, and the focal length in millimeters. The conversion is done using the following equations:
fx = (f(mm) x Sh(px))/sensorwidth(mm)
fy = (f(mm) x Sv(px))/sensorheight(mm)
More on the camera matrix elements can be found here.
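A hedged sketch of that conversion followed by the undistortion call (the sensor specifications and distortion coefficients below are placeholders, not values from the question):

import cv2
import numpy as np
# Placeholder sensor specs -- substitute your camera's datasheet values.
f_mm, sensor_w_mm, sensor_h_mm = 4.0, 4.8, 3.6
Sh, Sv = 1280, 960  # horizontal and vertical pixel counts
fx = f_mm * Sh / sensor_w_mm  # focal length in pixels, x direction
fy = f_mm * Sv / sensor_h_mm  # focal length in pixels, y direction
K = np.array([[fx, 0., Sh / 2],
              [0., fy, Sv / 2],
              [0., 0., 1.]])
dist = np.array([-0.3, 0.1, 0., 0., 0.])  # example distortion coefficients
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (Sh, Sv), cv2.CV_32FC1)
# undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)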

OpenCV: How-to calculate distance between camera and object using image?

I am a newbie in OpenCV. I am working with the following formula to calculate distance:
distance to object (mm) = (focal length (mm) * real height of the object (mm) * image height (px)) / (object height (px) * sensor height (mm))
Is there a function in OpenCV that can determine object distance? If not, any reference to sample code?
How to calculate distance given an object of known size
You need to know one of two things up front:
Focal length (in mm and pixels per mm)
Physical size of the image sensor (to calculate pixels per mm)
I'm going to use the focal length since I don't want to google for the sensor datasheet.
Calibrate the camera
Use the OpenCV calibrate.py tool and the Chessboard pattern PNG provided in the source code to generate a calibration matrix. I took about 2 dozen photos of the chessboard from as many angles as I could and exported the files to my Mac. For more detail check OpenCV's camera calibration docs.
Camera Calibration Matrix (iPhone 5S Rear Camera)
RMS: 1.13707201375
camera matrix:
[[ 2.80360356e+03 0.00000000e+00 1.63679133e+03]
[ 0.00000000e+00 2.80521893e+03 1.27078235e+03]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
distortion coefficients: [ 0.03716712 0.29130959 0.00289784 -0.00262589 -1.73944359]
f_x = 2803
f_y = 2805
c_x = 1637
c_y = 1271
Checking the details of the series of chessboard photos you took, you will find the native resolution of the photos (3264x2448), and in their JPEG EXIF headers (visible in iPhoto) you can find the focal length value (4.15mm). These values will vary from camera to camera.
Pixels per millimeter
We need to know the pixels per millimeter (px/mm) on the image sensor. From the page on camera resectioning we know that f_x and f_y are focal-length times a scaling factor.
f_x = f * m_x
f_y = f * m_y
Since we have two of the variables for each formula we can solve for m_x and m_y. I just averaged 2803 and 2805 to get 2804.
m = 2804px / 4.15mm = 676px/mm
Object size in pixels
I used OpenCV (C++) to grab the rotated rect of the points and determined the size of the object to be 41px. Note that I had already retrieved the corners of the object; I ask the bounding rectangle for its size.
// minAreaRect returns the minimum-area rotated rectangle around the contour points
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
Small wrinkle
The object is 41px in a video shot on the camera @ 640x480.
Convert px/mm at the lower resolution:
3264/676 = 640/x
x = 133 px/mm
So given 41px at 133px/mm, we see that the size of the object on the image sensor is 0.308mm.
Distance formula
distance_mm = object_real_world_mm * focal-length_mm / object_image_sensor_mm
distance_mm = 70mm * 4.15mm / .308mm
distance_mm = 943mm
This happens to be pretty good. I measured 910mm and with some refinements I can probably reduce the error.
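Putting the steps above together in one short Python sketch (all numbers are the ones from this answer):

# Distance estimate, reproducing the numbers above.
f_px = (2803 + 2805) / 2  # averaged fx, fy from the calibration matrix
f_mm = 4.15               # focal length from the EXIF data
native_w, video_w = 3264, 640  # native photo width vs. video frame width
obj_px = 41               # object size measured in the video frame
obj_real_mm = 70          # real-world object size
m = round(f_px / f_mm)                   # 676 px/mm at native resolution
m_video = round(m * video_w / native_w)  # 133 px/mm at 640x480
obj_sensor_mm = obj_px / m_video         # ~0.308 mm on the sensor
distance_mm = obj_real_mm * f_mm / obj_sensor_mm
print(f"distance = {distance_mm:.0f} mm")  # 942 mm (943 with the rounding above)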
Feedback is appreciated.
Similar triangles approach
Adrian at pyimagesearch.com demonstrated a different technique using similar triangles. We discussed this topic beforehand; he took the similar triangles approach, and I did camera intrinsics.
There is no such function available in OpenCV to calculate the distance between an object and the camera. See this:
Finding distance from camera to object of known size
You should know that the parameters depend on the camera and will change if the camera is changed.
To get a mapping between the real world and the camera without any prior information about the camera, you need to calibrate it... here you can find some theory.
To calculate depth, i.e. the distance between the camera and the object, you need at least two images of the same object taken by two different cameras, which is popularly called the stereo vision technique.
