Real distance on image sensor - iOS

I have captured an image using the iPhone rear camera. I can measure the distance between two points in the image in pixels, and I can easily convert that distance to inches or millimeters using the PPI of that iPhone.
distance_in_inch = distance_in_pixel / PPI
However, what I actually want is the corresponding distance in inches or millimeters on the image sensor.
How can I calculate it?
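A minimal sketch of one way to do this, assuming the sensor width of the phone is known (the 4.8 mm value below is only a placeholder; look up the actual sensor width for the specific iPhone model):

# Sketch: convert a pixel distance to a physical distance on the sensor.
# ASSUMPTION: sensor_width_mm = 4.8 is a placeholder, not the real iPhone value.
def distance_on_sensor_mm(distance_px, image_width_px, sensor_width_mm=4.8):
    mm_per_pixel = sensor_width_mm / image_width_px  # pixel pitch on the sensor
    return distance_px * mm_per_pixel

# Example: two points 500 px apart in a 4032 px wide photo
print(distance_on_sensor_mm(500, 4032))  # ~0.6 mm under the assumed sensor width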

Related

How to calculate the PPI of this camera?

My camera is a HIKIROBOT MV-CE060-10UC 6 MP USB camera and its lens is a HIKIROBOT MVL-HF2528M-6MPE 25 mm lens.
Camera Resolution : 3072 x 2048 == 6 MP
The operating distance is 370 mm.
I need to find the dimensions of an object using Python. I have tried a few methods, but they do not give accurate values.
I searched online for how to calculate PPI and followed the steps on a website, but I don't know the diagonal in inches.
That website gives some examples of calculating PPI, such as the PPI of a computer screen, where the diagonal of the screen in inches is given. In my case I don't know the diagonal in inches. What should I do?
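For what it's worth, a sketch that sidesteps PPI entirely and uses thin-lens magnification instead. The 25 mm focal length and 370 mm operating distance are from the question; the pixel pitch is an assumption that should be checked against the MV-CE060-10UC sensor datasheet:

# Sketch: estimate the real-world size of an object from its size in pixels,
# using thin-lens magnification m ≈ f / (d - f).
# ASSUMPTION: pixel_pitch_mm = 0.0024 (2.4 µm) is a guess; confirm it from the
# sensor datasheet of the MV-CE060-10UC.
focal_length_mm = 25.0       # MVL-HF2528M-6MPE lens (from the question)
working_distance_mm = 370.0  # operating distance (from the question)
pixel_pitch_mm = 0.0024      # assumed sensor pixel size

magnification = focal_length_mm / (working_distance_mm - focal_length_mm)

def object_size_mm(size_px):
    size_on_sensor_mm = size_px * pixel_pitch_mm
    return size_on_sensor_mm / magnification

print(object_size_mm(1000))  # a 1000 px wide object is ~33 mm wide under these assumptions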

How resolution changes X & Y coords

I am tracking the color of a pixel at X & Y at a resolution of 1920 by 1080. I am simply wondering whether there is any mathematical way to remain accurate in tracking the same pixel across various resolutions.
The pixel is static and does not move; however, I am aware that changing resolutions affects scaling and the X & Y coordinate system of the monitor.
So any suggestions would be great!
As long as the whole screen area is filled, the same location on the physical screen (expressed as the ratio of xLocation to xWidth and of yLocation to yHeight, in centimeters or inches) will always also be at the same ratio of xPixelIndex to xTotalPixels and of yPixelIndex to yTotalPixels.
Let's assume you have xReference and yReference of the target pixel, in a resolution WidthReference by HeightReference in which these coordinates mark the desired pixel.
Let's assume you have WidthCurrent and HeightCurrent of your screen size in pixels, for the resolution in which you want to target the pixel at the same physical location.
Let's assume that you need to determine xCurrent and yCurrent as the coordinates of the pixel in the current resolution.
Then calculate the current coordinates as:
xCurrent = (1.0 * WidthCurrent) / WidthReference * xReference;
yCurrent = (1.0 * HeightCurrent)/ HeightReference * yReference;
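The same scaling as a small, self-contained sketch (the example resolutions are arbitrary):

# Sketch: map a pixel coordinate from a reference resolution to another
# resolution so it points at the same physical spot on the screen.
def rescale(x_ref, y_ref, width_ref, height_ref, width_cur, height_cur):
    x_cur = round(width_cur / width_ref * x_ref)
    y_cur = round(height_cur / height_ref * y_ref)
    return x_cur, y_cur

# Example: a pixel picked at 1920x1080, tracked again at 1280x720
print(rescale(960, 540, 1920, 1080, 1280, 720))  # -> (640, 360)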

Difference of a circle in picture with different resolutions by the same phone camera

I shoot a circle with the phone and obtain the diameter of the circle (the number of pixels in the picture) by image processing. I found that the diameter in pixels is different at different resolutions. The following is a table of the diameters at different resolutions that I recorded through experiment. **I want to know how the phone camera produces photos of different resolution sizes, and the relation between the different resolutions.** I have been searching online for a long time, but without success.
The sensor in your camera has a certain size - it may be 29mm x 19mm, 24mm x 16mm (APS-C), Micro Four Thirds (18mm x 13mm), or Full Frame (36mm x 24mm).
When the light goes through the lens it forms an image of your circle on the sensor and the sensor records it. When you change resolution, the camera uses a different number of pixels to record it but the circle still shows up as the same number of millimetres on the sensor because it is the lens's focal length and the distance to the object that determines the size of the image formed on the sensor.
If you divide the resolution by the diameter, you will see that your circle forms an image of roughly constant size on your sensor - about 6.25 units in each case:
Let's try an example and pretend your camera is full frame. That means that at 640x480 resolution, 640 pixels is 36mm, so your 104 pixel wide circle means that the image formed on your sensor is
(104 / 640) * 36mm = 5.85mm
When you record at 4160 resolution, your 36mm is divided into 4160 pixels, so your 664 pixels make
(664 / 4160) * 36mm = 5.7mm
So basically, what you are seeing is that the size of the image on your sensor is independent of the resolution you record it at - which is correct since the size of the image on the sensor is determined by the focal length of your lens and the distance to the object.
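The same arithmetic as a short sketch, assuming the 36 mm full-frame sensor width used in the example above:

# Sketch: physical size of the circle on the sensor is
# (diameter in pixels / image width in pixels) * sensor width in mm.
SENSOR_WIDTH_MM = 36.0  # ASSUMPTION: full-frame sensor, as in the worked example

def size_on_sensor_mm(diameter_px, image_width_px):
    return diameter_px / image_width_px * SENSOR_WIDTH_MM

print(size_on_sensor_mm(104, 640))   # ~5.85 mm
print(size_on_sensor_mm(664, 4160))  # ~5.75 mm, i.e. roughly the same physical size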

OpenCV: How-to calculate distance between camera and object using image?

I am a newbie in OpenCV. I am working with the following formula to calculate distance:
distance to object (mm) = (focal length (mm) * real height of the object (mm) * image height (pixels)) / (object height (pixels) * sensor height (mm))
Is there a function in OpenCV that can determine object distance? If not, any reference to sample code?
How to calculate distance given an object of known size
You need to know one of 2 things up front
Focal-length (in mm and pixels per mm)
Physical size of the image sensor (to calculate pixels per mm)
I'm going to use focal-length since I don't want to google for the sensor datasheet.
Calibrate the camera
Use the OpenCV calibrate.py tool and the Chessboard pattern PNG provided in the source code to generate a calibration matrix. I took about 2 dozen photos of the chessboard from as many angles as I could and exported the files to my Mac. For more detail check OpenCV's camera calibration docs.
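A rough Python equivalent of that step, in case you prefer to call the API directly; the chessboard size and file name pattern below are assumptions, not values from this answer:

# Sketch of what calibrate.py does, using the OpenCV Python API directly.
# ASSUMPTIONS: a chessboard with 9x6 inner corners, photos named chessboard_*.jpg.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("chessboard_*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS:", rms)
print("camera matrix:\n", camera_matrix)
print("distortion coefficients:", dist_coeffs.ravel())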
Camera Calibration Matrix (iPhone 5S Rear Camera)
RMS: 1.13707201375
camera matrix:
[[ 2.80360356e+03 0.00000000e+00 1.63679133e+03]
[ 0.00000000e+00 2.80521893e+03 1.27078235e+03]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
distortion coefficients: [ 0.03716712 0.29130959 0.00289784 -0.00262589 -1.73944359]
f_x = 2803
f_y = 2805
c_x = 1637
c_y = 1271
Checking the details of the series of chessboard photos you took, you will find the native resolution of the photos (3264x2448), and in their JPEG EXIF headers (visible in iPhoto) you can find the Focal Length value (4.15mm). These values will vary from camera to camera.
Pixels per millimeter
We need to know the pixels per millimeter (px/mm) on the image sensor. From the page on camera resectioning we know that f_x and f_y are focal-length times a scaling factor.
f_x = f * m_x
f_y = f * m_y
Since we have two of the variables for each formula we can solve for m_x and m_y. I just averaged 2803 and 2805 to get 2804.
m = 2804px / 4.15mm = 676px/mm
Object size in pixels
I used OpenCV (C++) to grab out the Rotated Rect of the points and determined the size of the object to be 41px. Notice I have already retrieved the corners of the object and I ask the bounding rectangle for its size.
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
Small wrinkle
The object is 41px in a video shot on the camera at 640x480.
Convert px/mm in the lower resolution
3264/676 = 640/x
x = 133 px/mm
So given 41px / 133 px/mm, we see that the size of the object on the image sensor is 0.308mm.
Distance formula
distance_mm = object_real_world_mm * focal-length_mm / object_image_sensor_mm
distance_mm = 70mm * 4.15mm / .308mm
distance_mm = 943mm
This happens to be pretty good. I measured 910mm and with some refinements I can probably reduce the error.
Feedback is appreciated.
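Pulling the steps above together, a minimal sketch with the numbers from this answer (camera matrix values, EXIF focal length, the 41 px object in the 640x480 video, and the 70 mm real object width):

# Sketch: distance to an object of known size, using the intrinsics above
# (iPhone 5S rear camera example from this answer).
focal_length_mm = 4.15         # from the JPEG EXIF headers
f_pixels = (2803 + 2805) / 2   # average of f_x and f_y from the camera matrix
native_width_px = 3264         # native photo resolution
video_width_px = 640           # resolution the video was shot at

px_per_mm_native = round(f_pixels / focal_length_mm)                          # ~676 px/mm
px_per_mm_video = round(px_per_mm_native * video_width_px / native_width_px)  # ~133 px/mm

object_image_px = 41           # width measured with cv::minAreaRect
object_real_mm = 70            # known real-world width of the object

object_on_sensor_mm = object_image_px / px_per_mm_video        # ~0.308 mm
distance_mm = object_real_mm * focal_length_mm / object_on_sensor_mm
print(distance_mm)             # ~942 mm (the answer rounds to 943 mm; measured: ~910 mm)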
Similar triangles approach
Adrian at pyimagesearch.com demonstrated a different technique using similar triangles. We discussed this topic beforehand and he took the similar triangles approach and I did camera intrinsics.
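For comparison, a sketch of the similar-triangles idea: take one reference photo of an object of known width at a known distance to get a "perceived focal length" F = (P * D) / W, then estimate distance as D' = (W * F) / P. The numbers below are made-up placeholders, not values from the linked article:

# Sketch of the similar-triangles approach (placeholder reference values).
KNOWN_WIDTH_MM = 70.0     # real width of the object
REF_DISTANCE_MM = 500.0   # distance at which the reference photo was taken
REF_WIDTH_PX = 120.0      # apparent width of the object in the reference photo

perceived_focal_px = REF_WIDTH_PX * REF_DISTANCE_MM / KNOWN_WIDTH_MM

def distance_mm(apparent_width_px):
    return KNOWN_WIDTH_MM * perceived_focal_px / apparent_width_px

print(distance_mm(60.0))  # half the apparent width -> twice the distance (~1000 mm)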
There is no such function available in OpenCV to calculate the distance between the camera and an object. See this:
Finding distance from camera to object of known size
You should know that the parameters depend on the camera and will change if the camera is changed.
To get a mapping between the real world and the camera without any prior information about the camera, you need to calibrate the camera; here you can find some theory.
For calculating depth, i.e. the distance between the camera and the object, you need at least two images of the same object taken by two different cameras, which is popularly called the stereo vision technique.

Pixels of an image

I have a stupid question:
I have a black circle on a white background.
I have code in Matlab that takes an image with a black circle and returns the number of pixels in the circle.
Will I get the same number of pixels for the circle with a 5 megapixel camera and with an 8 megapixel camera?
The short answer is: under most circumstances, no - an 8MP image has more pixels than a 5MP one. However...
That depends on many factors related to the camera and the images that you take:
Focal length of the cameras, and other optics parameters. Consider a fish-eye lens to understand my point.
Distance of the circle from the camera. Obviously, closer objects appear larger.
What the camera does with the pixels from the sensor. For example, some 5MP cameras work in a down-scaled mode and output 3MP images instead.
It depends on the resolution, which is how many pixels are counted horizontally and vertically when describing a stored image.
Higher megapixel cameras offer the ability to print larger images.
For example, a 6MP camera offers a resolution of 3000 x 2000 pixels. If you allow 300dpi (dots per inch) for print quality, this gives you a print of approx 10 in x 7 in: 3000 divided by 300 = 10, 2000 divided by 300 = approx 7.
A 3.1MP camera offers a resolution of 2048 x 1536 pixels, which gives a print size of 7in x 5in.
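The arithmetic from those two examples, as a tiny sketch:

# Sketch: maximum print size in inches for a given resolution at a chosen DPI.
def print_size_inches(width_px, height_px, dpi=300):
    return width_px / dpi, height_px / dpi

print(print_size_inches(3000, 2000))  # 6MP   -> (10.0, ~6.7 in), "approx 10 x 7"
print(print_size_inches(2048, 1536))  # 3.1MP -> (~6.8, ~5.1 in), "approx 7 x 5"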
