How does ARCore get the intrinsic parameters of the camera?

AFAIK, to estimate the intrinsic parameters of a camera precisely, we need to use a checkerboard calibration. However, I have seen that ARCore is able to get the camera's focal length and the principal point in pixels during a session, without any checkerboard. How is that possible?
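For reference, the focal length and principal point that ARCore reports are just the entries of the usual 3×3 pinhole intrinsic matrix; a minimal sketch (in Python, with made-up example values standing in for what a session would actually report):

```python
import numpy as np

# Hypothetical per-frame values as an AR session might report them (pixels);
# these numbers are invented for illustration only.
fx, fy = 1080.0, 1080.0      # focal length in pixels
cx, cy = 640.0, 360.0        # principal point in pixels

# Standard pinhole intrinsic matrix assembled from those values.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
print(K)
```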

Related

Stereo calibration, do extrinsics change if the lens changes?

I have a stereo camera setup. Typically I would calibrate the intrinsics of each camera, and then, using this result, calibrate the extrinsics, i.e. the baseline between the cameras.
What happens if I now change, for example, the focus or zoom on the lenses? Of course I will have to re-calibrate the intrinsics, but what about the extrinsics?
My first thought would be no, the actual body of the camera didn't move. But on second thought, doesn't the focal point within the camera change with the changed focus? And isn't the extrinsic calibration actually the calibration between the two focal points of the cameras?
In short: should I re-calibrate the extrinsics of my setup after changing the intrinsics?
Thanks for any help!
Yes, you should.
It's about the optical center of each camera. Different lenses put that in different places (but hopefully along the optical axis).
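A common workflow after changing focus or zoom is to redo the intrinsic calibration of each camera and then re-run the stereo calibration with the new intrinsics held fixed, which recovers the new extrinsics. A rough OpenCV sketch, assuming matched board detections (objpoints, imgpoints_left, imgpoints_right) and freshly re-calibrated K1, D1, K2, D2 already exist (these names are placeholders, not from the question):

```python
import cv2

def recalibrate_stereo(objpoints, imgpoints_left, imgpoints_right,
                       K1, D1, K2, D2, image_size):
    """Re-estimate R, T between two cameras after the intrinsics changed.

    objpoints / imgpoints_*: per-view board points and their detections.
    K1, D1, K2, D2: the freshly re-calibrated intrinsics of each camera.
    """
    # Keep the new intrinsics fixed and solve only for the extrinsics.
    flags = cv2.CALIB_FIX_INTRINSIC
    rms, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        objpoints, imgpoints_left, imgpoints_right,
        K1, D1, K2, D2, image_size, flags=flags)
    return R, T, rms  # rotation, translation (baseline), reprojection error
```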

ArUco markers, why does the pose change when I change image resolution?

I used this reference https://automaticaddison.com/how-to-perform-pose-estimation-using-an-aruco-marker/ to estimate the pose of a marker.
When I obtained the camera matrix and distortion coefficients, I used the full camera resolution.
However, when I change the resolution (image size) before pose estimation, I get different results. I am not sure why, or which resolution would be correct to use.
Should we always use the same resolution as was used for camera calibration?
I expected the pose to be somewhat independent of image size, other than minor changes. Any thoughts?
Yes, always use the same resolution.
One could recalculate the camera matrix and distortion coefficients to fit a different resolution, but that's a hassle and requires some knowledge of how the camera made these pictures (binning, cropping). Unless you understand the math behind it, just stick with the same resolution.
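For completeness, if the new image is a pure resize of the calibration resolution (no cropping or binning), the camera matrix scales linearly with image size while the distortion coefficients stay the same. A sketch of that recalculation, under exactly that assumption:

```python
import numpy as np

def scale_camera_matrix(K, calib_size, new_size):
    """Rescale an intrinsic matrix from the calibration resolution to a
    new resolution, assuming a pure resize (no crop, no binning offsets).

    K: 3x3 camera matrix from calibration.
    calib_size / new_size: (width, height) tuples in pixels.
    """
    sx = new_size[0] / calib_size[0]
    sy = new_size[1] / calib_size[1]
    K_new = K.astype(float).copy()
    K_new[0, 0] *= sx   # fx
    K_new[0, 2] *= sx   # cx
    K_new[1, 1] *= sy   # fy
    K_new[1, 2] *= sy   # cy
    return K_new        # distortion coefficients are unchanged by a pure resize
```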

Advantage of fish-eye lenses and camera calibration

The purpose of calibration is to correct the distortion in the image.
What is the main source of this distortion when a lens such as a fish-eye lens is used?
Q1: Suppose we want to identify objects and use a fish-eye lens in order to cover a wide view of the environment. Do we need to calibrate the camera, i.e. correct the image distortion before identifying the objects? Does the corrected image still cover the same amount of objects? If it does not cover all the objects visible in the distorted image, then what is the point of using a wide-angle lens? Wouldn't it be better to use an ordinary (non-fisheye) lens without having to calibrate the camera?
Q2: When calculating the distortion parameters (intrinsic and extrinsic parameters, etc.), do the parameters need to be calculated independently for every camera of the same model? That is, will the distortion parameters found for one camera work correctly with another camera with the same specifications?
Q1 Answer: You need to dewarp the image/video that comes out of the camera. There are libraries that do it for you, and you can also calibrate the dewarping according to your needs.
When dewarping the fisheye input, a little of the corners of the video feed is lost. This won't be a huge loss.
Q2 Answer: Usually you don't need a different dewarping configuration for each camera, but if you want to fine-tune it, there are parameters for that.
FFmpeg has a lens correction filter; the parameters to fine-tune are also described in its documentation.
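As an illustration of the dewarping step, OpenCV's fisheye module can build an undistortion map from a calibrated camera matrix K and distortion vector D and remap each frame; a minimal sketch, assuming K and D came from a prior cv2.fisheye.calibrate run:

```python
import cv2
import numpy as np

def dewarp_fisheye(frame, K, D):
    """Undistort one fisheye frame using OpenCV's fisheye model.

    K: 3x3 camera matrix, D: 4-element fisheye distortion coefficients,
    both assumed to come from a previous cv2.fisheye.calibrate run.
    """
    h, w = frame.shape[:2]
    # balance=0 crops to the valid region; balance=1 keeps the full
    # (heavily stretched) field of view, so more of the corners survive.
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```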

Can I move my camera after intrinsic calibration?

I have a setup with two cameras where the extrinsic properties between the two cameras do not matter. Generally, I start my work by calibrating each camera intrinsically and then move on to image processing.
I was just thinking: the intrinsic calibration gives me a camera matrix that contains information on the focal length, optical centre, etc., as well as the distortion coefficients. From my understanding, these parameters do not change as long as the camera lenses are not adjusted. Therefore, maybe I am able to move the cameras after all?
I am thinking maybe this idea comes from my shallow understanding of camera calibration. Please share your opinions on this matter. Thanks!
Yes, you have the correct understanding of camera calibration.
A camera's intrinsic parameters do not change if you move the camera, that is what separates the intrinsic parameters from the extrinsic ones. As you point out, the intrinsic parameters may change if you adjust the lens. Careful: depending on the lens type, simply focusing could be such a change to the lens.
There may be small influences on the intrinsic parameters from moving the camera (as the camera is not perfectly rigid) or from changing surroundings (e.g. temperature), but they are small enough to be disregarded for most use cases.
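One way to see the separation in practice: the same K and distortion coefficients are reused at every camera position, and only the per-view pose is re-estimated. A small OpenCV sketch, assuming you have 3D-2D correspondences (e.g. from a marker or board) at the new camera position:

```python
import cv2

def pose_after_moving(object_points, image_points, K, dist_coeffs):
    """Estimate the camera pose at a new position.

    K and dist_coeffs are the intrinsics calibrated earlier; they remain
    valid after moving the camera (as long as the lens is untouched).
    Only the extrinsics (rvec, tvec) change and are re-estimated here.
    """
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
    if not ok:
        raise RuntimeError("solvePnP failed")
    return rvec, tvec
```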

Where is the origin of the camera system, really?

When we compute the pose of the camera with respect to a primitive such as a marker or a 3D model, the origin of that primitive is usually precisely known, like the origin of a chessboard or a marker.
Now the question is: where is the origin of the camera? With respect to which reference is the translation vector of the pose expressed? How can we determine where it is?
The optical center is meant to be on the optical axis (ideally it projects to the center of the image), at a distance from the sensor equal to the focal length, which can be expressed in pixel units (knowing the pixel size).
You can see where the optical axis lies (it is the symmetry axis of the lens), but the optical center is somewhere inside the camera.
OpenCV uses the pinhole camera model to model cameras. The origin of the 3D coordinate system used in OpenCV, for camera calibration and other purposes, is the camera itself, or more specifically, the pinhole of the camera model. It is the point where all light rays that enter the camera converge, and is also called the "centre of projection".
Real cameras with lenses do not actually have a pinhole. But by analysing images taken with the camera, it is possible to calculate a pinhole model which models the real camera's optics very closely. That is what OpenCV does when it calibrates your camera. As Yves Daoust said, the pinhole of this model (and hence the 3D coordinate origin) will be a 3D point somewhere inside your camera (or possibly behind it, depending on its focal length), but it is not possible for OpenCV to say exactly where it is relative to your camera's body, because OpenCV knows nothing about the physical size or shape of your camera or its sensor.
Even if you knew exactly where the origin is relative to your camera's body, it probably would not be of much use, because you can't take any physical measurements with respect to a point that is located inside your camera without taking it apart! Really, you can do everything you need to do in OpenCV without knowing this detail.
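To make the relationship concrete: in the pinhole model the pose (R, t) maps points from the marker/board frame into a camera frame whose origin is that centre of projection, so the translation vector is simply the position of the marker's origin measured from the (hidden) optical centre. A tiny numerical sketch with made-up values:

```python
import numpy as np

# Made-up intrinsics and pose, purely for illustration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # no rotation between frames
t = np.array([0.1, 0.0, 1.0])          # pose translation, in metres

X_world = np.array([0.0, 0.0, 0.0])    # the board/marker origin
X_cam = R @ X_world + t                # its coordinates in the camera frame: just t
u, v, w = K @ X_cam                    # homogeneous pixel coordinates
print(u / w, v / w)                    # where the board origin lands in the image
```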
