Calculate speed of a moving object with cameras - OpenCV

Is it possible to find the speed or direction of ships that are moving by, using a camera mounted on another ship?
The information I know is the speed, heading (true north), roll, pitch, and camera parameters of the ship where the camera is installed.

You could, of course, calculate the speed and direction of objects in terms of pixels per frame.
To get speed values for the real objects, however, you would need something like a calibrated stereo camera to know the distance of the objects from the camera.
Once the distance of the objects in the images is known, the parameters of the moving camera could be included in the calculation.
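As a rough illustration, here is a minimal sketch (Python, NumPy only) of turning a pixel displacement into an approximate speed, assuming the target's depth, the focal length in pixels, and the time between observations are known, and that the observing ship's own velocity is available; all names and values are placeholders.

```python
import numpy as np

def apparent_velocity_mps(p_prev, p_curr, dt, Z, fx, fy):
    """Approximate lateral velocity (m/s) of a tracked point, in the camera frame."""
    du = p_curr[0] - p_prev[0]       # pixel displacement, horizontal
    dv = p_curr[1] - p_prev[1]       # pixel displacement, vertical
    vx = du * Z / fx / dt            # metres per second along the camera x-axis
    vy = dv * Z / fy / dt            # metres per second along the camera y-axis
    return np.array([vx, vy])

# The observing ship's own motion must then be added back in: rotate the
# camera-frame velocity into the world frame using the known heading, roll and
# pitch, then add the ship's own velocity over ground (planar approximation here).
own_ship_velocity = np.array([4.5, 0.0])   # m/s, assumed known from the ship's log/GPS
relative_v = apparent_velocity_mps((640, 360), (654, 361), dt=1.0,
                                   Z=500.0, fx=1400.0, fy=1400.0)
print("target speed ~ %.1f m/s" % np.linalg.norm(relative_v + own_ship_velocity))
```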

Related

Estimation of nodal offset with pattern and OpenCV

I was trying to do a lens calibration using OpenCV for a camera with variable zoom and focus (a broadcast camera). I managed to acquire decent parameters for the lens (focal length, k1, k2); however, I got stuck at the nodal offset.
As I understand it, the nodal point of a lens is the point at which light rays converge. This causes a shift of the object's apparent distance from the camera along the Z-axis. Basically, when I run cv::solvePnP with my known parameters, the estimated distance from the object to the camera is not exactly the same as it is in the real world. For example, with the camera fully zoomed in and in focus, OpenCV estimates that the pattern is roughly 3 meters away, but when I measure it with a laser measuring tool it is 1.8 meters. This is not the case at the wide end of the lens, because there the nodal offset is really small.
The question is: is there any method to measure the nodal offset of the camera using a pattern, without measuring the distance of the pattern from the camera?
What I tried
I used a pan/tilt/roll tripod head that reports the rotation of the camera. I placed a pattern in front of the camera and captured it several times at different angles, hoping to see some difference in position when I transform the pattern using the rotation reported by the tripod.
I also noticed that Unreal Engine estimates the nodal offset using a pattern, by placing a CG object and aligning it with the video [link]. However, I thought there might be a way to achieve this without a CG object.
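For reference, a minimal sketch of the distance check described above: estimate the board pose with solvePnP and read off its distance from the camera. The intrinsics, distortion coefficients, board geometry, and file name below are placeholders standing in for the actual calibration data.

```python
import cv2
import numpy as np

# Intrinsics from the earlier lens calibration at this zoom/focus (placeholders).
K = np.array([[4000.0,    0.0, 960.0],
              [   0.0, 4000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist_coeffs = np.array([-0.10, 0.05, 0.0, 0.0])   # k1, k2, p1, p2

pattern_size = (9, 6)     # inner corners of the checkerboard
square_size = 0.025       # metres

# 3D corner positions on the board plane (Z = 0).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, pattern_size)
if found:
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist_coeffs)
    print("estimated distance to pattern: %.2f m" % np.linalg.norm(tvec))
    # A systematic difference between this value and a laser measurement,
    # growing with zoom, is the offset being asked about here.
```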

Is camera calibration required if I change the height of the camera?

I use single-camera calibration with a checkerboard, and I used one fixed position of the camera to do the calibration. Now my question is: if I use the same position but change the height of the camera, do I need to do the calibration again? If not, will I get the same result using a different camera height?
In my case, I changed the height of the camera but the position of the camera was otherwise the same, and I got a different result when I changed the height. So I was wondering whether I need to calibrate the camera again or not.
Please help me out.
Generally speaking, and to achieve the greatest accuracy, you will need to recalibrate the camera whenever it is moved. However, if the lens mount is rigid enough w.r.t. the sensor, you may get away with only updating the extrinsic calibration, especially if your accuracy requirements are modest.
To see why this is the case, notice that, unless you have a laboratory-grade rig holding and moving the camera, you can't change the height alone. With a standard tripod, for example, there will in general be motion in all three axes amounting to a significant fraction of the sensor's size, which will show up as visible motion of several pixels with respect to the scene.
Things get worse / more complicated when you also add rotation to re-orient the field of view, since a mechanical mount will not, in general, rotate the camera around its optical center (i.e. the exit pupil of the lens), and therefore every rotation necessarily comes with an additional translation.
In your specific case, since you are only interested in measurements on a plane, and therefore can compute everything using homographies, refining the extrinsic calibration amounts to just recomputing the world-to-image scale. This can easily be achieved by taking one or more images of objects of known size on the plane - a calibration checkerboard is just such an object.
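A minimal sketch of that rescaling step, assuming a checkerboard of known square size lying on the measurement plane; the file name and board geometry below are placeholders.

```python
import cv2
import numpy as np

pattern_size = (9, 6)     # inner corners of the checkerboard
square_size = 0.030       # metres

# Corner coordinates on the measurement plane, in metres.
world_pts = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2).astype(np.float32)
world_pts *= square_size

img = cv2.imread("plane_view.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, pattern_size)
if found:
    # Homography from image pixels to metric coordinates on the plane.
    H, _ = cv2.findHomography(corners.reshape(-1, 2), world_pts, cv2.RANSAC)
    # Any point on the plane can now be measured without a full recalibration.
    px = np.array([[[500.0, 400.0]]], dtype=np.float32)   # 1x1x2 pixel coordinate
    print(cv2.perspectiveTransform(px, H))                # position on the plane, in metres
```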

3D reconstruction using stereo camera

I am trying to construct a 3D point cloud and measure the real sizes or distances of objects using a stereo camera. The cameras are stereo calibrated, and I find the 3D points using the reprojection matrix Q and the disparity.
My problem is that the calculated sizes change depending on the distance from the cameras. I calculate the distance between two 3D points, which should be constant, but as the object gets closer to the camera the distance increases.
Am I missing something? The 3D coordinates should be in camera coordinates, not in pixel coordinates, so this seems inaccurate to me. Any idea?
You didn't mention how far apart your cameras are - the baseline. If they are very close together compared with the distance of the point that you are measuring, a slight inaccuracy in your measurement can lead to a big difference in the computed distance.
One way you can check if this is the problem is by testing with only lateral movement of the camera.
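For illustration, a small sketch of the pipeline in question together with a back-of-the-envelope check of how the depth error scales with the baseline; the Q matrix and disparity values below are synthetic placeholders rather than output from a real rig.

```python
import cv2
import numpy as np

# Q as produced by cv2.stereoRectify for a rectified pair (placeholder values):
# fx = 1400 px, principal point (640, 360), baseline B = 0.10 m.
fx, cx, cy, B = 1400.0, 640.0, 360.0, 0.10
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  fx],
                [0, 0, 1.0 / B, 0]])

# Toy disparity map (in practice this comes from StereoSGBM on the rectified pair).
disparity = np.full((720, 1280), 20.0, np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# Distance between two image points on the object, in metres.
p1, p2 = points_3d[360, 600], points_3d[360, 700]
print("measured size: %.3f m" % np.linalg.norm(p1 - p2))

# Depth from disparity: Z = fx * B / d, so an error dd in disparity changes Z by
# roughly fx * B * dd / d**2, i.e. the error grows with Z**2 / (fx * B). A short
# baseline B therefore amplifies measurement noise for distant objects.
d, dd = 20.0, 0.5
print("Z = %.2f m, error ~ %.2f m per %.1f px disparity error"
      % (fx * B / d, fx * B * dd / d**2, dd))
```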

two images with camera position and angle to 3d data?

Suppose I've got two images taken by the same camera. I know the 3d position of the camera and the 3d angle of the camera when each picture was taken. I want to extract some 3d data from the images on the portion of them that overlaps. It seems that OpenCV could help me solve this problem, but I can't seem to find where my camera position and angle would be used in their method stack. Help? Is there some other C library that would be more helpful? I don't even know what keywords to search for on the web. What's the technical term for overlapping image content?
You need to learn a little more about camera geometry and stereo rig geometry. Unless your camera was mounted on a special rig, it is rather doubtful that its pose at each image can be specified with just one angle and a point; rather, you need three angles (e.g. roll, pitch, yaw). Plus, if you want your reconstruction to be metrically accurate, you need to accurately calibrate the focal length of the camera (at a minimum).
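As a sketch of where the known poses would enter in OpenCV: build a 3x4 projection matrix P = K [R | t] for each view from the calibrated intrinsics, the three angles, and the camera position, then triangulate matched image points. The Euler-angle convention, intrinsic values, and the single correspondence below are illustrative assumptions.

```python
import cv2
import numpy as np

K = np.array([[1200.0,    0.0, 640.0],     # intrinsics from a prior calibration
              [   0.0, 1200.0, 360.0],
              [   0.0,    0.0,   1.0]])

def rotation_from_angles(roll, pitch, yaw):
    """World-to-camera rotation from three angles (the convention is an assumption)."""
    rx, _ = cv2.Rodrigues(np.array([roll, 0.0, 0.0]).reshape(3, 1))
    ry, _ = cv2.Rodrigues(np.array([0.0, pitch, 0.0]).reshape(3, 1))
    rz, _ = cv2.Rodrigues(np.array([0.0, 0.0, yaw]).reshape(3, 1))
    return rz @ ry @ rx

def projection_matrix(K, R, cam_center):
    """P = K [R | t] with t = -R C, where C is the camera position in world coordinates."""
    t = -R @ cam_center.reshape(3, 1)
    return K @ np.hstack([R, t])

P1 = projection_matrix(K, rotation_from_angles(0.0, 0.0, 0.0), np.array([0.0, 0.0, 0.0]))
P2 = projection_matrix(K, rotation_from_angles(0.0, 0.0, 0.0), np.array([0.5, 0.0, 0.0]))

# One synthetic correspondence, consistent with a world point at (0, 0, 5).
pts1 = np.array([[640.0], [360.0]])   # 2xN pixel coordinates in image 1
pts2 = np.array([[520.0], [360.0]])   # 2xN pixel coordinates in image 2

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4xN
print((X_h[:3] / X_h[3]).T)                       # ~[0, 0, 5] in world coordinates
```

In real use, the correspondences would come from feature matching on the overlapping image region, and the reconstruction is only as good as the pose and focal-length accuracy.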

Is it possible to get depth information from two images captured of a scene?

I have two images photographed of the same scene. The two images were taken by two cameras aligned horizontally, a short distance apart from each other, but focused on the same point. Here is the question: is it possible to calculate the distance from the cameras to objects in the scene using the information in the two images?
Stereo vision allows you to recover the distance of an object from the camera, provided you manage to compute a correct disparity map (which is one of the main challenges of computer stereo vision) and the objects are close enough. The focus of the cameras is not relevant, as long as it is good enough to compute the disparity map.
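A minimal sketch of that pipeline, assuming the pair has already been rectified and the focal length and baseline are known from calibration; file names, matcher settings, and values are placeholders.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Disparity map from the rectified pair; SGBM returns fixed-point values scaled by 16.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

f, B = 1400.0, 0.12          # focal length in pixels and baseline in metres, from calibration
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]   # metres; only meaningful where the disparity is reliable
```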
