To apply solvePnP I have to do a camera calibration. I have received calibration images of size 4032x3024, while the images on which I have to apply solvePnP are of size 2348x3024. They were taken by the same camera (iPhone SE, 2nd generation). Will the resulting calibration data be applicable to the smaller images on which I have to run solvePnP?
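In case it matters, this is how I was planning to adapt the calibrated camera matrix before passing it to solvePnP, assuming the 2348x3024 images are simply centered crops of the full 4032x3024 frames (the centered-crop assumption and all numbers below are placeholders on my part, not confirmed values):

import cv2
import numpy as np

# K and dist would come from cv2.calibrateCamera on the 4032x3024 calibration images
K = np.array([[3000.0, 0.0, 2016.0],
              [0.0, 3000.0, 1512.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # placeholder distortion coefficients

full_w, crop_w = 4032, 2348
x_offset = (full_w - crop_w) / 2.0  # assumes the crop is horizontally centered

K_crop = K.copy()
K_crop[0, 2] -= x_offset  # shift the principal point; fx, fy, cy stay the same for a pure crop

# then: cv2.solvePnP(object_points, image_points, K_crop, dist)

If the smaller images are instead downscaled rather than cropped, I suppose fx, fy, cx and cy would all have to be scaled by the size ratio instead.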
Sidenote: the calibration images were taken by pointing the camera at a screen showing this image. Is that already a problem, given that the pattern was not printed on A4 paper?
Based on the documentation of stereoRectify from OpenCV, one can rectify an image based on two camera matrices, their distortion coefficients, and a rotation-translation from one camera to the other.
I would like to rectify an image I took using my own camera to the stereo setup from the KITTI dataset. From their calibration files, I know the camera matrices and the pre-rectification image sizes of all the cameras. All their data is calibrated with respect to their camera_0.
From this PNG, I know the position of each of their cameras relative to the front wheels of the car and relative to ground.
I can also do a monocular calibration on my camera and get a camera matrix and distortion coefficients.
I am having trouble coming up with the rotation and translation matrix/vector between the coordinate systems of the first and the second cameras, i.e. from their camera to mine or vice-versa.
I positioned my camera on top of my car at almost exactly the same height and almost exactly the same distance from the center of the front wheels, as shown in the PNG.
However, I am now at a loss as to how to create the joint rotation-translation matrix. In a normal stereo calibration, these are returned by the stereoCalibrate function.
I looked at some references about coordinate transformation but I don't have sufficient practice in them to figure it out on my own.
Any suggestions or references are appreciated!
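Here is roughly what I have been trying so far: building R and T by hand from the measured offsets and passing them to stereoRectify (the identity rotation and all numbers below are assumptions/placeholders on my part, not calibrated values):

import cv2
import numpy as np

# KITTI camera_0 intrinsics from their calibration files (rough placeholder numbers here)
K0 = np.array([[721.5, 0.0, 609.6],
               [0.0, 721.5, 172.9],
               [0.0, 0.0, 1.0]])
dist0 = np.zeros(5)

# my own camera, from monocular calibration (placeholder numbers)
K1 = np.array([[1000.0, 0.0, 640.0],
               [0.0, 1000.0, 360.0],
               [0.0, 0.0, 1.0]])
dist1 = np.zeros(5)

# assumption: my camera is mounted roughly parallel to camera_0,
# so R is close to identity and T is just the measured offset in meters
R = np.eye(3)
T = np.array([[0.5], [0.0], [0.0]])  # made-up lateral offset

image_size = (1392, 512)  # KITTI image size before rectification, per their files
R0, R1, P0, P1, Q, roi0, roi1 = cv2.stereoRectify(
    K0, dist0, K1, dist1, image_size, R, T)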
I am projecting an image on the wall using a DLP projector and then capturing the scene with a pin-hole camera. Both the camera and projector have a radial distortion.
I calibrated both of them simultaneously and obtained the distortion coefficients for each.
How should I undistort the captured image in order to cancel both distortions (the one from the camera and the one from the projector), so that I get an image that theoretically matches exactly the one I sent to the projector in the first place?
I am using OpenCV but any theoretical hint is appreciated.
If you calibrated them, then presumably you can just undistort using those coefficients.
Also, if you calibrate the camera separately, you can then undistort the projected images and use these undistorted images to calibrate the projector.
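A minimal sketch of the camera-side step, assuming you already have the camera matrix and distortion coefficients from calibration (file names and values below are placeholders):

import cv2
import numpy as np

captured = cv2.imread('captured_scene.png')  # photo of the projected pattern

# camera intrinsics and distortion from calibration (placeholder values)
K_cam = np.array([[1200.0, 0.0, 960.0],
                  [0.0, 1200.0, 540.0],
                  [0.0, 0.0, 1.0]])
dist_cam = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])

# remove the camera's radial distortion from the captured image
undistorted = cv2.undistort(captured, K_cam, dist_cam)
cv2.imwrite('undistorted_scene.png', undistorted)

# the projector's distortion still has to be compensated separately, e.g. by
# pre-warping the image sent to the projector using the projector's own
# coefficients (cv2.initUndistortRectifyMap + cv2.remap)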
I'm trying to perform stereo camera calibration, rectification and disparity map generation. It's working fine with normal sample data. However, I'm trying to use the dual cameras on an iPhone 7+, which have different zoom. The telephoto lens has 2X zoom compared to the wide angle camera. I ran the images through the algorithm, and it is succeeding, although with a high error rate. However, when I open up the rectified images, they have a weird spherical look to the edges. The center looks fine. I'm assuming this is due to the cameras having different zoom levels. Is there anything special I need to do to deal with this? Or do I just need to crop any output to the usable undistorted area? Here is what I'm seeing:
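For context, my calibration code follows the usual checkerboard pipeline; a simplified sketch of it is below (the file names, pattern size, and exact flags are stand-ins, not necessarily what I actually use):

import cv2
import glob
import numpy as np

pattern = (9, 6)  # inner corners of my checkerboard (assumed here)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgp_wide, imgp_tele = [], [], []
for wide_path, tele_path in zip(sorted(glob.glob('wide_*.jpg')),
                                sorted(glob.glob('tele_*.jpg'))):
    gray_w = cv2.imread(wide_path, cv2.IMREAD_GRAYSCALE)
    gray_t = cv2.imread(tele_path, cv2.IMREAD_GRAYSCALE)
    ok_w, c_w = cv2.findChessboardCorners(gray_w, pattern)
    ok_t, c_t = cv2.findChessboardCorners(gray_t, pattern)
    if ok_w and ok_t:
        objpoints.append(objp)
        imgp_wide.append(c_w)
        imgp_tele.append(c_t)

size = gray_w.shape[::-1]
# each lens is calibrated on its own first, then those intrinsics are fixed for the stereo step
_, K_w, d_w, _, _ = cv2.calibrateCamera(objpoints, imgp_wide, size, None, None)
_, K_t, d_t, _, _ = cv2.calibrateCamera(objpoints, imgp_tele, size, None, None)
ret, K_w, d_w, K_t, d_t, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgp_wide, imgp_tele, K_w, d_w, K_t, d_t, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print('stereo RMS error:', ret)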
EDIT:
I tried using the calibration result from these checkerboard images to rectify an image of some objects, and the rectification was way off, not even close. If I rectify one of my checkerboard images, they are spot on. Any ideas why that happens?
EDIT2:
These are what the input images that produce the spherical-looking output look like. They were both taken from the exact same position: the iPhone was mounted on a tripod, I used a Bluetooth device to trigger the shutter so the image wouldn't be shaken, and my code automatically takes one image with each lens. I took 19 such image pairs from different angles, and all images show the full checkerboard. The more zoomed-in image is the one that rectified to the top spherical-looking image.
EDIT3:
Here is the disparity map using the calibration I got.
I have a program that takes a video feed from RTSP and checks for an object. The only problem is that the object needs to be about 6" from the camera, but when I use a wired webcam the object can be a few feet away. Both cameras are transmitting at the same resolution, so what is causing this problem?
Camera transmission specs:
Resolution: 640 * 480
FPS: 20
Bitrate: 500000
Focal Length: 2.8mm
EDIT:
The algorithm I am using is the OpenCV ORB algorithm but I have also seen this behavior when previously using the Haar classifier method in OpenCV.
Below is the limit at which the webcam can no longer detect the object. (approx. 66 pixels)
Below is the limit that Glass can no longer detect the object. (approx. 68 pixels)
Looking at the images, the object appears at a similar size in both, but the actual distance is at least twice as far in the webcam image. That suggests to me that a camera property is causing this issue; if so, which part of the camera would be responsible for it?
As you've recognized yourself, the object sizes are very similar in both images, so the algorithm seems to stop detecting below a certain object resolution.
The difference in distance between both cameras (for the same object size) comes from camera intrinsic parameters like focal length (coming from the lens objective) and the size of the sensor chip.
Depending on the method you use to detect the object, you could resize (upscale) the second image, unless this leads to too many interpolation artifacts (which your detection method might not be able to handle).
Upscaling the image is fine for many detectors that have some minimum object size, which comes directly from the training data or the training window size. Keep in mind that upscaling might also lead to an additional (possibly drastic) drop in processing speed.
If intrinsic parameters of both cameras are known and the images are undistorted already, you can compute the scale factor between both images, which is:
ratioX = fx1/fx2
ratioY = fy1/fy2
if you want to upscale the 2nd image, where fx1, fy1 are the focal length values of the first camera.
You could crop the upscaled image afterwards, centered around the principal point. After that, both image regions should match quite well.
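A rough sketch of that scale-and-crop step in Python, assuming both camera matrices are known (all numbers and file names below are placeholders):

import cv2
import numpy as np

# intrinsics of camera 1 (webcam) and camera 2 (Glass); placeholder numbers
K1 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
K2 = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])

img2 = cv2.imread('glass_frame.png')

ratio_x = K1[0, 0] / K2[0, 0]   # fx1 / fx2
ratio_y = K1[1, 1] / K2[1, 1]   # fy1 / fy2
up = cv2.resize(img2, None, fx=ratio_x, fy=ratio_y, interpolation=cv2.INTER_CUBIC)

# crop back to the original size, centered around the (scaled) principal point
h, w = img2.shape[:2]
cx, cy = K2[0, 2] * ratio_x, K2[1, 2] * ratio_y
x0 = max(int(round(cx - w / 2)), 0)
y0 = max(int(round(cy - h / 2)), 0)
cropped = up[y0:y0 + h, x0:x0 + w]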
Hope this helps and good luck.
Edit: you could use the cv::undistort function to make an image look as if it had been taken with another camera matrix, for testing.
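For example, roughly like this (K1, K2 and the distortion coefficients are placeholders, not real calibration values):

import cv2
import numpy as np

img2 = cv2.imread('glass_frame.png')
K2 = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
dist2 = np.zeros(5)   # assume the frame is (nearly) distortion-free already
K1 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

# undistort with camera 2's parameters, but reproject with camera 1's matrix,
# so the result looks as if it had been taken with camera 1's intrinsics
as_if_cam1 = cv2.undistort(img2, K2, dist2, newCameraMatrix=K1)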
How do I recover the correct image from a radially distorted image using OpenCV? For example:
Please provide me useful links.
Edit
The biggest problem is that I have neither the camera used for taking the picture nor a chessboard image.
Is that even possible?
Well, there is not much you can do if you don't have the camera, or at least its model. As you may know, the usual camera model is the pinhole model, in which 3D world coordinates are transformed (mapped) into 2D coordinates on the camera image plane.
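As a minimal illustration of that mapping (the numbers here are arbitrary, just to show the shape of the computation):

import numpy as np

# pinhole model: x = K [R | t] X, in homogeneous coordinates
K = np.array([[800.0, 0.0, 320.0],    # fx, 0, cx
              [0.0, 800.0, 240.0],    # 0, fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # camera orientation
t = np.array([[0.0], [0.0], [0.0]])   # camera position

X = np.array([[0.1], [0.2], [2.0], [1.0]])   # a 3D point in homogeneous coordinates
x = K @ np.hstack([R, t]) @ X
u, v = (x[0] / x[2]).item(), (x[1] / x[2]).item()   # pixel coordinates

# radial distortion is applied to the normalized coordinates before
# multiplying by K, using the distortion coefficients you are missing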
Camera Resectioning
If you don't have access to the camera, or at least two chessboard images, you can't estimate the focal length, principal point, and distortion coefficients, at least not in the traditional way. If you have more images than the one you showed, or a video from that camera, you could try auto- or self-calibration.
Camera auto-calibration
Another auto-calibration
yet another
Opencv auto-calibration