Rectification of an image using texture projection - iOS

I need to rectify an image with texture projection on the GPU (GLSL/shaders). Do you have any resources/tutorials/insights to share? As input I have the 3D pose of the camera that created the image, and the image itself.
My images are 640x480, and from what I understand the buffer memory on the iPhone 4S (one of the target devices) is smaller than that.

OK, so the size is not a problem. As for the rectification itself: once you have the homography that rectifies the image, use it in the vertex shader to multiply each of the initial 2D homogeneous coordinates.
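For illustration, here is a minimal NumPy/OpenCV sketch of that multiplication on the CPU; on the GPU the same 3x3 matrix would be passed as a uniform and applied per vertex in GLSL. The homography H below is a placeholder, not one derived from a real pose.

import cv2
import numpy as np

# Placeholder 3x3 rectifying homography (in practice derived from the camera pose).
H = np.array([[1.0, 0.02, -5.0],
              [0.0, 1.05, -3.0],
              [0.0, 0.0005, 1.0]])

# Applying H to a single 2D point in homogeneous coordinates,
# which is what the vertex shader would do per vertex.
p = np.array([320.0, 240.0, 1.0])          # (x, y, w)
p_rect = H @ p
p_rect /= p_rect[2]                        # perspective divide

# The same warp applied to a whole 640x480 image on the CPU, for comparison.
img = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the input image
rectified = cv2.warpPerspective(img, H, (640, 480))

In the shader, the equivalent is multiplying the coordinate (as a vec3 with w = 1) by the uniform matrix and dividing by the resulting w.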

Related

How to create a spherical image?

My setup has checkerboard charts with known world coordinates present in each image, which I use to stitch images together (in a 2D plane) and to find my P-matrix. However, I am stuck on finding a general approach to combining all my images into a spherical image.
Known:
Ground truth correspondence points in each image
camera calibration parameters (camera matrix, distortion coefficients)
homography between images
world-to-image projection matrix P = K[R | t] for each image; however, I think this matrix's estimation isn't that accurate
real-world coordinates of the ground-truthed points
camera has almost only rotation, minimal translation (see the rotation-only sketch after this list)
I know OpenGL well enough to do the spherical/texture wrapping once I can stitch the images into a cubemap format
Unknown:
Spherical image
image cubemap
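Since the camera is (almost) purely rotating, one way to get from the listed knowns to a spherical image is to back-project every pixel through K, rotate the ray by R, and convert the direction to spherical coordinates. A minimal sketch of that mapping, with a made-up K and identity rotation as placeholders (not values from the question):

import numpy as np

def pixel_rays_to_sphere(K, R, width, height):
    """Map every pixel of one image to (longitude, latitude) on the unit sphere,
    assuming a rotation-only camera: ray = R^T * K^{-1} * [u, v, 1]^T."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
    rays = R.T @ (np.linalg.inv(K) @ pix)           # back-project, rotate to world
    rays /= np.linalg.norm(rays, axis=0)            # unit direction per pixel
    lon = np.arctan2(rays[0], rays[2])              # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(rays[1], -1.0, 1.0))    # latitude in [-pi/2, pi/2]
    return lon.reshape(height, width), lat.reshape(height, width)

# Example with a hypothetical 640x480 camera and identity rotation.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
lon, lat = pixel_rays_to_sphere(K, np.eye(3), 640, 480)
# (lon, lat) can then be scaled into equirectangular image coordinates,
# or converted to cube-map faces, and the pixel colours splatted there.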

How to estimate intrinsic properties of a camera from data?

I am attempting camera calibration from a single RGB image (a panorama), given a 3D point cloud.
The methods that I have considered all require an intrinsic properties matrix (which I have no access to).
The intrinsic properties matrix can be estimated using Bouguet's camera calibration toolbox, but as I have said, I have only a single image and a single point cloud for that image.
So, knowing 2D image coordinates, extrinsic properties, and 3D world coordinates, how can the intrinsic properties be estimated?
It would seem that the initCameraMatrix2D function from OpenCV (https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) works in the same way as Bouguet's camera calibration toolbox and requires multiple images of the same object.
I am looking into the direct linear transformation (DLT) and the Levenberg–Marquardt algorithm, with implementations at https://drive.google.com/file/d/1gDW9zRmd0jF_7tHPqM0RgChBWz-dwPe1,
but it would seem that both use the pinhole camera model and therefore find a linear transformation between 3D and 2D points.
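For reference, this is roughly what the DLT step looks like if the pinhole model is accepted: build the linear system from the 3D-2D correspondences, take its null space, and decompose the resulting projection matrix to recover K. This is a hedged sketch, not the code from the linked implementations; world_pts and image_pts are placeholders for the point-cloud/image correspondences.

import cv2
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P from >= 6 non-coplanar
    3D-2D correspondences with the direct linear transformation."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=np.float64)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)      # null-space vector, reshaped to P

# world_pts / image_pts would come from the point cloud and the panorama;
# these names are placeholders.
# P = dlt_projection_matrix(world_pts, image_pts)
# K, R, t = cv2.decomposeProjectionMatrix(P)[:3]
# K /= K[2, 2]                       # normalise so K[2, 2] == 1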
I can't find my half-year-old source code, but from the top of my head:
cx, cy is the optical centre, which is width/2, height/2 in pixels
fx = fy is the focal length in pixels (the distance from the camera to the image plane, or to the axis of rotation)
If you know that the distance from the camera to the image is, for example, 30 cm, and it captures an image that is 16x10 cm and 1920x1200 pixels, then the pixel size is 100 mm / 1200 = 1/12 mm, the camera distance (fx, fy) would be 300 mm * 12 px/mm = 3600 px, and the image centre is cx = 1920/2 = 960, cy = 1200/2 = 600. I assume that the pixels are square and the camera sensor is centred on the optical axis.
You can get the focal length from the image size in pixels and a measured angle of view.
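The same arithmetic as a quick Python check (numbers taken from the example above):

sensor_mm = (160.0, 100.0)     # example image plane size: 16 x 10 cm
resolution = (1920, 1200)      # pixels
distance_mm = 300.0            # camera-to-image distance: 30 cm

pixel_size_mm = sensor_mm[1] / resolution[1]      # 100 mm / 1200 = 1/12 mm
fx = fy = distance_mm / pixel_size_mm             # 300 mm * 12 px/mm = 3600 px
cx, cy = resolution[0] / 2, resolution[1] / 2     # 960, 600

print(fx, fy, cx, cy)   # 3600.0 3600.0 960.0 600.0

# Alternatively, from a measured horizontal angle of view:
# fx = (resolution[0] / 2) / tan(fov_horizontal / 2)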

Calibration for cropped stereo pairs

I have a stereo image pair of, say, 100x100 resolution. I did the calibration, and I am able to rectify the pair properly and compute disparity for it. Now I have cropped images of size 50x50, with the ROI centred on the image centre. If I have to use the same calibration matrices, what should I do? Is adjusting the principal point in the camera matrix enough, or do we need to do anything else?
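A common approach, sketched below under the assumption of a centred crop with no resizing: the focal lengths and distortion coefficients stay the same, and only the principal point is shifted by the crop offset. The camera matrix values here are placeholders.

import numpy as np

# Original camera matrix from the 100x100 calibration (placeholder values).
K = np.array([[120.0,   0.0, 50.0],
              [  0.0, 120.0, 50.0],
              [  0.0,   0.0,  1.0]])

# A centred 50x50 crop from a 100x100 image starts at (25, 25).
crop_x, crop_y = 25, 25

K_cropped = K.copy()
K_cropped[0, 2] -= crop_x    # cx shifts by the crop offset
K_cropped[1, 2] -= crop_y    # cy shifts by the crop offset
# fx, fy are unchanged because cropping does not rescale the pixels;
# the distortion coefficients also stay the same.

The rectification maps (stereoRectify / initUndistortRectifyMap) would presumably then be recomputed with the shifted matrices and the new 50x50 image size.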

How to calibrate a ToF (Time-of-Flight) camera or lidar with an RGB camera

I am trying to get the depth of each pixel from an RGB camera.
So I use a ToF camera and a lidar (SICK) to get depth data through PCL and OpenNI.
In order to project the depth data onto the RGB image correctly, I need to know the rotation and translation (the so-called pose) of the ToF camera or lidar relative to the RGB camera.
The OpenCV module provides stereo calibration to get the pose between two RGB cameras.
But I cannot use the same approach, because the depth sensor only captures depth data, so chessboard corner detection for calibration will fail.
So... what should I do if I want to get the depth of each pixel from the RGB camera?
Thanks for any suggestions.
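For context, once a pose (R, t) between the depth sensor and the RGB camera has been obtained (however that calibration is done), projecting the depth points into the RGB image is the straightforward part. A minimal sketch with placeholder values throughout (none of these numbers come from the question):

import cv2
import numpy as np

# Placeholder values: pose of the depth sensor relative to the RGB camera,
# RGB intrinsics, and a set of 3D points from the ToF camera / lidar.
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])                     # made-up 5 cm baseline
K_rgb = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])
dist_rgb = np.zeros(5)
depth_points = np.random.rand(100, 3) + [0, 0, 1]  # stand-in 3D points (metres)

rvec, _ = cv2.Rodrigues(R)                         # OpenCV wants a rotation vector
pixels, _ = cv2.projectPoints(depth_points, rvec, t, K_rgb, dist_rgb)
pixels = pixels.reshape(-1, 2)                     # (u, v) per 3D point
# Each projected (u, v) can then be paired with the point's Z value in the
# RGB camera frame to build a sparse depth map for the RGB image.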

fisheye::estimateNewCameraMatrixForUndistortRectify opencv

I'm using this function to undistort images from a fisheye camera, and the result is very good, but I cannot find the skew coefficient to reduce the undistortion.
With non-fisheye cameras I use:
getOptimalNewCameraMatrix
where the alpha parameter can control the scaling of the result from 0 to 1.
But in
fisheye::estimateNewCameraMatrixForUndistortRectify
I cannot understand how to do this.
Can anyone suggest how?
OpenCV (for fisheye and non-fisheye cameras) uses a model based on the pinhole camera model.
In the case of a non-fisheye camera, you can undistort 100% of the initial image.
But for a fisheye camera with a FOV of ~180 degrees, the undistorted image would have infinite size. So fisheye::estimateNewCameraMatrixForUndistortRectify just calculates some "reasonable" zoom factor and doesn't let you keep 100% of the undistorted image surface.
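For what it's worth, the function does expose a knob that behaves somewhat like alpha: the balance argument (0 to 1) sets the new focal length between the minimum and maximum estimated focal lengths, and fov_scale further scales the field of view. A sketch with made-up calibration values (K, D and the image size are placeholders):

import cv2
import numpy as np

# Placeholder fisheye calibration (K, D would come from cv2.fisheye.calibrate).
K = np.array([[280.0, 0.0, 320.0],
              [0.0, 280.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, 0.01, -0.01, 0.002]).reshape(4, 1)   # fisheye coefficients
size = (640, 480)

# balance=0 keeps only the well-defined central part; balance=1 zooms out
# towards keeping as much of the source image as the model allows.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, size, np.eye(3), balance=0.3, fov_scale=1.0)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, size, cv2.CV_16SC2)

img = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a fisheye frame
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)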
