How to get skew from intrinsics/distortion - OpenCV

I have the camera intrinsics matrix and the distortion coefficients from OpenCV camera calibration.
I am wondering where the skew factor of the calibration is contained in these matrices and how I can get it as a float or float[].

Skew has been fixed at 0 at least since OpenCV 2.0.
In the general pinhole model the camera matrix looks like A = [fx skew u0; 0 fy v0; 0 0 1],
but in OpenCV it is A = [fx 0 u0; 0 fy v0; 0 0 1].
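If you still need to read that entry programmatically, it is element (0, 1) of the camera matrix returned by calibration; the distortion coefficients do not contain it. A minimal Python sketch, assuming cameraMatrix is the 3x3 matrix from a prior cv2.calibrateCamera call:
# cameraMatrix: 3x3 intrinsic matrix from cv2.calibrateCamera (assumed to exist already)
skew = float(cameraMatrix[0, 1])  # A[0][1]; always 0.0 with OpenCV's pinhole model
print(skew)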

Related

opencv Perspective transformation then rotate 90 degree CW in one step

In OpenCV, I know I can use a perspective transformation this way:
import cv2
import numpy as np

pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(300,300))
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
Then I can rotate the image by 90 degrees with the rotate function:
cv::rotate(image, image, ROTATE_90_CLOCKWISE);
But I think I should be able to do the two steps in one.
How should I create the matrix to multiply with the perspective transform M?
I thought about this:
0 -1 0
1 0 0
0 0 1
but this is wrong.
Note: I will do it in Java.
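For reference, one way to fold the 90° CW rotation into the same warp is to pre-multiply M by a rotation matrix that also translates the result back into the output frame; the pure rotation above lacks that translation, which is why it fails. A minimal Python sketch under that assumption (pts1, pts2 and img as in the question; the Java translation is mechanical):
import cv2
import numpy as np

w, h = 300, 300                                # size of the warped output
# 90° CW in image coordinates: (x, y) -> (h - 1 - y, x)
R = np.float32([[0, -1, h - 1],
                [1,  0, 0],
                [0,  0, 1]])
M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(img, R @ M, (h, w))  # one warp does both steps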

bad distance results using stereo camera

I'm trying to measure distance in real time from a stereo pair to a person detected in the scene. First I calibrated both cameras separately with a 9x6 checkerboard (square size of 59 mm) and obtained an RMS error between 0.15 and 0.19 for both cameras. Using the obtained parameters I calibrated the stereo pair, and the RMS error was 0.36. Later, I rectified, undistorted and remapped the stereo pair, giving me this result:
rectified and undistorted stereo
With that done, I computed the stereo correspondence using StereoSGBM. This is how I did it:
Mat imgDisp = Mat(frame1.rows, frame1.cols, CV_16S); // Mat takes (rows, cols, type)
cvtColor(frame1, frame1, CV_BGR2GRAY);
cvtColor(frame2, frame2, CV_BGR2GRAY);
// parameters for StereoSGBM
stereo.SADWindowSize = 3;
stereo.numberOfDisparities = 144;
stereo.preFilterCap = 63;
stereo.minDisparity = -39;
stereo.uniquenessRatio = 10;
stereo.speckleWindowSize = 100;
stereo.speckleRange = 32;
stereo.disp12MaxDiff = 1;
stereo.fullDP = false;
stereo.P1 = 216;
stereo.P2 = 864;
stereo(frame1, frame2, imgDisp); // compute the 16-bit fixed-point disparity map
double minVal; double maxVal;
minMaxLoc(imgDisp, &minVal, &maxVal);
return imgDisp;
I attached the result from StereoSGBM here: disparity map.
To detect a person in the scene I used HOG + SVM (the default people detector) and tracked that person with optical flow (cvCalcOpticalFlowPyrLK()). Using the disparity map obtained in the stereo correspondence step, I obtained the disparity for each corner tracked on one person as follows:
int x = cornersA[k].x;
int y = cornersA[k].y;
short pixVal = mapaDisp.at<short>(y, x);
float dispFeatures = pixVal / 16.0f; // SGBM disparities are fixed-point, scaled by 16
With the disparity of each corner tracked on the person, I took the maximum disparity and computed the depth at that pixel using the formula (focal * baseline) / disparity:
float Disp =maxDisp_v[p];
cout<< "max disp"<< Disp<<endl;
float d = ((double)(879.85* 64.32)/(double)(Disp))/10; //distance in cms.
** For the focal length I averaged fx and fy from the 3x3 camera matrices:
CM1: [9.0472706037497187e+02 0. 3.7829164759284492e+02;
      0. 8.4576999835299739e+02 1.8649783393160138e+02;
      0. 0. 1.]
CM2: [9.1390904648169953e+02 0. 3.5700689147467887e+02;
      0. 8.5514555697053311e+02 2.1723345133656409e+02;
      0. 0. 1.]
so fx camera1: 904.7; fy camera1: 845.7; fx camera2: 913.9; fy camera2: 855.1
** The T[0,0] entry of the translation matrix matched the baseline I measured manually, so I assumed that is the correct baseline.
** Since the checkerboard square size is in mm, I assumed the baseline must be in the same unit, which is why I put 64.32 mm as the baseline.
The resulting distance is approx. 55 cm, but the real distance is 300 cm. I have checked many times but the measured distance is still incorrect: distanceResult
Please help me, I have no idea what I'm doing wrong.
*** I'm using OpenCV 2.4.9 on OS X.
I think you are making a mistake with units:
focal length is provided in pixels,
baseline is provided in cm
disparity is provided in pixels.
Right?
According to the formula you then have px * cm / px = cm, but you divide it by 10 and get dm. So your distance is around 55 dm (550 cm), roughly twice the real 300 cm, which is not a bad result for your approach.
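In other words, with consistent units the formula already yields centimetres, and the extra division by 10 is what shrinks the result. A tiny sketch of the unit-consistent version (variable names are only illustrative):
focal_px = (904.7 + 845.7) / 2.0    # average fx/fy of camera 1, in pixels
baseline_cm = 64.32                 # baseline expressed in cm (per the answer above)
def depth_cm(disparity_px):
    # px * cm / px = cm; no extra division by 10
    return focal_px * baseline_cm / disparity_px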
You cannot use the simple parallel-cameras triangulation formula on rectified images, because you need to undo the rectification homographies.
Use cv2.reprojectImageTo3D
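A minimal sketch of that suggestion, assuming disp is the raw StereoSGBM output and Q is the 4x4 reprojection matrix returned by stereoRectify:
import cv2
import numpy as np

disp_f = disp.astype(np.float32) / 16.0       # SGBM output is fixed-point, scaled by 16
points3d = cv2.reprojectImageTo3D(disp_f, Q)  # HxWx3 array of (X, Y, Z)
x, y = 320, 240                               # any tracked pixel
print("depth at pixel:", points3d[y, x, 2])   # Z is in the calibration units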

Why does the cube warp in unity

I want to develop an AR application using Unity and OpenCV. I use the solvePnP method in OpenCV to get the camera rotation matrix and translation as follows:
r11  r12  r13  tx
r21  r22  r23  ty
r31  r32  r33  tz
In Unity, projectionMatrix and WorldToCameraMatrix are 4x4 matrices, which correspond to the camera intrinsic matrix and the camera pose.
In order to align the two coordinate systems, I set
WorldToCameraMatrix=
r11 r12  r13 tx
-r21 -r22 -r23 -ty
-r31 -r32 -r33 -tz
0 0 0 1
and I set
projectMatrix =
2*f/w 0 0 0
0 2*f/h 0 0
0 0 -(far+near)/(far-near) -2*(far*near)/(far-near)
0 0 -1 0
After this, the rotation and translation are correct, but the cube warps heavily.
As this image:
cube warps
Can anyone help me find the reason? Thanks in advance.
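For comparison, a commonly used mapping from OpenCV intrinsics to an OpenGL/Unity-style projection matrix also folds in the principal point (cx, cy), which the matrix above leaves out. A sketch of that layout, where fx, fy, cx, cy, w, h, near and far are assumed to come from the calibration and viewport (the sign of the y terms depends on the image-origin convention):
import numpy as np

def gl_projection(fx, fy, cx, cy, w, h, near, far):
    # OpenCV intrinsics -> OpenGL-style projection matrix (column-vector convention)
    return np.array([
        [2*fx/w, 0,       1 - 2*cx/w,             0                     ],
        [0,      2*fy/h,  2*cy/h - 1,             0                     ],
        [0,      0,      -(far+near)/(far-near), -2*far*near/(far-near) ],
        [0,      0,      -1,                      0                     ]])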

3D Camera coordinates to world coordinates (change of basis?)

Suppose I have the coordinates X, Y, Z and orientation Rx, Ry, Rz of an object with respect to a camera.
In addition, I have the coordinates U, V, W and orientation Ru, Rv, Rw of this camera in the world.
How do I transform the position (location and rotation) of the object to its position in the world?
It sounds like a change of basis to me, but I haven't found a clear source yet.
I have found this document which is quite clear on the topic.
http://www.cse.psu.edu/~rcollins/CSE486/lecture12.pdf
It treats, among others, the reverse operation, i.e., going from world to camera 3D coordinates.
Pc = R ( Pw - C )
where Pc is a point in camera coordinates, Pw is a point in world coordinates, R is a rotation matrix and C is the camera position (its center in world coordinates).
Unfortunately it is rather cumbersome to add LaTeX formulae, so I will give some MATLAB code instead.
function lecture12_collins()
% for plotting simplicity I choose my points on plane z=0 in this example
% Point in the world
Pw = [2 2.5 0 1]';
% rotation
th = pi/3;
% translation
c = [1 2.5 0]';
% obtain world to camera coordinate matrix
T = GetT(th, c);
% calculate the camera coordinate
Pc = T*Pw
% get the camera to world coordinate
T_ = GetT_(th, c)
% Alternatively you could use the inverse matrix
% T_ = inv(T)   (since T = R*C in GetT)
% calculate the worldcoordinate
Pw_ = T_*Pc
assert (all(eq(Pw_ ,Pw)))
function T = GetT(th, c)
% I have assumed rotation around the z axis only here.
R = [cos(th) -sin(th) 0 0
sin(th) cos(th) 0 0
0 0 1 0
0 0 0 1];
C = [1 0 0 -c(1)
0 1 0 -c(2)
0 0 1 -c(3)
0 0 0 1];
T = R*C;
function T_ = GetT_(th, c)
% negate the angle
R_ = [cos(-th) -sin(-th) 0 0
sin(-th) cos(-th) 0 0
0 0 1 0
0 0 0 1];
% negate the translation
C_ = [1 0 0 c(1)
0 1 0 c(2)
0 0 1 c(3)
0 0 0 1];
T_ = C_*R_
So far this covers the location only. The rotation I solved by using extra knowledge I had about the setup: I know that my camera is perpendicular to the object and that its rotation is only around the z axis, so I can just add the rotation of the camera and that of the object.
In fact you have two bases: one relative to the camera, the other absolute (the world). So you basically want to transform your relative data into absolute data.
Location
This is the easiest one. You have to translate the (X,Y,Z) position by the vector t = (U,V,W). So all your positions in absolute coordinates are (Ax, Ay, Az) = (X,Y,Z) + t = (X+U, Y+V, Z+W).
Orientation
This is a bit more difficult. You have to find the rotation matrix that rotates your camera from (I assume) (0,0,1) to (Ru,Rv,Rw). You should look at basic rotation matrices in order to decompose the two rotations that take (0,0,1) to (Ru,Rv,Rw) (one about the X axis and one about the Z axis, for example). I advise you to draw the absolute basis and the vector (Ru,Rv,Rw) on a sheet of paper; it is the simplest way to get the right result.
So you have two basic rotation matrices r1 and r2. The resulting rotation matrix is r = r1*r2 (or r2*r1, it doesn't matter). The absolute orientation of your object is then (ARx, ARy, ARz) = r*(Rx,Ry,Rz).
Hope this helps !
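A compact way to express the original question's composition with homogeneous matrices, as a minimal numpy sketch; R_cam_obj, t_cam_obj, R_world_cam and t_world_cam are hypothetical names for the object pose in camera coordinates and the camera pose in the world:
import numpy as np

def make_T(R, t):
    # build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# object pose in world = (camera pose in world) composed with (object pose in camera)
T_world_obj = make_T(R_world_cam, t_world_cam) @ make_T(R_cam_obj, t_cam_obj)
R_world_obj = T_world_obj[:3, :3]   # world orientation of the object
t_world_obj = T_world_obj[:3, 3]    # world position of the object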

Calculate distance (disparity) OpenCV

-- Update 2 --
The following article is really useful (although it is using Python instead of C++) if you are using a single camera to calculate the distance: Find distance from camera to object/marker using Python and OpenCV
Best link is Stereo Webcam Depth Detection. The implementation of this open source project is really clear.
Below is the original question.
For my project I am using two cameras (stereo vision) to track objects and to calculate the distance. I calibrated them with the sample code of OpenCV and generated a disparity map.
I already implemented a method to track objects based on color (this generates a threshold image).
My question: How can I calculate the distance to the tracked colored objects using the disparity map/ matrix?
Below you can find a code snippet that gets the x,y and z coordinates of each pixel. The question: Is Point.z in cm, pixels, mm?
Can I get the distance to the tracked object with this code?
Thank you in advance!
cvReprojectImageTo3D(disparity, Image3D, _Q);
vector<CvPoint3D32f> PointArray;
CvPoint3D32f Point;
for (int y = 0; y < Image3D->rows; y++) {
    float *data = (float *)(Image3D->data.ptr + y * Image3D->step);
    for (int x = 0; x < Image3D->cols * 3; x = x + 3)
    {
        Point.x = data[x];
        Point.y = data[x+1];
        Point.z = data[x+2];
        PointArray.push_back(Point);
        // Depth > 10
        if (Point.z > 10)
        {
            printf("%f %f %f", Point.x, Point.y, Point.z);
        }
    }
}
cvReleaseMat(&Image3D);
--Update 1--
For example, I generated this thresholded image (from the left camera). I have almost the same for the right camera.
Besides the above threshold image, the application generates a disparity map. How can I get the Z-coordinates of the pixels of the hand in the disparity map?
I actually want to get all the Z-coordinates of the pixels of the hand to calculate the average Z-value (distance) (using the disparity map).
See these links: OpenCV: How-to calculate distance between camera and object using image?, Finding distance from camera to object of known size, http://answers.opencv.org/question/5188/measure-distance-from-detected-object-using-opencv/
If they don't solve your problem, write more details - why it isn't working, etc.
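For the averaging described in Update 1, a minimal sketch using the Python API rather than the old C API from the question; threshold_img is assumed to be the binary hand mask, disp the raw SGBM disparity and Q the reprojection matrix from stereoRectify:
import cv2
import numpy as np

points3d = cv2.reprojectImageTo3D(disp.astype(np.float32) / 16.0, Q)
mask = (threshold_img > 0) & (disp > 0)       # hand pixels with a valid disparity
mean_depth = points3d[:, :, 2][mask].mean()   # average Z over the hand, in calibration units
print("average distance to the hand:", mean_depth)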
The math for converting disparity (in pixels or image width percentage) to actual distance is pretty well documented (and not very difficult) but I'll document it here as well.
Below is an example given a disparity image (in pixels) and an input image width of 2K (2048 pixels across):
Convergence Distance is determined by the rotation between camera lenses. In this example it will be 5 meters. Convergence distance of 5 (meters) means that the disparity of objects 5 meters away is 0.
CD = 5 (meters)
Inverse of convergence distance is: 1 / CD
IZ = 1/5 = 0.2 (per meter)
Size of camera's sensor in meters
SS = 0.035 (meters) //35mm camera sensor
The width of a pixel on the sensor in meters
PW = SS/image resolution = 0.035 / 2048(image width) = 0.00001708984
The focal length of your cameras in meters
FL = 0.07 //70mm lens
InterAxial distance: The distance from the center of left lens to the center of right lens
IA = 0.0025 //2.5mm
The combination of the physical parameters of your camera rig
A = FL * IA / PW
Camera Adjusted disparity: (For left view only, right view would use positive [disparity value])
AD = 2 * (-[disparity value] / A)
From here you can compute actual distance using the following equation:
realDistance = 1 / (IZ - AD)
This equation only works for "toe-in" camera systems; parallel camera rigs use a slightly different equation to avoid infinite values, but I'll leave it at this for now. If you need the parallel version just let me know.
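Putting the steps above into one small function, with the example numbers from this answer as defaults; it is only a direct transcription of the formulas, not a calibrated setup:
def disparity_to_distance(disparity_px,
                          cd=5.0,        # convergence distance in meters
                          ss=0.035,      # sensor size in meters (35 mm)
                          width=2048,    # image width in pixels
                          fl=0.07,       # focal length in meters (70 mm)
                          ia=0.0025):    # interaxial distance in meters
    iz = 1.0 / cd                        # inverse convergence distance
    pw = ss / width                      # width of one pixel on the sensor
    a = fl * ia / pw                     # combined rig constant
    ad = 2.0 * (-disparity_px / a)       # camera-adjusted disparity (left view)
    return 1.0 / (iz - ad)               # real distance in meters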
# (inside the video-capture loop)
if len(puntos) == 2:
    x1, y1, w1, h1 = puntos[0]
    x2, y2, w2, h2 = puntos[1]
    if x1 < x2:
        distancia_pixeles = abs(x2 - (x1 + w1))
        # presumably 720 px of the warped A4 image correspond to 29.7 cm (the A4 long side)
        distancia_cm = (distancia_pixeles * 29.7) / 720
        cv2.putText(imagen_A4, "{:.2f} cm".format(distancia_cm),
                    (x1 + w1 + distancia_pixeles // 2, y1 - 30), 2, 0.8, (0, 0, 255), 1,
                    cv2.LINE_AA)
        cv2.line(imagen_A4, (x1 + w1, y1 - 20), (x2, y1 - 20), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x1 + w1, y1 - 30), (x1 + w1, y1 - 10), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x2, y1 - 30), (x2, y1 - 10), (0, 0, 255), 2)
    else:
        distancia_pixeles = abs(x1 - (x2 + w2))
        distancia_cm = (distancia_pixeles * 29.7) / 720
        cv2.putText(imagen_A4, "{:.2f} cm".format(distancia_cm),
                    (x2 + w2 + distancia_pixeles // 2, y2 - 30), 2, 0.8, (0, 0, 255), 1,
                    cv2.LINE_AA)
        cv2.line(imagen_A4, (x2 + w2, y2 - 20), (x1, y2 - 20), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x2 + w2, y2 - 30), (x2 + w2, y2 - 10), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x1, y2 - 30), (x1, y2 - 10), (0, 0, 255), 2)
cv2.imshow('imagen_A4', imagen_A4)
cv2.imshow('frame', frame)
k = cv2.waitKey(1) & 0xFF
if k == 27:
    break
# (after the loop)
cap.release()
cv2.destroyAllWindows()
I think this is a good way to measure the distance between two objects.
