I'm trying to better understand the calibrateCamera and solvePnP functions in OpenCV, specifically the rotation vectors returned by these functions, which I believe are axis-angle rotation vectors (NOT, as I had initially thought, the yaw, pitch, roll angles). I would like to know the rotation around the x, y and z axes of my checkerboard image. The OpenCV functions return a rotation vector in the form rot = [a, b, c].
Using this answer as a guide, I calculate the angle theta with theta = sqrt(a^2 + b^2 + c^2) and the rotation axis v = [a/theta, b/theta, c/theta].
Then I take these values and use the axis-angle to Euler conversion from euclideanspace.com, shown here:
heading = atan2(y * sin(angle)- x * z * (1 - cos(angle)) , 1 - (y^2 + z^2 ) * (1 - cos(angle)))
attitude = asin(x * y * (1 - cos(angle)) + z * sin(angle))
bank = atan2(x * sin(angle)-y * z * (1 - cos(angle)) , 1 - (x^2 + z^2) * (1 - cos(angle)))
I'm using one of the example OpenCV checkerboard images (Left01.jpg), shown below (note the frame axes in the upper left corner, with red = x, green = y, blue = z).
Using this image I get a rotation vector from calibrateCamera of [0.166,0.294,0.014]
Running these values through the calculations discussed and converting to degrees I get:
heading = 16.7 deg
attitude = 1.7 deg
bank = 9.3 deg
I believe these correspond to yaw, pitch, roll? The 16.7 degree heading seems high looking at the image, but it's hard to tell. Does this make sense? What would be the correct way to figure out the Euler angles (the angles around each axis) given the OpenCV rotation vector? Snippets of my code are shown below.
double RMSError = calibrateCamera(
    objectPointsArray,
    imagePointsArray,
    img.size(),
    intrinsics,
    distortion,
    rotation,
    translation,
    CALIB_ZERO_TANGENT_DIST |
    CALIB_FIX_K3 | CALIB_FIX_K4 | CALIB_FIX_K5 |
    CALIB_FIX_ASPECT_RATIO);
Mat rvec = rotation.at(0);
//try and get the rotation angles here
//https://stackoverflow.com/questions/12933284/rodrigues-into-eulerangles-and-vice-versa
float theta = sqrt(pow(rvec.at<double>(0),2) + pow(rvec.at<double>(1),2) + pow(rvec.at<double>(2),2));
Mat axis = (Mat_<double>(1, 3) << rvec.at<double>(0) / theta, rvec.at<double>(1) / theta, rvec.at<double>(2) / theta);
float x_ = axis.at<double>(0);
float y_ = axis.at<double>(1);
float z_ = axis.at<double>(2);
//this is yaw,pitch,roll respectively...maybe
float heading = atan2(y_ * sin(theta) - x_ * z_ * (1 - cos(theta)), 1 - (pow(y_,2) + pow(z_,2)) * (1 - static_cast<double>(cos(theta))));
float attitude = asin(x_ * y_ * (1 - cos(theta)) + z_ * sin(theta));
float bank = atan2(x_ * sin(theta) - y_ * z_ * (1 - cos(theta)), 1 - (pow(x_, 2) + pow(z_, 2)) * (1 - static_cast<double>(cos(theta))));
float headingDeg = heading * (180.0 / M_PI);
float attitudeDeg = attitude * (180.0 / M_PI);
float bankDeg = bank * (180.0 / M_PI);
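For comparison, here is a minimal sketch of another common route: convert rvec to a full 3x3 matrix with cv::Rodrigues and decompose that matrix under a Z·Y·X (yaw, pitch, roll) convention. The decomposition below is the standard textbook form, not something taken from the links above; other conventions reorder the terms, so the angles may not match euclideanspace.com's heading/attitude/bank exactly:
cv::Mat R;
cv::Rodrigues(rvec, R);                       // axis-angle vector -> 3x3 rotation matrix
double sy = std::sqrt(R.at<double>(0,0) * R.at<double>(0,0) +
                      R.at<double>(1,0) * R.at<double>(1,0));
double rollX, pitchY, yawZ;
if (sy > 1e-6) {                              // regular case
    rollX  = std::atan2(R.at<double>(2,1), R.at<double>(2,2));
    pitchY = std::atan2(-R.at<double>(2,0), sy);
    yawZ   = std::atan2(R.at<double>(1,0), R.at<double>(0,0));
} else {                                      // gimbal lock (pitch near +/-90 deg)
    rollX  = std::atan2(-R.at<double>(1,2), R.at<double>(1,1));
    pitchY = std::atan2(-R.at<double>(2,0), sy);
    yawZ   = 0;
}
The three angles can then be converted to degrees with 180.0 / M_PI as above.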
I have a rendered image. I want to apply the radial and tangential distortion coefficients I got from OpenCV to my image. Although there is an undistort function, there is no distort function. How can I distort my images with the distortion coefficients?
I was also looking for the same type of functionality. I couldn't find it, so I implemented it myself. Here is the C++ code.
First, you need to normalize the image point using the focal lengths and the optical center:
rpt(0) = (pt_x - cx) / fx
rpt(1) = (pt_y - cy) / fy
then distort the normalized image point
double x = rpt(0), y = rpt(1);
//determining the radial distortion
double r2 = x*x + y*y;
double icdist = 1 / (1 - ((D.at<double>(4) * r2 + D.at<double>(1))*r2 + D.at<double>(0))*r2);
//determining the tangential distortion
double deltaX = 2 * D.at<double>(2) * x*y + D.at<double>(3) * (r2 + 2 * x*x);
double deltaY = D.at<double>(2) * (r2 + 2 * y*y) + 2 * D.at<double>(3) * x*y;
x = (x + deltaX)*icdist;
y = (y + deltaY)*icdist;
then you can translate and scale the point using the center of projection and focal length:
x = x * fx + cx
y = y * fy + cy
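Putting the snippets above into one function, here is a minimal sketch (the function name is mine, and I'm assuming K is the 3x3 camera matrix and D the 1x5 distortion vector (k1, k2, p1, p2, k3) returned by calibrateCamera):
#include <opencv2/core.hpp>

cv::Point2d distortPoint(const cv::Point2d& pt, const cv::Mat& K, const cv::Mat& D)
{
    double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
    double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);

    // normalize the image point
    double x = (pt.x - cx) / fx;
    double y = (pt.y - cy) / fy;

    // radial distortion, D = (k1, k2, p1, p2, k3)
    double r2 = x * x + y * y;
    double icdist = 1 / (1 - ((D.at<double>(4) * r2 + D.at<double>(1)) * r2 + D.at<double>(0)) * r2);

    // tangential distortion
    double deltaX = 2 * D.at<double>(2) * x * y + D.at<double>(3) * (r2 + 2 * x * x);
    double deltaY = D.at<double>(2) * (r2 + 2 * y * y) + 2 * D.at<double>(3) * x * y;

    x = (x + deltaX) * icdist;
    y = (y + deltaY) * icdist;

    // back to pixel coordinates
    return cv::Point2d(x * fx + cx, y * fy + cy);
}
This maps a single point; to distort a whole image you would evaluate a per-pixel map and use cv::remap, keeping in mind that remap expects the destination-to-source mapping.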
I'm training a YOLO model, and I have the bounding boxes in this format:
x1, y1, x2, y2 => e.g. (100, 100, 200, 200)
I need to convert it to YOLO format, to be something like this:
X, Y, W, H => 0.436262 0.474010 0.383663 0.178218
I already calculated the center point (X, Y), the height H, and the width W.
But I still need a way to convert them to the normalized floating-point numbers shown above.
For those looking for the reverse of the question (YOLO format to normal bbox format):
def yolobbox2bbox(x,y,w,h):
    x1, y1 = x-w/2, y-h/2
    x2, y2 = x+w/2, y+h/2
    return x1, y1, x2, y2
Here's a code snippet in Python to convert x,y coordinates to YOLO format:
from PIL import Image

def convert(size, box):
    # box is (xmin, xmax, ymin, ymax); size is (image_width, image_height)
    dw = 1./size[0]
    dh = 1./size[1]
    x = (box[0] + box[1])/2.0
    y = (box[2] + box[3])/2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x, y, w, h)

im = Image.open(img_path)
w = int(im.size[0])
h = int(im.size[1])
print(xmin, xmax, ymin, ymax)  # define your x,y coordinates first
b = (xmin, xmax, ymin, ymax)
bb = convert((w, h), b)
Check my sample program to convert from LabelMe annotation tool format to Yolo format https://github.com/ivder/LabelMeYoloConverter
There is a more straightforward way to do this with pybboxes. Install it with:
pip install pybboxes
and use it as below:
import pybboxes as pbx
voc_bbox = (100, 100, 200, 200)
W, H = 1000, 1000 # WxH of the image
pbx.convert_bbox(voc_bbox, from_type="voc", to_type="yolo", image_size=(W,H))
>>> (0.15, 0.15, 0.1, 0.1)
Note that converting to YOLO format requires the image width and height for scaling.
YOLO normalises the image space to run from 0 to 1 in both x and y directions. To convert between your (x, y) coordinates and YOLO (u, v) coordinates you need to transform your data as u = x / XMAX and v = y / YMAX, where XMAX, YMAX are the maximum coordinates for the image array you are using.
This all depends on the image arrays being oriented the same way.
Here is a C function to perform the conversion
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <math.h>
struct yolo {
    float u;
    float v;
};

struct yolo
convert (unsigned int x, unsigned int y, unsigned int XMAX, unsigned int YMAX)
{
    struct yolo point;

    if (XMAX && YMAX && (x <= XMAX) && (y <= YMAX))
    {
        point.u = (float)x / (float)XMAX;
        point.v = (float)y / (float)YMAX;
    }
    else
    {
        point.u = INFINITY;
        point.v = INFINITY;
        errno = ERANGE;
    }

    return point;
}/* convert */

int main()
{
    struct yolo P;

    P = convert (99, 201, 255, 324);
    printf ("Yolo coordinate = <%f, %f>\n", P.u, P.v);

    exit (EXIT_SUCCESS);
}/* main */
There are two potential solutions. First of all, you have to understand whether your source bounding box is in Coco or Pascal_VOC format; otherwise you can't do the right math.
Here is the formatting:
Coco Format: [x_min, y_min, width, height]
Pascal_VOC Format: [x_min, y_min, x_max, y_max]
Here is some Python code showing how you can do the conversion:
Converting Coco to Yolo
# Convert Coco bb to Yolo
def coco_to_yolo(x1, y1, w, h, image_w, image_h):
    return [((2*x1 + w)/(2*image_w)), ((2*y1 + h)/(2*image_h)), w/image_w, h/image_h]
Converting Pascal_voc to Yolo
# Convert Pascal_Voc bb to Yolo
def pascal_voc_to_yolo(x1, y1, x2, y2, image_w, image_h):
    return [((x2 + x1)/(2*image_w)), ((y2 + y1)/(2*image_h)), (x2 - x1)/image_w, (y2 - y1)/image_h]
If you need additional conversions, you can check my article on Medium: https://christianbernecker.medium.com/convert-bounding-boxes-from-coco-to-pascal-voc-to-yolo-and-back-660dc6178742
For YOLO format to x1,y1,x2,y2 format:
def yolobbox2bbox(x, y, w, h, dw, dh):  # dw, dh are the image width and height in pixels
    x1 = int((x - w / 2) * dw)
    x2 = int((x + w / 2) * dw)
    y1 = int((y - h / 2) * dh)
    y2 = int((y + h / 2) * dh)

    if x1 < 0:
        x1 = 0
    if x2 > dw - 1:
        x2 = dw - 1
    if y1 < 0:
        y1 = 0
    if y2 > dh - 1:
        y2 = dh - 1

    return x1, y1, x2, y2
There are two things you need to do:
Divide the coordinates by the image size to normalize them to [0..1] range.
Convert (x1, y1, x2, y2) coordinates to (center_x, center_y, width, height).
If you're using PyTorch, Torchvision provides a function that you can use for the conversion:
from torch import tensor
from torchvision.ops import box_convert
image_size = tensor([608, 608])
boxes = tensor([[100, 100, 200, 200], [300, 300, 400, 400]], dtype=float)
boxes[:, :2] /= image_size
boxes[:, 2:] /= image_size
boxes = box_convert(boxes, "xyxy", "cxcywh")
Just reading the answers: I was also looking for this, but found this explanation more useful for understanding what is happening at the back end.
From here: Source
Assuming x/ymin and x/ymax are your bounding corners, top left and bottom right respectively, and remembering that YOLO wants the box centre rather than the corner:
x = (xmin + xmax) / 2
y = (ymin + ymax) / 2
w = xmax - xmin
h = ymax - ymin
You then need to normalize these, which means giving them as a proportion of the whole image, so simply divide each value by its respective image size:
x = (xmin + xmax) / (2 * width)
y = (ymin + ymax) / (2 * height)
w = (xmax - xmin) / width
h = (ymax - ymin) / height
This assumes a top-left origin; you will have to apply a shift factor if that is not the case.
I have an image, and I place a rectangle on the image. Then I rotate the image. How do I get the center of the rectangle on the rotated image?
Or can I somehow rotate the rectangle to put it on the rotated image? I think in that case the rotation must be done around the same point that was used to rotate the image.
This is the image with a rectangle placed on it.
This is the rotated image.
Here is the code I use to rotate my image:
cv::Mat frame, frameRotated;
frame = cv::imread("lena.png");
cv::Rect rect(225,250,150,150);
cv::rectangle(frame, rect, cv::Scalar(0,0,255),2);
int theta = 30;
double radians = theta * M_PI / 180.0;
double sinTheta = std::abs(std::sin(radians));
double cosTheta = std::abs(std::cos(radians));

int newWidth = (int) (frame.cols * cosTheta + frame.rows * sinTheta);
int newHeight = (int) (frame.cols * sinTheta + frame.rows * cosTheta);
cv::Mat targetMat(cv::Size(newWidth, newHeight), frame.type());
int offsetX = (newWidth - frame.cols) / 2;
int offsetY = (newHeight - frame.rows) / 2;
frame.copyTo(targetMat.rowRange(offsetY, offsetY + frame.rows).colRange(offsetX, offsetX + frame.cols));
cv::Point2f src_center(targetMat.cols/2.0F, targetMat.rows/2.0F);
cv::Mat rot_mat = cv::getRotationMatrix2D(src_center, theta, 1.0);
cv::warpAffine(targetMat, frameRotated, rot_mat, targetMat.size());
imshow("frame", frame);
imshow("rotated frame", frameRotated);
EDIT
Suppose I have a point in the rotated image; how do I get the corresponding point in the original image using the rotation matrix?
You only need to use rot_mat to transform the original center of the rect. I tested the below and it works:
cv::Rect r(250, 350, 20, 30);
cv::Point2d c(r.x + r.width / 2, r.y + r.height / 2);
// c is center of rect
// get c's location in targetMat when frame is copied
c.x += offsetX;
c.y += offsetY;
int theta = 30;
double radians = theta * M_PI / 180.0;
cv::Point2d src_center(targetMat.cols/2.0F, targetMat.rows/2.0F);
cv::Mat rot_mat = cv::getRotationMatrix2D(src_center, theta, 1.0);
// now transform point using rot_mat
double *x = rot_mat.ptr<double>(0);
double *y = rot_mat.ptr<double>(1);
Point2d dst(x[0] * c.x + x[1] * c.y + x[2],
y[0] * c.x + y[1] * c.y + y[2]);
// dst is center of transformed rect
EDIT
To transform a point from the rotated image you just need to reverse the process:
// undo translation
Point2d dst1(dst.x - x[2], dst.y - y[2]);
// undo rotation
Point2d dst2(x[0] * dst1.x - x[1] * dst1.y, -y[0] * dst1.x + y[1] * dst1.y);
// undo shift
Point2d in_unrotated_image(dst2.x - offsetX, dst2.y - offsetY);
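As a side note (not part of the original answer), OpenCV can also invert the 2x3 affine matrix for you, which avoids writing out the inverse by hand. A sketch, using the same rot_mat, dst, offsetX and offsetY as above:
cv::Mat inv_rot;
cv::invertAffineTransform(rot_mat, inv_rot);        // inverse of the 2x3 rotation + translation

std::vector<cv::Point2d> rotated{ dst }, unrotated;
cv::transform(rotated, unrotated, inv_rot);         // map the point back into targetMat coordinates

cv::Point2d in_unrotated_image(unrotated[0].x - offsetX,   // undo the copy offset
                               unrotated[0].y - offsetY);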
You can map any point in your source Mat to the rotated Mat by multiplying it with the rotation matrix.
If you need to map the point (X, Y), append a 1 as the homogeneous coordinate and do the Mat multiplication:
| cosθ   sinθ   Tx |   | X |   | _X |
| -sinθ  cosθ   Ty | * | Y | = | _Y |
                       | 1 |
where Tx and Ty are the translations along x and y; see the OpenCV doc.
Suppose you need to find the centre (cent_x, cent_y) of the source Mat in the rotated Mat:
Mat rot_mat = getRotationMatrix2D(src_center, theta, 1.0); //Find the rotation matrix
Mat co_Ordinate = (Mat_<double>(3,1) << cent_x,cent_y,1); //Create 3x1 matrix with input co-ordinates.
Mat rst=rot_mat*co_Ordinate; // Multiply rotation matrix with input co-ordinate matrix
trans_x=(int)rst.at<double>(0,0); //First row of result will be x
trans_y=(int)rst.at<double>(1,0); //Second row of result will be y.
Hope this helps.
I've solved the problem in the second (EDIT) question.
I used the same code provided in the accepted answer and created the rotation matrix with minus theta.
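Spelled out, that idea looks roughly like this (ptInRotated is a hypothetical point in the rotated image; src_center, offsetX and offsetY are the same as in the question):
// Rotate back around the same center with -theta, then undo the copy offset.
cv::Mat inv_rot_mat = cv::getRotationMatrix2D(src_center, -theta, 1.0);

std::vector<cv::Point2d> rotated{ ptInRotated }, unrotated;
cv::transform(rotated, unrotated, inv_rot_mat);

cv::Point2d in_original(unrotated[0].x - offsetX, unrotated[0].y - offsetY);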
I need to calculate a warp matrix in OpenCV, representing the rotation around a given axis.
Around the Z axis it's straightforward: I use the standard rotation matrix
[cos(a) -sin(a) 0]
[sin(a) cos(a) 0]
[ 0 0 1]
This is not so obvious for other rotations, so I tried building a homography, as explained on Wikipedia:
H = R - t n^T / d
I tried with a simple rotation around the X axis, and assuming the distance between the camera and the image is twice the image height.
R is the standard rotation matrix
[1 0 0]
[0 cos(a) -sin(a)]
[0 sin(a) cos(a)]
n is [0 0 1] because the camera is looking directly at the image from (0, 0, z_cam)
t is the translation, which should be [0, -2h*sin(a), -2h*(1-cos(a))]
d is the distance, and it's 2h by definition.
So, the final matrix is:
[1 0 0]
[0 cos(a) 0]
[0 sin(a) 1]
which looks quite good: when a = 0 it's the identity, and when a = pi it's mirrored around the x axis.
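For concreteness, here is a minimal sketch of how this matrix could be assembled and fed to warpPerspective (variable names are illustrative; this just reproduces the construction above, it is not the fix):
double a = 0.2;                        // rotation around the X axis, in radians
double d = 2.0 * img.rows;             // camera distance, 2h as above

cv::Mat R = (cv::Mat_<double>(3, 3) <<
    1, 0,            0,
    0, std::cos(a), -std::sin(a),
    0, std::sin(a),  std::cos(a));
cv::Mat t = (cv::Mat_<double>(3, 1) << 0, -d * std::sin(a), -d * (1 - std::cos(a)));
cv::Mat n = (cv::Mat_<double>(1, 3) << 0, 0, 1);

cv::Mat H = R - t * n / d;             // H = R - t n^T / d
cv::Mat warped;
cv::warpPerspective(img, warped, H, img.size());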
And yet, using this matrix for a perspective warp doesn't yield the expected result: the image is just "too warped" for small values of a, and disappears very quickly.
So, what am I doing wrong?
(note: I've read many questions and answers about this subject, but all are going in the opposite direction: I don't want to decompose a homography matrix, but rather to build one, given a 3d transformation, and a "fixed" camera or a fixed image and a moving camera).
Thank you.
Finally found a way, thanks to this post: https://plus.google.com/103190342755104432973/posts/NoXQtYQThgQ
I let OpenCV calculate the matrix for me, but I'm doing the perspective transform myself (I found it easier to implement than putting everything into a cv::Mat).
float rotx, roty, rotz; // set these first
int f = 2; // this is also configurable, f=2 should be about 50mm focal length
int h = img.rows;
int w = img.cols;
float cx = cosf(rotx), sx = sinf(rotx);
float cy = cosf(roty), sy = sinf(roty);
float cz = cosf(rotz), sz = sinf(rotz);
float roto[3][2] = { // last column not needed, our vector has z=0
    { cz * cy, cz * sy * sx - sz * cx },
    { sz * cy, sz * sy * sx + cz * cx },
    { -sy, cy * sx }
};

float pt[4][2] = {{ -w / 2, -h / 2 }, { w / 2, -h / 2 }, { w / 2, h / 2 }, { -w / 2, h / 2 }};
float ptt[4][2];
for (int i = 0; i < 4; i++) {
    float pz = pt[i][0] * roto[2][0] + pt[i][1] * roto[2][1];
    ptt[i][0] = w / 2 + (pt[i][0] * roto[0][0] + pt[i][1] * roto[0][1]) * f * h / (f * h + pz);
    ptt[i][1] = h / 2 + (pt[i][0] * roto[1][0] + pt[i][1] * roto[1][1]) * f * h / (f * h + pz);
}

cv::Mat in_pt = (cv::Mat_<float>(4, 2) << 0, 0, w, 0, w, h, 0, h);
cv::Mat out_pt = (cv::Mat_<float>(4, 2) << ptt[0][0], ptt[0][1],
                  ptt[1][0], ptt[1][1], ptt[2][0], ptt[2][1], ptt[3][0], ptt[3][1]);
cv::Mat transform = cv::getPerspectiveTransform(in_pt, out_pt);
cv::Mat img_in = img.clone();
cv::warpPerspective(img_in, img, transform, img_in.size());