JavaCV or OpenCV Image Rotation

My objective is to rotate an image by a certain angle (e.g. 30 degrees). One possible way of rotating by 90 degrees in OpenCV is given by tenta4, but unfortunately it only performs 90-degree flips.
Another possible way is the "SkewGrayImage" method given in the JavaCV samples, which performs "small angle rotations" that appear to work for rotations of up to approximately 45-50 degrees, but not for higher values.
So my issue is: is there a proper way/method in OpenCV or JavaCV to perform a rotation of images or objects by an arbitrary angle?

Meta has explained how to compute a rotation matrix with respect to the center of the image and then to perform a rotation as follows:
Point2f src_center(src.cols / 2.0F, src.rows / 2.0F);
Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0); // angle in degrees, e.g. 30
Mat rotated_image;
warpAffine(src, rotated_image, rot_mat, src.size());

There is an operation called warp; it can not only rotate the image but also apply other transformations to it.
Some useful links:
https://docs.opencv.org/2.4.13.2/modules/stitching/doc/warpers.html
https://docs.opencv.org/3.1.0/db/d29/group__cudawarping.html
https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
Hope it helps ;)

A more detailed answer for IplImage rotation is given by Martin, based on Mat variables that can then be converted and returned as an IplImage:
Mat source = imread(argv[1], CV_LOAD_IMAGE_COLOR);
Point2f src_center(source.cols / 2.0F, source.rows / 2.0F);
double angle = 30.0; // rotation angle in degrees
Mat rotation_matrix = getRotationMatrix2D(src_center, angle, 1.0);
Mat destinationMat;
warpAffine(source, destinationMat, rotation_matrix, source.size());
IplImage iplframe = IplImage(destinationMat); // wrap the Mat data in the old C-API type

Hope this helps! Worked for me with JavaCV.
Mat raw = ... // your raw mat
// Create your "new" Mat and the center of your Raw Mat
Mat result = new Mat(raw.size(), [your Image Type]); // my Img type was CV_8U
Point2f rawCenter = new Point2f(raw.cols() / 2.0F, raw.rows() / 2.0F);
// Scale and Rotation of new Mat
double scale = 1.0;
int rotation = -5;
// Rotation Matrix
Mat rotationMatrix = getRotationMatrix2D(rawCenter, rotation, scale);
// Rotate
warpAffine(raw, result, rotationMatrix, raw.size());

Related

Pose Estimation - cv::SolvePnP with Scenekit - Coordinate System Question

I have been working on pose estimation (rectifying key points on a 3D model with 2D points on an image to match the pose) via OpenCV's cv::solvePnP, using features / key points from Apple's Vision framework.
TL;DR:
My SceneKit model is being translated, and the units look correct when introspecting the translation and rotation vectors from solvePnP (i.e., they are the right order of magnitude), but the coordinate system of the translation appears off:
I am trying to understand the coordinate system requirements of solvePnP with respect to the Metal / OpenGL coordinate system and my camera projection matrix.
What projectionMatrix does my SCNCamera require to match the image-based coordinate system passed into solvePnP?
Some things I've read / believe I am taking into account:
OpenCV and OpenGL (and thus Metal) differ in row-major vs. column-major conventions.
OpenCV's coordinate system for 3D is different from OpenGL's (and thus Metal's).
Longer with code:
My workflow is as such:
Step 1 - Use a 3D model tool to introspect points on my 3D model and get the object's vertex positions for the major key points in the 2D detected features. I am using the left pupil, right pupil, tip of nose, tip of chin, left outer lip corner, and right outer lip corner.
Step 2 - Run a Vision request, convert the resulting points to image space (flipping to OpenCV's top-left coordinate system), and extract the same ordered list of 2D points.
Step 3 - Construct a camera matrix by using the size of the input image.
Step 4 - run cv::solvePnP, and then use cv::Rodrigues to convert the rotation vector to a matrix
Step 5 - Convert the coordinate system of the resulting transforms into something appropriate for the GPU - invert the y and z axes, combine the translation and rotation into a single 4x4 matrix, and then transpose it for the appropriate major order of OpenGL / Metal.
Step 6 - apply the resulting transform to Scenekit via:
let faceNodeTransform = openCVWrapper.transform(for: landmarks, imageSize: size)
self.destinationView.pointOfView?.transform = SCNMatrix4Invert(faceNodeTransform)
Below is my Obj-C++ OpenCV wrapper, which takes in a subset of Vision landmarks and the true pixel size of the image being looked at:
// https://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/
- (SCNMatrix4) transformFor:(VNFaceLandmarks2D*)landmarks imageSize:(CGSize)imageSize
{
// 1 Convert landmarks to image points (pixels) as a vector of cv::Point2f:
// Note that this translates the point coordinate system to be top left oriented for OpenCV's image coordinates:
std::vector<cv::Point2f > imagePoints = [self imagePointsForLandmarks:landmarks imageSize:imageSize];
// 2 Load Model Points
std::vector<cv::Point3f > modelPoints = [self modelPoints];
// 3 create our camera intrinsic matrix
// TODO - see if this is sane?
double max_d = fmax(imageSize.width, imageSize.height);
cv::Mat cameraMatrix = (cv::Mat_<double>(3,3) << max_d, 0,     imageSize.width/2.0,
                                                 0,     max_d, imageSize.height/2.0,
                                                 0,     0,     1.0);
// 4 Run solvePnP
double distanceCoef[] = {0,0,0,0};
cv::Mat distanceCoefMat = cv::Mat(1 ,4 ,CV_64FC1,distanceCoef);
// Output matrices (rotation and translation vectors)
std::vector<double> rv(3);
cv::Mat rotationOut = cv::Mat(rv);
std::vector<double> tv(3);
cv::Mat translationOut = cv::Mat(tv);
cv::solvePnP(modelPoints, imagePoints, cameraMatrix, distanceCoefMat, rotationOut, translationOut, false, cv::SOLVEPNP_EPNP);
// 5 Convert rotation matrix (actually a vector)
// To a real 4x4 rotation matrix:
cv::Mat viewMatrix = cv::Mat::zeros(4, 4, CV_64FC1);
cv::Mat rotation;
cv::Rodrigues(rotationOut, rotation);
// Copy the rotation and translation into the matrix; the bottom-right element is set to 1 below:
for(unsigned int row = 0; row < 3; ++row)
{
    for(unsigned int col = 0; col < 3; ++col)
    {
        viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
    }
    viewMatrix.at<double>(row, 3) = translationOut.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0f;
// Flip the y and z axes to convert from OpenCV to OpenGL camera coordinates
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64FC1);
cvToGl.at<double>(0, 0) = 1.0f;
cvToGl.at<double>(1, 1) = -1.0f; // Invert the y axis
cvToGl.at<double>(2, 2) = -1.0f; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0f;
viewMatrix = cvToGl * viewMatrix;
// Finally transpose to get correct SCN / OpenGL Matrix :
cv::Mat glViewMatrix = cv::Mat::zeros(4, 4, CV_64FC1);
cv::transpose(viewMatrix , glViewMatrix);
return [self convertCVMatToMatrix4:glViewMatrix];
}
- (SCNMatrix4) convertCVMatToMatrix4:(cv::Mat)matrix
{
SCNMatrix4 scnMatrix = SCNMatrix4Identity;
scnMatrix.m11 = matrix.at<double>(0, 0);
scnMatrix.m12 = matrix.at<double>(0, 1);
scnMatrix.m13 = matrix.at<double>(0, 2);
scnMatrix.m14 = matrix.at<double>(0, 3);
scnMatrix.m21 = matrix.at<double>(1, 0);
scnMatrix.m22 = matrix.at<double>(1, 1);
scnMatrix.m23 = matrix.at<double>(1, 2);
scnMatrix.m24 = matrix.at<double>(1, 3);
scnMatrix.m31 = matrix.at<double>(2, 0);
scnMatrix.m32 = matrix.at<double>(2, 1);
scnMatrix.m33 = matrix.at<double>(2, 2);
scnMatrix.m34 = matrix.at<double>(2, 3);
scnMatrix.m41 = matrix.at<double>(3, 0);
scnMatrix.m42 = matrix.at<double>(3, 1);
scnMatrix.m43 = matrix.at<double>(3, 2);
scnMatrix.m44 = matrix.at<double>(3, 3);
return (scnMatrix);
}
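For completeness, the wrapper above calls an imagePointsForLandmarks:imageSize: helper that is not shown. Below is a minimal C++ sketch of the coordinate flip that helper needs to perform (Vision reports points with a bottom-left origin, while OpenCV expects a top-left origin); the function name and the assumption that the points are already in pixels are mine, not part of the original code:
// Hypothetical helper: flip a bottom-left-origin pixel coordinate (as produced by
// Vision-style APIs) into OpenCV's top-left-origin image space by inverting y.
cv::Point2f toOpenCVImagePoint(float x, float y, float imageHeight)
{
    return cv::Point2f(x, imageHeight - y);
}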
Some questions:
An SCNNode has no modelViewMatrix (just, as I understand it, a transform, which is the modelMatrix) to throw a matrix at - so I've read that the inverse of the transform from the solvePnP process can be used to pose the camera instead, which appears to get me the closest result. I want to ensure this approach is correct.
If I have the modelViewMatrix and the projectionMatrix, I should be able to calculate the appropriate modelMatrix? Is this the approach I should be taking?
It's unclear to me what projectionMatrix I should be using for my SceneKit scene and if that has any bearing on my results. Do I need a pixel-for-pixel exact match of my viewport to the image size, and how do I properly configure my SCNCamera to ensure coordinate system agreement for solvePnP?
Thank you very much!

OpenCV, use estimateRigidTransform to transform a contour

OK, here's my scenario. I built a paper detection app that finds a piece of paper in an image. It doesn't work 100% of the time, given white balance and focusing changes, so if we found a sheet of paper in frame 1, we want to show a border around it in frame 2 (in reality there can be a gap of many frames), even if we didn't find it in frame 2. In order to do so, we keep the old image and the old 4-point contour from frame 1.
Then, in frame N, where we did not find a convex contour, we want to transform the old contour using an affine transformation that we compute with estimateRigidTransform.
I am 100% positive that my math is slightly off, but I'm not sure where:
// new image = 3200 x 6400
// old image = 3200 x 6400
// old contour = contour found in old image, same scale
vector<cv::Point> transformContourWithNewImage(Mat & newImage, Mat & oldImage, vector<cv::Point> oldContour) {
CGFloat ratio = newImage.size().height / 500.0;
cv::Size outputSize = cv::Size(newImage.size().width / ratio, 500);
Mat image_copy;
resize(newImage, image_copy, outputSize); //shrink images down so that computations are cheaper
Mat oldImage_copy = cv::Mat();
resize(oldImage, oldImage_copy, outputSize);
cv::Mat transform = estimateRigidTransform(image_copy, oldImage_copy, false);
vector<cv::Point> transformedPoints;
cv::transform(oldContour, transformedPoints, transform);
return transformedPoints;
}
I think that I need to scale the transform, since it was computed on a smaller image than the one the contour vector refers to. I also get a crash saying my transform Mat has the wrong number of rows/cols.
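As a sanity check on that scaling idea, here is a minimal sketch (illustrative names, not the original app's code) that assumes the transform should map the old frame onto the new one: it scales the contour down into the resized-image coordinates before applying cv::transform, then scales the result back up. Note that estimateRigidTransform returns an empty Mat when it cannot find a transform, which could also explain a rows/cols crash:
vector<cv::Point2f> transformOldContour(const Mat &newSmall, const Mat &oldSmall,
                                        const vector<cv::Point> &oldContour, double ratio)
{
    vector<cv::Point2f> scaled, moved;
    cv::Mat M = estimateRigidTransform(oldSmall, newSmall, false); // old frame -> new frame
    if (M.empty())
        return moved; // estimation failed; nothing sensible to transform with
    for (size_t i = 0; i < oldContour.size(); i++)
        scaled.push_back(cv::Point2f(oldContour[i].x / ratio, oldContour[i].y / ratio)); // into small-image coords
    cv::transform(scaled, moved, M); // apply the 2x3 affine matrix
    for (size_t i = 0; i < moved.size(); i++)
    {
        moved[i].x *= ratio; // back to full-resolution coordinates
        moved[i].y *= ratio;
    }
    return moved;
}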

How to get the whole undistorted image with OpenCV

I'm using cv::undistort, but it crops the image. I'd like to keep the whole undistorted image, so that the undistorted size is bigger than the original one.
I think I need to use cv::getOptimalNewCameraMatrix, but I had no luck with my trials... some help?
Just for the record:
You should use cv::getOptimalNewCameraMatrix and set the alpha parameter to 1. Alpha 0 only keeps the valid points of the image, while alpha 1 keeps all the original points as well as the black regions. cv::getOptimalNewCameraMatrix also gives you a ROI to crop the result from cv::undistort.
This code would do the trick:
void loadUndistortedImage(std::string fileName, Mat & outputImage,
Mat & cameraMatrix, Mat & distCoeffs) {
Mat image = imread(fileName, CV_LOAD_IMAGE_GRAYSCALE);
// setup enlargement and offset for new image
double y_shift = 60;
double x_shift = 70;
Size imageSize = image.size();
imageSize.height += 2*y_shift;
imageSize.width += 2*x_shift;
// create a new camera matrix with the principal point
// offset according to the shifts above
Mat newCameraMatrix = cameraMatrix.clone();
newCameraMatrix.at<double>(0, 2) += x_shift; //adjust c_x by x_shift
newCameraMatrix.at<double>(1, 2) += y_shift; //adjust c_y by y_shift
// create undistortion maps
Mat map1;
Mat map2;
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
newCameraMatrix, imageSize, CV_16SC2, map1, map2);
//remap
remap(image, outputImage, map1, map2, INTER_LINEAR);
}
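And for reference, a minimal sketch of the getOptimalNewCameraMatrix route described above (alpha = 1 keeps every source pixel; the function and variable names are illustrative):
void undistortKeepAll(const cv::Mat &image, cv::Mat &output,
                      const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs)
{
    cv::Rect validRoi;
    cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(
        cameraMatrix, distCoeffs, image.size(), 1.0 /*alpha*/, image.size(), &validRoi);
    cv::undistort(image, output, cameraMatrix, distCoeffs, newCameraMatrix);
    // output now contains all source pixels (with black regions at the borders);
    // validRoi marks the rectangle of valid pixels if you want to crop it afterwards.
}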
See http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
and http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
The best option would be to subclass the relevant OpenCV class and overload the undistort() method, in order to access all the images you need.

Crop and rotate picture OpenCV

I am new to OpenCV, so please be lenient.
I am writing an Android application to recognize squares/rectangles and crop them. The function which looks for the squares/rectangles puts the found objects into vector<vector<Point>> squares. I just wonder how to crop the picture according to the points stored in vector<vector<Point>> squares, and how to compute the angle by which the picture should be rotated. Thank you for any help.
This post cites the OpenCV Q&A: Extract a RotatedRect area.
There's a great article by Felix Abecassis on rotating and deskewing images. This also shows you how to extract the data in the RotatedRect:
http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
You basically only need cv::getRotationMatrix2D to get the rotation matrix, cv::warpAffine to apply the affine transformation, and cv::getRectSubPix to crop the rotated image. The relevant lines in my application are:
// This is the RotatedRect, I got it from a contour for example...
RotatedRect rect = ...;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation on your image in src,
// the result is the rotated image in rotated. I am doing
// cubic interpolation here
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image, which is then given in cropped
getRectSubPix(rotated, rect_size, rect.center, cropped);
There are lots of useful posts around; I'm sure you can do a better search.
Crop:
cropping IplImage most effectively
Rotate:
OpenCV: how to rotate IplImage?
Rotating or Resizing an Image in OpenCV
Compute angle:
OpenCV - Bounding Box & Skew Angle
Although this question is quite old, I think there is a need for an answer that is not as expensive as rotating the whole image (see @bytefish's answer). You will need a bounding rect; for some reason rotatedRect.boundingRect() didn't work for me, so I had to use Imgproc.boundingRect(contour). This is OpenCV for Android; the operations are almost the same for other environments:
Rect roi = Imgproc.boundingRect(contour);
// we only work with a submat, not the whole image:
Mat mat = image.submat(roi);
RotatedRect rotatedRect = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
Mat rot = Imgproc.getRotationMatrix2D(rotatedRect.center, rotatedRect.angle, 1.0);
// rotate using the center of the roi
double[] rot_0_2 = rot.get(0, 2);
for (int i = 0; i < rot_0_2.length; i++) {
    rot_0_2[i] += rotatedRect.size.width / 2 - rotatedRect.center.x;
}
rot.put(0, 2, rot_0_2);
double[] rot_1_2 = rot.get(1, 2);
for (int i = 0; i < rot_1_2.length; i++) {
    rot_1_2[i] += rotatedRect.size.height / 2 - rotatedRect.center.y;
}
rot.put(1, 2, rot_1_2);
// final rotated and cropped image:
Mat rotated = new Mat();
Imgproc.warpAffine(mat, rotated, rot, rotatedRect.size);

OpenCV warping from one triangle to another

I would like to map one triangle inside an OpenCV Mat to another one, pretty much like warpAffine does, but for triangles instead of quads, in order to use it in a Delaunay triangulation.
I know one is able to use a mask, but I'd like to know if there's a better solution.
I have copied the following C++ code from my post Warp one triangle to another using OpenCV (C++ / Python). The comments in the code below should give a good idea of what is going on. For more details and for Python code, you can visit the link above. All the pixels inside triangle tri1 in img1 are transformed to triangle tri2 in img2. Hope this helps.
void warpTriangle(Mat &img1, Mat &img2, vector<Point2f> tri1, vector<Point2f> tri2)
{
// Find bounding rectangle for each triangle
Rect r1 = boundingRect(tri1);
Rect r2 = boundingRect(tri2);
// Offset points by left top corner of the respective rectangles
vector<Point2f> tri1Cropped, tri2Cropped;
vector<Point> tri2CroppedInt;
for(int i = 0; i < 3; i++)
{
    tri1Cropped.push_back( Point2f( tri1[i].x - r1.x, tri1[i].y - r1.y) );
    tri2Cropped.push_back( Point2f( tri2[i].x - r2.x, tri2[i].y - r2.y) );
    // fillConvexPoly needs a vector of Point and not Point2f
    tri2CroppedInt.push_back( Point((int)(tri2[i].x - r2.x), (int)(tri2[i].y - r2.y)) );
}
// Apply warpImage to small rectangular patches
Mat img1Cropped;
img1(r1).copyTo(img1Cropped);
// Given a pair of triangles, find the affine transform.
Mat warpMat = getAffineTransform( tri1Cropped, tri2Cropped );
// Apply the Affine Transform just found to the src image
Mat img2Cropped = Mat::zeros(r2.height, r2.width, img1Cropped.type());
warpAffine( img1Cropped, img2Cropped, warpMat, img2Cropped.size(), INTER_LINEAR, BORDER_REFLECT_101);
// Get mask by filling triangle
Mat mask = Mat::zeros(r2.height, r2.width, CV_32FC3);
fillConvexPoly(mask, tri2CroppedInt, Scalar(1.0, 1.0, 1.0), 16, 0);
// Copy triangular region of the rectangular patch to the output image
multiply(img2Cropped,mask, img2Cropped);
multiply(img2(r2), Scalar(1.0,1.0,1.0) - mask, img2(r2));
img2(r2) = img2(r2) + img2Cropped;
}
You should use getAffineTransform to find the transform and warpAffine to apply it.
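A minimal sketch of that shorter answer, without the masking used in the longer code above (the triangle coordinates are made-up example values):
cv::Mat src = cv::imread("input.jpg"); // your input image
cv::Point2f srcTri[3] = { cv::Point2f(0, 0),   cv::Point2f(100, 0),  cv::Point2f(0, 100) };
cv::Point2f dstTri[3] = { cv::Point2f(10, 20), cv::Point2f(110, 30), cv::Point2f(5, 120) };
cv::Mat warpMat = cv::getAffineTransform(srcTri, dstTri); // 2x3 affine matrix
cv::Mat warped;
cv::warpAffine(src, warped, warpMat, src.size());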
