I am new to OpenCV, so please be lenient.
I am doing an Android application to recognize squares/rectangles and crop them. The function that looks for the squares/rectangles puts the found objects into vector<vector<Point>> squares. I just wonder how to crop the picture according to the points stored in vector<vector<Point>> squares, and how to compute the angle by which the picture should be rotated. Thank you for any help.
This answer cites from the OpenCV Q&A: Extract a RotatedRect area.
There's a great article by Felix Abecassis on rotating and deskewing images. This also shows you how to extract the data in the RotatedRect:
http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
You basically only need cv::getRotationMatrix2D to get the rotation matrix for the affine transformation, cv::warpAffine to perform it, and cv::getRectSubPix to crop the rotated image. The relevant lines in my application are:
// This is the RotatedRect, I got it from a contour for example...
RotatedRect rect = ...;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation on your image in src,
// the result is the rotated image in rotated. I am doing
// cubic interpolation here
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image, which is then given in cropped
getRectSubPix(rotated, rect_size, rect.center, cropped);
There are lots of useful posts around; I'm sure you can find more with a better search.
Crop:
cropping IplImage most effectively
Rotate:
OpenCV: how to rotate IplImage?
Rotating or Resizing an Image in OpenCV
Compute angle:
OpenCV - Bounding Box & Skew Angle
Although this question is quite old, I think there is a need for an answer that is not as expensive as rotating the whole image (see @bytefish's answer). You will need a bounding rect; for some reason rotatedRect.boundingRect() didn't work for me, so I had to use Imgproc.boundingRect(contour). This is OpenCV for Android; the operations are almost the same for other environments:
Rect roi = Imgproc.boundingRect(contour);
// we only work with a submat, not the whole image:
Mat mat = image.submat(roi);
RotatedRect rotatedRect = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
Mat rot = Imgproc.getRotationMatrix2D(rotatedRect.center, rotatedRect.angle, 1.0);
// rotate using the center of the roi
double[] rot_0_2 = rot.get(0, 2);
for (int i = 0; i < rot_0_2.length; i++) {
    rot_0_2[i] += rotatedRect.size.width / 2 - rotatedRect.center.x;
}
rot.put(0, 2, rot_0_2);
double[] rot_1_2 = rot.get(1, 2);
for (int i = 0; i < rot_1_2.length; i++) {
    rot_1_2[i] += rotatedRect.size.height / 2 - rotatedRect.center.y;
}
rot.put(1, 2, rot_1_2);
// final rotated and cropped image:
Mat rotated = new Mat();
Imgproc.warpAffine(mat, rotated, rot, rotatedRect.size);
My objective is to rotate an image by a certain angle (e.g. 30 degrees). One possible way of rotating by 90 degrees in OpenCV is given by tenta4, but unfortunately it only performs 90-degree flips.
Another possible way is the method "SkewGrayImage" given in the JavaCV samples, which performs "small angle rotations" that appear to work for rotations up to approximately 45-50 degrees, but not for higher values.
So my issue is: is there a proper way/method in OpenCV or JavaCV to perform an angular rotation of images or objects?
Meta has explained how to compute a rotation matrix with respect to the center of the image and then perform the rotation, as follows:
// rotate about the image center; "angle" is the desired rotation in degrees
Mat rot_mat = getRotationMatrix2D(Point2f(src.cols / 2.0f, src.rows / 2.0f), angle, 1.0);
Mat rotated_image;
warpAffine(src, rotated_image, rot_mat, src.size());
There is also an operation called warp, which can not only rotate but also apply other transformations to the image; a small sketch follows the links below.
Some useful links are here:
https://docs.opencv.org/2.4.13.2/modules/stitching/doc/warpers.html
https://docs.opencv.org/3.1.0/db/d29/group__cudawarping.html
https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
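As an illustration, here is a minimal sketch of rotating with warpPerspective and a hand-built 3x3 homography; the file name and the 30-degree angle are assumptions, and any other homography (perspective, shear, ...) can be passed the same way:
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

int main()
{
    Mat src = imread("input.jpg"); // assumed input file
    double a = 30.0 * CV_PI / 180.0; // rotate by 30 degrees
    Point2f c(src.cols / 2.0f, src.rows / 2.0f); // rotate about the image center
    // Rotation about c written as a full 3x3 homography: p' = R*p + (c - R*c)
    Mat H = (Mat_<double>(3, 3) <<
        std::cos(a), -std::sin(a), c.x - c.x * std::cos(a) + c.y * std::sin(a),
        std::sin(a),  std::cos(a), c.y - c.x * std::sin(a) - c.y * std::cos(a),
        0, 0, 1);
    Mat dst;
    warpPerspective(src, dst, H, src.size());
    imshow("rotated", dst);
    waitKey();
    return 0;
}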
Hope it helps ;)
A more detailed answer for IplImage rotation is given by Martin; it works on Mat variables, whose result can then be converted and returned as an IplImage as follows:
Mat source = imread(argv[1], CV_LOAD_IMAGE_COLOR);
// rotate about the image center by the desired angle (in degrees)
Point2f src_center(source.cols / 2.0F, source.rows / 2.0F);
double angle = 30.0; // example angle
Mat rotation_matrix = getRotationMatrix2D(src_center, angle, 1.0);
Mat destinationMat;
warpAffine(source, destinationMat, rotation_matrix, source.size());
IplImage iplframe = IplImage(destinationMat);
Hope this helps! Worked for me with JavaCV.
Mat raw = ... // your raw mat
// Create your "new" Mat and the center of your Raw Mat
Mat result = new Mat(raw.size(), [your Image Type]); // my Img type was CV_8U
Point2f rawCenter = new Point2f(raw.cols() / 2.0F, raw.rows() / 2.0F);
// Scale and Rotation of new Mat
double scale = 1.0;
int rotation = -5;
// Rotation Matrix
Mat rotationMatrix = getRotationMatrix2D(rawCenter, rotation, scale);
// Rotate
warpAffine(raw, result, rotationMatrix, raw.size());
I want to find the corner position in a blurred image with a corner inside it, like the following example:
I can make sure that only one corner is inside the image, and I assume that
the corner is part of a black and white chessboard.
How can I detect the cross position with OpenCV?
Thanks!
Usually you can determine the corner using the gradient:
Gx = im[i][j+1] - im[i][j-1]; Gy = im[i+1][j] - im[i-1][j];
G^2 = Gx^2 + Gy^2;
theta = atan2(Gy, Gx);
As your image is blurred, you should compute the gradient at a larger scale:
Gx = im[i][j+delta] - im[i][j-delta]; Gy = im[i+delta][j] - im[i-delta][j];
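For reference, a minimal sketch of this large-scale gradient; the file name and the delta value are assumptions for illustration:
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

int main()
{
    Mat im = imread("corner.jpg", 0); // assumed input, loaded as grayscale
    int delta = 50; // gradient scale; larger values tolerate more blur
    Mat norm = Mat::zeros(im.size(), CV_32F); // gradient magnitude
    Mat dir  = Mat::zeros(im.size(), CV_32F); // gradient direction (radians)
    for (int i = delta; i < im.rows - delta; ++i)
    {
        for (int j = delta; j < im.cols - delta; ++j)
        {
            float gx = (float)im.at<uchar>(i, j + delta) - im.at<uchar>(i, j - delta);
            float gy = (float)im.at<uchar>(i + delta, j) - im.at<uchar>(i - delta, j);
            norm.at<float>(i, j) = std::sqrt(gx * gx + gy * gy);
            dir.at<float>(i, j)  = std::atan2(gy, gx);
        }
    }
    imshow("gradient norm", norm * (20.0 / 255.0));            // scaled for display
    imshow("gradient direction", (dir + CV_PI) / (2 * CV_PI)); // mapped to [0, 1]
    waitKey();
    return 0;
}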
Here is the result that I obtained for delta = 50:
The gradient norm (multiplied by 20): http://imageshack.us/scaled/thumb/822/xdpp.jpg
The gradient direction: http://imageshack.us/scaled/thumb/844/h6zp.jpg
Another solution:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("c:/data/corner.jpg");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, gray, 100, 255, CV_THRESH_BINARY);
    int step = 15;
    std::vector<Point> points;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    // fit a rotated rectangle
    RotatedRect box = minAreaRect(Mat(points));
    //circle(img, box.center, 2, Scalar(255,0,0), -1);
    // invert it, fit again and get the average of the centers
    // (may not be needed if a 'good' threshold is found)
    Point p1 = Point(box.center.x, box.center.y);
    points.clear();
    gray = 255 - gray;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    box = minAreaRect(Mat(points));
    Point p2 = Point(box.center.x, box.center.y);
    //circle(img, p2, 2, Scalar(0,255,0), -1);
    circle(img, Point((p1.x + p2.x) / 2, (p1.y + p2.y) / 2), 3, Scalar(0,0,255), -1);
    imshow("img", img);
    waitKey();
    return 0;
}
Rather than working right away at a ridiculously large scale, as suggested by others, I recommend downsizing first (which has the effect of deblurring), doing one pass of Harris to find the corner, then upscaling its position and doing a pass of cornerSubPix at full resolution with a large window (large enough to encompass the obvious saddle point of the intensity).
In this way you get the best of both worlds: fast detection to initialize the refinement, and accurate refinement given the original imagery.
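A minimal sketch of this coarse-to-fine pipeline, assuming a hypothetical input file and an illustrative downscale factor and window size:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;

int main()
{
    Mat gray = imread("corner.jpg", 0); // assumed input, loaded as grayscale
    // 1. Downsize (which also deblurs) and detect the single strongest corner.
    Mat down;
    pyrDown(gray, down);
    pyrDown(down, down); // factor 4 in total
    std::vector<Point2f> corners;
    goodFeaturesToTrack(down, corners, 1, 0.01, 10, noArray(), 3, true); // Harris
    if (corners.empty()) return 1;
    // 2. Upscale the detected position back to full resolution.
    std::vector<Point2f> refined(1, corners[0] * 4.0f);
    // 3. Refine at full resolution with a window large enough to span the blur.
    cornerSubPix(gray, refined, Size(50, 50), Size(-1, -1),
                 TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 40, 0.001));
    std::cout << "corner at " << refined[0] << std::endl;
    return 0;
}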
See also this other relevant answer
When finding a reference image in a scene using SURF, I would like to crop the found object in the scene, and "straighten" it back using warpPerspective and the reversed homography matrix.
Meaning, let's say I have this SURF result:
Now, I would like to crop the found object in the scene:
and "straighten" only the cropped image with warpPerspective using the reversed homography matrix. The result I'm aiming at is that I'll get an image containing, roughly, only the object, and some distorted leftovers from the original scene (as the cropping is not a 100% the object alone).
Cropping the found object and finding and reversing the homography matrix are simple enough. The problem is, I can't seem to understand the results of warpPerspective. It seems that the resulting image contains only a small portion of the cropped image, at a very large size.
While researching warpPerspective, I found that the resulting image is very large due to the nature of the process, but I can't seem to wrap my head around how to do this properly. It seems I just don't understand the process well enough. Would I need to warpPerspective the original (not cropped) image and then crop the "straightened" object?
Any advice?
Try this.
Given that you have the unconnected contour of your object (e.g. the outer corner points of the box contour), you can transform those points with your inverse homography and adjust that homography to place the result of the transformation in the top-left region of the image.
First, compute where those object points will be warped to (using the inverse homography and the contour points as input):
cv::Rect computeWarpedContourRegion(const std::vector<cv::Point> & points, const cv::Mat & homography)
{
    std::vector<cv::Point2f> transformed_points(points.size());
    for(unsigned int i = 0; i < points.size(); ++i)
    {
        // warp the points
        transformed_points[i].x = points[i].x * homography.at<double>(0,0) + points[i].y * homography.at<double>(0,1) + homography.at<double>(0,2);
        transformed_points[i].y = points[i].x * homography.at<double>(1,0) + points[i].y * homography.at<double>(1,1) + homography.at<double>(1,2);
    }
    // dehomogenization necessary?
    if(homography.rows == 3)
    {
        float homog_comp;
        for(unsigned int i = 0; i < transformed_points.size(); ++i)
        {
            homog_comp = points[i].x * homography.at<double>(2,0) + points[i].y * homography.at<double>(2,1) + homography.at<double>(2,2);
            transformed_points[i].x /= homog_comp;
            transformed_points[i].y /= homog_comp;
        }
    }
    // now find the bounding box for these points:
    cv::Rect boundingBox = cv::boundingRect(transformed_points);
    return boundingBox;
}
Then modify your inverse homography (taking the result of computeWarpedContourRegion and the inverse homography as input):
cv::Mat adjustHomography(const cv::Rect & transformedRegion, const cv::Mat & homography)
{
    if(homography.rows == 2) throw("homography adjustment for affine matrix not implemented yet");
    // unit matrix
    cv::Mat correctionHomography = cv::Mat::eye(3,3,CV_64F);
    // correction translation
    correctionHomography.at<double>(0,2) = -transformedRegion.x;
    correctionHomography.at<double>(1,2) = -transformedRegion.y;
    return correctionHomography * homography;
}
Finally you will call something like:
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, sizeOfComputeWarpedContourRegionResult);
hope this helps =)
I have a RotatedRect and I want to do some image processing in the rotated region (say, extract the color histogram). How can I get the ROI? I mean, get the region (pixels) so that I can do the processing.
I found this, but it changes the region by using getRotationMatrix2D and warpAffine, so it doesn't work for my situation (I need to process the original image pixels).
Then I found this, which suggests using a mask; that sounds reasonable, but can anyone teach me how to get the mask as the green RotatedRect below?
Except for the mask, are there any other solutions?
Thanks for any hint.
Here is my solution, using a mask:
The idea is to construct a Mat mask by assigning 255 to my RotatedRect ROI.
How do I know which points are in the ROI (and should be assigned 255)?
I use the following function isInROI to address the problem.
double computeProduct(Point p, Point2f a, Point2f b); // forward declaration

/** decide whether point p is in the ROI.
*** The ROI is a rotated rectangle whose 4 corners are stored in roi[]
**/
bool isInROI(Point p, Point2f roi[])
{
    double pro[4];
    for(int i = 0; i < 4; ++i)
    {
        pro[i] = computeProduct(p, roi[i], roi[(i+1)%4]);
    }
    if(pro[0]*pro[2] < 0 && pro[1]*pro[3] < 0)
    {
        return true;
    }
    return false;
}

/** function pro = kx-y+j; take two points a and b,
*** compute the line arguments k and j, then return the pro value,
*** which can be used to determine whether the point p is on the left or right
*** of the line ab
**/
double computeProduct(Point p, Point2f a, Point2f b)
{
    double k = (a.y - b.y) / (a.x - b.x); // note: assumes the edge ab is not vertical
    double j = a.y - k * a.x;
    return k * p.x - p.y + j;
}
How to construct the mask?
Using the following code.
// vertices holds the 4 corners of the RotatedRect, e.g. obtained via rotatedRect.points(vertices)
Point2f vertices[4];
rotatedRect.points(vertices);

Mat mask = Mat(image.size(), CV_8U, Scalar(0));
for(int i = 0; i < image.rows; ++i)
{
    for(int j = 0; j < image.cols; ++j)
    {
        Point p = Point(j, i); // pay attention to the coordinate order: x = j, y = i
        if(isInROI(p, vertices))
        {
            mask.at<uchar>(i, j) = 255;
        }
    }
}
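As an aside, the same mask can also be built without the per-pixel test by letting OpenCV fill the polygon directly; a sketch, reusing the vertices array from above:
// Sketch: build the mask with fillConvexPoly instead of the per-pixel test.
Mat mask2 = Mat::zeros(image.size(), CV_8U);
std::vector<Point> poly(4);
for (int i = 0; i < 4; ++i)
    poly[i] = vertices[i]; // Point2f -> Point (rounds to nearest int)
fillConvexPoly(mask2, poly, Scalar(255));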
Done, vancexu
I found the following post very useful to do the same.
http://answers.opencv.org/question/497/extract-a-rotatedrect-area/
The only caveats are that (a) the "angle" here is assumed to be a rotation about the center of the entire image (not the bounding box), and (b) in the last line below (I think) "rect.center" needs to be transformed to the rotated image (by applying the rotation matrix); see the sketch after the code.
// rect is the RotatedRect
RotatedRect rect;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image
getRectSubPix(rotated, rect_size, rect.center, cropped);
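Regarding caveat (b), a minimal sketch of transforming rect.center with the rotation matrix M before the final crop might look like:
// Map rect.center through the 2x3 rotation matrix M, then crop around the result.
Mat pt = (Mat_<double>(3, 1) << rect.center.x, rect.center.y, 1.0);
Mat mapped = M * pt; // 2x3 * 3x1 = 2x1
Point2f newCenter((float)mapped.at<double>(0, 0), (float)mapped.at<double>(1, 0));
getRectSubPix(rotated, rect_size, newCenter, cropped);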
If you need a superfast solution, I suggest:
crop a Rect enclosing your RotatedRect rr.
rotate and translate back the cropped image so that the RotatedRect becomes an upright Rect (using warpAffine on the product of the rotation and translation 3x3 matrices).
keep that roi of the rotated-back image (roi = Rect(Point(0,0), rr.size)).
It is a bit time-consuming to write, though, as you need to calculate the combined affine transform; a sketch follows below.
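A minimal sketch of that combined transform, under the assumption that rr is the RotatedRect and src the source image (the angle normalization for angles below -45 degrees, shown in the answers above, may still be needed):
#include <opencv2/opencv.hpp>
using namespace cv;

Mat cropRotatedRect(const Mat& src, const RotatedRect& rr)
{
    // 1. Crop an axis-aligned Rect enclosing the RotatedRect (clipped to the image).
    Rect enclosing = rr.boundingRect() & Rect(0, 0, src.cols, src.rows);
    Mat crop = src(enclosing);
    // 2. Rotate about the rect center (in crop coordinates), then translate so the
    //    upright rect's top-left lands at the origin: one combined affine warp.
    Point2f center(rr.center.x - enclosing.x, rr.center.y - enclosing.y);
    Mat M = getRotationMatrix2D(center, rr.angle, 1.0); // 2x3 rotation about center
    M.at<double>(0, 2) += rr.size.width / 2.0 - center.x;  // fold the translation
    M.at<double>(1, 2) += rr.size.height / 2.0 - center.y; // into the same matrix
    // 3. Warp into an output exactly the size of the RotatedRect; this is the roi.
    Mat out;
    warpAffine(crop, out, M, Size(cvRound(rr.size.width), cvRound(rr.size.height)),
               INTER_CUBIC);
    return out;
}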
If you don't care about the speed and want to create a quick prototype for any shape of the region, you can use the OpenCV function pointPolygonTest(), which returns a positive value if the point is inside the contour:
double pointPolygonTest(InputArray contour, Point2f pt, bool measureDist)
Simple code:
vector<Point2f> contour(4);
contour[0] = Point2f(-10, -10);
contour[1] = Point2f(-10, 10);
contour[2] = Point2f(10, 10);
contour[3] = Point2f(10, -10);
Point2f pt = Point2f(11, 11);
double res = pointPolygonTest(contour, pt, false);
if (res > 0)
    cout << "inside" << endl;
else
    cout << "outside" << endl;
I would like to map one triangle inside an OpenCV Mat to another one, pretty much like warpAffine does (check it here), but for triangles instead of quads, in order to use it in a Delaunay triangulation.
I know one is able to use a mask, but I'd like to know if there's a better solution.
I have copied the above image and the following C++ code from my post Warp one triangle to another using OpenCV ( C++ / Python ). The comments in the code below should provide a good idea of what is going on. For more details and for Python code, you can visit the above link. All the pixels inside triangle tri1 in img1 are transformed to triangle tri2 in img2. Hope this helps.
void warpTriangle(Mat &img1, Mat &img2, vector<Point2f> tri1, vector<Point2f> tri2)
{
    // Find bounding rectangle for each triangle
    Rect r1 = boundingRect(tri1);
    Rect r2 = boundingRect(tri2);

    // Offset points by left top corner of the respective rectangles
    vector<Point2f> tri1Cropped, tri2Cropped;
    vector<Point> tri2CroppedInt;
    for(int i = 0; i < 3; i++)
    {
        tri1Cropped.push_back( Point2f( tri1[i].x - r1.x, tri1[i].y - r1.y) );
        tri2Cropped.push_back( Point2f( tri2[i].x - r2.x, tri2[i].y - r2.y) );
        // fillConvexPoly needs a vector of Point and not Point2f
        tri2CroppedInt.push_back( Point((int)(tri2[i].x - r2.x), (int)(tri2[i].y - r2.y)) );
    }

    // Apply warpImage to small rectangular patches
    Mat img1Cropped;
    img1(r1).copyTo(img1Cropped);

    // Given a pair of triangles, find the affine transform.
    Mat warpMat = getAffineTransform( tri1Cropped, tri2Cropped );

    // Apply the Affine Transform just found to the src image
    Mat img2Cropped = Mat::zeros(r2.height, r2.width, img1Cropped.type());
    warpAffine( img1Cropped, img2Cropped, warpMat, img2Cropped.size(), INTER_LINEAR, BORDER_REFLECT_101);

    // Get mask by filling triangle
    Mat mask = Mat::zeros(r2.height, r2.width, CV_32FC3);
    fillConvexPoly(mask, tri2CroppedInt, Scalar(1.0, 1.0, 1.0), 16, 0);

    // Copy triangular region of the rectangular patch to the output image
    multiply(img2Cropped, mask, img2Cropped);
    multiply(img2(r2), Scalar(1.0, 1.0, 1.0) - mask, img2(r2));
    img2(r2) = img2(r2) + img2Cropped;
}
You should use getAffineTransform to find the transform, and warpAffine to apply it.
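A minimal sketch of those two calls; the triangle coordinates and src are placeholders:
// Three corresponding corners define the affine transform exactly.
Point2f srcTri[3] = { Point2f(0, 0), Point2f(100, 0), Point2f(0, 100) };
Point2f dstTri[3] = { Point2f(10, 10), Point2f(110, 20), Point2f(5, 120) };
Mat warpMat = getAffineTransform(srcTri, dstTri); // 2x3 affine matrix
Mat dst;
warpAffine(src, dst, warpMat, src.size());
// Note: this warps the whole patch; mask off the triangle as in warpTriangle above.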