Ok here's my scenario. I built a paper detection app that finds a piece of paper in an image. It doesn't work 100% of the time, given white balance and focusing changes, so if we found a sheet of paper in frame 1, we want to show a border around it in frame 2 (in reality the gap can be many frames), even if we didn't find it in frame 2. In order to do so, we keep the old image and the old 4-point contour from frame 1.
Then, in frame N where we did not find a convex contour, we want to transform the old contour using an affine transformation that we compute using estimateRigidTransform.
I am 100% positive that my math is slightly off, but I'm not sure where:
// new image = 3200 x 6400
// old image = 3200 x 6400
// old contour = contour found in old image, same scale
vector<cv::Point> transformContourWithNewImage(Mat & newImage, Mat & oldImage, vector<cv::Point> oldContour) {
    CGFloat ratio = newImage.size().height / 500.0;
    cv::Size outputSize = cv::Size(newImage.size().width / ratio, 500);
    Mat image_copy;
    resize(newImage, image_copy, outputSize); // shrink images down so that computations are cheaper
    Mat oldImage_copy = cv::Mat();
    resize(oldImage, oldImage_copy, outputSize);
    cv::Mat transform = estimateRigidTransform(image_copy, oldImage_copy, false);
    vector<cv::Point> transformedPoints;
    cv::transform(oldContour, transformedPoints, transform);
    return transformedPoints;
}
I think I need to scale the transform, since it was computed on a smaller image than the one the contour coordinates refer to. I also get a crash saying my transform Mat has the wrong number of rows/cols.
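For reference, here is a minimal hedged sketch (not from the original post) of one way to reconcile the scales: map the full-resolution contour into the downscaled coordinate system, apply the estimated 2x3 affine, and map back, while guarding against estimateRigidTransform returning an empty Mat (which would explain the rows/cols crash). Note that estimateRigidTransform(src, dst) maps src onto dst, so if the goal is to move the old contour into the new frame, the old image should be the first argument.

// Hypothetical sketch, not tested against the original app.
vector<cv::Point> transformContourWithNewImage(Mat &newImage, Mat &oldImage, vector<cv::Point> oldContour) {
    double ratio = newImage.size().height / 500.0;
    cv::Size smallSize(cvRound(newImage.size().width / ratio), 500);
    Mat newSmall, oldSmall;
    resize(newImage, newSmall, smallSize);   // shrink both images so the estimation is cheaper
    resize(oldImage, oldSmall, smallSize);

    // old -> new, so the old image is src and the new image is dst
    Mat T = estimateRigidTransform(oldSmall, newSmall, false);
    if (T.empty())                // estimation can fail; passing an empty Mat on would crash cv::transform
        return oldContour;        // fall back to the last known contour

    // work in floating point and in the downscaled coordinate system
    vector<cv::Point2f> scaledDown, moved;
    for (const cv::Point &p : oldContour)
        scaledDown.push_back(cv::Point2f(p.x / ratio, p.y / ratio));
    cv::transform(scaledDown, moved, T);

    // scale the transformed points back up to full resolution
    vector<cv::Point> result;
    for (const cv::Point2f &p : moved)
        result.push_back(cv::Point(cvRound(p.x * ratio), cvRound(p.y * ratio)));
    return result;
}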
I want to perform matrix transformation operations like rotation, scaling, and translation on an image without calling any external library functions (PIL, cv2).
I am trying the basic matrix transformation approach, but I am facing some issues: after rotation there are holes, and the image is cropped.
I want to perform all of the above operations about the center of the image. I need some guidance here.
Rotation Logic:
newImage = np.zeros((2*row, 2*column, 3), dtype=np.uint8)
radAngle = math.radians(int(input("angle: ")))
c, s = math.cos(radAngle), math.sin(radAngle)
mid_coords = np.floor(0.5*np.array(image.shape))
for i in range(0, row-1):
    for j in range(0, column-1):
        x2 = mid_coords[0] + (i-mid_coords[0])*c - (j-mid_coords[1])*s
        y2 = mid_coords[1] + (i-mid_coords[0])*s + (j-mid_coords[1])*c
        newImage[int(math.ceil(x2)), int(math.ceil(y2))] = image[i, j]
The reason for the holes in your rotated image is that you are assigning each pixel of your original image to its computed location in the rotated image. Because of this, some pixels in the rotated image are never written.
You should do it the other way round.
For each pixel in the rotated image, find its value from the original image.
Please refer to this answer.
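A minimal sketch of that inverse-mapping idea (written here in C++ with an OpenCV Mat used only for pixel storage; the per-pixel math carries over directly to the numpy version, and nearest-neighbour sampling and a 3-channel 8-bit image are assumed):

#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

// Rotate about the image center by mapping each destination pixel
// back into the source image (inverse mapping), so no holes appear.
Mat rotateNearest(const Mat &src, double angleDeg) {
    Mat dst = Mat::zeros(src.size(), src.type());
    double a = angleDeg * CV_PI / 180.0;
    double c = std::cos(a), s = std::sin(a);
    double cx = src.cols / 2.0, cy = src.rows / 2.0;
    for (int y = 0; y < dst.rows; y++) {
        for (int x = 0; x < dst.cols; x++) {
            // inverse rotation: where does destination pixel (x, y) come from in the source?
            double xs =  c * (x - cx) + s * (y - cy) + cx;
            double ys = -s * (x - cx) + c * (y - cy) + cy;
            int xi = cvRound(xs), yi = cvRound(ys);
            if (xi >= 0 && xi < src.cols && yi >= 0 && yi < src.rows)
                dst.at<Vec3b>(y, x) = src.at<Vec3b>(yi, xi);
        }
    }
    return dst;
}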
In OpenCV how do you calculate the average gradient strength in a Mat and the average gradient direction?
I have sourced the below methods by googling, but I want to confirm I am actually doing this correctly before moving on to the next step.
Is this correct?
Mat src = imread("foo.png", CV_LOAD_IMAGE_GRAYSCALE); // read image as grayscale single channel
// Calculate the mean intensity and the std deviation
// Any errors here or am I doing this correctly?
Scalar sMean, sStdDev;
meanStdDev(src, sMean, sStdDev);
double meanVal = sMean[0]; // renamed so the local variable doesn't shadow cv::mean, which is used below
double stddev = sStdDev[0];
// Calculate the average gradient magnitude/strength across the image
// Any errors here or am I doing this correctly?
Mat dX, dY, mag;
Sobel(src, dX, CV_32F, 1, 0, 1);
Sobel(src, dY, CV_32F, 0, 1, 1);
magnitude(dX, dY, mag); // output Mat renamed so it doesn't shadow cv::magnitude
Scalar sMMean, sMStdDev;
meanStdDev(mag, sMMean, sMStdDev);
double magnitudeMean = sMMean[0];
double magnitudeStdDev = sMStdDev[0];
// Calculate the average gradient direction across the image
// Any errors here or am I doing this correctly?
Scalar avgHorizDir = mean(dX);
Scalar avgVertDir = mean(dY);
double avgDir = atan2(-avgVertDir[0], avgHorizDir[0]);
float blurriness = cv::videostab::calcBlurriness(src); // low values = sharper. High values = blurry
Technically those are the correct ways of obtaining the two averages.
The way you compute mean direction uses weighted directional statistics, meaning that pixels without a strong gradient have less influence on the average.
However, for most images this average direction is not very meaningful, as edges exist in all directions and cancel out.
If your image is of a single edge, then this will work great.
If your image has lines in it, containing edges in opposite directions, this will not work. In this case, you want to average the doubled angles (i.e. average orientations). The obvious way of doing this is to compute the direction of each pixel as an angle, double it, then use directional statistics to average (i.e. convert back to vectors and average those). Doubling the angles causes opposite directions to be mapped to the same value, so averaging does not cancel them out.
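A minimal sketch of that double-angle averaging (it assumes dX and dY are the CV_32F Sobel responses computed earlier; weighting each pixel by its squared gradient magnitude is one common choice, not the only one):

// Average gradient orientation using the double-angle trick.
// Opposite directions map to the same doubled angle, so they reinforce
// rather than cancel. Each pixel is implicitly weighted by |g|^2.
Mat gx2 = dX.mul(dX), gy2 = dY.mul(dY), gxy = dX.mul(dY);
Mat cos2 = gx2 - gy2;   // per pixel: |g|^2 * cos(2*theta)
Mat sin2 = 2 * gxy;     // per pixel: |g|^2 * sin(2*theta)
double meanCos2 = mean(cos2)[0];
double meanSin2 = mean(sin2)[0];
double avgOrientation = 0.5 * atan2(meanSin2, meanCos2); // radians, in (-pi/2, pi/2]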
Another simple way to average orientations is to take the average of the tensor field obtained by the outer product of the gradient field with itself, and determine the direction of the eigenvector corresponding to the largest eigenvalue. The tensor field is obtained as follows:
Mat Sxx = dX.mul(dX); // element-wise products; operator* would be matrix multiplication
Mat Syy = dY.mul(dY);
Mat Sxy = dX.mul(dY);
This should then be averaged:
Scalar mSxx = mean(Sxx);
Scalar mSyy = mean(Syy);
Scalar mSxy = mean(Sxy);
These values form a 2x2 real-valued symmetric matrix:
| mSxx mSxy |
| mSxy mSyy |
It is relatively straightforward to determine its eigendecomposition, and it can be done analytically. I don't have the equations on hand right now, so I'll leave it as an exercise to the reader. :)
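For completeness, a hedged sketch of the analytic step (this is the standard result for a 2x2 real symmetric matrix; worth double-checking against a reference):

// Orientation of the dominant eigenvector of the averaged structure tensor
// | mSxx  mSxy |
// | mSxy  mSyy |
// For a 2x2 symmetric matrix, the eigenvector of the largest eigenvalue
// makes an angle of 0.5 * atan2(2*mSxy, mSxx - mSyy) with the x axis.
double dominantOrientation = 0.5 * atan2(2.0 * mSxy[0], mSxx[0] - mSyy[0]);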
My objective is to rotate an image by a certain angle (e.g. 30 degrees). One possible way of rotating by 90 degrees in OpenCV is given by tenta4, but unfortunately it only performs 90-degree flips.
Another possible way is the "SkewGrayImage" method in the JavaCV samples, which performs "small angle rotations" that appear to work for rotations up to approximately 45-50 degrees but not for higher values.
So my issue is: is there a proper way/method in OpenCV or JavaCV to perform a rotation of images or objects by an arbitrary angle?
Meta has explained how to compute a rotation matrix with respect to the center of the image and then to perform a rotation as follows:
Mat rotated_image;
warpAffine(src, rotated_image, rot_mat, src.size());
There is an operation called warp, which can not only rotate but also apply other transformations to the image.
Some useful links are here
https://docs.opencv.org/2.4.13.2/modules/stitching/doc/warpers.html
https://docs.opencv.org/3.1.0/db/d29/group__cudawarping.html
https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
Hope it helps ;)
A more detailed answer for IplImage rotation is given by Martin based on Mat variables which can then be converted and returned as an IplImage as follows:
Mat source = imread(argv[1], CV_LOAD_IMAGE_COLOR);
Point2f src_center(source.cols / 2.0F, source.rows / 2.0F); // rotate about the image center
double angle = 30.0; // degrees
Mat rotation_matrix = getRotationMatrix2D(src_center, angle, 1.0);
Mat destinationMat;
warpAffine(source, destinationMat, rotation_matrix, source.size());
IplImage iplframe = IplImage(destinationMat);
Hope this helps! Worked for me with JavaCV.
Mat raw = ... // your raw mat
// Create your "new" Mat and the center of your Raw Mat
Mat result = new Mat(raw.size(), [your Image Type]); // my Img type was CV_8U
Point2f rawCenter = new Point2f(raw.cols() / 2.0F, raw.rows() / 2.0F);
// Scale and Rotation of new Mat
double scale = 1.0;
int rotation = -5;
// Rotation Matrix
Mat rotationMatrix = getRotationMatrix2D(rawCenter, rotation, scale);
// Rotate
warpAffine(raw, result, rotationMatrix, raw.size());
I want to find the corner position in a blurred image that contains a corner, like the following example:
I can make sure that only one corner is inside the image, and I assume that
the corner is part of a black and white chessboard.
How can I detect the cross position with OpenCV?
Thanks!
Usually you can determine the corner using the gradient:
Gx = im[i][j+1] - im[i][j-1]; Gy = im[i+1][j] - im[i-1][j];
G^2 = Gx^2 + Gy^2;
theta = atan2(Gy, Gx);
As your image is blurred, you should compute the gradient at a larger scale:
Gx = im[i][j+delta] - im[i][j-delta]; Gy = im[i+delta][j] - im[i-delta][j];
Here is the result that I obtained for delta = 50:
The gradient norm (multiplied by 20): http://imageshack.us/scaled/thumb/822/xdpp.jpg
The gradient direction: http://imageshack.us/scaled/thumb/844/h6zp.jpg
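A minimal sketch of the large-delta gradient computation described above (the function name and the choice of delta are illustrative; it only produces the gradient norm and direction maps, and locating the cross from them is a separate step):

#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

// Coarse central differences with a large offset, as suggested for blurred images.
void coarseGradient(const Mat &gray, int delta, Mat &norm, Mat &dir) {
    Mat f;
    gray.convertTo(f, CV_32F);
    norm = Mat::zeros(f.size(), CV_32F);
    dir  = Mat::zeros(f.size(), CV_32F);
    for (int i = delta; i < f.rows - delta; i++) {
        for (int j = delta; j < f.cols - delta; j++) {
            float gx = f.at<float>(i, j + delta) - f.at<float>(i, j - delta);
            float gy = f.at<float>(i + delta, j) - f.at<float>(i - delta, j);
            norm.at<float>(i, j) = std::sqrt(gx * gx + gy * gy);
            dir.at<float>(i, j)  = std::atan2(gy, gx);
        }
    }
}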
Another solution:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("c:/data/corner.jpg");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, gray, 100, 255, CV_THRESH_BINARY);
    int step = 15;
    std::vector<Point> points;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    // fit a rotated rectangle
    RotatedRect box = minAreaRect(Mat(points));
    //circle(img, box.center, 2, Scalar(255,0,0), -1);
    // invert it, fit again and get the average of the centers
    // (may not be needed if a 'good' threshold is found)
    Point p1 = Point(box.center.x, box.center.y);
    points.clear();
    gray = 255 - gray;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    box = minAreaRect(Mat(points));
    Point p2 = Point(box.center.x, box.center.y);
    //circle(img, p2, 2, Scalar(0,255,0), -1);
    circle(img, Point((p1.x + p2.x) / 2, (p1.y + p2.y) / 2), 3, Scalar(0,0,255), -1);
    imshow("img", img);
    waitKey();
    return 0;
}
Rather than working right away at a ridiculously large scale, as suggested by others, I recommend downsizing first (which has the effect of deblurring), doing one pass of Harris to find the corner, then upscaling its position and doing a pass of cornerSubPix at full resolution with a large window (large enough to encompass the obvious saddle point of the intensity).
In this way you get the best of both worlds: fast detection to initialize the refinement, and accurate refinement given the original imagery.
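A minimal sketch of that pipeline, assuming a grayscale input (the function name, downscale factor, Harris parameters, and window size are illustrative choices, not prescribed values):

#include <opencv2/opencv.hpp>
using namespace cv;

// Downscale, detect the strongest Harris corner, then refine at full resolution.
Point2f findBlurredCorner(const Mat &gray) {
    const double scale = 0.25;                       // illustrative downscale factor
    Mat small;
    resize(gray, small, Size(), scale, scale, INTER_AREA);

    Mat harris;
    cornerHarris(small, harris, 7, 5, 0.04);
    Point maxLoc;
    minMaxLoc(harris, 0, 0, 0, &maxLoc);             // strongest corner response

    // map back to full-resolution coordinates, then refine to sub-pixel accuracy
    std::vector<Point2f> corners{ Point2f(maxLoc.x / (float)scale, maxLoc.y / (float)scale) };
    TermCriteria crit(TermCriteria::EPS + TermCriteria::COUNT, 40, 0.01);
    cornerSubPix(gray, corners, Size(51, 51), Size(-1, -1), crit); // large window for the blurred saddle
    return corners[0];
}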
See also this other relevant answer
I am new to OpenCV, so please be lenient.
I am building an Android application to recognize squares/rectangles and crop them. The function which looks for the squares/rectangles puts the found objects into vector<vector<Point> > squares. I just wonder how to crop the picture according to the points stored in vector<vector<Point> > squares, and how to compute the angle by which the picture should be rotated. Thank you for any help.
This post quotes from OpenCV QA: Extract a RotatedRect area.
There's a great article by Felix Abecassis on rotating and deskewing images. This also shows you how to extract the data in the RotatedRect:
http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
You basically only need cv::getRotationMatrix2D to get the rotation matrix for the affine transformation with cv::warpAffine and cv::getRectSubPix to crop the rotated image. The relevant lines in my application are:
// This is the RotatedRect, I got it from a contour for example...
RotatedRect rect = ...;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation on your image in src,
// the result is the rotated image in rotated. I am doing
// cubic interpolation here
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image, which is then given in cropped
getRectSubPix(rotated, rect_size, rect.center, cropped);
There are lots of useful posts around, I'm sure you can do a better search.
Crop:
cropping IplImage most effectively
Rotate:
OpenCV: how to rotate IplImage?
Rotating or Resizing an Image in OpenCV
Compute angle:
OpenCV - Bounding Box & Skew Angle
Although this question is quite old, I think there is a need for an answer that is not as expensive as rotating the whole image (see @bytefish's answer). You will need a bounding rect; for some reason rotatedRect.boundingRect() didn't work for me, so I had to use Imgproc.boundingRect(contour). This is OpenCV for Android; the operations are almost the same in other environments:
Rect roi = Imgproc.boundingRect(contour);
// we only work with a submat, not the whole image:
Mat mat = image.submat(roi);
RotatedRect rotatedRect = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
Mat rot = Imgproc.getRotationMatrix2D(rotatedRect.center, rotatedRect.angle, 1.0);
// rotate using the center of the roi
double[] rot_0_2 = rot.get(0, 2);
for (int i = 0; i < rot_0_2.length; i++) {
    rot_0_2[i] += rotatedRect.size.width / 2 - rotatedRect.center.x;
}
rot.put(0, 2, rot_0_2);
double[] rot_1_2 = rot.get(1, 2);
for (int i = 0; i < rot_1_2.length; i++) {
    rot_1_2[i] += rotatedRect.size.height / 2 - rotatedRect.center.y;
}
rot.put(1, 2, rot_1_2);
// final rotated and cropped image:
Mat rotated = new Mat();
Imgproc.warpAffine(mat, rotated, rot, rotatedRect.size);