I have a RotatedRect and I want to do some image processing in the rotated region (say, extract a color histogram). How can I get the ROI? I mean, how do I get the region (pixels) so that I can do the processing?
I found this, but it changes the region by using getRotationMatrix2D and warpAffine, so it doesn't work for my situation (I need to process the original image pixels).
Then I found this, which suggests using a mask. That sounds reasonable, but can anyone show me how to get the mask for the green RotatedRect below?
Apart from the mask, are there any other solutions?
Thanks for any hints.
Here is my solution, using a mask:
The idea is to construct a Mat mask by assigning 255 to my RotatedRect ROI.
How do I know which points are in the ROI (and should be assigned 255)?
I use the following function isInROI to address the problem.
/** Decide whether point p is in the ROI.
 *  The ROI is a rotated rectangle whose 4 corners are stored in roi[].
 */
bool isInROI(Point p, Point2f roi[])
{
    double pro[4];
    for(int i = 0; i < 4; ++i)
    {
        pro[i] = computeProduct(p, roi[i], roi[(i+1)%4]);
    }
    if(pro[0]*pro[2] < 0 && pro[1]*pro[3] < 0)
    {
        return true;
    }
    return false;
}
/** Given two points a and b on a line y = kx + j, compute the line
 *  parameters k and j, then return pro = k*p.x - p.y + j, whose sign tells
 *  whether point p is on the left or the right of the line ab.
 *  (Note: this assumes a.x != b.x, i.e. the edge is not vertical.)
 */
double computeProduct(Point p, Point2f a, Point2f b)
{
    double k = (a.y - b.y) / (a.x - b.x);
    double j = a.y - k*a.x;
    return k*p.x - p.y + j;
}
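As an aside, the four corners passed in roi[] can be taken directly from the RotatedRect (a small sketch, assuming the rectangle is called rrect):
Point2f vertices[4];
rrect.points(vertices); // fills vertices with the 4 corners of the rotated rect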
How to construct the mask?
Using the following code.
Mat mask = Mat(image.size(), CV_8U, Scalar(0));
for(int i = 0; i < image.rows; ++i)
{
    for(int j = 0; j < image.cols; ++j)
    {
        Point p = Point(j, i); // pay attention to the coordinate order: (col, row)
        if(isInROI(p, vertices))
        {
            mask.at<uchar>(i, j) = 255;
        }
    }
}
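Once the mask is built, the rotated region can be processed through it. As a minimal sketch (my assumption: image is a single-channel 8-bit Mat; for a color histogram you would adapt the channels and ranges), the histogram restricted to the ROI can be computed with calcHist:
int histSize = 256;
float range[] = {0, 256};
const float* ranges[] = {range};
Mat hist;
// only pixels where mask == 255 contribute to the histogram
calcHist(&image, 1, 0, mask, hist, 1, &histSize, ranges);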
Done,
vancexu
I found the following post very useful to do the same.
http://answers.opencv.org/question/497/extract-a-rotatedrect-area/
The only caveats are that (a) the "angle" here is assumed to be a rotation about the center of the entire image (not the bounding box), and (b) in the last line below, "rect.center" (I think) needs to be transformed to the rotated image (by applying the rotation matrix).
// rect is the RotatedRect
RotatedRect rect;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image
getRectSubPix(rotated, rect_size, rect.center, cropped);
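Regarding caveat (b), here is a hedged sketch (my own addition, not part of the linked answer) of pushing rect.center through the 2x3 matrix M before the final crop:
Mat centerVec = (Mat_<double>(3,1) << rect.center.x, rect.center.y, 1.0);
Mat mapped = M * centerVec; // 2x3 * 3x1 = 2x1
Point2f newCenter((float)mapped.at<double>(0,0), (float)mapped.at<double>(1,0));
getRectSubPix(rotated, rect_size, newCenter, cropped);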
If you need a superfast solution, I suggest:
crop a Rect enclosing your RotatedRect rr.
rotate+translate back the cropped image so that the RotatedRect is now equivalent to a Rect. (using warpAffine on the product of the rotation and the translation 3x3 matrices)
Keep that roi of the rotated-back image (roi=Rect(Point(0,0), rr.size())).
It is a bit time-consuming to write, though, as you need to calculate the combined affine transform; a sketch follows below.
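A minimal sketch of this idea, under the assumptions that the RotatedRect rr lies fully inside the source image src and that the usual angle convention applies (the angle/size swap used in other answers may still be needed depending on how rr was produced); the translation is folded directly into the 2x3 matrix instead of multiplying 3x3 matrices:
// 1. crop the axis-aligned bounding rect
Rect brect = rr.boundingRect() & Rect(0, 0, src.cols, src.rows);
Mat crop = src(brect);
// 2. rotate about the rect center, expressed in the crop's coordinates
Point2f centerInCrop = rr.center - Point2f((float)brect.x, (float)brect.y);
Mat M = getRotationMatrix2D(centerInCrop, rr.angle, 1.0);
// 3. fold the translation into M so the upright rect starts at (0,0)
M.at<double>(0,2) += rr.size.width  / 2.0 - centerInCrop.x;
M.at<double>(1,2) += rr.size.height / 2.0 - centerInCrop.y;
Mat straightened;
warpAffine(crop, straightened, M,
           Size(cvRound(rr.size.width), cvRound(rr.size.height)), INTER_LINEAR);
// straightened now corresponds to roi = Rect(Point(0,0), rr.size())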
If you don't care about speed and want to quickly prototype something for any shape of region, you can use the OpenCV function pointPolygonTest(), which returns a positive value if the point is inside:
double pointPolygonTest(InputArray contour, Point2f pt, bool measureDist)
Simple code:
vector<Point2f> contour(4);
contour[0] = Point2f(-10, -10);
contour[1] = Point2f(-10, 10);
contour[2] = Point2f(10, 10);
contour[3] = Point2f(10, -10);
Point2f pt = Point2f(11, 11);
double res = pointPolygonTest(contour, pt, false);
if (res > 0)
    cout << "inside" << endl;
else
    cout << "outside" << endl;
Related
I have a bit-map image:
(However, this should work with any arbitrary image.)
I want to take my image and make it a 3D SCNNode. I've accomplished that much with the code below, which takes each pixel in the image and creates an SCNNode with an SCNBox geometry.
static inline SCNNode* NodeFromSprite(const UIImage* image) {
    SCNNode *node = [SCNNode node];
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);
    for (int x = 0; x < image.size.width; x++)
    {
        for (int y = 0; y < image.size.height; y++)
        {
            int pixelInfo = ((image.size.width * y) + x) * 4;
            UInt8 alpha = data[pixelInfo + 3];
            if (alpha > 3)
            {
                UInt8 red = data[pixelInfo];
                UInt8 green = data[pixelInfo + 1];
                UInt8 blue = data[pixelInfo + 2];
                UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
                SCNNode *pixel = [SCNNode node];
                pixel.geometry = [SCNBox boxWithWidth:1.001 height:1.001 length:1.001 chamferRadius:0];
                pixel.geometry.firstMaterial.diffuse.contents = color;
                pixel.position = SCNVector3Make(x - image.size.width / 2.0,
                                                y - image.size.height / 2.0,
                                                0);
                [node addChildNode:pixel];
            }
        }
    }
    CFRelease(pixelData);
    node = [node flattenedClone];
    //The image is upside down and I have no idea why.
    node.rotation = SCNVector4Make(1, 0, 0, M_PI);
    return node;
}
But the problem is that what I'm doing takes up way too much memory!
I'm trying to find a way to do this with less memory.
All Code and resources can be found at:
https://github.com/KonradWright/KNodeFromSprite
Right now you are drawing each pixel as an SCNBox of a certain color, which means:
one GL draw call per box
drawing two unnecessary, invisible faces between adjacent boxes
drawing N identical 1x1x1 boxes in a row when one 1x1xN box could be drawn
This looks like the common Minecraft-style optimization problem:
Treat your image as a 3-dimensional array (where depth is the desired extrusion depth), with each element representing a cube voxel of a certain color.
Use a greedy meshing algorithm (demo) and a custom SCNGeometry to create the mesh for the SceneKit node.
Pseudo-code for a meshing algorithm that skips the faces of adjacent cubes (simpler, but less effective than greedy meshing):
#define SIZE_X 16 // image width
#define SIZE_Y 16 // image height

// pixel data, 0 = transparent pixel
int data[SIZE_X][SIZE_Y];

// check if there is a non-transparent neighbour at x, y
BOOL has_neighbour(x, y) {
    if (x < 0 || x >= SIZE_X || y < 0 || y >= SIZE_Y || data[x][y] == 0)
        return NO; // out of bounds or transparent
    else
        return YES;
}

void add_face(x, y, orientation, color) {
    // add face at (x, y) with the specified color and orientation = TOP, BOTTOM, LEFT, RIGHT, FRONT, BACK
    // can be (easier and slower) implemented with SCNPlane's: https://developer.apple.com/library/mac/documentation/SceneKit/Reference/SCNPlane_Class/index.html#//apple_ref/doc/uid/TP40012010-CLSCHSCNPlane-SW8
    // or (harder and faster) using Custom Geometry: https://github.com/d-ronnqvist/blogpost-codesample-CustomGeometry/blob/master/CustomGeometry/CustomGeometryView.m#L84
}
for (x = 0; x < SIZE_X; x++) {
    for (y = 0; y < SIZE_Y; y++) {
        int color = data[x][y];
        // skip if the current pixel is transparent
        if (color == 0)
            continue;

        // check neighbour at top
        if (! has_neighbour(x, y + 1))
            add_face(x, y, TOP, color);
        // check neighbour at bottom
        if (! has_neighbour(x, y - 1))
            add_face(x, y, BOTTOM, color);
        // check neighbour at left
        if (! has_neighbour(x - 1, y))
            add_face(x, y, LEFT, color);
        // check neighbour at right
        if (! has_neighbour(x + 1, y))
            add_face(x, y, RIGHT, color);

        // since the array is 2D, the front and back faces are always visible for non-transparent pixels
        add_face(x, y, FRONT, color);
        add_face(x, y, BACK, color);
    }
}
A lot depends on the input image. If it is not big and does not have a wide variety of colors, I would go with an SCNNode, adding SCNPlane's for the visible faces and then flattenedClone()ing the result.
An approach similar to the one proposed by Ef Dot:
To keep the number of draw calls as small as possible you want to keep the number of materials as small as possible. Here you will want one SCNMaterial per color.
To keep the number of draw calls as small as possible make sure that no two geometry elements (SCNGeometryElement) use the same material. In other words, use one geometry element per material (color).
So you will have to build a SCNGeometry that has N geometry elements and N materials where N is the number of distinct colors in your image.
For each color in your image, build a polygon (or group of disjoint polygons) from all the pixels of that color.
Triangulate each polygon (or group of polygons) and build a geometry element with that triangulation.
Build the geometry from the geometry elements.
If you don't feel comfortable triangulating the polygons yourself, you can leverage SCNShape.
For each polygon (or group of polygons), create a single UIBezierPath and build an SCNShape from it.
Merge all the geometry sources of your shapes into a single source, and reuse the geometry elements to create a custom SCNGeometry.
Note that some vertices will be duplicated if you use a collection of SCNShapes to build the geometry. With little effort you can make sure that no two vertices in your final source have the same position. Update the indexes in the geometry elements accordingly.
I can also direct you to this excellent GitHub repo by Nick Lockwood:
https://github.com/nicklockwood/FPSControls
It will show you how to generate the meshes as planes (instead of cubes) which is a fast way to achieve what you need for simple scenes using a "neighboring" check.
If you need large complex scenes, then I suggest you go for the solution proposed by Ef Dot using a greedy meshing algorithm.
I'm trying to make a copy of OpenCV's resizing algorithm with bilinear interpolation in C. What I want to achieve is that the resulting image is exactly the same (pixel values) as the one produced by OpenCV. I am particularly interested in shrinking, not in magnification, and I want to use it on single-channel grayscale images.
On the net I read that the bilinear interpolation algorithm differs between shrinking and enlarging, but I did not find formulas for the shrinking implementation, so it is likely that the code I wrote is totally wrong. What I wrote comes from my knowledge of interpolation acquired in a university course on Computer Graphics and OpenGL.
The results of the algorithm I wrote are images visually identical to those produced by OpenCV, but whose pixel values are not perfectly identical (in particular near edges). Can you show me the shrinking algorithm with bilinear interpolation and a possible implementation?
Note: the attached code acts as a one-dimensional filter which must be applied first horizontally and then vertically (i.e. on the transposed matrix).
Mat rescale(Mat src, float ratio){
    float width = src.cols * ratio;   //resized width
    int i_width = cvRound(width);
    float step = (float)src.cols / (float)i_width;   //size of new pixels mapped over old image
    float center = step / 2;   //V1 - center position of new pixel
    //float center = step / src.cols;   //V2 - other possible center position of new pixel
    //float center = 0.099f;   //V3 - Lena 512x512 lower difference possible to OpenCV

    Mat dst(src.rows, i_width, CV_8UC1);

    //cycle through all rows
    for(int j = 0; j < src.rows; j++){
        //in each row compute new pixels
        for(int i = 0; i < i_width; i++){
            float pos = (i*step) + center;   //position of (the center of) new pixel in old map coordinates
            int pred = floor(pos);   //predecessor pixel in the original image
            int succ = ceil(pos);    //successor pixel in the original image
            float d_pred = pos - pred;   //pred and succ distances from the center of new pixel
            float d_succ = succ - pos;
            int val_pred = src.at<uchar>(j, pred);   //pred and succ values
            int val_succ = src.at<uchar>(j, succ);

            float val = (val_pred * d_succ) + (val_succ * d_pred);   //inverting d_succ and d_pred, supposing "d_succ = 1 - d_pred"...
            int i_val = cvRound(val);
            if(i_val == 0)   //if pos is a perfect int "x.0000", pred and succ are the same pixel
                i_val = val_pred;
            dst.at<uchar>(j, i) = i_val;
        }
    }
    return dst;
}
Bilinear interpolation is not separable in the sense that you can resize vertically and then resize again horizontally. See the example here.
You can see OpenCV's resize code here.
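For reference, a minimal floating-point sketch of a one-dimensional (horizontal) linear shrink using the half-pixel center alignment src_x = (dst_x + 0.5) * scale - 0.5 that OpenCV's INTER_LINEAR path is commonly described as using. This is an assumption on my part, and OpenCV itself uses fixed-point arithmetic internally, so small per-pixel differences may remain:
Mat shrinkRowsLinear(const Mat& src, int dstCols)
{
    CV_Assert(src.type() == CV_8UC1 && dstCols > 0);
    Mat dst(src.rows, dstCols, CV_8UC1);
    double scale = (double)src.cols / dstCols;
    for (int j = 0; j < src.rows; j++) {
        for (int i = 0; i < dstCols; i++) {
            // half-pixel center alignment (assumed convention)
            double pos = (i + 0.5) * scale - 0.5;
            int x0 = (int)floor(pos);
            double frac = pos - x0;
            int x1 = x0 + 1;
            // replicate the border pixels
            if (x0 < 0) x0 = 0;
            if (x1 > src.cols - 1) x1 = src.cols - 1;
            double val = src.at<uchar>(j, x0) * (1.0 - frac)
                       + src.at<uchar>(j, x1) * frac;
            dst.at<uchar>(j, i) = saturate_cast<uchar>(cvRound(val));
        }
    }
    return dst;
}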
When finding a reference image in a scene using SURF, I would like to crop the found object in the scene, and "straighten" it back using warpPerspective and the reversed homography matrix.
Meaning, let's say I have this SURF result:
Now, I would like to crop the found object in the scene:
and "straighten" only the cropped image with warpPerspective using the reversed homography matrix. The result I'm aiming at is that I'll get an image containing, roughly, only the object, and some distorted leftovers from the original scene (as the cropping is not a 100% the object alone).
Cropping the found object, and finding the homography matrix and reversing it are simple enough. Problem is, I can't seem to understand the results from warpPerspective. Seems like the resulting image contains only a small portion of the cropped image, and in a very large size.
While researching warpPerspective I found that the resulting image is very large due to the nature of the process, but I can't seem to wrap my head around how to do this properly. Seems like I just don't understand the process well enough. Would I need to warpPerspective the original (not cropped) image and than crop the "straightened" object?
Any advice?
Try this.
Given that you have the unconnected contour of your object (e.g. the outer corner points of the box contour), you can transform it with your inverse homography and adjust that homography to place the result of the transformation in the top-left region of the image.
First, compute where those object points will be warped to (use the inverse homography and the contour points as input):
cv::Rect computeWarpedContourRegion(const std::vector<cv::Point> & points, const cv::Mat & homography)
{
    std::vector<cv::Point2f> transformed_points(points.size());

    for(unsigned int i = 0; i < points.size(); ++i)
    {
        // warp the points
        transformed_points[i].x = points[i].x * homography.at<double>(0,0) + points[i].y * homography.at<double>(0,1) + homography.at<double>(0,2);
        transformed_points[i].y = points[i].x * homography.at<double>(1,0) + points[i].y * homography.at<double>(1,1) + homography.at<double>(1,2);
    }

    // dehomogenization necessary?
    if(homography.rows == 3)
    {
        float homog_comp;
        for(unsigned int i = 0; i < transformed_points.size(); ++i)
        {
            homog_comp = points[i].x * homography.at<double>(2,0) + points[i].y * homography.at<double>(2,1) + homography.at<double>(2,2);
            transformed_points[i].x /= homog_comp;
            transformed_points[i].y /= homog_comp;
        }
    }

    // now find the bounding box for these points:
    cv::Rect boundingBox = cv::boundingRect(transformed_points);
    return boundingBox;
}
Then modify your inverse homography (the result of computeWarpedContourRegion and the inverse homography are the inputs):
cv::Mat adjustHomography(const cv::Rect & transformedRegion, const cv::Mat & homography)
{
    if(homography.rows == 2) throw("homography adjustment for affine matrix not implemented yet");

    // unit matrix
    cv::Mat correctionHomography = cv::Mat::eye(3,3,CV_64F);
    // correction translation
    correctionHomography.at<double>(0,2) = -transformedRegion.x;
    correctionHomography.at<double>(1,2) = -transformedRegion.y;

    return correctionHomography * homography;
}
you will call something like
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, sizeOfComputeWarpedContourRegionResult);
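Putting the pieces together, a hypothetical glue snippet (objectContour is an assumed name for the contour points; objectWithBackground and inverseHomography are as used above); the size parameter of the call above is simply region.size():
cv::Rect region = computeWarpedContourRegion(objectContour, inverseHomography);
cv::Mat adjustedInverseHomography = adjustHomography(region, inverseHomography);
cv::Mat output;
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, region.size());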
hope this helps =)
I am new to OpenCV, so please be lenient.
I am writing an Android application to recognize squares/rectangles and crop them. The function which looks for the squares/rectangles puts the found objects into vector<vector<Point>> squares. I just wonder how to crop the picture according to the points stored in vector<vector<Point>> squares and how to compute the angle by which the picture should be rotated. Thank you for any help.
This answer cites from OpenCV QA: Extract a RotatedRect area.
There's a great article by Felix Abecassis on rotating and deskewing images. This also shows you how to extract the data in the RotatedRect:
http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
You basically only need cv::getRotationMatrix2D to get the rotation matrix for the affine transformation with cv::warpAffine and cv::getRectSubPix to crop the rotated image. The relevant lines in my application are:
// This is the RotatedRect, I got it from a contour for example...
RotatedRect rect = ...;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation on your image in src,
// the result is the rotated image in rotated. I am doing
// cubic interpolation here
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image, which is then given in cropped
getRectSubPix(rotated, rect_size, rect.center, cropped);
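In the context of the question, the RotatedRect can first be derived from one of the stored contours (a hedged sketch, assuming squares is the vector<vector<Point>> from the question and i is the index of the detected square):
RotatedRect rect = minAreaRect(squares[i]);
// rect.angle is the skew angle asked about in the question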
There are lots of useful posts around, I'm sure you can do a better search.
Crop:
cropping IplImage most effectively
Rotate:
OpenCV: how to rotate IplImage?
Rotating or Resizing an Image in OpenCV
Compute angle:
OpenCV - Bounding Box & Skew Angle
Although this question is quite old, I think there is a need for an answer that is not as expensive as rotating the whole image (see @bytefish's answer). You will need a bounding rect; for some reason rotatedRect.boundingRect() didn't work for me, so I had to use Imgproc.boundingRect(contour). This is OpenCV for Android, but the operations are almost the same in other environments:
Rect roi = Imgproc.boundingRect(contour);
// we only work with a submat, not the whole image:
Mat mat = image.submat(roi);
RotatedRect rotatedRect = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
Mat rot = Imgproc.getRotationMatrix2D(rotatedRect.center, rotatedRect.angle, 1.0);
// rotate using the center of the roi
double[] rot_0_2 = rot.get(0, 2);
for (int i = 0; i < rot_0_2.length; i++) {
    rot_0_2[i] += rotatedRect.size.width / 2 - rotatedRect.center.x;
}
rot.put(0, 2, rot_0_2);
double[] rot_1_2 = rot.get(1, 2);
for (int i = 0; i < rot_1_2.length; i++) {
    rot_1_2[i] += rotatedRect.size.height / 2 - rotatedRect.center.y;
}
rot.put(1, 2, rot_1_2);
// final rotated and cropped image:
Mat rotated = new Mat();
Imgproc.warpAffine(mat, rotated, rot, rotatedRect.size);
As the title says, I'm trying to find the number of non-zero pixels in a certain area of a cv::Mat, namely within a RotatedRect.
For a regular Rect one could simply use countNonZero on a ROI. However, ROIs can only be regular (non-rotated) rectangles.
Another idea was to draw the rotated rectangle and use it as a mask. However, OpenCV neither supports drawing rotated rectangles, nor does countNonZero accept a mask.
Does anyone have an idea how to solve this elegantly?
Thank you!
Ok, so here's my first take at it.
The idea is to rotate the image in reverse to the rectangle's rotation, and then apply an ROI on the straightened rectangle.
This will break if the rotated rectangle is not completely within the image.
You can probably speed this up by applying another ROI before rotation to avoid having to rotate the whole image...
#include <highgui.h>
#include <cv.h>

// From http://stackoverflow.com/questions/2289690/opencv-how-to-rotate-iplimage
cv::Mat rotateImage(const cv::Mat& source, cv::Point2f center, double angle)
{
    cv::Mat rot_mat = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat dst;
    cv::warpAffine(source, dst, rot_mat, source.size());
    return dst;
}

int main()
{
    cv::namedWindow("test1");

    // Our rotated rect
    int x = 300;
    int y = 350;
    int w = 200;
    int h = 50;
    float angle = 47;
    cv::RotatedRect rect = cv::RotatedRect(cv::Point2f(x,y), cv::Size2f(w,h), angle);

    // An empty image
    cv::Mat img = cv::Mat(cv::Size(640, 480), CV_8UC3);

    // Draw rotated rect as an ellipse to get some visual feedback
    cv::ellipse(img, rect, cv::Scalar(255,0,0), -1);

    // Rotate the image by rect.angle * -1
    cv::Mat rotimg = rotateImage(img, rect.center, -1 * rect.angle);

    // Set roi to the now unrotated rectangle
    cv::Rect roi;
    roi.x = rect.center.x - (rect.size.width / 2);
    roi.y = rect.center.y - (rect.size.height / 2);
    roi.width = rect.size.width;
    roi.height = rect.size.height;

    cv::imshow("test1", rotimg(roi));
    cv::waitKey(0);
}
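To actually get the count asked about, the ROI can then be fed to countNonZero (a sketch; the demo image above is 3-channel, so it is converted to grayscale first, which would not be needed for a real single-channel mask):
cv::Mat roimg;
cv::cvtColor(rotimg(roi), roimg, CV_BGR2GRAY);
int nonZero = cv::countNonZero(roimg);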
A totally different approach might be to rotate your image (in the opposite direction), and still use a rectangular ROI in combination with countNonZero. The only problem is that you have to rotate your image around a pivot at the center of the ROI...
To make it clearer, see the attached example:
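Since the example image is not reproduced here, this is a minimal sketch of the idea, assuming a binary single-channel image img and a cv::RotatedRect rect fully inside it:
cv::Mat M = cv::getRotationMatrix2D(rect.center, rect.angle, 1.0);
cv::Mat upright;
cv::warpAffine(img, upright, M, img.size(), cv::INTER_NEAREST);
cv::Rect roi(cvRound(rect.center.x - rect.size.width  / 2),
             cvRound(rect.center.y - rect.size.height / 2),
             cvRound(rect.size.width), cvRound(rect.size.height));
int nonZero = cv::countNonZero(upright(roi));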
To avoid rotation in a similar task, I iterate over each pixel in the RotatedRect with a function like this:
double filling(Mat& img, RotatedRect& rect){
    double non_zero = 0;
    double total = 0;
    Point2f rect_points[4];
    rect.points( rect_points );

    for(Point2f i = rect_points[0]; norm(i - rect_points[1]) > 1; i += (rect_points[1] - i) / norm(rect_points[1] - i)){
        Point2f destination = i + rect_points[2] - rect_points[1];
        for(Point2f j = i; norm(j - destination) > 1; j += (destination - j) / norm(destination - j)){
            if(img.at<uchar>(j) != 0){
                non_zero += 1;
            }
            total += 1;
        }
    }
    return non_zero / total;
}
It looks like the usual iteration over a rectangle, but at each step we add a unit 1px vector to the current point, in the direction of the destination.
This loop does NOT iterate over all points and skips a few pixels, but it was okay for my task.
UPD: It is much better to use a LineIterator to iterate:
Point2f rect_points[4];
rect.points(rect_points);
Point2f x_start = rect_points[0];
Point2f x_end = rect_points[1];
Point2f y_direction = rect_points[3] - rect_points[0];

LineIterator x = LineIterator(frame, x_start, x_end, 4);
for(int i = 0; i < x.count; ++i, ++x){
    // convert the float direction to an integer Point for the LineIterator endpoint
    LineIterator y = LineIterator(frame, x.pos(), x.pos() + Point(y_direction), 4);
    for(int j = 0; j < y.count; ++j, ++y){
        Vec4b pixel = frame.at<Vec4b>(y.pos());
        /* YOUR CODE HERE */
    }
}