I'm looking at some code and I don't understand this line. Can someone explain what it is doing?
smallImg = image( Rect(0, Slice_row, image.cols, 6) );
smallImg is a sub-image/portion of the larger image.
This smallImg is formed using a cv::Rect, which is an object that describes a rectangular region in the image. The region is defined by its top-left coordinates, width, and height. So here, (0, Slice_row) is the top-left corner of the rectangle, image.cols is the width, and 6 is the height.
So smallImg is a portion of image with the same width as the original but only 6 rows of pixels, starting at row Slice_row.
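If it helps, here is a minimal, self-contained sketch of that kind of ROI extraction (the file name and the Slice_row value are placeholders):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("input.png");
    int Slice_row = 100; // hypothetical starting row

    // Rect(x, y, width, height): full image width, 6 rows starting at Slice_row
    cv::Mat smallImg = image(cv::Rect(0, Slice_row, image.cols, 6));

    // Note: smallImg is a view into image's data, not a copy.
    // Use smallImg.clone() if you need an independent copy.
    cv::imshow("slice", smallImg);
    cv::waitKey();
    return 0;
}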
Hope this helps
Can anyone please explain how the coordinate points in LinearGradient work?
For example, I have my code like this:
var gradient = new LinearGradient(0, 0, 500, 500, colors, null, Shader.TileMode.Clamp);
paint.SetShader(gradient);
paint.Dither = true;
How is it displayed when applied to a rectangle?
In Android, the coordinate system always works as follows:
1) (0, 0) is the top-left corner.
2) (maxX, 0) is the top-right corner.
3) (0, maxY) is the bottom-left corner.
4) (maxX, maxY) is the bottom-right corner.
Here maxX and maxY are the screen's (or view's) maximum width and height.
new LinearGradient(0, 0, 500, 500, colors, null, Shader.TileMode.Clamp) defines a gradient line running from (0, 0) to (500, 500). When you use a Canvas to draw the rectangle with the paint, the colors are rendered along this line.
I need to define a rotated rectangle from its 4 corners. The rotated rectangle is defined by a center point, a size couple (width, height), and an angle.
How is it decided which size is the height, and which one is the width?
The width is not the length of the most horizontal edge, is it? E.g. if the angle is bigger than 90°, does it swap?
height should be the larger side, width the other one, and angle the rotation angle (in degrees) in a clockwise direction.
Otherwise, you can get an equivalent rectangle with height and width swapped, rotated by 90 degrees.
You can use minAreaRect to find a RotatedRect:
vector<Point> pts = {pt1, pt2, pt3, pt4};
RotatedRect box = minAreaRect(pts);
// Be sure that largest side is the height
if (box.size.width > box.size.height)
{
swap(box.size.width, box.size.height);
box.angle += 90.f;
}
Ok, with Miki's help, and with some tests, I got it clearer...
It seems that the rotated rectangle is an upright rectangle (width and height are clearly defined, then)... that is rotated!
In image coords, y points downward and the angle is given clockwise. In usual math coords (y pointing up), the angle is given counter-clockwise. This matches the C++ atan2(y, x) function from <cmath>, for example (except that atan2 returns radians).
To summarize: if we consider one given edge of the rectangle (two corners), its length can be taken as the width, provided we derive the angle with atan2 from that edge's y and x differences. Something like:
cv::Point2f pt1, pt2, pt3, pt4;
cv::RotatedRect rect;
rect.center = (pt1 + pt2 + pt3 + pt4) / 4;
// assuming the points are already sorted (consecutive corners)
rect.size.width  = cv::norm(pt2 - pt1);
rect.size.height = cv::norm(pt3 - pt2);
// atan2 returns radians, but RotatedRect stores the angle in degrees
rect.angle = std::atan2(pt2.y - pt1.y, pt2.x - pt1.x) * 180.f / CV_PI;
This can be improved by taking the width as the mean of dist(pt1, pt2) and dist(pt3, pt4), for example, and the same for the height.
The angle can also be calculated as the mean of the atan2 values for (pt1, pt2) and for (pt3, pt4); a rough sketch follows.
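Something like this (a sketch continuing the snippet above; note that the second edge is measured in the pt4 -> pt3 direction so both angle estimates point the same way):
float w1 = cv::norm(pt2 - pt1), w2 = cv::norm(pt3 - pt4);
float h1 = cv::norm(pt3 - pt2), h2 = cv::norm(pt4 - pt1);
rect.size.width  = (w1 + w2) / 2.f;
rect.size.height = (h1 + h2) / 2.f;

float a1 = std::atan2(pt2.y - pt1.y, pt2.x - pt1.x);
float a2 = std::atan2(pt3.y - pt4.y, pt3.x - pt4.x); // pt4 -> pt3, same direction as pt1 -> pt2
rect.angle = (a1 + a2) / 2.f * 180.f / CV_PI;        // degrees, as RotatedRect expects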
I am using this method to rotate a cv::Mat. Whenever I run it I get back a rotated image, but there is a lot of dead space below it.
void rotate(cv::Mat& src, double angle, cv::Mat& dst)
{
int len = std::max(src.cols, src.rows);
cv::Point2f pt(len/2., len/2.);
cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);
cv::warpAffine(src, dst, r, cv::Size(len, len));
}
When given this image:
I get this image:
The image has been rotated, but as you can see some extra pixels have been added. How can I rotate only the original image without adding any extra pixels?
Method call:
rotate(src, skew, res);
res being dst.
As mayank-baddi said, you have to use an output image size that matches the input to resolve this. My answer is based on your comment above, "How can I avoid adding the black area?" after warpAffine.
So you have to do the following:
Create a white image a little bigger than your source; how much bigger depends on your skew angle. Here I used 50 pixels:
int extend=50;
Mat tmp(src.rows+2*extend,src.cols+2*extend,src.type(),Scalar::all(255));
Copy the source into it using an ROI:
Rect ROI(extend,extend,src.cols,src.rows);
src.copyTo(tmp(ROI));
Now rotate tmp instead of src
rotate(tmp, skew, res); res being dst.
Crop the final image back out of the rotated result using the same ROI:
Mat crop=res(ROI);
imshow("crop",crop);
You have to define the output image size when using the warpAffine transform.
Here you are defining the size as cv::Size(len, len), where len is the max of the height and width.
cv::warpAffine(src, dst, r, cv::Size(len, len));
Define/calculate the size of the final image accordingly.
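For example, a common way to do that (a sketch, not the original poster's code) is to compute the bounding box of the rotated image and shift the transform so nothing is clipped:
#include <opencv2/opencv.hpp>

void rotateKeepWhole(const cv::Mat& src, double angle, cv::Mat& dst)
{
    cv::Point2f center(src.cols / 2.f, src.rows / 2.f);
    cv::Mat r = cv::getRotationMatrix2D(center, angle, 1.0);

    // bounding box of the image rotated about its center
    cv::Rect bbox = cv::RotatedRect(center, src.size(), (float)angle).boundingRect();

    // shift the transform so the whole bounding box lands inside the output
    r.at<double>(0, 2) += bbox.width / 2.0 - center.x;
    r.at<double>(1, 2) += bbox.height / 2.0 - center.y;

    cv::warpAffine(src, dst, r, bbox.size());
}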
I need to detect the width of these lines:
These lines are parallel and have some noise on them.
Currently, what I do is:
1. Find the center line using thinning (Zhang-Suen):
ZhanSuenThinning(binImage, thinImg);
2. Compute the distance transform:
cv::distanceTransform(binImage, distImg, CV_DIST_L2, CV_DIST_MASK_5);
3. Accumulate the half-width (the distance-transform value) along the center line:
double halfWidth = 0.0;
int count = 0;
for (int a = 0; a < thinImg.cols; a++)
    for (int b = 0; b < thinImg.rows; b++)
        if (thinImg.ptr<uchar>(b, a)[0] > 0)
        {
            halfWidth += distImg.ptr<float>(b, a)[0];
            count++;
        }
4. Finally, get the actual width:
width = halfWidth / count * 2;
The result isn't very good; it's off by around 1-2 pixels. On a bigger image, the result is even worse. Any suggestions?
You can adapt barcode-reader algorithms, which is a faster way to do it.
Scan horizontal and vertical lines.
Let X be the length of the horizontal intersection with a black line and Y the length of the vertical intersection (to be robust to noise, you can take the median of several X and Y measurements).
X * Y / 2 = area
X²+Y² = hypotenuse²
hypotenuse * width / 2 = area
So: width = 2 * area / hypotenuse = X * Y / √(X² + Y²)
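For example, with hypothetical scan lengths (these numbers are made up, just to show the arithmetic):
#include <cmath>
#include <iostream>

int main()
{
    double X = 12.0; // horizontal intersection length, in pixels
    double Y = 9.0;  // vertical intersection length, in pixels

    // width = 2 * area / hypotenuse = X*Y / sqrt(X^2 + Y^2)
    double width = (X * Y) / std::sqrt(X * X + Y * Y);
    std::cout << "estimated line width: " << width << " px\n";
    return 0;
}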
EDIT: You can also easily find the angle by using PCA; a sketch follows.
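A sketch of the PCA idea (assuming a binary image bin whose non-zero pixels belong to the lines):
#include <opencv2/opencv.hpp>
#include <cmath>

double lineAngleDegrees(const cv::Mat& bin)
{
    std::vector<cv::Point> pts;
    cv::findNonZero(bin, pts);

    // pack the pixel coordinates into an N x 2 float matrix for cv::PCA
    cv::Mat data((int)pts.size(), 2, CV_32F);
    for (int i = 0; i < data.rows; ++i)
    {
        data.at<float>(i, 0) = (float)pts[i].x;
        data.at<float>(i, 1) = (float)pts[i].y;
    }

    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);

    // the first eigenvector is the dominant direction of the line pixels
    float vx = pca.eigenvectors.at<float>(0, 0);
    float vy = pca.eigenvectors.at<float>(0, 1);
    return std::atan2(vy, vx) * 180.0 / CV_PI;
}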
All you need to do is find a RotatedRect for each contour in your image; here is the OpenCV tutorial on how to do it. Then just take the 'size' values from the rotated rectangle, which give you the height and width of the contour. Note that height and width may be interchanged depending on the alignment of the contour; in the image above, the height becomes the width and vice versa.
Contour --> RotatedRect
                |
                '--> Size2f size
                          |
                          |--> width
                          '--> height
After finding the contours, just do:
RotatedRect minRect = minAreaRect( Mat(contours[i]) );
Size2f contourSize = minRect.size; // width and height of the rectangle
Rotated rectangle for each contour
Here is the C++ code:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("line.png", 1);
    Mat thr, gray;
    blur(src, src, Size(3, 3));
    cvtColor(src, gray, CV_BGR2GRAY);
    Canny(gray, thr, 50, 190, 3, false);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    vector<RotatedRect> minRect(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
        minRect[i] = minAreaRect(Mat(contours[i]));

    for (size_t i = 0; i < contours.size(); i++)
    {
        cout << " Size = " << minRect[i].size << endl; // width and height may interchange according to contour alignment
        Size2f s = minRect[i].size;

        // draw the rotated rectangle
        Point2f rect_points[4];
        minRect[i].points(rect_points);
        for (int j = 0; j < 4; j++)
            line(src, rect_points[j], rect_points[(j + 1) % 4], Scalar(0, 0, 255), 1, 8);
    }

    imshow("src", src);
    imshow("Canny", thr);
    waitKey(0);
    return 0;
}
One quick and simple suggestion:
Count the total number of black pixels.
Detect the length of each line (perhaps with cv::HoughLinesP, or simply the diagonal of the bounding box around each thinned line).
Divide the number of black pixels by the sum of all line lengths; that should give you the average line width.
I am not sure whether that is more accurate than your existing approach though. The irregular end parts of each line might throw it off.
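Still, here is a minimal sketch of the pixel-count idea above, assuming a binary image with white line pixels (the file name, threshold, and Hough parameters are placeholders, and HoughLinesP may split one line into several segments, so treat the result as a rough estimate):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat bin = cv::imread("lines.png", cv::IMREAD_GRAYSCALE);
    cv::threshold(bin, bin, 128, 255, cv::THRESH_BINARY_INV); // dark lines become white

    double linePixels = cv::countNonZero(bin);

    // approximate each line with a segment and sum the segment lengths
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(bin, segments, 1, CV_PI / 180, 50, 30, 10);

    double totalLength = 0.0;
    for (const cv::Vec4i& s : segments)
        totalLength += cv::norm(cv::Point(s[0], s[1]) - cv::Point(s[2], s[3]));

    if (totalLength > 0)
        std::cout << "average line width: " << linePixels / totalLength << " px\n";
    return 0;
}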
One thing you could try that could increase the accuracy for that case:
Measure the average angle of the lines
Rotate the image so the lines are aligned horizontally
Crop a rectangular subsection of your shape, so all lines have the same length.
(You can get the contour of your shape by morphological closing, then find a rectangle that is entirely contained within the shape. Make sure that the horizontal edges of the rectangle are in between lines.)
Then count the number of black pixels again (count gray pixels caused by rotating the image as x% of a whole pixel).
Divide by (rectangle_width * number_of_lines_in_rectangle)
Use Hough line fits to find each line.
From each pixel on each line fit, scan in the perpendicular direction to get the distance to the edge. Find the edge using a spline fit or similar sub-pixel method.
Depending on your needs/desires, take the median or average distance. To eliminate problems with outliers, throw out the distances below the 10th percentile and above the 90th percentile before calculating the mean or median. You might also report the size using statistics: line width W, standard deviation S.
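A sketch of that percentile trimming (plain C++, assuming distances holds the per-pixel distances collected above):
#include <algorithm>
#include <numeric>
#include <vector>

// trimmed mean: drop values below the 10th and above the 90th percentile
double robustWidth(std::vector<double> distances)
{
    if (distances.empty()) return 0.0;
    std::sort(distances.begin(), distances.end());
    size_t lo = distances.size() / 10;
    size_t hi = distances.size() - lo;
    double sum = std::accumulate(distances.begin() + lo, distances.begin() + hi, 0.0);
    return sum / (double)(hi - lo);
}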
Although a connected components algorithm can be used to find the lines, it won't find the "true" edges as nicely as a spline fit.
An image like the one you showed is noisy/blurry, so the number of black pixels might not reflect the line properties; for example, some black pixels can be attributed to salt-and-pepper noise. You can get rid of it with morphological erosion, but this will affect your lines as well.
A better way is to extract connected components, delete small ones that likely come from noise or small blobs, then calculate the number of pixels and divide it by the number of lines. This approach will also help you to analyse the shape of the objects in your image and get rid of any artefacts other than noise or lines.
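A sketch of that filtering with cv::connectedComponentsWithStats (assuming bin has white line pixels; the area threshold is a made-up value):
#include <opencv2/opencv.hpp>

cv::Mat labels, stats, centroids;
int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);

double linePixels = 0.0;
int lineCount = 0;
for (int i = 1; i < n; ++i)                // label 0 is the background
{
    int area = stats.at<int>(i, cv::CC_STAT_AREA);
    if (area < 50)                         // hypothetical noise threshold
        continue;
    linePixels += area;
    ++lineCount;
}
// then divide linePixels by the number of lines (and their length) as described above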
A different real-world situation is when you have some grey pixels close to a line border. You can either use a threshold to discard them or count them with some weight < 1; this compensates for blur in your image. By the way, rotating the image may increase the blur, since rotation is typically done with interpolation and smoothing.
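A tiny sketch of that weighted count, assuming dark lines on a white background in a grayscale image:
#include <opencv2/opencv.hpp>

cv::Mat gray = cv::imread("lines.png", cv::IMREAD_GRAYSCALE);
// sum the "darkness" so a grey pixel contributes only a fraction of a pixel
double weightedCount = (255.0 * gray.total() - cv::sum(gray)[0]) / 255.0;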
I use CGContextStrokePath to paint a straight line on a white background image; the stroke color is red and alpha is 1.0.
After drawing the line, why are the pixels not (255, 0, 0) but (255, 96, 96)?
Why not pure red?
Quartz (the iOS drawing layer) uses antialiasing to make things look smooth. That's why you're seeing non-pure-red pixels.
If you stroke a line of width 1.0 and you want only pure red pixels, the line needs to be horizontal or vertical and it needs to run along the center of the pixels, like this:
CGContextMoveToPoint(gc, 0, 10.5);
CGContextAddLineToPoint(gc, 50, 10.5);
CGContextStrokePath(gc);
The .5 in the y coordinates puts the line along the centers of the pixels.