This is the function for drawing a rectangle, given the respective parameter values:
void rectangle(Mat& img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=8, int shift=0);
Users can use this function to set an ROI with the mouse, or to draw rectangles on detected matches in a template matching application.
My question is: the 2nd and 3rd parameters here are Points. If the user wants the point 1 and point 2 values for further processing, how can they be retrieved? How do I print both point values? Is a Point-to-double or -int conversion needed?
Can anyone clear up my doubts? Thanks in advance for the help!
Updated:
void mouseHandler(int event, int x, int y, int flags, void* param)
{
    if (event == CV_EVENT_LBUTTONDOWN && !drag)
    {
        /* left button clicked. ROI selection begins */
        point1 = Point(x, y);
        drag = 1;
    }
    if (event == CV_EVENT_MOUSEMOVE && drag)
    {
        /* mouse dragged. ROI being selected */
        Mat img1 = mod_tempimg.clone();
        point2 = Point(x, y);
        rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 1, 8, 0);
        imshow("image", img1);
    }
    if (event == CV_EVENT_LBUTTONUP && drag)
    {
        Mat img2 = mod_tempimg.clone();
        point2 = Point(x, y);
        rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
        drag = 0;
        roiImg = mod_tempimg(rect); // was rect1, which is undefined; rect is the ROI computed above
        imshow("image", img2);
    }
    if (event == CV_EVENT_LBUTTONUP)
    {
        /* ROI selected */
        select_flag = 1;
        drag = 0;
    }
}
In the above code, how do I retrieve the Point values from this line?
rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
If I know the values, that will be helpful for finding the angle of rect.
Even after the update, the question is not clear to me; I am not sure what exactly you are asking.
Anyway, as far as I understand, you are creating a rectangle object here:
rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
and you want to get the corner points of rect later.
rect.tl() gives the top-left corner point and rect.br() gives the bottom-right corner point. You can also get the x and y values of a corner with rect.tl().x or rect.br().y.
I do not know what you mean by "find the angle of rect"; rectangles have 90 degree angles.
When you are writing the program for drawing a rectangle from 2 points, you already have the points in hand.
Print the point: cout << pt1
Print the x and y values of the point: cout << pt1.x << pt1.y
Assign the x value explicitly: pt1.x = 0
Get the pixel intensity at some point: image.at<uchar>(pt1) (for a grayscale image)
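Putting those pieces together, here is a minimal, self-contained sketch (the values are made up for illustration):

#include <opencv2/core.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Point pt1(10, 20), pt2(110, 220);
    Rect rect(pt1.x, pt1.y, pt2.x - pt1.x, pt2.y - pt1.y);
    std::cout << pt1 << " " << pt2 << std::endl;                 // prints [10, 20] [110, 220]
    std::cout << rect.tl() << " " << rect.br() << std::endl;     // corner points of the Rect
    std::cout << rect.tl().x << " " << rect.br().y << std::endl; // individual int coordinates
    return 0;
}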
I would like to perform something similar to a vignette, but instead of darkening the edges, I would like to make a transparent gradient at the edges. I am not looking for a solution using a widget.
Any idea how I should modify the code below? Or is it even the same thing at all? I am sorry, but I really badly need this.
Thanks a lot.
Note: The code below is from the Image library
https://github.com/brendan-duncan/image
Image vignette(Image src, {num start = 0.3, num end = 0.75, num amount = 0.8}) {
  final h = src.height - 1;
  final w = src.width - 1;
  final num invAmt = 1.0 - amount;
  final p = src.getBytes();
  for (var y = 0, i = 0; y <= h; ++y) {
    final num dy = 0.5 - (y / h);
    for (var x = 0; x <= w; ++x, i += 4) {
      final num dx = 0.5 - (x / w);
      num d = sqrt(dx * dx + dy * dy);
      d = _smoothStep(end, start, d);
      p[i] = clamp255((amount * p[i] * d + invAmt * p[i]).toInt());
      p[i + 1] = clamp255((amount * p[i + 1] * d + invAmt * p[i + 1]).toInt());
      p[i + 2] = clamp255((amount * p[i + 2] * d + invAmt * p[i + 2]).toInt());
    }
  }
  return src;
}

num _smoothStep(num edge0, num edge1, num x) {
  x = ((x - edge0) / (edge1 - edge0));
  if (x < 0.0) {
    x = 0.0;
  }
  if (x > 1.0) {
    x = 1.0;
  }
  return x * x * (3.0 - 2.0 * x);
}
Solution
This code works without any widgets. Actually, it doesn't use any of the Flutter libraries at all and is based solely on Dart and the image package you introduced in your question.
The code contains comments which may not make a lot of sense until you read the explanation. The code is as follows:
vignette.dart
import 'dart:isolate';
import 'dart:typed_data';
import 'package:image/image.dart';
import 'dart:math' as math;
class VignetteParam {
  final Uint8List file;
  final SendPort sendPort;
  final double fraction;
  VignetteParam(this.file, this.sendPort, this.fraction);
}

void decodeIsolate(VignetteParam param) {
  Image image = decodeImage(param.file.buffer.asUint8List())!;
  Image crop = copyCropCircle(image);    // crop the image with the builtin function
  int r = crop.height ~/ 2;              // radius is half the height
  int rs = r * r;                        // radius squared
  int tr = (param.fraction * r).floor(); // we apply the fraction to the radius to get tr
  int ors = (r - tr) * (r - tr);         // ors from the diagram
  x: for (int x = -r; x <= r; x++) {     // iterate across all columns of pixels after shifting x by -r
    for (int y = -r; y <= r; y++) {      // iterate across all rows of pixels after shifting y by -r
      int pos = x * x + y * y;           // which is (r')² (diagram)
      if (pos <= rs) {                   // if we are inside the outer circle, then ...
        if (pos > ors) {                 // if we are outside the inner circle, then ...
          double f = (rs - pos) / (rs - ors); // calculate the fraction of the alpha value
          int c = setAlpha(
            crop.getPixelSafe(x + r, y + r),
            (0xFF * f).floor()
          );                             // calculate the new color by changing the alpha
          crop.setPixelSafe(x + r, y + r, c); // set the new color
        } else {                         // if we reach the inner circle from the top, jump down
          y = y * -1;
        }
      } else {                           // if we are outside the outer circle and ...
        if (y < 0) {                     // above it, jump down to the outer circle
          y = -(math.sin(math.acos(x / r)) * r).floor() - 1;
        } else {
          continue x;                    // or if beneath it, jump to the next column
        }
      }
    }
  }
  param.sendPort.send(crop);
}

Future<Uint8List> getImage(Uint8List bytes, double radiusFraction) async {
  var receivePort = ReceivePort();
  await Isolate.spawn(
    decodeIsolate,
    VignetteParam(bytes, receivePort.sendPort, radiusFraction)
  );
  Image image = await receivePort.first;
  return encodePng(image) as Uint8List;
}
main.dart (example app)
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:flutter_playground/vignette.dart';
import 'package:flutter/services.dart' show ByteData, rootBundle;
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
  MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Material App',
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Material App Bar'),
        ),
        body: Center(
          child: FutureBuilder<Uint8List>(
            future: imageFuture(),
            builder: (context, snapshot) {
              switch (snapshot.connectionState) {
                case ConnectionState.done:
                  return Image.memory(
                    snapshot.data!,
                  );
                default:
                  return CircularProgressIndicator();
              }
            }
          ),
        ),
      ),
    );
  }

  Future<Uint8List> imageFuture() async {
    // Load your file here and call getImage
    ByteData byteData = await rootBundle.load("assets/image.jpeg");
    return getImage(byteData.buffer.asUint8List(), 0.3);
  }
}
Explanation
The math behind this algorithm is very simple. It's only based on the equation of a circle. But first of all, have a look at this diagram:
Diagram
The diagram contains the square which is our image. Inside the square we can see the circle which is the visible part of the image. The opaque area is fully opaque, while the transparent area gets more transparent (= less alpha) the closer we get to the outer circle. r (radius) is the radius of the outer circle, tr (transparent radius) is the part of the radius which is in the transparent area. That's why r-tr is the radius of the inner circle.
In order to apply this diagram to our needs, we have to shift our x-axis and y-axis. The image package has a grid which starts with (0,0) at the top left corner. But we need (0,0) at the center of the image, hence we shift it by the radius r. If you look closely, you may notice that our diagram doesn't look as usual. The y-axis usually goes up, but in our case it really doesn't matter and makes things easier.
Calculation of the position
We need to iterate across all pixels inside the transparent area and change their alpha values. The equation of a circle, which is x'²+y'²=r'², helps us figure out whether a pixel is in the transparent area. (x',y') is the coordinate of the pixel and r' is the distance between the pixel and the origin. If the distance is beyond the inner circle but within the outer circle, then r' <= r and r' > r-tr must hold. In order to avoid calculating the square root, we instead compare the squared values, so pos <= rs and pos > ors must hold. (Look at the diagram to see what pos, rs, and ors are.)
Changing the alpha value
The pixel is inside the transparent area. How do we change the alpha value? We need a linear gradient of transparency, so we calculate the distance between the actual position pos and the outer circle, which is rs-pos. The longer the distance, the more alpha we need for this pixel. The total distance is rs-ors, so we calculate (rs-pos) / (rs-ors) to get the fraction f of our alpha value. Finally, we change the alpha value of this pixel using f.
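A quick worked example of that formula: with r = 100 and tr = 30, we get rs = 10000 and ors = 70*70 = 4900. A pixel at distance 80 from the center has pos = 6400, so f = (10000 - 6400) / (10000 - 4900) = 3600/5100 ≈ 0.71, and the new alpha becomes (0xFF * f).floor() = 180 (0xB4) instead of the fully opaque 255.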
Optimization
This algorithm actually does the whole job. But we can optimize it. I wrote that we have to iterate across all pixels inside the transparent area. So we don't need the pixels outside the outer circle or inside the inner circle, as their alpha values don't change. But we have two for loops iterating through all pixels from left to right and from top to bottom. Well, when we reach the inner circle from the outside, we can jump down by negating the y-position. Hence, we set y = y*-1; when we are inside the outer circle (pos <= rs) but no longer outside the inner circle (pos <= ors).
What if we are above the outer circle (pos > rs)? Well, then we can jump down to the outer circle by calculating the y-position with sine and arccosine. I won't go into much detail here, but if you want further explanation, let me know by commenting below. The if (y<0) just determines whether we are above the outer circle (y is negative) or beneath it. If above, we jump down; if beneath, we jump to the next column of pixels, hence the 'continue' of the labeled x for loop.
Here you go - the widgetless approach, based on my previous answer:
Canvas(PictureRecorder()).drawImage(
  your_image,  // the image you want to change
  Offset.zero, // the offset from the corner of the canvas
  Paint()
    ..shader = const RadialGradient(
      radius: needed_radius, // the radius of the resulting gradient - it should depend on the image dimensions
      colors: [Colors.black, Colors.transparent],
    ).createShader(
      Rect.fromLTRB(0, 0, your_image_width, your_image_height), // the portion of your image that should be influenced by the shader - in this case, the whole image
    )
    ..blendMode = BlendMode.dstIn); // so the black color of the gradient acts as a mask
I will add it to the previous answer, too.
Given that you want "transparency", you need the alpha channel. The code you provided seems to have only 3 bytes per pixel, i.e. only RGB, without an alpha channel.
A solution may be:
Modify the code so that it has an alpha channel, i.e. 4 bytes per pixel.
Instead of modifying the RGB to make it darker, i.e. p[i] = ..., p[i+1] = ..., p[i+2] = ..., leave RGB unchanged and modify the alpha channel to make alpha smaller, e.g. p[i+3] = ... (assuming you have RGBA format rather than ARGB).
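To make that concrete, here is a minimal sketch of the idea (shown in C++ over a plain interleaved RGBA buffer, 4 bytes per pixel with alpha last; the names and buffer layout are illustrative assumptions, not the library's API):

#include <algorithm>
#include <cmath>
#include <cstdint>

static double smoothStep(double edge0, double edge1, double x)
{
    x = std::min(1.0, std::max(0.0, (x - edge0) / (edge1 - edge0)));
    return x * x * (3.0 - 2.0 * x);
}

// Leave RGB untouched and scale only the alpha byte by the vignette falloff d.
void fadeEdgesToTransparent(uint8_t* p, int w, int h, double start = 0.3, double end = 0.75)
{
    for (int y = 0, i = 0; y < h; ++y) {
        double dy = 0.5 - double(y) / (h - 1);
        for (int x = 0; x < w; ++x, i += 4) {
            double dx = 0.5 - double(x) / (w - 1);
            double d = smoothStep(end, start, std::sqrt(dx * dx + dy * dy)); // 1 at center, 0 at edges
            p[i + 3] = uint8_t(std::lround(p[i + 3] * d)); // p[i..i+2] (RGB) stay unchanged
        }
    }
}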
I am trying to detect full circles and semicircles in an image.
I am following the procedure mentioned below:
Process the image (including Canny edge detection).
Find contours and draw them on an empty image, so that I can eliminate unwanted components (the processed image is exactly what I want).
Detect circles using HoughCircles. And this is what I get:
I tried varying the parameters in HoughCircles, but the results are not consistent: they vary based on lighting and the position of the circles in the image.
I accept or reject a circle based on its size, so the result is not acceptable. Also, I have a long list of "acceptable" circles, so I need some allowance in the HoughCircles params.
As for full circles, it's easy - I can simply find the "roundness" of the contour. The problem is the semicircles!
Please find the edited image before the Hough transform:
Use HoughCircles directly on your image; don't extract edges first.
Then test for each detected circle, how much percentage is really present in the image:
int main()
{
    cv::Mat color = cv::imread("../houghCircles.png");
    cv::namedWindow("input"); cv::imshow("input", color);

    cv::Mat canny;
    cv::Mat gray;
    /// Convert it to gray
    cv::cvtColor( color, gray, CV_BGR2GRAY );
    // compute canny (don't blur with that image quality!!)
    cv::Canny(gray, canny, 200, 20);
    cv::namedWindow("canny2"); cv::imshow("canny2", canny > 0);

    std::vector<cv::Vec3f> circles;
    /// Apply the Hough Transform to find the circles
    cv::HoughCircles( gray, circles, CV_HOUGH_GRADIENT, 1, 60, 200, 20, 0, 0 );

    /// Draw the circles detected
    for( size_t i = 0; i < circles.size(); i++ )
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle( color, center, 3, cv::Scalar(0,255,255), -1);
        cv::circle( color, center, radius, cv::Scalar(0,0,255), 1 );
    }

    // compute distance transform:
    cv::Mat dt;
    cv::distanceTransform(255 - (canny > 0), dt, CV_DIST_L2, 3);
    cv::namedWindow("distance transform"); cv::imshow("distance transform", dt / 255.0f);

    // test for semi-circles:
    float minInlierDist = 2.0f;
    for( size_t i = 0; i < circles.size(); i++ )
    {
        // test inlier percentage:
        // sample the circle and check for distance to the next edge
        unsigned int counter = 0;
        unsigned int inlier = 0;

        cv::Point2f center((circles[i][0]), (circles[i][1]));
        float radius = (circles[i][2]);

        // maximal distance of inlier might depend on the size of the circle
        float maxInlierDist = radius / 25.0f;
        if (maxInlierDist < minInlierDist) maxInlierDist = minInlierDist;

        // TODO: maybe parameter incrementation might depend on circle size!
        for (float t = 0; t < 2 * 3.14159265359f; t += 0.1f)
        {
            counter++;
            float cX = radius * cos(t) + circles[i][0];
            float cY = radius * sin(t) + circles[i][1];

            if (dt.at<float>(cY, cX) < maxInlierDist)
            {
                inlier++;
                cv::circle(color, cv::Point2i(cX, cY), 3, cv::Scalar(0,255,0));
            }
            else
                cv::circle(color, cv::Point2i(cX, cY), 3, cv::Scalar(255,0,0));
        }
        std::cout << 100.0f * (float)inlier / (float)counter << " % of a circle with radius " << radius << " detected" << std::endl;
    }

    cv::namedWindow("output"); cv::imshow("output", color);
    cv::imwrite("houghLinesComputed.png", color);
    cv::waitKey(-1);
    return 0;
}
For this input:
It gives this output:
The red circles are Hough results.
The green sampled dots on the circle are inliers.
The blue dots are outliers.
Console output:
100 % of a circle with radius 27.5045 detected
100 % of a circle with radius 25.3476 detected
58.7302 % of a circle with radius 194.639 detected
50.7937 % of a circle with radius 23.1625 detected
79.3651 % of a circle with radius 7.64853 detected
If you want to test RANSAC instead of Hough, have a look at this.
Here is another way to do it: a simple RANSAC version (with much optimization left to be done to improve speed) that works on the edge image.
The method loops over these steps until it is cancelled:
randomly choose 3 edge pixels
estimate the circle from them (3 points are enough to identify a circle)
verify or falsify that it's really a circle: count what percentage of the circle is represented by the given edges
if a circle is verified, remove it from the input/edges
int main()
{
    // RANSAC

    // load edge image
    cv::Mat color = cv::imread("../circleDetectionEdges.png");

    // convert to grayscale
    cv::Mat gray;
    cv::cvtColor(color, gray, CV_RGB2GRAY);

    // get binary image
    cv::Mat mask = gray > 0;

    // erode the edges to obtain sharp/thin edges (undo the blur?)
    cv::erode(mask, mask, cv::Mat());

    std::vector<cv::Point2f> edgePositions;
    edgePositions = getPointPositions(mask);

    // create distance transform to efficiently evaluate distance to nearest edge
    cv::Mat dt;
    cv::distanceTransform(255 - mask, dt, CV_DIST_L1, 3);

    // TODO: maybe seed the random generator for real random numbers.

    unsigned int nIterations = 0;

    char quitKey = 'q';
    std::cout << "press " << quitKey << " to stop" << std::endl;
    while (cv::waitKey(-1) != quitKey)
    {
        // RANSAC: randomly choose 3 points and create a circle:
        // TODO: choose randomly but more intelligently,
        // so that it is more likely to choose three points of the same circle.
        // For example, if there are many small circles, it is unlikely to randomly choose 3 points of the same circle.
        unsigned int idx1 = rand() % edgePositions.size();
        unsigned int idx2 = rand() % edgePositions.size();
        unsigned int idx3 = rand() % edgePositions.size();

        // we need 3 different samples:
        if (idx1 == idx2) continue;
        if (idx1 == idx3) continue;
        if (idx3 == idx2) continue;

        // create circle from 3 points:
        cv::Point2f center; float radius;
        getCircle(edgePositions[idx1], edgePositions[idx2], edgePositions[idx3], center, radius);

        float minCirclePercentage = 0.4f;

        // inlier set unused at the moment, but could be used to approximate a (more robust) circle from all inliers
        std::vector<cv::Point2f> inlierSet;

        // verify or falsify the circle by inlier counting:
        float cPerc = verifyCircle(dt, center, radius, inlierSet);

        if (cPerc >= minCirclePercentage)
        {
            std::cout << "accepted circle with " << cPerc * 100.0f << " % inlier" << std::endl;
            // first step would be to approximate the circle iteratively from ALL INLIERS to obtain a better circle center
            // but that's a TODO
            std::cout << "circle: " << "center: " << center << " radius: " << radius << std::endl;
            cv::circle(color, center, radius, cv::Scalar(255,255,0), 1);

            // accept circle => remove it from the edge list
            cv::circle(mask, center, radius, cv::Scalar(0), 10);

            // update edge positions and distance transform
            edgePositions = getPointPositions(mask);
            cv::distanceTransform(255 - mask, dt, CV_DIST_L1, 3);
        }

        cv::Mat tmp;
        mask.copyTo(tmp);

        // prevent cases where no circle could be extracted (because the three points are collinear or similar)
        // filter NaN values
        if ((center.x == center.x) && (center.y == center.y) && (radius == radius))
        {
            cv::circle(tmp, center, radius, cv::Scalar(255));
        }
        else
        {
            std::cout << "circle illegal" << std::endl;
        }

        ++nIterations;
        cv::namedWindow("RANSAC"); cv::imshow("RANSAC", tmp);
    }

    std::cout << nIterations << " iterations performed" << std::endl;

    cv::namedWindow("edges"); cv::imshow("edges", mask);
    cv::namedWindow("color"); cv::imshow("color", color);

    cv::imwrite("detectedCircles.png", color);
    cv::waitKey(-1);
    return 0;
}
float verifyCircle(cv::Mat dt, cv::Point2f center, float radius, std::vector<cv::Point2f>& inlierSet)
{
    unsigned int counter = 0;
    unsigned int inlier = 0;
    float minInlierDist = 2.0f;
    float maxInlierDistMax = 100.0f;
    float maxInlierDist = radius / 25.0f;
    if (maxInlierDist < minInlierDist) maxInlierDist = minInlierDist;
    if (maxInlierDist > maxInlierDistMax) maxInlierDist = maxInlierDistMax;

    // choose samples along the circle and count inlier percentage
    for (float t = 0; t < 2 * 3.14159265359f; t += 0.05f)
    {
        counter++;
        float cX = radius * cos(t) + center.x;
        float cY = radius * sin(t) + center.y;

        if (cX < dt.cols && cX >= 0 && cY < dt.rows && cY >= 0
            && dt.at<float>(cY, cX) < maxInlierDist)
        {
            inlier++;
            inlierSet.push_back(cv::Point2f(cX, cY));
        }
    }

    return (float)inlier / (float)counter;
}
inline void getCircle(cv::Point2f& p1, cv::Point2f& p2, cv::Point2f& p3, cv::Point2f& center, float& radius)
{
    float x1 = p1.x;
    float x2 = p2.x;
    float x3 = p3.x;

    float y1 = p1.y;
    float y2 = p2.y;
    float y3 = p3.y;

    // PLEASE CHECK FOR TYPOS IN THE FORMULA :)
    center.x = (x1*x1 + y1*y1)*(y2-y3) + (x2*x2 + y2*y2)*(y3-y1) + (x3*x3 + y3*y3)*(y1-y2);
    center.x /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );

    center.y = (x1*x1 + y1*y1)*(x3-x2) + (x2*x2 + y2*y2)*(x1-x3) + (x3*x3 + y3*y3)*(x2-x1);
    center.y /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );

    radius = sqrt((center.x-x1)*(center.x-x1) + (center.y-y1)*(center.y-y1));
}
std::vector<cv::Point2f> getPointPositions(cv::Mat binaryImage)
{
    std::vector<cv::Point2f> pointPositions;

    for (unsigned int y = 0; y < binaryImage.rows; ++y)
    {
        //unsigned char* rowPtr = binaryImage.ptr<unsigned char>(y);
        for (unsigned int x = 0; x < binaryImage.cols; ++x)
        {
            //if (rowPtr[x] > 0) pointPositions.push_back(cv::Point2i(x, y));
            if (binaryImage.at<unsigned char>(y, x) > 0) pointPositions.push_back(cv::Point2f(x, y));
        }
    }

    return pointPositions;
}
input:
output:
console output:
press q to stop
accepted circle with 50 % inlier
circle: center: [358.511, 211.163] radius: 193.849
accepted circle with 85.7143 % inlier
circle: center: [45.2273, 171.591] radius: 24.6215
accepted circle with 100 % inlier
circle: center: [257.066, 197.066] radius: 27.819
circle illegal
30 iterations performed
Optimization should include:
use all inliers to fit a better circle (see the sketch below)
don't compute the distance transform after each detected circle (it's quite expensive); compute inliers from the point/edge set directly and remove the inlier edges from that list
if there are many small circles in the image (and/or a lot of noise), it's unlikely to randomly hit 3 edge pixels of the same circle => try contour detection first and detect circles for each contour, then try to detect all "other" circles left in the image
a lot of other stuff
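For the first item, here is a sketch of how the inlier set could be turned into a better circle using an algebraic (Kasa) least-squares fit. This helper is hypothetical, not part of the code above; it solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense (needs <cmath> and the OpenCV core headers):

bool fitCircleFromInliers(const std::vector<cv::Point2f>& pts, cv::Point2f& center, float& radius)
{
    if (pts.size() < 3) return false;
    cv::Mat A((int)pts.size(), 3, CV_32F);
    cv::Mat rhs((int)pts.size(), 1, CV_32F);
    for (int i = 0; i < (int)pts.size(); ++i)
    {
        A.at<float>(i, 0) = pts[i].x;
        A.at<float>(i, 1) = pts[i].y;
        A.at<float>(i, 2) = 1.0f;
        rhs.at<float>(i, 0) = -(pts[i].x * pts[i].x + pts[i].y * pts[i].y);
    }
    cv::Mat sol;
    if (!cv::solve(A, rhs, sol, cv::DECOMP_SVD)) return false; // least-squares solution
    float a = sol.at<float>(0), b = sol.at<float>(1), c = sol.at<float>(2);
    center = cv::Point2f(-a / 2.0f, -b / 2.0f);
    float r2 = a * a / 4.0f + b * b / 4.0f - c;
    if (r2 <= 0.0f) return false; // degenerate (e.g. collinear) input
    radius = std::sqrt(r2);
    return true;
}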
@Micka's answer is great; here it is roughly translated into Python.
The method takes a thresholded image mask as an argument, leaving that part as an exercise for the reader.
import cv2
import numpy as np

def get_circle_percentages(image):
    # compute distance transform of image
    dist = cv2.distanceTransform(image, cv2.DIST_L2, 0)
    rows = image.shape[0]
    circles = cv2.HoughCircles(image, cv2.HOUGH_GRADIENT, 1, rows / 8, param1=50, param2=10, minRadius=40, maxRadius=90)

    minInlierDist = 2.0

    for c in circles[0, :]:
        counter = 0
        inlier = 0
        center = (c[0], c[1])
        radius = c[2]
        maxInlierDist = radius / 25.0

        if maxInlierDist < minInlierDist:
            maxInlierDist = minInlierDist

        for i in np.arange(0, 2 * np.pi, 0.1):
            counter += 1
            x = center[0] + radius * np.cos(i)
            y = center[1] + radius * np.sin(i)

            if dist.item(int(y), int(x)) < maxInlierDist:
                inlier += 1

        print(str(100.0 * inlier / counter) + ' percent of a circle with radius ' + str(radius) + " detected")
I know that it's a little bit late, but I used a different approach, which is much easier.
From cv2.HoughCircles(...) you get the centre of each circle and its radius (x, y, r). So I simply go through all centre points of the circles and check whether they are further away from the edge of the image than their radius.
Here is my code:
height, width = img.shape[:2]
# circles[0, :, 0] is x, circles[0, :, 1] is y, circles[0, :, 2] is r
# test left edge
left = (circles[0, :, 0] - circles[0, :, 2]) >= 0
# test top edge
top = (circles[0, :, 1] - circles[0, :, 2]) >= 0
# test right edge
right = (circles[0, :, 0] + circles[0, :, 2]) <= width
# test bottom edge
bottom = (circles[0, :, 1] + circles[0, :, 2]) <= height
circles = circles[:, (left & top & right & bottom), :]
The semicircle detected by the Hough algorithm is most probably correct. The issue here might be that unless you strictly control the geometry of the scene, i.e. the exact position of the camera relative to the target so that the image axis is normal to the target plane, you will get ellipses rather than circles projected onto the image plane, not to mention the distortions caused by the optical system, which further degrade the geometric figure. If you rely on precision here, I would recommend camera calibration.
You'd better try a different kernel for the Gaussian blur; that will help you:
GaussianBlur( src_gray, src_gray, Size(11, 11), 5, 5);
i.e. vary Size(i, i) and the sigma values j, j.
I'm trying to find the orientation of a binary image (where orientation is defined as the axis of least moment of inertia, i.e. the axis of least second moment of area). I'm using Dr. Horn's book (MIT) on Robot Vision, which can be found here, as a reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the PDF above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10 / m.m00; // Centers are right
    double cen_y = m.m01 / m.m00;

    double a = m.m20 - m.m00 * cen_x * cen_x;
    double b = 2 * m.m11 - m.m00 * (cen_x * cen_x + cen_y * cen_y);
    double c = m.m02 - m.m00 * cen_y * cen_y;

    double theta = a == c ? 0 : atan2(b, a - c) / 2.0;

    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments around the origin (0,0), so I have to use the Parallel Axis Theorem to move the axis to the center of the shape (the mr^2 term).
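For reference, the parallel axis theorem gives the central second moments (in the notation of the code above) as:
mu20 = m20 - m00*cen_x^2, mu02 = m02 - m00*cen_y^2, mu11 = m11 - m00*cen_x*cen_y
and the orientation of the axis of least second moment as theta = 0.5 * atan2(2*mu11, mu20 - mu02).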
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with lowest moment of inertia, using this function, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and coded the following. It returns the exact orientation of the object. largest_contour is the shape that is detected.
CvMoments moments1, cenmoments1;
double M00, M01, M10;

cvMoments(largest_contour, &moments1);
M00 = cvGetSpatialMoment(&moments1, 0, 0);
M10 = cvGetSpatialMoment(&moments1, 1, 0);
M01 = cvGetSpatialMoment(&moments1, 0, 1);

posX_Yellow = (int)(M10 / M00);
posY_Yellow = (int)(M01 / M00);

double theta = 0.5 * atan(
    (2 * cvGetCentralMoment(&moments1, 1, 1)) /
    (cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;

// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit if we have > 6 points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}
I have a RotatedRect and I want to do some image processing in the rotated region (say, extract a color histogram). How can I get the ROI? I mean, get the region (pixels) so that I can do the processing.
I found this, but it changes the region by using getRotationMatrix2D and warpAffine, so it doesn't work for my situation (I need to process the original image pixels).
Then I found this suggestion to use a mask, which sounds reasonable, but can anyone teach me how to get the mask, like the green RotatedRect below?
Except for the mask, are there any other solutions?
Thanks for any hint.
Here is my solution, using a mask:
The idea is to construct a Mat mask by assigning 255 to pixels inside my RotatedRect ROI.
How do we know which points are in the ROI (and should be assigned 255)?
I use the following function, isInROI, to address the problem.
/** decide whether point p is in the ROI.
*** The ROI is a rotated rectangle whose 4 corners are stored in roi[]
**/
bool isInROI(Point p, Point2f roi[])
{
    double pro[4];
    for (int i = 0; i < 4; ++i)
    {
        pro[i] = computeProduct(p, roi[i], roi[(i + 1) % 4]);
    }
    if (pro[0] * pro[2] < 0 && pro[1] * pro[3] < 0)
    {
        return true;
    }
    return false;
}

/** function pro = kx - y + j; take two points a and b,
*** compute the line arguments k and j, then return the pro value
*** so that it can be used to determine whether point p is on the left or right
*** of the line ab
**/
double computeProduct(Point p, Point2f a, Point2f b)
{
    double k = (a.y - b.y) / (a.x - b.x);
    double j = a.y - k * a.x;
    return k * p.x - p.y + j;
}
How to construct the mask?
Use the following code.
Mat mask = Mat(image.size(), CV_8U, Scalar(0));
for (int i = 0; i < image.rows; ++i)
{
    for (int j = 0; j < image.cols; ++j)
    {
        Point p = Point(j, i); // pay attention to the coordinate order
        if (isInROI(p, vertices))
        {
            mask.at<uchar>(i, j) = 255;
        }
    }
}
Done,
vancexu
I found the following post very useful for doing the same:
http://answers.opencv.org/question/497/extract-a-rotatedrect-area/
The only caveats are that (a) the "angle" here is assumed to be a rotation about the center of the entire image (not the bounding box), and (b) in the last line below, (I think) "rect.center" needs to be transformed to the rotated image (by applying the rotation matrix); see the sketch after the code.
// rect is the RotatedRect
RotatedRect rect;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image
getRectSubPix(rotated, rect_size, rect.center, cropped);
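Regarding caveat (b), a minimal sketch of mapping rect.center through the 2x3 affine matrix M (reusing the names from the snippet above):

// map the original center into the rotated image's coordinates
Point2f c = rect.center;
Point2f mapped(
    M.at<double>(0, 0) * c.x + M.at<double>(0, 1) * c.y + M.at<double>(0, 2),
    M.at<double>(1, 0) * c.x + M.at<double>(1, 1) * c.y + M.at<double>(1, 2));
// then: getRectSubPix(rotated, rect_size, mapped, cropped);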
If you need a super-fast solution, I suggest:
crop a Rect enclosing your RotatedRect rr,
rotate + translate back the cropped image so that the RotatedRect is now equivalent to a Rect (using warpAffine on the product of the rotation and translation 3x3 matrices),
keep that ROI of the rotated-back image (roi = Rect(Point(0,0), rr.size)).
It is a bit time-consuming to write, though, as you need to calculate the combined affine transform; a rough sketch follows below.
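A sketch of that combined transform, under the assumption that the bounding rect of rr lies fully inside img (the helper name is mine, not OpenCV's):

cv::Mat cropRotatedRect(const cv::Mat& img, const cv::RotatedRect& rr)
{
    // 1) crop an axis-aligned Rect enclosing the RotatedRect
    cv::Rect bounds = rr.boundingRect() & cv::Rect(0, 0, img.cols, img.rows);
    cv::Mat crop = img(bounds);

    // 2) rotate around the center (in crop coordinates) so rr becomes axis-aligned,
    //    then translate so its top-left corner lands at (0,0)
    cv::Point2f center(rr.center.x - bounds.x, rr.center.y - bounds.y);
    cv::Mat M = cv::getRotationMatrix2D(center, rr.angle, 1.0);
    M.at<double>(0, 2) += rr.size.width  / 2.0 - center.x;
    M.at<double>(1, 2) += rr.size.height / 2.0 - center.y;

    // 3) warp directly into an image of the RotatedRect's size:
    //    the result is the roi = Rect(Point(0,0), rr.size) region
    cv::Mat rotated;
    cv::warpAffine(crop, rotated, M, cv::Size((int)rr.size.width, (int)rr.size.height));
    return rotated;
}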
If you don't care about speed and want to create a quick prototype for any shape of region, you can use the OpenCV function pointPolygonTest(), which returns a positive value if the point is inside the contour:
double pointPolygonTest(InputArray contour, Point2f pt, bool measureDist)
Simple code:
vector<Point2f> contour(4);
contour[0] = Point2f(-10, -10);
contour[1] = Point2f(-10, 10);
contour[2] = Point2f(10, 10);
contour[3] = Point2f(10, -10);

Point2f pt = Point2f(11, 11);
double res = pointPolygonTest(contour, pt, false);
if (res > 0)
    cout << "inside" << endl;
else
    cout << "outside" << endl;
As the title says, I'm trying to find the number of non-zero pixels in a certain area of a cv::Mat, namely within a RotatedRect.
For a regular Rect one could simply use countNonZero on an ROI. However, ROIs can only be regular (non-rotated) rectangles.
Another idea was to draw the rotated rectangle and use it as a mask. However, OpenCV neither supports drawing rotated rectangles, nor does countNonZero accept a mask.
Does anyone have a solution for how to solve this elegantly?
Thank you!
OK, so here's my first take at it.
The idea is to rotate the image opposite to the rectangle's rotation and then apply an ROI on the straightened rectangle.
This will break if the rotated rectangle is not completely within the image.
You can probably speed this up by applying another ROI before the rotation, to avoid having to rotate the whole image...
#include <highgui.h>
#include <cv.h>

// From http://stackoverflow.com/questions/2289690/opencv-how-to-rotate-iplimage
cv::Mat rotateImage(const cv::Mat& source, cv::Point2f center, double angle)
{
    cv::Mat rot_mat = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat dst;
    cv::warpAffine(source, dst, rot_mat, source.size());
    return dst;
}

int main()
{
    cv::namedWindow("test1");

    // Our rotated rect
    int x = 300;
    int y = 350;
    int w = 200;
    int h = 50;
    float angle = 47;
    cv::RotatedRect rect = cv::RotatedRect(cv::Point2f(x, y), cv::Size2f(w, h), angle);

    // An empty image
    cv::Mat img = cv::Mat(cv::Size(640, 480), CV_8UC3);

    // Draw rotated rect as an ellipse to get some visual feedback
    cv::ellipse(img, rect, cv::Scalar(255, 0, 0), -1);

    // Rotate the image by rect.angle * -1
    cv::Mat rotimg = rotateImage(img, rect.center, -1 * rect.angle);

    // Set roi to the now unrotated rectangle
    cv::Rect roi;
    roi.x = rect.center.x - (rect.size.width / 2);
    roi.y = rect.center.y - (rect.size.height / 2);
    roi.width = rect.size.width;
    roi.height = rect.size.height;

    cv::imshow("test1", rotimg(roi));
    cv::waitKey(0);
}
A totally different approach might be to rotate your image (in the opposite direction) and still use the rectangular ROI in combination with countNonZero. The only problem is that you have to rotate your image around a pivot at the center of the ROI...
To make it clearer, see the attached example:
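A minimal sketch of that idea, reusing the rotateImage helper from the answer above (assuming mask is your single-channel image, rect is the RotatedRect, and the ROI stays inside the image):

// straighten the rect, then count within a plain axis-aligned ROI
cv::Mat rotated = rotateImage(mask, rect.center, -1 * rect.angle);
cv::Rect roi(rect.center.x - rect.size.width / 2,
             rect.center.y - rect.size.height / 2,
             rect.size.width, rect.size.height);
int nonZero = cv::countNonZero(rotated(roi));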
To avoid rotation in a similar task, I iterate over each pixel in the RotatedRect with this function:
double filling(Mat& img, RotatedRect& rect) {
    double non_zero = 0;
    double total = 0;
    Point2f rect_points[4];
    rect.points(rect_points);
    for (Point2f i = rect_points[0]; norm(i - rect_points[1]) > 1; i += (rect_points[1] - i) / norm(rect_points[1] - i)) {
        Point2f destination = i + rect_points[2] - rect_points[1];
        for (Point2f j = i; norm(j - destination) > 1; j += (destination - j) / norm(destination - j)) {
            if (img.at<uchar>(j) != 0) {
                non_zero += 1;
            }
            total += 1;
        }
    }
    return non_zero / total;
}
It looks like a usual iteration over the rectangle, but at each step we add a unit (1px) vector to the current point, in the direction of the destination.
This loop does NOT iterate over all points and skips a few pixels, but it was okay for my task.
UPD: It is much better to use LineIterator to iterate:

Point2f rect_points[4];
rect.points(rect_points);
Point2f x_start = rect_points[0];
Point2f x_end = rect_points[1];
Point y_direction = rect_points[3] - rect_points[0]; // integer Point so it can be added to x.pos()
LineIterator x = LineIterator(frame, x_start, x_end, 4);
for (int i = 0; i < x.count; ++i, ++x) {
    LineIterator y = LineIterator(frame, x.pos(), x.pos() + y_direction, 4);
    for (int j = 0; j < y.count; ++j, ++y) { // was y_count, which is undefined
        Vec4b pixel = frame.at<Vec4b>(y.pos()); // pos() is a method, not a member
        /* YOUR CODE HERE */
    }
}
}