How to draw a line, not a line segment (OpenCV 2.4.2)

Well, the question says it all:
I know the function line(), which draws a line segment between two points.
I need to draw a full line, NOT a line segment, through the same two points.
[Edit: moved here from what was previously posted as an answer to the question]
I used your solution and it gives good results for horizontal lines, but I still have problems with vertical lines.
For example, below is an example using the points [306, 411] and [304, 8] (purple) and the drawn line (red), on a 600x600 pixel image. Do you have any tips?

I see this is a pretty old question. I had exactly the same problem and I used this simple code:
double Slope(int x0, int y0, int x1, int y1) {
    return (double)(y1 - y0) / (x1 - x0);
}

void fullLine(cv::Mat *img, cv::Point a, cv::Point b, cv::Scalar color) {
    double slope = Slope(a.x, a.y, b.x, b.y);

    // Extend the segment to x = 0 and x = img->cols. Note this breaks down
    // for (near-)vertical segments, where the slope blows up.
    cv::Point p(0, 0), q(img->cols, img->rows);
    p.y = -(a.x - p.x) * slope + a.y;
    q.y = -(b.x - q.x) * slope + b.y;

    cv::line(*img, p, q, color, 1, 8, 0);
}
First I calculate the slope of the line segment, and then I "extend" the segment out to the image borders: I compute new points on the line at x = 0 and x = image.width. The points themselves can lie outside the image, which is a somewhat nasty trick, but the solution is very simple.

You will need to write a function to do that yourself. I suggest you put your line in ax+by+c=0 form and then intersect it with the 4 edges of your image. Remember that if you have a line in the form [a b c], finding its intersection with another line is simply the cross product of the two. The edges of your image would be:
top_horizontal = [0 1 0];
left_vertical = [1 0 0];
bottom_horizontal = [0 1 -image.rows];
right_vertical = [1 0 -image.cols];
Also, if you know something about the distance between your points, you could just pick points very far along the line in each direction; I don't think the points handed to line() need to be inside the image.
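For illustration, here is a minimal sketch of that approach (the function and helper names are my own, and for brevity it only intersects with one pair of opposite borders, chosen by the line's orientation, rather than all four):

#include <opencv2/opencv.hpp>
#include <cmath>

// Cross the line [a b c] with a border line and convert the homogeneous
// result (x, y, w) back to a 2D point (assumes w != 0).
static cv::Point borderIntersection(const cv::Point3d& l, const cv::Point3d& border)
{
    cv::Point3d p = l.cross(border);
    return cv::Point(cvRound(p.x / p.z), cvRound(p.y / p.z));
}

void fullLineViaBorders(cv::Mat& img, cv::Point a, cv::Point b, cv::Scalar color)
{
    // Line through the two points: cross product of their homogeneous forms.
    cv::Point3d l = cv::Point3d(a.x, a.y, 1.0).cross(cv::Point3d(b.x, b.y, 1.0));

    // Border lines of the image, as listed above.
    cv::Point3d left(1, 0, 0), right(1, 0, -(double)img.cols);
    cv::Point3d top(0, 1, 0),  bottom(0, 1, -(double)img.rows);

    cv::Point p, q;
    if (std::abs(l.y) >= std::abs(l.x)) {   // mostly horizontal: hit the vertical borders
        p = borderIntersection(l, left);    // at x = 0
        q = borderIntersection(l, right);   // at x = img.cols
    } else {                                // mostly vertical: hit the horizontal borders
        p = borderIntersection(l, top);     // at y = 0
        q = borderIntersection(l, bottom);  // at y = img.rows
    }
    cv::line(img, p, q, color, 1, 8, 0);
}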

I had the same problem and found out that it is a known bug in OpenCV 2.4.X, already fixed in newer versions.
For the 2.4.X versions, the solution is to clip the line before plotting it, using cv::clipLine().
Here is a function I wrote for myself that works fine with OpenCV 2.4.13:
void Detector::drawFullImageLine(cv::Mat& img, const std::pair<cv::Point, cv::Point>& points, cv::Scalar color)
{
    // end points of the line segment
    cv::Point p1 = points.first;
    cv::Point p2 = points.second;

    // points extending the segment P1-P2 to the image borders
    cv::Point p, q;

    // test if the line is vertical, otherwise compute the line equation
    // y = ax + b
    if (p2.x == p1.x)
    {
        p = cv::Point(p1.x, 0);
        q = cv::Point(p1.x, img.rows);
    }
    else
    {
        double a = (double)(p2.y - p1.y) / (double)(p2.x - p1.x);
        double b = p1.y - a * p1.x;
        p = cv::Point(0, (int)b);
        q = cv::Point(img.cols, (int)(a * img.cols + b));

        // Clip the line to the image borders. It prevents a known bug in
        // OpenCV 2.4.X when drawing. Note cv::Size is (width, height),
        // i.e. (cols, rows).
        cv::clipLine(cv::Size(img.cols, img.rows), p, q);
    }

    cv::line(img, p, q, color, 2);
}

This answer is forked from pajus_cz's answer, but a little improved.
We have two points, and we need the line equation y = mx + b to be able to draw the straight line.
There are two values we need to get:
1. The slope m.
2. The intercept b, which can be retrieved from the line equation using either of the two given points once the slope is known: b = y - mx.
void drawStraightLine(cv::Mat *img, cv::Point2f p1, cv::Point2f p2, cv::Scalar color)
{
    cv::Point2f p, q;

    // Check if the line is vertical, because vertical lines don't have a slope
    if (p1.x != p2.x)
    {
        p.x = 0;
        q.x = img->cols;
        // Slope equation: (y1 - y2) / (x1 - x2)
        float m = (p1.y - p2.y) / (p1.x - p2.x);
        // Line equation: y = mx + b
        float b = p1.y - (m * p1.x);
        p.y = m * p.x + b;
        q.y = m * q.x + b;
    }
    else
    {
        p.x = q.x = p2.x;
        p.y = 0;
        q.y = img->rows;
    }

    cv::line(*img, p, q, color, 1);
}
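For example, here is a minimal usage sketch reproducing the near-vertical case from the question ([306, 411] and [304, 8] on a 600x600 image); the window name and colours are my own choices:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat canvas = cv::Mat::zeros(600, 600, CV_8UC3);

    // Near-vertical segment from the question; handled by the slope branch above.
    cv::Point2f p1(306, 411), p2(304, 8);
    cv::line(canvas, p1, p2, cv::Scalar(255, 0, 255), 2);       // original segment (purple)
    drawStraightLine(&canvas, p1, p2, cv::Scalar(0, 0, 255));   // extended line (red)

    cv::imshow("full line", canvas);
    cv::waitKey(0);
    return 0;
}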

Related

Ray tracing, translucent sphere has a dot in centre

I'm building a ray tracer as an assignment. I'm trying to get refraction working for spheres, and I have it half-working. The problem is that I can't get rid of the black dot in the centre of the sphere.
This is the code for the intersection:
double a = rayDirection.DotProduct(rayDirection);
double b = rayOrigin.VectAdd(sphereCenter.Negative()).VectMult(2).DotProduct(rayDirection);
double c = rayOrigin.VectAdd(sphereCenter.Negative()).DotProduct(rayOrigin.VectAdd(sphereCenter.Negative())) - (radius * radius);
double discriminant = b * b - 4 * a * c;

if (discriminant >= 0)
{
    // the ray intersects the sphere
    // the first root
    double root1 = ((-1 * b - sqrt(discriminant)) / 2.0 * a) - 0.000001;
    double root2 = ((-1 * b + sqrt(discriminant)) / 2.0 * a) - 0.000001;

    if (root1 > 0.00001)
    {
        // the first root is the smallest positive root
        return root1;
    }
    else
    {
        // the second root is the smallest positive root
        return root2;
    }
}
else
{
    // the ray missed the sphere
    return -1;
}
This is the code responsible for computing the direction of the new refracted ray:
double n1 = refractionRay.GetRefractiveIndex();
double n2 = sceneObjects.at(indexOfWinningObject)->GetMaterial().GetRefractiveIndex();

if (n1 == n2)
{
    // ray inside the same material, so it is going to be refracted outside
    n2 = 1.000293;
}

double n = n1 / n2;
Vect I = refractionRay.GetRayDirection();
Vect N = sceneObjects.at(indexOfWinningObject)->GetNormalAt(intersectionPosition);
double cosTheta1 = -N.DotProduct(I);

// we need the normal pointing towards the side the ray is coming from
if (cosTheta1 < 0)
{
    N = N.Negative();
    cosTheta1 = -N.DotProduct(I);
}

double cosTheta2 = sqrt(1 - (n * n) * (1 - (cosTheta1 * cosTheta1)));
Vect refractionDirection = I.VectMult(n).VectAdd(N.VectMult(n * cosTheta1 - cosTheta2));
Ray newRefractionRay(intersectionPosition.VectAdd(refractionDirection.VectMult(0.001)), refractionDirection, n2, refractionRay.GetRemainingIntersections());
When creating the new refraction ray, I tried adding the direction times a small value to the intersection position, so that the origin of this new ray lies inside the sphere. The size of the black dot changes if I change that small value; if I make it too big, the margins of the sphere start turning black as well.
If I add colour to the object, it looks like this:
And if I make that small constant bigger (0.1), this happens:
Is there a special condition I should take into account? Thank you!
You should remove the epsilon factors that you subtract when you calculate the two roots:
double root1 = ((-1 * b - sqrt(discriminant)) / 2.0 * a); // note: "/ 2.0 * a" equals "/ (2.0 * a)" only when a == 1,
double root2 = ((-1 * b + sqrt(discriminant)) / 2.0 * a); // i.e. when rayDirection is normalized
In my experience, the only place you need a comparison against epsilon is when checking whether the found root is along the path of the ray and not at its origin, as in your:
if (root1 > 0.00001)
NB: you could eke out a little more performance by doing the square root calculation only once, and by calculating root2 only if root1 <= epsilon.
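A sketch of that suggestion (the function name and the epsilon default are mine; the parentheses around 2.0 * a assume the standard quadratic formula rather than a normalized ray direction):

#include <cmath>

// One sqrt, and root2 is only computed when root1 is not a usable hit.
// Returns -1 when the ray misses the sphere.
double smallestPositiveRoot(double a, double b, double c, double eps = 0.00001)
{
    double discriminant = b * b - 4 * a * c;
    if (discriminant < 0)
        return -1;                               // the ray missed the sphere

    double sqrtDisc = std::sqrt(discriminant);   // computed once
    double root1 = (-b - sqrtDisc) / (2.0 * a);  // smaller root first
    if (root1 > eps)
        return root1;

    double root2 = (-b + sqrtDisc) / (2.0 * a);  // only if root1 was unusable
    return root2 > eps ? root2 : -1;
}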

How to get the end-point coordinates of a line segment in the Cartesian coordinate system and the corresponding (rho, theta) in the polar coordinate system

I have used the Canny method to get the edges of an image. Then I applied the approxPolyDP method to approximate the edges, and got a set of polylines (not polygons) and line segments. Each polyline is formed from line segments. My goal is to get the coordinates of each line segment's end points in the Cartesian coordinate system (2D plane) and the corresponding parameters (rho, theta) in polar coordinates. Any ideas? Thanks!
BTW: I know that we can use the HoughLines method to find lines (not line segments) and get the parameters (rho, theta) in polar coordinates, or we can use the HoughLinesP method to find line segments and the coordinates of their end points. But neither method gives both the end-point coordinates of a line segment and the corresponding (rho, theta) at the same time.
Here's some sample C++ code for interpreting a line from OpenCV HoughLines. If you want the slope, just find two points and compute rise over run.
The important formula, which unfortunately is not as obvious as it should be in the documentation, is: rho = x*cos(theta) + y*sin(theta)
float yForHoughLine(float x, const Vec2f houghLine) {
    float rho = houghLine[0];
    float theta = houghLine[1];
    if (theta == 0)
        return NAN;
    return (rho - (x * cos(theta))) / sin(theta);
}

float xForHoughLine(float y, const Vec2f houghLine) {
    float rho = houghLine[0];
    float theta = houghLine[1];
    float cosTheta = cos(theta);
    if (cosTheta == 0)
        return NAN;
    return (rho - (y * sin(theta))) / cosTheta;
}
Here's the equivalent Python:
import numpy as np

def y_for_line(x, r, theta):
    if theta == 0:
        return np.nan
    return (r - (x * np.cos(theta))) / np.sin(theta)

def x_for_line(y, r, theta):
    cos_theta = np.cos(theta)
    if cos_theta == 0:
        return np.nan
    return (r - (y * np.sin(theta))) / cos_theta
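As a usage sketch (back in C++, assuming the yForHoughLine/xForHoughLine helpers above and the usual OpenCV headers; the function name and the 0.5 threshold are my own), one (rho, theta) result from cv::HoughLines can be turned into two drawable end points like this:

void drawHoughLine(cv::Mat& img, const cv::Vec2f& houghLine, cv::Scalar color)
{
    float theta = houghLine[1];
    cv::Point p, q;
    if (std::fabs(std::sin(theta)) < 0.5f) {
        // nearly vertical line: sample it at y = 0 and y = img.rows
        p = cv::Point(cvRound(xForHoughLine(0.0f, houghLine)), 0);
        q = cv::Point(cvRound(xForHoughLine((float)img.rows, houghLine)), img.rows);
    } else {
        // otherwise sample it at x = 0 and x = img.cols
        p = cv::Point(0, cvRound(yForHoughLine(0.0f, houghLine)));
        q = cv::Point(img.cols, cvRound(yForHoughLine((float)img.cols, houghLine)));
    }
    cv::line(img, p, q, color, 1);
}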

How to find points at a specific angle on the contour (OpenCV)

I have the points of the contour and I want to get the (x, y) of the points that lie at specific angles. From the attached image, I want to get the yellow points.
I used the hint that #Aleksander and #Zaw gave me.
The centroid of the contour was used as one point for drawing the lines, and I used the line equation to get the second point of each line and draw it:
y - y1 = m (x - x1)
To get m we used the following equations:
Phi = arctan(y of the centroid / x of the centroid) - angle in radians
m = tan(Phi)
Then I used
y = m (x - x1) + y1
where x1 and y1 are the centroid coordinates, and x is the assumed x-coordinate of the second point needed to draw the line.
void MyDraw(Mat Draw, vector<Point> hull, Scalar color, int thickness) {
    std::vector<std::vector<cv::Point> > T(1, hull);
    drawContours(Draw, T, -1, color, thickness);
}

Point GetCentroidOfConvexHull(vector<Point> Hull) {
    // Get the moments
    Moments mu = moments(Hull, false);
    // Get the mass center
    return Point(mu.m10 / mu.m00, mu.m01 / mu.m00);
}

void _tmain(int argc, _TCHAR* argv[]) {
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    vector<Point> hull;
    Mat testContour = Mat::zeros(height, width, CV_8UC3);
    Mat testLine = Mat::zeros(height, width, CV_8UC3);
    Mat andResult;
    Point centroid;

    findContours(Dilated, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
    convexHull(contours[0], hull, false);
    centroid = GetCentroidOfConvexHull(hull);
    MyDraw(testContour, hull, White, 1);
    Rect myRect = boundingRect(hull);
    imshow("test contour", testContour);
    waitKey(0);

    float degrees[8] = {0, 45, 90, 135, 180, 225, 270, 315};
    Point point;
    // for each angle
    for (int d = 0; d < 8; d++) {
        // Get the line equation from the slope and the centroid of the contour
        float m = std::tan(std::atan2((double)centroid.y, (double)centroid.x) - (degrees[d] * CV_PI / 180));
        // use the upper-left or lower-right corner of the bounding box to get the x of the other point
        if ((d >= 0 && d <= 2) || d == 7) point.x = myRect.tl().x;
        else                              point.x = myRect.br().x;
        point.y = (int)(m * (float)(point.x - centroid.x) + centroid.y);
        line(testLine, centroid, point, White, 1, 8);
    }
    imshow("test Line", testLine);
    waitKey(0);

    // AND operation
    andResult = testContour & testLine;
    imshow("Anding Result", andResult);
    waitKey(0);
}
After drawing the contour and the lines, I did a bitwise AND operation to get the intersection between the lines and the contour.

Draw a perpendicular line to a line in OpenCV

I can explain my problem better with an image.
I have a contour and a line which passes through that contour.
At the intersection point of the contour and the line, I want to draw a line perpendicular to the original line, up to a particular distance.
I know the intersection point as well as the slope of the line.
For reference I am attaching this image.
If the blue line in your picture goes from point A to point B, and you want to draw the red line at point B, you can do the following:
Get the direction vector going from A to B. This would be:
v.x = B.x - A.x; v.y = B.y - A.y;
Normalize the vector:
mag = sqrt (v.x*v.x + v.y*v.y); v.x = v.x / mag; v.y = v.y / mag;
Rotate the vector 90 degrees by swapping x and y and negating one of them. A note about the rotation direction: in OpenCV, and in image processing in general, the x and y axes are not oriented the Euclidean way; in particular, the y axis points down rather than up. With the Euclidean orientation, negating the final x (the initial y) rotates counterclockwise, and negating y rotates clockwise. In OpenCV it is the opposite. So, for example, to get a clockwise rotation in OpenCV: temp = v.x; v.x = -v.y; v.y = temp;
Create a new line at B pointing in the direction of v:
C.x = B.x + v.x * length; C.y = B.y + v.y * length;
(Note that you can make it extend in both directions by creating a point D in the opposite direction by simply negating length.)
This is my version of the function:
import math

def getPerpCoord(aX, aY, bX, bY, length):
    vX = bX - aX
    vY = bY - aY
    #print(str(vX)+" "+str(vY))
    if vX == 0 and vY == 0:
        # degenerate case: A and B are the same point
        return 0, 0, 0, 0
    mag = math.sqrt(vX*vX + vY*vY)
    vX = vX / mag
    vY = vY / mag
    # rotate 90 degrees: (vX, vY) -> (-vY, vX)
    temp = vX
    vX = 0 - vY
    vY = temp
    cX = bX + vX * length
    cY = bY + vY * length
    dX = bX - vX * length
    dY = bY - vY * length
    return int(cX), int(cY), int(dX), int(dY)

OpenCV 2d line intersection helper function

I was looking for a helper function to calculate the intersection of two lines in OpenCV. I have searched the API Documentation, but couldn't find a useful resource.
Are there basic geometric helper functions for intersection/distance calculations on lines/line segments in OpenCV?
There is no function in the OpenCV API to calculate the intersection of two lines, but the distance between two points is simply:
cv::Point2f start, end;
double length = cv::norm(end - start);
If you need a piece of code to calculate line intersections then here it is:
// Finds the intersection of two lines, or returns false.
// The lines are defined by (o1, p1) and (o2, p2).
bool intersection(Point2f o1, Point2f p1, Point2f o2, Point2f p2,
                  Point2f &r)
{
    Point2f x = o2 - o1;
    Point2f d1 = p1 - o1;
    Point2f d2 = p2 - o2;

    float cross = d1.x*d2.y - d1.y*d2.x;
    if (abs(cross) < /*EPS*/1e-8)
        return false;

    double t1 = (x.x * d2.y - x.y * d2.x) / cross;
    r = o1 + d1 * t1;
    return true;
}
There's one cool trick in 2D geometry which I find very useful for calculating line intersections. To use this trick, we represent each 2D point and each 2D line in homogeneous 3D coordinates.
At first let's talk about 2D points:
Each 2D point (x, y) corresponds to a 3D line that passes through points (0, 0, 0) and (x, y, 1).
So (x, y, 1) and (α•x, α•y, α) and (β•x, β•y, β) correspond to the same point (x, y) in 2D space.
Here's the formula to convert a 2D point into homogeneous coordinates: (x, y) -> (x, y, 1)
Here's the formula to convert homogeneous coordinates into a 2D point: (x, y, ω) -> (x / ω, y / ω). If ω is zero, that means "point at infinity"; it doesn't correspond to any point in 2D space.
In OpenCV you may use convertPointsToHomogeneous() and convertPointsFromHomogeneous().
Now let's talk about 2D lines:
Each 2D line can be represented with three coordinates (a, b, c) which corresponds to 2D line equation: a•x + b•y + c = 0
So (a, b, c) and (ω•a, ω•b, ω•c) correspond to the same 2D line.
Also, (a, b, c) corresponds to (nx, ny, d) where (nx, ny) is unit length normal vector and d is distance from the line to (0, 0)
Also, (nx, ny, d) is (cos φ, sin φ, ρ) where (φ, ρ) are polar coordinates of the line.
There are two interesting formulas that link points and lines together:
Cross product of two distinct points in homogeneous coordinates gives homogeneous line coordinates: (α•x₁, α•y₁, α) ✕ (β•x₂, β•y₂, β) = (a, b, c)
Cross product of two distinct lines in homogeneous coordinates gives homogeneous coordinate of their intersection point: (a₁, b₁, c₁) ✕ (a₂, b₂, c₂) = (x, y, ω). If ω is zero that means lines are parallel (have no single intersection point in Euclidean geometry).
In OpenCV you may use either Mat::cross() or numpy.cross() to get the cross product.
If you're still here, you've got all you need to find a line given two points, and an intersection point given two lines.
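A small sketch of both formulas, using cv::Point3d and its cross() (the helper names are mine):

#include <opencv2/opencv.hpp>
#include <cmath>

// Line (a, b, c) through two 2D points: cross product of their homogeneous forms.
cv::Point3d lineThroughPoints(cv::Point2d p1, cv::Point2d p2)
{
    return cv::Point3d(p1.x, p1.y, 1.0).cross(cv::Point3d(p2.x, p2.y, 1.0));
}

// Intersection of two lines (a, b, c): another cross product, then divide by ω.
// Returns false when ω is (near) zero, i.e. the lines are parallel.
bool intersectLines(cv::Point3d l1, cv::Point3d l2, cv::Point2d& out)
{
    cv::Point3d p = l1.cross(l2);   // (x, y, ω)
    if (std::abs(p.z) < 1e-12)
        return false;
    out = cv::Point2d(p.x / p.z, p.y / p.z);
    return true;
}

For example, intersecting the line through (0, 0) and (1, 1) with the line through (0, 1) and (1, 0) yields (0.5, 0.5).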
An algorithm for finding the intersection of two line segments is described very well in the post How do you detect where two line segments intersect?
The following is my OpenCV C++ implementation. It uses the same notation as the above post.
double cross(Point v1, Point v2) {
    return v1.x*v2.y - v1.y*v2.x;
}

// Note: with integer cv::Point the result is rounded to integer coordinates;
// use Point2d throughout for sub-pixel accuracy.
bool getIntersectionPoint(Point a1, Point a2, Point b1, Point b2, Point& intPnt) {
    Point p = a1;
    Point q = b1;
    Point r(a2 - a1);
    Point s(b2 - b1);

    if (cross(r, s) == 0) { return false; }

    double t = cross(q - p, s) / cross(r, s);
    intPnt = p + t*r;
    return true;
}
Here is my implementation for EmguCV (C#).
static PointF GetIntersection(LineSegment2D line1, LineSegment2D line2)
{
    // y = a*x + b form; note this does not handle vertical lines (P1.X == P2.X).
    double a1 = (line1.P1.Y - line1.P2.Y) / (double)(line1.P1.X - line1.P2.X);
    double b1 = line1.P1.Y - a1 * line1.P1.X;
    double a2 = (line2.P1.Y - line2.P2.Y) / (double)(line2.P1.X - line2.P2.X);
    double b2 = line2.P1.Y - a2 * line2.P1.X;

    if (Math.Abs(a1 - a2) < double.Epsilon)
        throw new InvalidOperationException();

    double x = (b2 - b1) / (a1 - a2);
    double y = a1 * x + b1;
    return new PointF((float)x, (float)y);
}
Using homogeneous coordinates makes your life easier:
cv::Mat intersectionPoint(const cv::Mat& line1, const cv::Mat& line2)
{
    // Assume we receive lines as l = (a, b, c)^T
    assert(line1.rows == 3 && line1.cols == 1
        && line2.rows == 3 && line2.cols == 1);

    // Point is p = (x, y, w)^T
    cv::Mat point = line1.cross(line2);

    // Normalize so it is p' = (x', y', 1)^T
    if (point.at<double>(2, 0) != 0)
        point = point * (1.0 / point.at<double>(2, 0));

    return point;
}
Note that if the third coordinate is 0, the lines are parallel: there is no solution in R², only in P², and the resulting point represents a direction in 2D.
My implementation in Python (using numpy arrays), with line1 = [[x1, y1], [x2, y2]] and line2 = [[x1, y1], [x2, y2]]:
import sys
import numpy

def getIntersection(line1, line2):
    s1 = numpy.array(line1[0])
    e1 = numpy.array(line1[1])
    s2 = numpy.array(line2[0])
    e2 = numpy.array(line2[1])

    # slopes and intercepts of y = a*x + b (vertical lines would divide by zero)
    a1 = (s1[1] - e1[1]) / (s1[0] - e1[0])
    b1 = s1[1] - (a1 * s1[0])
    a2 = (s2[1] - e2[1]) / (s2[0] - e2[0])
    b2 = s2[1] - (a2 * s2[0])

    if abs(a1 - a2) < sys.float_info.epsilon:
        return False

    x = (b2 - b1) / (a1 - a2)
    y = a1 * x + b1
    return (x, y)
