Why are there blue arrows on the dots in the Manim ArrowVectorField? - manim

I don't know why this vector field draws blue arrows on the dots the field is built around. I assume it's because of the function generator, but I don't understand it well enough to see why it would generate them or what they mean in the vector field. The documentation on ArrowVectorField did not address this issue.
The picture shows the small blue arrows on the center dot and on the other three attractor states.
# function generator
# https://github.com/3b1b/videos/blob/436842137ee6b89cbb2aa10fa2d4c2e12361dac8/_2018/div_curl.py#L100
def get_force_field_func(*point_strength_pairs, **kwargs):
    radius = kwargs.get("radius", 0.5)

    def func(point):
        result = np.array(ORIGIN)
        for center, strength in point_strength_pairs:
            to_center = center - point
            norm = np.linalg.norm(to_center)
            if norm == 0:
                continue
            elif norm < radius:
                to_center /= radius**3
            elif norm >= radius:
                to_center /= norm**3
            to_center *= -strength
            result += to_center
        return result

    return func
class Test(Scene):
    def construct(self):
        progenitor = Dot()
        self.add(progenitor)
        attractor1 = Dot().move_to(RIGHT * 2 + UP * 3)
        attractor2 = Dot().move_to(UP * 2 + LEFT * 4)
        attractor3 = Dot().move_to(DOWN * 2 + RIGHT * 4)

        constrained_func = get_force_field_func(
            (progenitor.get_center(), 1),
            (attractor1.get_center(), -0.5),
            (attractor2.get_center(), -2),
            (attractor3.get_center(), -1)
        )
        constrained_field = ArrowVectorField(constrained_func)
        self.add(constrained_field)

Look at the return value of the function produced by your generator at those points, for example at the origin:
>>> constrained_func(np.array([0, 0, 0]))
array([-0.02338674, 0.05436261, 0. ])
This is a small vector pointing to the top left -- which is exactly what the small blue arrow at the origin is.
If you'd like to get rid of that, you could replace continue with return np.array([0., 0., 0.]) -- but that might not be in line with the concept modeled by the generator function.
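A minimal sketch of that change (same generator as above; only the norm == 0 branch differs):

def get_force_field_func(*point_strength_pairs, **kwargs):
    radius = kwargs.get("radius", 0.5)

    def func(point):
        result = np.array(ORIGIN)
        for center, strength in point_strength_pairs:
            to_center = center - point
            norm = np.linalg.norm(to_center)
            if norm == 0:
                # returning a zero vector here suppresses the arrow drawn
                # directly on top of each dot
                return np.array([0., 0., 0.])
            elif norm < radius:
                to_center /= radius**3
            else:
                to_center /= norm**3
            result += -strength * to_center
        return result

    return func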


Distance from a point on axes using manim

I am having trouble drawing a horizontal line on the axes. I made a point on the axes, but I can't figure out how to draw a line from that point to the x or y axis, and I also don't know how to measure the distance between two points. Any idea how to do this?
from manimlib.imports import *

class GraphX(GraphScene):
    CONFIG = {
        'x_min': -4,
        'x_max': 4,
        'y_min': -2,
        'y_max': 2,
        'x_axis_label': '$x$',
        'y_axis_label': '$y$',
        'graph_origin': 0.5 * DOWN + 0 * LEFT,
    }

    def show_function_graph(self):
        self.setup_axes(animate=True)
        text = TextMobject('(x,y)')
        text.shift(2 * UP + 3 * RIGHT)
        self.play(Write(text))
        self.wait(3)

    def construct(self):
        self.show_function_graph()
Graph coordinates and screen coordinates are not synced; you have to use coords_to_point and point_to_coords to convert between them.
Here's an example:
class GraphX(GraphScene):
    CONFIG = {
        'x_min': -4, 'x_max': 4,
        'y_min': -2, 'y_max': 2,
        'x_axis_label': '$x$', 'y_axis_label': '$y$',
        # 'graph_origin': 0.5 * DOWN + 0 * LEFT,
        'graph_origin': ORIGIN,
    }

    def draw_graph(self):
        self.setup_axes(animate=True)

    def construct(self):
        self.draw_graph()
        # dot = Dot(self.coords_to_point(x=1, y=1))
        dot = Dot(self.coords_to_point(1, 1))
        self.play(ShowCreation(dot))
        text = Text('(x,y)')
        text.shift(dot.arc_center + 0.5 * UR)
        self.play(ShowCreation(text))
        self.wait()
        # horizontal and vertical lines from the dot to the y and x axes
        hl = Line(self.coords_to_point(1, 1), self.coords_to_point(0, 1))
        vl = Line(self.coords_to_point(1, 1), self.coords_to_point(1, 0))
        self.play(ShowCreation(hl), ShowCreation(vl))
        self.wait(3)

YOLO object detection opencv drawing a lot of rectangles

I have collected images of S9 phones, added labels with labelImg, and trained for a few hours in Google Colab. The loss was minimal, so I thought it was enough. I only labeled the rectangles where the phone is displayed and nothing else. What I don't understand is that it draws a lot of rectangles on the phone. I only want one or two rectangles drawn on the phone itself. Did I do something wrong?
def detect_img(self, img):
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    input_img = self.net.setInput(blob)
    output = self.net.forward(self.output)
    height, width, channel = img.shape
    boxes = []
    trusts = []
    class_ids = []
    for out in output:
        for detect in out:
            total_scores = detect[5:]
            class_id = np.argmax(total_scores)
            trust_factor = total_scores[class_id]
            if trust_factor > 0.5:
                x_center = int(detect[0] * width)
                y_center = int(detect[1] * height)
                w = int(detect[2] * width)
                h = int(detect[3] * height)
                x = int(x_center - w / 2)
                y = int(x_center - h / 2)
                boxes.append([x, y, w, h])
                trusts.append(float(trust_factor))
                class_ids.append(class_id)
                cv2.rectangle(img, (x_center, y_center), (x + w, y + h), (0, 255, 0), 2)
When I set the trust_factor threshold to 0.8, a lot of the rectangles are gone, but there are still rectangles outside the phone, even though I only labeled the phone itself in labelImg and not the background.
You can use non-maximum suppression (NMS), which removes overlapping rectangles with lower scores. Here is code for NMS:
import numpy as np

def NMS(boxes, overlapThresh=0.4):
    # Return an empty list if no boxes are given
    if len(boxes) == 0:
        return []

    boxes = np.asarray(boxes)
    x1 = boxes[:, 0]  # x coordinate of the top-left corner
    y1 = boxes[:, 1]  # y coordinate of the top-left corner
    x2 = boxes[:, 2]  # x coordinate of the bottom-right corner
    y2 = boxes[:, 3]  # y coordinate of the bottom-right corner

    # Compute the area of the bounding boxes. We add 1 because the pixel
    # at the start as well as at the end counts.
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)

    # The indices of all boxes at start. We will remove redundant indices one by one.
    indices = np.arange(len(x1))
    for i, box in enumerate(boxes):
        # Create temporary indices excluding the current box
        temp_indices = indices[indices != i]
        # Find the coordinates of the intersection box
        xx1 = np.maximum(box[0], boxes[temp_indices, 0])
        yy1 = np.maximum(box[1], boxes[temp_indices, 1])
        xx2 = np.minimum(box[2], boxes[temp_indices, 2])
        yy2 = np.minimum(box[3], boxes[temp_indices, 3])
        # Find the width and the height of the intersection box
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        # Compute the ratio of overlap
        overlap = (w * h) / areas[temp_indices]
        # If the current bounding box overlaps more than the threshold
        # with any other box, remove its index
        if np.any(overlap > overlapThresh):
            indices = indices[indices != i]

    # Return only the boxes at the remaining indices
    return boxes[indices].astype(int)
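A rough usage sketch (hypothetical glue code; it assumes the boxes list and img from detect_img above, and converts the [x, y, w, h] boxes to corner format before filtering):

import cv2
import numpy as np

# convert [x, y, w, h] boxes from detect_img into (x1, y1, x2, y2) corners
corner_boxes = np.array([[x, y, x + w, y + h] for x, y, w, h in boxes])

# draw only the boxes that survive non-maximum suppression
for x1, y1, x2, y2 in NMS(corner_boxes, overlapThresh=0.4):
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)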

How to find distance between Hough lines in OpenCV?

I am new to opencv-python. I have found the lines in the image through HoughLinesP. The lines drawn by the Hough transform are discontinuous and give multiple lines per edge. I need to draw only one line for each edge and find the distance between the lines that are found.
The output image is shown below
"""
Created on Fri Nov 8 11:41:16 2019
#author: romanth.chowan
"""
import cv2
import numpy as np
import math
def getSlopeOfLine(line):
xDis = line[0][2] - line[0][0]
if (xDis == 0):
return None
return (line[0][3] - line[0][1]) / xDis
if __name__ == '__main__':
inputFileName_ =r"C:\Users\romanth.chowan\Desktop\opencv\stent spec\2prox.jpeg"
img = cv2.imread(inputFileName_)
img1=cv2.GaussianBlur(img,(5,5),0)
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
edges = cv2.Laplacian(gray,cv2.CV_8UC1) # Laplacian Edge Detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 300, 10, 10)
print(len(lines))
parallelLines = []
for a in lines:
for b in lines:
if a is not b:
slopeA = getSlopeOfLine(b)
slopeB = getSlopeOfLine(b)
if slopeA is not None and slopeB is not None:
if 0 <= abs(slopeA - slopeB) <= 10:
parallelLines.append({'lineA': a, 'lineB': b})
for pairs in parallelLines:
lineA = pairs['lineA']
lineB = pairs['lineB']
leftx, boty, rightx, topy = lineA[0]
cv2.line(img, (leftx, boty), (rightx, topy), (0, 0, 255), 2)
left_x, bot_y, right_x, top_y = lineB[0]
cv2.line(img, (left_x, bot_y), (right_x, top_y), (0, 0, 255), 2)
cv2.imwrite('linesImg.jpg', img)
output image after drawing lines:
It's mostly a geometric task, not specific to OpenCV.
For each line you have two points (x1,y1) and (x2,y2), which are already used in your getSlopeOfLine(line) method.
You can denote each line in the form:
ax + by + c = 0
To do that, use the line's two known points:
(y1 - y2)x + (x2 - x1)y + (x1*y2 - x2*y1) = 0
Note that parallel lines have the same a and b but different c.
And then measure the distance between any two of them (the distance between non-parallel lines is zero, since they have a crossing point):
d = abs(c2 - c1) / sqrt(a*a + b*b)
In Euclidean geometry a line may be denoted in several ways, and one may suit a specific task better than another.
Currently you evaluate the line's slope; from the formula above we get:
y = (-a / b)x - c / b
which is the same as (b has another meaning now)
y = kx + b
Or, using the line's two points:
y = (y1 - y2) / (x1 - x2) * x + (x2*y1 - x1*y2) / (x2 - x1)
where k is the line's slope (tan(alpha)) and b is the shift.
Now you just match parallel lines (ones with close k). You can take the line's shift into account to merge several parallel lines into one.
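As a minimal sketch of those formulas (assuming segments in the [[x1, y1, x2, y2]] format returned by cv2.HoughLinesP, as in the question's code):

import numpy as np

def line_coefficients(line):
    # (a, b, c) for ax + by + c = 0 through the segment's two endpoints,
    # normalized so that a*a + b*b == 1
    x1, y1, x2, y2 = line[0]
    a = y1 - y2
    b = x2 - x1
    c = x1 * y2 - x2 * y1
    n = np.hypot(a, b)
    return a / n, b / n, c / n

def distance_between_parallel_lines(lineA, lineB):
    a1, b1, c1 = line_coefficients(lineA)
    a2, b2, c2 = line_coefficients(lineB)
    # flip one line's sign if its normal points the opposite way
    if a1 * a2 + b1 * b2 < 0:
        a2, b2, c2 = -a2, -b2, -c2
    # valid for (nearly) parallel lines: d = |c2 - c1| / sqrt(a^2 + b^2),
    # and the coefficients are already normalized
    return abs(c2 - c1)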

Choosing Lines From Hough Lines

I'm using Hough lines to do corner detection on this image. I plan to find the intersections of the lines as the corners.
This is the image.
Unfortunately, Hough returns lots of lines for each line I expect.
How do I tune the Hough lines so that there are only four lines, each corresponding to an actual line on the image?
OpenCV's Hough transform really could use some better non-maximum suppression. Without that, you get this phenomenon of duplicate lines. Unfortunately I know of no easy way to tune that, besides reimplementing your own Hough transform. (Which is a valid option; the Hough transform is fairly simple.)
Fortunately it is easy to fix in post-processing:
For the non-probabilistic Hough transform, OpenCV returns the lines in order of their confidence, with the strongest line first. So simply take the first four lines that differ strongly in either rho or theta.
so, add the first line found by HoughLines into a new List: strong_lines
for each line found by HoughLines:
test whether its rho and theta are close to any strong_line (e.g. rho is within 50 pixels and theta is within 10° of the other line)
if not, put it into the list of strong_lines
if you have found 4 strong_lines, break
I implemented the approach described by HugoRune and thought I would share my code as an example of how I implemented this. I used a tolerance of 5 degrees and 10 pixels.
strong_lines = np.zeros([4, 1, 2])

minLineLength = 2
maxLineGap = 10
lines = cv2.HoughLines(edged, 1, np.pi/180, 10, minLineLength, maxLineGap)

n2 = 0
for n1 in range(0, len(lines)):
    for rho, theta in lines[n1]:
        if n1 == 0:
            strong_lines[n2] = lines[n1]
            n2 = n2 + 1
        else:
            if rho < 0:
                rho *= -1
                theta -= np.pi
            closeness_rho = np.isclose(rho, strong_lines[0:n2, 0, 0], atol=10)
            closeness_theta = np.isclose(theta, strong_lines[0:n2, 0, 1], atol=np.pi/36)
            closeness = np.all([closeness_rho, closeness_theta], axis=0)
            if not any(closeness) and n2 < 4:
                strong_lines[n2] = lines[n1]
                n2 = n2 + 1
EDIT: The code was updated to reflect the comment regarding a negative rho value
Collect the intersections of all lines:
for (int i = 0; i < lines.size(); i++)
{
    for (int j = i + 1; j < lines.size(); j++)
    {
        cv::Point2f pt = computeIntersectionOfTwoLine(lines[i], lines[j]);
        if (pt.x >= 0 && pt.y >= 0 && pt.x < image.cols && pt.y < image.rows)
        {
            corners.push_back(pt);
        }
    }
}
You can google the algorithm to find the intersection of two lines.
Once you collect all the intersection points, you can easily determine the min and max, which give you the top-left and bottom-right points. From these two points you can easily get the rectangle.
See "Sorting 2d point array to find out four corners" and http://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/ for reference.
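As a rough sketch of that intersection computation (in Python, for lines in the (rho, theta) form returned by cv2.HoughLines; the helper name is illustrative, not the computeIntersectionOfTwoLine used above):

import numpy as np

def intersection(line1, line2):
    # each line satisfies x*cos(theta) + y*sin(theta) = rho
    rho1, theta1 = line1
    rho2, theta2 = line2
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    # a (near-)singular matrix means the lines are parallel and do not intersect
    if abs(np.linalg.det(A)) < 1e-10:
        return None
    x, y = np.linalg.solve(A, b)
    return x, y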
Here is a complete solution written in Python 2.7.x using OpenCV 2.4.
It is based on ideas from this thread.
Method: Detect all lines. Assume that the Hough function returns highest ranked lines first. Filter the lines to keep those that are separated by some minimum distance and/or angle.
Image of all Hough lines:
https://i.ibb.co/t3JFncJ/all-lines.jpg
Filtered lines:
https://i.ibb.co/yQLNxXT/filtered-lines.jpg
Code:
http://codepad.org/J57oVIzs
"""
Detect the best 4 lines for a rounded rectangle.
"""
import numpy as np
import cv2
input_image = cv2.imread("image.jpg")
def drawLines(img, lines):
"""
Draw lines on an image
"""
for line in lines:
for rho,theta in line:
a = np.cos(theta)
b = np.sin(theta)
x0 = a*rho
y0 = b*rho
x1 = int(x0 + 1000*(-b))
y1 = int(y0 + 1000*(a))
x2 = int(x0 - 1000*(-b))
y2 = int(y0 - 1000*(a))
cv2.line(img, (x1,y1), (x2,y2), (0,0,255), 1)
input_image_grey = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
edged = input_image_grey
rho = 1 # 1 pixel
theta = 1.0*0.017 # 1 degree
threshold = 100
lines = cv2.HoughLines(edged, rho, theta, threshold)
# Fix negative angles
num_lines = lines.shape[1]
for i in range(0, num_lines):
line = lines[0,i,:]
rho = line[0]
theta = line[1]
if rho < 0:
rho *= -1.0
theta -= np.pi
line[0] = rho
line[1] = theta
# Draw all Hough lines in red
img_with_all_lines = np.copy(input_image)
drawLines(img_with_all_lines, lines)
cv2.imshow("Hough lines", img_with_all_lines)
cv2.waitKey()
cv2.imwrite("all_lines.jpg", img_with_all_lines)
# Find 4 lines with unique rho & theta:
num_lines_to_find = 4
filtered_lines = np.zeros([1, num_lines_to_find, 2])
if lines.shape[1] < num_lines_to_find:
print("ERROR: Not enough lines detected!")
# Save the first line
filtered_lines[0,0,:] = lines[0,0,:]
print("Line 1: rho = %.1f theta = %.3f" % (filtered_lines[0,0,0], filtered_lines[0,0,1]))
idx = 1 # Index to store the next unique line
# Initialize all rows the same
for i in range(1,num_lines_to_find):
filtered_lines[0,i,:] = filtered_lines[0,0,:]
# Filter the lines
num_lines = lines.shape[1]
for i in range(0, num_lines):
line = lines[0,i,:]
rho = line[0]
theta = line[1]
# For this line, check which of the existing 4 it is similar to.
closeness_rho = np.isclose(rho, filtered_lines[0,:,0], atol = 10.0) # 10 pixels
closeness_theta = np.isclose(theta, filtered_lines[0,:,1], atol = np.pi/36.0) # 10 degrees
similar_rho = np.any(closeness_rho)
similar_theta = np.any(closeness_theta)
similar = (similar_rho and similar_theta)
if not similar:
print("Found a unique line: %d rho = %.1f theta = %.3f" % (i, rho, theta))
filtered_lines[0,idx,:] = lines[0,i,:]
idx += 1
if idx >= num_lines_to_find:
print("Found %d unique lines!" % (num_lines_to_find))
break
# Draw filtered lines
img_with_filtered_lines = np.copy(input_image)
drawLines(img_with_filtered_lines, filtered_lines)
cv2.imshow("Filtered lines", img_with_filtered_lines)
cv2.waitKey()
cv2.imwrite("filtered_lines.jpg", img_with_filtered_lines)
The above approach (proposed by HugoRune and implemented by Onamission21) is correct but has a little bug. cv2.HoughLines may return negative rho and theta up to pi. Notice, for example, that the line (r0, 0) is very close to the line (-r0, pi - epsilon), but they would not be matched by the above closeness test.
I simply treated negative rhos by applying rho *= -1, theta -= pi before the closeness calculations.
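A small sketch of that normalization (the helper name is illustrative):

import numpy as np

def normalize_line(rho, theta):
    # map (negative rho, theta) onto the equivalent (positive rho, theta - pi)
    # representation so the closeness test compares like with like
    if rho < 0:
        return -rho, theta - np.pi
    return rho, theta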

Binary Image Orientation

I'm trying to find the orientation of a binary image (where orientation is defined to be the axis of least moment of inertia, i.e. least second moment of area). I'm using Dr. Horn's book (MIT) on Robot Vision, which can be found here, as a reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the pdf above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10 / m.m00; // Centers are right
    double cen_y = m.m01 / m.m00;

    double a = m.m20 - m.m00 * cen_x * cen_x;
    double b = 2 * m.m11 - m.m00 * (cen_x * cen_x + cen_y * cen_y);
    double c = m.m02 - m.m00 * cen_y * cen_y;

    double theta = a == c ? 0 : atan2(b, a - c) / 2.0;

    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments around the origin (0,0) so I have to use the Parallel Axis Theorem to move the axis to the center of the shape, mr^2.
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with lowest moment of inertia, using this function, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and coded the following. It returns the exact orientation of the object. largest_contour is the shape that was detected.
CvMoments moments1, cenmoments1;
double M00, M01, M10;

cvMoments(largest_contour, &moments1);
M00 = cvGetSpatialMoment(&moments1, 0, 0);
M10 = cvGetSpatialMoment(&moments1, 1, 0);
M01 = cvGetSpatialMoment(&moments1, 0, 1);
posX_Yellow = (int)(M10 / M00);
posY_Yellow = (int)(M01 / M00);

double theta = 0.5 * atan(
    (2 * cvGetCentralMoment(&moments1, 1, 1)) /
    (cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;

// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit
                                 // if we have > 6 points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}
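For comparison, a rough modern-OpenCV sketch in Python (illustrative, not the original code). cv2.moments already provides the central moments mu11, mu20, mu02, so no parallel-axis correction is needed:

import cv2
import numpy as np

def center_and_orientation(binary_img):
    # centroid and orientation (radians) of a binary image from image moments
    m = cv2.moments(binary_img, binaryImage=True)
    cx = m["m10"] / m["m00"]
    cy = m["m01"] / m["m00"]
    # standard orientation formula: theta = 0.5 * atan2(2*mu11, mu20 - mu02)
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return cx, cy, theta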
