How can I color this Koch-Star in with the turtle module?

How can I change the colour of my turtle and of the Koch star it draws? I don't want it to be plain black; ideally it would have a blue outline and be filled in with light blue.
def koch_segment(trtl, length, currentdepth):
    """Draw a single Koch segment.

    trtl : turtle object
        The turtle window object to be drawn to
    length : float
        The length of the line the Koch segment is drawn along
    currentdepth : integer
        The current iteration depth of Koch - 1 is a straight line of length `length`
    """
    if currentdepth == depth:  # `depth` is the global target iteration depth
        trtl.forward(length)
    else:
        currentlength = length / 3.0
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.left(60)
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.right(120)
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.left(60)
        koch_segment(trtl, currentlength, currentdepth + 1)
import math
import turtle

def setup_turtle(depth):
    wn = turtle.Screen()
    wx = wn.window_width() * .5
    wh = wn.window_height() * .5
    base_lgth = 2.0 / math.sqrt(3.0) * wh  # base length depends on the screen size
    myturtle = turtle.Turtle()
    myturtle.speed(0.5 * (depth + 1))  # value between 1 and 10 (fast)
    myturtle.penup()
    myturtle.setposition(-wx / 2, -wh / 2)  # start at the middle of the lower left quadrant
    myturtle.pendown()
    myturtle.left(60)
    return myturtle, base_lgth
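One way to get a blue outline with a light-blue fill is turtle's color()/begin_fill()/end_fill(). This is only a sketch built on the posted koch_segment and setup_turtle: the koch_star helper, the depth value, and the starting currentdepth of 1 are my assumptions, not part of the original post.

depth = 3  # assumed global iteration depth that koch_segment compares against

def koch_star(trtl, length):
    trtl.color("blue", "lightblue")   # pen colour (outline), fill colour
    trtl.begin_fill()
    for _ in range(3):                # three Koch segments close the star
        koch_segment(trtl, length, 1)
        trtl.right(120)
    trtl.end_fill()

myturtle, base_lgth = setup_turtle(depth)
koch_star(myturtle, base_lgth)
turtle.done()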

Related

Why are there blue arrows on the dots in the Manim ArrowVectorField?

I don't know why this vector field draws blue arrows on the dots the field is built around. I assume it's because of the function generator, but I don't understand it well enough to see why it would generate them or what they mean in the vector field. The documentation on ArrowVectorField did not address this issue.
The picture shows the small blue arrows on the center dot and on the other three attractor states.
import numpy as np
from manim import *

# function generator
# https://github.com/3b1b/videos/blob/436842137ee6b89cbb2aa10fa2d4c2e12361dac8/_2018/div_curl.py#L100
def get_force_field_func(*point_strength_pairs, **kwargs):
    radius = kwargs.get("radius", 0.5)

    def func(point):
        result = np.array(ORIGIN)
        for center, strength in point_strength_pairs:
            to_center = center - point
            norm = np.linalg.norm(to_center)
            if norm == 0:
                continue
            elif norm < radius:
                to_center /= radius**3
            elif norm >= radius:
                to_center /= norm**3
            to_center *= -strength
            result += to_center
        return result

    return func
class Test(Scene):
    def construct(self):
        progenitor = Dot()
        self.add(progenitor)
        attractor1 = Dot().move_to(RIGHT * 2 + UP * 3)
        attractor2 = Dot().move_to(UP * 2 + LEFT * 4)
        attractor3 = Dot().move_to(DOWN * 2 + RIGHT * 4)
        constrained_func = get_force_field_func(
            (progenitor.get_center(), 1),
            (attractor1.get_center(), -0.5),
            (attractor2.get_center(), -2),
            (attractor3.get_center(), -1)
        )
        constrained_field = ArrowVectorField(constrained_func)
        self.add(constrained_field)
Look at the return value of the function your generator builds at these points, for example at the origin:
>>> constrained_func(np.array([0, 0, 0]))
array([-0.02338674, 0.05436261, 0. ])
This is a small vector pointing to the top left -- which is exactly what the small blue arrow in the origin is.
If you'd like to get rid of that, you could replace continue with return np.array([0., 0., 0.]) -- but that might not be in line with the concept modeled by the generator function.
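For reference, a sketch of that change inside func (it returns a zero vector exactly at a source point, so no arrow is drawn there, but it changes the field the generator models at those points):

        if norm == 0:
            # zero field exactly at a source point: no arrow is drawn there
            return np.array([0., 0., 0.])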

How to extract area of interest in the image while the boundary is not obvious

Are there ways to extract just the area of interest (the square light part in the red circle in the original image)? That would mean getting the coordinates of the edge and then masking the image outside the boundary. I don't know how to do that. Could anyone help? Thanks!
import numpy as np
import matplotlib.pyplot as plt

# define horizontal and vertical Sobel kernels
Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
Gy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

# define kernel convolution function
# with image X and filter F
def convolve(X, F):
    # height and width of the image
    X_height = X.shape[0]
    X_width = X.shape[1]
    # height and width of the filter
    F_height = F.shape[0]
    F_width = F.shape[1]
    H = (F_height - 1) // 2
    W = (F_width - 1) // 2
    # output numpy matrix with height and width
    out = np.zeros((X_height, X_width))
    # iterate over all the pixels of image X
    for i in np.arange(H, X_height - H):
        for j in np.arange(W, X_width - W):
            total = 0
            # iterate over the filter
            for k in np.arange(-H, H + 1):
                for l in np.arange(-W, W + 1):
                    # get the corresponding values from image and filter
                    a = X[i + k, j + l]
                    w = F[H + k, W + l]
                    total += (w * a)
            out[i, j] = total
    # return convolution
    return out

# `image` is assumed to be a 2-D grayscale numpy array loaded earlier
# normalizing the vectors
sob_x = convolve(image, Gx) / 8.0
sob_y = convolve(image, Gy) / 8.0
# calculate the gradient magnitude of vectors
sob_out = np.sqrt(np.power(sob_x, 2) + np.power(sob_y, 2))
# mapping values from 0 to 255
sob_out = (sob_out / np.max(sob_out)) * 255
plt.imshow(sob_out, cmap='gray', interpolation='bicubic')
plt.show()
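The posted code stops at the edge map. As a possible next step (my sketch, not from the original post; the threshold value 50 is a guess, and findContours uses the OpenCV 4.x return signature), you could threshold the gradient magnitude, keep the largest contour, and mask everything outside it:

import cv2

edges = sob_out.astype(np.uint8)
_, binary = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)  # guessed threshold
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)  # assume the ROI yields the biggest contour
mask = np.zeros_like(edges)
cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
# `image` is assumed to be a uint8 grayscale array; everything outside the contour goes black
roi = cv2.bitwise_and(image, image, mask=mask)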

YOLO object detection opencv drawing a lot of rectangles

I have collected images of S9 phones, added labels with labelImg, and trained for a few hours in Google Colab. I had minimal loss, so I thought it was enough. I only selected the rectangles where the phone is displayed and nothing else. What I don't understand is why it draws a lot of rectangles on the phone. I only want 1 or 2 rectangles drawn on the phone itself. Did I do something wrong?
def detect_img(self, img):
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    self.net.setInput(blob)
    output = self.net.forward(self.output)
    height, width, channel = img.shape
    boxes = []
    trusts = []
    class_ids = []
    for out in output:
        for detect in out:
            total_scores = detect[5:]
            class_id = np.argmax(total_scores)
            trust_factor = total_scores[class_id]
            if trust_factor > 0.5:
                x_center = int(detect[0] * width)
                y_center = int(detect[1] * height)
                w = int(detect[2] * width)
                h = int(detect[3] * height)
                x = int(x_center - w / 2)
                y = int(y_center - h / 2)
                boxes.append([x, y, w, h])
                trusts.append(float(trust_factor))
                class_ids.append(class_id)
                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
When I set the trust_factor to 0.8, a lot of the rectangles are gone, but there are still rectangles outside the phone, even though I only selected the phone itself in labelImg and not the background.
You can use non-maximum suppression (NMS) to remove overlapping rectangles with lower scores. Here is code for NMS (it expects boxes in [x1, y1, x2, y2] corner format):
import numpy as np

def NMS(boxes, overlapThresh=0.4):
    # Return an empty list if no boxes are given
    if len(boxes) == 0:
        return []
    boxes = np.asarray(boxes)
    x1 = boxes[:, 0]  # x coordinate of the top-left corner
    y1 = boxes[:, 1]  # y coordinate of the top-left corner
    x2 = boxes[:, 2]  # x coordinate of the bottom-right corner
    y2 = boxes[:, 3]  # y coordinate of the bottom-right corner
    # Compute the area of the bounding boxes. We add 1 because the pixel
    # at the start as well as at the end counts.
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    # The indices of all boxes at start. We will remove redundant indices one by one.
    indices = np.arange(len(x1))
    for i, box in enumerate(boxes):
        # Create temporary indices
        temp_indices = indices[indices != i]
        # Find the coordinates of the intersection box
        xx1 = np.maximum(box[0], boxes[temp_indices, 0])
        yy1 = np.maximum(box[1], boxes[temp_indices, 1])
        xx2 = np.minimum(box[2], boxes[temp_indices, 2])
        yy2 = np.minimum(box[3], boxes[temp_indices, 3])
        # Find the width and the height of the intersection box
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        # Compute the ratio of overlap
        overlap = (w * h) / areas[temp_indices]
        # If the current bounding box overlaps any other box by more than
        # the threshold, remove its index
        if np.any(overlap > overlapThresh):
            indices = indices[indices != i]
    # Return only the boxes at the remaining indices
    return boxes[indices].astype(int)

Pi live video color detection

I'm planning to create an ambilight effect behind my TV. I want to achieve this with a camera pointed at my TV; I think the easiest way is a simple IP camera. I need color detection to detect the colors on the screen and translate them to RGB values for the LED strip.
I have a Raspberry Pi as a hub in the middle of my house. I was thinking about using it like this: an IP camera pointed at my screen; the Pi processes the video, translates it to RGB values, and sends them to an MQTT server; behind my TV, a NodeMCU receives the colors.
How can I detect colors on a live stream (at multiple points) on my Pi?
If you can create any background colour, the best approach might be calculating the k-means or median colour to get the "most popular" colour. If the ambient light should differ from place to place, then using an ROI at each image edge you can check which colour is dominant in that area (by comparing the number of samples of each colour).
If you have only a limited set of colours (e.g. only R, G and B), then you can simply check which channel has the highest intensity in the desired region.
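A minimal sketch of the k-means idea (my illustration; roi is assumed to be a BGR uint8 patch): cv2.kmeans clusters the pixels, and the centre of the largest cluster is taken as the dominant colour.

import cv2
import numpy as np

def dominant_color(roi, k=3):
    samples = roi.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    counts = np.bincount(labels.flatten())          # cluster sizes
    return centers[np.argmax(counts)].astype(int)   # centre of the largest cluster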
I wrote the code with an assumption that you can create any RGB ambient color.
As a test image I use this one:
The code is:
import cv2
import numpy as np

# Read an input image (in your case this will be an image from the camera)
img = cv2.imread('saul2.png', cv2.IMREAD_COLOR)

# The block_size defines how big the patches around the image are:
# the more LEDs you have and the more segments you want, the lower block_size can be
block_size = 60

# Get dimensions of the image
height, width, chan = img.shape

# Calculate the number of patches along height and width
# (integer division, so range() below gets an int)
h_steps = height // block_size
w_steps = width // block_size

# In one loop I calculate both left and right ambient (or top and bottom)
ambient_patch1 = np.zeros((block_size, block_size, 3))
ambient_patch2 = np.zeros((block_size, block_size, 3))

# Create the output image (just for visualization:
# the input image in the middle, a 10px black border, and the ambient colors)
output = cv2.copyMakeBorder(img, 70, 70, 70, 70, cv2.BORDER_CONSTANT, value=0)

for i in range(h_steps):
    # Get the left and right regions of the image
    left_roi = img[i * 60 : (i + 1) * 60, 0 : 60]
    right_roi = img[i * 60 : (i + 1) * 60, -61 : -1]
    left_med = np.median(left_roi, (0, 1))    # the actual RGB color for the given block (on the left)
    right_med = np.median(right_roi, (0, 1))  # and on the right
    # Create a patch of the ambient color - this is just for visualization
    ambient_patch1[:, :] = left_med
    ambient_patch2[:, :] = right_med
    # Put it in the output image (the additional 70 is because the input image is shifted by 70px)
    output[70 + i * 60 : 70 + (i + 1) * 60, 0 : 60] = ambient_patch1
    output[70 + i * 60 : 70 + (i + 1) * 60, -61 : -1] = ambient_patch2

for i in range(w_steps):
    # Get the top and bottom regions of the image
    top_roi = img[0 : 60, i * 60 : (i + 1) * 60]
    bottom_roi = img[-61 : -1, i * 60 : (i + 1) * 60]
    top_med = np.median(top_roi, (0, 1))        # the actual RGB color for the given block (on top)
    bottom_med = np.median(bottom_roi, (0, 1))  # and on the bottom
    # Create a patch of the ambient color - this is just for visualization
    ambient_patch1[:, :] = top_med
    ambient_patch2[:, :] = bottom_med
    # Put it in the output image (the additional 70 is because the input image is shifted by 70px)
    output[0 : 60, 70 + i * 60 : 70 + (i + 1) * 60] = ambient_patch1
    output[-61 : -1, 70 + i * 60 : 70 + (i + 1) * 60] = ambient_patch2

# Save the output image
cv2.imwrite('saul_output.png', output)
And this gives a result as follows:
I hope this helps!
EDIT:
And two more examples:

Color over grayscale image

I want to color a gray-scale image with only one color. For example, I have the pixel RGB(34,34,34) and I want to color it with RGB(200,100,50) to get a new RGB pixel; I need to do this for every pixel in the image.
White pixels get the color RGB(200,100,50); darker pixels get a correspondingly darker shade of it.
So the result is a gray-scale-like image in black and the selected color instead of black and white.
I want to program this from scratch, without any built-in functions.
Similar to these two example images.
All you need to do is use the ratio of gray to white as a multiplier for your color. I think you'll find that this gives better results than a blend.
new_red = gray * target_red / 255
new_green = gray * target_green / 255
new_blue = gray * target_blue / 255
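Applied per pixel over a whole image, the multiply approach looks like this (a sketch, assuming gray is a 2-D uint8 array and target_rgb is the tint colour):

import numpy as np

def tint(gray, target_rgb):
    ratio = gray.astype(np.float32) / 255.0    # gray-to-white ratio per pixel
    out = np.empty(gray.shape + (3,), dtype=np.uint8)
    for c, target in enumerate(target_rgb):    # scale each channel by the ratio
        out[..., c] = (ratio * target).astype(np.uint8)
    return out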
From what you describe, it sounds like you are looking for a blending algorithm.
What you need is a blendingPercentage (bP).
new red = red1 * bP + red2 * (1 - bP)
new green = green1 * bP + green2 * (1 - bP)
new blue = blue1 * bP + blue2 * (1 - bP)
Your base color is RGB 34 34 34; the color to blend in is RGB 200 100 50.
With a blending percentage of, for example, 50% -> 0.5, you get:
New red = 34 * 0.5 + 200 * (1 - 0.5) = 117
New green = 34 * 0.5 + 100 * (1 - 0.5) = 67
New blue = 34 * 0.5 + 50 * (1 - 0.5) = 42
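The same blend expressed as a small Python helper (my sketch) reproduces those numbers:

def blend(color1, color2, bP=0.5):
    # per-channel weighted average of the two colors
    return tuple(round(c1 * bP + c2 * (1 - bP)) for c1, c2 in zip(color1, color2))

print(blend((34, 34, 34), (200, 100, 50)))  # (117, 67, 42)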
