LOVE2D, inconsistent movement speed of a follower object - Lua

I've got a problem in my code (LÖVE2D framework).
I'm making a small follower object that follows coordinates in real time.
The problem is that the movement speed changes depending on the direction: moving straight right is slightly faster than moving up-left.
Can somebody help me? Mark my mistake, please.
The code:
function love.load()
    player = love.graphics.newImage("player.png")
    local f = love.graphics.newFont(20)
    love.graphics.setFont(f)
    love.graphics.setBackgroundColor(75, 75, 75)
    x = 100
    y = 100
    x2 = 600
    y2 = 600
    speed = 300
    speed2 = 100
end

function love.draw()
    love.graphics.draw(player, x, y)
    love.graphics.draw(player, x2, y2)
end

function love.update(dt)
    print(x, y, x2, y2)
    if love.keyboard.isDown("right") then
        x = x + (speed * dt)
    end
    if love.keyboard.isDown("left") then
        x = x - (speed * dt)
    end
    if love.keyboard.isDown("down") then
        y = y + (speed * dt)
    end
    if love.keyboard.isDown("up") then
        y = y - (speed * dt)
    end
    if x < x2 and y < y2 then
        x2 = x2 - (speed2 * dt)
        y2 = y2 - (speed2 * dt)
    end
    if x > x2 and y < y2 then
        x2 = x2 + (speed2 * dt)
        y2 = y2 - (speed2 * dt)
    end
    if x > x2 and y > y2 then
        x2 = x2 + (speed2 * dt)
        y2 = y2 + (speed2 * dt)
    end
    if x < x2 and y > y2 then
        x2 = x2 - (speed2 * dt)
        y2 = y2 + (speed2 * dt)
    end
end

This block

if x < x2 and y < y2 then
    x2 = x2 - (speed2 * dt)
    y2 = y2 - (speed2 * dt)
end
if x > x2 and y < y2 then
    x2 = x2 + (speed2 * dt)
    y2 = y2 - (speed2 * dt)
end
if x > x2 and y > y2 then
    x2 = x2 + (speed2 * dt)
    y2 = y2 + (speed2 * dt)
end
if x < x2 and y > y2 then
    x2 = x2 - (speed2 * dt)
    y2 = y2 + (speed2 * dt)
end

has a bunch of problems.
The one that's causing your speed differences is this: if x < x2 but x > (x2 - speed2*dt), you will run through the first branch (if x < x2 and y < y2 then …). That branch moves x2 past x, so in the same frame you also hit the second branch (if x > x2 and y < y2 then …), which means y2 moves twice in the y direction. Change the chain to if x < x2 and y < y2 then … elseif x > x2 and y < y2 then … and so on, so that you cannot fall through into the other branches, or do the math and avoid the whole if-chain.
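For illustration, the chained version of your block looks like this (the same conditions and updates as your code, only the fall-through removed):

if x < x2 and y < y2 then
    x2 = x2 - (speed2 * dt)
    y2 = y2 - (speed2 * dt)
elseif x > x2 and y < y2 then
    x2 = x2 + (speed2 * dt)
    y2 = y2 - (speed2 * dt)
elseif x > x2 and y > y2 then
    x2 = x2 + (speed2 * dt)
    y2 = y2 + (speed2 * dt)
elseif x < x2 and y > y2 then
    x2 = x2 - (speed2 * dt)
    y2 = y2 + (speed2 * dt)
end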
A thing that you may or may not want, and currently have, is that the follower either moves along an axis at full speed or not at all, which limits it to 8 possible travel directions: the 4 diagonal cases you handle, plus the axis-aligned ones when dx or dy is (approximately) zero.
If you want it to move "directly towards you", the follower should be able to move in any direction.
For that you have to use the relative distances along the x- and y-directions. There are many common choices of distance definition, but you probably want the euclidean distance: the "shape" of the unit circle under a given norm determines how fast the follower moves in each direction, and only the euclidean distance is "the same in all directions".
So replace the above block with the following, using speed2, the follower's speed:

local dx, dy = (x2 - x), (y2 - y)    -- difference in both directions
local dist = (dx^2 + dy^2)^(1/2)     -- euclidean length of that difference
if dist > 0 then                     -- avoid dividing by zero when the objects overlap
    dx, dy = dx / dist, dy / dist    -- unit vector pointing from player towards follower
    x2, y2 = x2 - dx * speed2 * dt, y2 - dy * speed2 * dt
end
and you'll get the "normal" notion of distance, hence the normal notion of movement speed, and identical-looking speeds no matter which direction the follower approaches from.

Related

Calculate the orientation of bounding boxes

My input is a rotated image; sometimes it is rotated 90 degrees and sometimes -90 degrees. I am using PaddleOCR to extract the text and detect bounding boxes.
With PaddleOCR I can detect the coordinates of every bounding box.
The first one should be between 70 and 90 degrees:
[[1267.0, 182.0], [1300.0, 182.0], [1300.0, 278.0], [1267.0, 278.0]]
And the second one should be -90:
[[184.0, 173.0], [221.0, 173.0], [221.0, 669.0], [184.0, 669.0]]
from paddleocr import PaddleOCR
import math
from PIL import Image

ocr = PaddleOCR(use_angle_cls=True, lang='en')
image_path = 'test.png'
result = ocr.ocr(image_path, cls=True)

for line in result:
    #for i in range(len(line)):
    x1, y1 = line[0][0][0]
    x2, y2 = line[0][0][1]
    x3, y3 = line[0][0][2]
    x4, y4 = line[0][0][3]
    print(f"x1, y1 = {x1}, {y1}")
    print(f"x2, y2 = {x2}, {y2}")
    print(f"x3, y3 = {x3}, {y3}")
    print(f"x4, y4 = {x4}, {y4}")

    # Calculate the vector between points 1 and 3 (the box diagonal)
    dx = x3 - x1
    dy = y3 - y1

    # Calculate the angle in degrees, accounting for orientation
    angle = math.degrees(math.atan2(dy, dx))
    print(f"Angle: {angle} degrees")
With this code the results are 71° and 85°. The value 85° should be negative.
Can you help me?

How to convert bounding box (x1, y1, x2, y2) to YOLO Style (X, Y, W, H)

I'm training a YOLO model. I have the bounding boxes in this format:
x1, y1, x2, y2 => e.g. (100, 100, 200, 200)
I need to convert them to YOLO format, which looks like:
X, Y, W, H => 0.436262 0.474010 0.383663 0.178218
I already calculated the center point X, Y, the height H, and the width W.
But I still need a way to convert them to floating-point numbers as shown.
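For reference, that last step is just a division by the image size. A minimal sketch (normalize_bbox and the image dimensions are illustrative, not from the question):

def normalize_bbox(cx, cy, w, h, image_w, image_h):
    # Scale a pixel-space center/size box into YOLO's [0, 1] range.
    return cx / image_w, cy / image_h, w / image_w, h / image_h

# e.g. center (150, 150), size 100x100 in a 1000x1000 image -> (0.15, 0.15, 0.1, 0.1)
print(normalize_bbox(150, 150, 100, 100, 1000, 1000))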
For those looking for the reverse of the question (YOLO format to normal bbox format):

def yolobbox2bbox(x, y, w, h):
    # Center/size to corner coordinates, staying in normalized units.
    x1, y1 = x - w/2, y - h/2
    x2, y2 = x + w/2, y + h/2
    return x1, y1, x2, y2
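Note that the inputs and outputs here stay in YOLO's normalized units; multiply by the image width and height if you need pixel corners. For example:

x1, y1, x2, y2 = yolobbox2bbox(0.15, 0.15, 0.1, 0.1)
# (0.1, 0.1, 0.2, 0.2) normalized; scaling by a 1000x1000 image gives (100, 100, 200, 200) pixels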
Here's a code snippet in Python to convert x, y coordinates to YOLO format. Note that box is ordered (xmin, xmax, ymin, ymax):

from PIL import Image

def convert(size, box):
    # size = (image width, image height); box = (xmin, xmax, ymin, ymax)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return (x, y, w, h)

im = Image.open(img_path)
w = int(im.size[0])
h = int(im.size[1])

print(xmin, xmax, ymin, ymax)  # define your x, y coordinates
b = (xmin, xmax, ymin, ymax)
bb = convert((w, h), b)
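As a quick numeric check with hypothetical values (a 1000x1000 image and the corners from the question):

print(convert((1000, 1000), (100, 200, 100, 200)))  # (0.15, 0.15, 0.1, 0.1)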
Check my sample program to convert from LabelMe annotation tool format to Yolo format https://github.com/ivder/LabelMeYoloConverter
There is a more straightforward way to do this with pybboxes. Install it with
pip install pybboxes
and use it as below:
import pybboxes as pbx
voc_bbox = (100, 100, 200, 200)
W, H = 1000, 1000 # WxH of the image
pbx.convert_bbox(voc_bbox, from_type="voc", to_type="yolo", image_size=(W,H))
>>> (0.15, 0.15, 0.1, 0.1)
Note that converting to YOLO format requires the image width and height for scaling.
YOLO normalises the image space to run from 0 to 1 in both x and y directions. To convert between your (x, y) coordinates and YOLO (u, v) coordinates you need to transform your data as u = x / XMAX and v = y / YMAX, where XMAX and YMAX are the maximum coordinates for the image array you are using.
This all depends on the image arrays being oriented the same way.
Here is a C function to perform the conversion:

#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <math.h>

struct yolo {
    float u;
    float v;
};

struct yolo
convert (unsigned int x, unsigned int y, unsigned int XMAX, unsigned int YMAX)
{
    struct yolo point;

    if (XMAX && YMAX && (x <= XMAX) && (y <= YMAX))
    {
        point.u = (float)x / (float)XMAX;
        point.v = (float)y / (float)YMAX;
    }
    else
    {
        point.u = INFINITY;
        point.v = INFINITY;
        errno = ERANGE;
    }
    return point;
}/* convert */

int main()
{
    struct yolo P;

    P = convert (99, 201, 255, 324);
    printf ("Yolo coordinate = <%f, %f>\n", P.u, P.v);
    exit (EXIT_SUCCESS);
}/* main */
There are two potential conversions. First of all you have to know whether your source bounding box is in COCO or Pascal VOC format; otherwise you can't do the right math. The formats are:
Coco Format: [x_min, y_min, width, height]
Pascal_VOC Format: [x_min, y_min, x_max, y_max]
Here is some Python code for the conversions:
Converting Coco to Yolo
# Convert Coco bb to Yolo
def coco_to_yolo(x1, y1, w, h, image_w, image_h):
    return [((2*x1 + w)/(2*image_w)), ((2*y1 + h)/(2*image_h)), w/image_w, h/image_h]
Converting Pascal_voc to Yolo
# Convert Pascal_Voc bb to Yolo
def pascal_voc_to_yolo(x1, y1, x2, y2, image_w, image_h):
    return [((x2 + x1)/(2*image_w)), ((y2 + y1)/(2*image_h)), (x2 - x1)/image_w, (y2 - y1)/image_h]
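As a quick sanity check against the pybboxes example above (a hypothetical 1000x1000 image):

print(pascal_voc_to_yolo(100, 100, 200, 200, 1000, 1000))  # [0.15, 0.15, 0.1, 0.1]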
If need additional conversions you can check my article at Medium: https://christianbernecker.medium.com/convert-bounding-boxes-from-coco-to-pascal-voc-to-yolo-and-back-660dc6178742
For YOLO format to (x1, y1, x2, y2) pixel format (dw and dh are the image width and height):

def yolobbox2bbox(x, y, w, h, dw, dh):
    # Scale the normalized center/size box back to pixel corners.
    x1 = int((x - w / 2) * dw)
    x2 = int((x + w / 2) * dw)
    y1 = int((y - h / 2) * dh)
    y2 = int((y + h / 2) * dh)
    # Clamp to the image bounds.
    if x1 < 0:
        x1 = 0
    if x2 > dw - 1:
        x2 = dw - 1
    if y1 < 0:
        y1 = 0
    if y2 > dh - 1:
        y2 = dh - 1
    return x1, y1, x2, y2
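For example, on a hypothetical 1000x1000 image:

print(yolobbox2bbox(0.15, 0.15, 0.1, 0.1, 1000, 1000))  # (100, 100, 200, 200)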
There are two things you need to do:
1. Divide the coordinates by the image size to normalize them to the [0..1] range.
2. Convert the (x1, y1, x2, y2) coordinates to (center_x, center_y, width, height).
If you're using PyTorch, Torchvision provides a function that you can use for the conversion:
from torch import tensor
from torchvision.ops import box_convert

image_size = tensor([608, 608])
boxes = tensor([[100, 100, 200, 200], [300, 300, 400, 400]], dtype=float)
boxes[:, :2] /= image_size  # normalize the top-left corners
boxes[:, 2:] /= image_size  # normalize the bottom-right corners
boxes = box_convert(boxes, "xyxy", "cxcywh")
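The result is in normalized (cx, cy, w, h) order; for the first box that comes out to roughly (0.2467, 0.2467, 0.1645, 0.1645).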
Just reading the answers, I was also looking for this, but found the following more informative about what is happening at the back end. From here: Source
Assuming xmin/ymin and xmax/ymax are your bounding corners, top left and bottom right respectively, YOLO wants the box center and size:
x = (xmin + xmax) / 2
y = (ymin + ymax) / 2
w = xmax - xmin
h = ymax - ymin
You then need to normalize these, which means giving them as a proportion of the whole image, so simply divide each value by its respective image dimension:
x = (xmin + xmax) / (2 * width)
y = (ymin + ymax) / (2 * height)
w = (xmax - xmin) / width
h = (ymax - ymin) / height
This assumes a top-left origin; you will have to apply a shift factor if this is not the case.

Going back to original image from image edges

If we read an image X and apply the simplest edge detector F = [1 0 −1] to it to get Y, is it possible to retrieve X from Y? Given that Y[n] = X[n-1] − X[n+1], can you express X in terms of Y? Can we design a 3x3 filter G that performs the opposite of F?
Assuming that X[-1] and X[0] are known (say both 0), rearranging Y[n] = X[n-1] − X[n+1] gives X[n+1] = X[n-1] − Y[n]:
X[1] = X[-1] − Y[0]
X[2] = X[0] − Y[1]
X[3] = X[1] − Y[2] = X[-1] − Y[0] − Y[2]
X[4] = X[2] − Y[3] = X[0] − Y[1] − Y[3]
X[5] = X[3] − Y[4] = X[-1] − Y[0] − Y[2] − Y[4]
X[6] = X[4] − Y[5] = X[0] − Y[1] − Y[3] − Y[5]
...
This is a recursive filter, which computes a (negated) prefix sum over every other pixel.
This will only work if you have the signed values of Y. If only the absolute value is available, you are stuck.
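As a concrete illustration, here is a minimal Python sketch of that recursion for a 1-D signal (assuming both seed samples are zero and Y holds signed values):

def reconstruct(Y, x_m1=0.0, x_0=0.0):
    # Invert Y[n] = X[n-1] - X[n+1], given the seed samples X[-1] and X[0].
    X = [x_m1, x_0]          # X[k] is stored at list index k + 1
    for n, y in enumerate(Y):
        X.append(X[n] - y)   # X[n+1] = X[n-1] - Y[n]
    return X                 # X[-1], X[0], X[1], ..., X[len(Y)]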

Torch / Lua, how to pair together two arrays into a table?

I need to use the Pearson correlation coefficient in my Torch / Lua program.
This is the function:
function math.pearson(a)
    -- compute the mean
    local x1, y1 = 0, 0
    for _, v in pairs(a) do
        x1, y1 = x1 + v[1], y1 + v[2]
        print('x1 '..x1)
        print('y1 '..y1)
    end
    -- compute the coefficient
    x1, y1 = x1 / #a, y1 / #a
    local x2, y2, xy = 0, 0, 0
    for _, v in pairs(a) do
        local tx, ty = v[1] - x1, v[2] - y1
        xy, x2, y2 = xy + tx * ty, x2 + tx * tx, y2 + ty * ty
    end
    return xy / math.sqrt(x2) / math.sqrt(y2)
end
This function expects an input table of pairs that it iterates over with pairs().
I tried to give it what I thought was the right input table, but I was not able to get it working. I tried:

z = {}
a = {27, 29, 45, 98, 1293, 0.1}
b = {0.0001, 0.001, 0.32132, 0.0001, 0.0009}
z = {a, b}

But unfortunately it does not work: pairs(z) iterates over just two entries, a and b, so v[1] and v[2] are the first two elements of each array, while I want the correlation between a and b.
How could I do that? Can you provide a good object that would work as input to the math.pearson() function?
If you are using this implementation, then I think it expects a different parameter structure. You should be passing a table of two-value tables, as in:

local z = {
    {27, 0.0001}, {29, 0.001}, {45, 0.32132}, {98, 0.0001}, {1293, 0.0009}
}
print(math.pearson(z))
This prints -0.25304101592759 for me.
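If you want to build that structure from your original a and b arrays, a small zip loop does it. A sketch (note that your b has one element fewer than a, so this pairs up to the shorter length):

local z = {}
for i = 1, math.min(#a, #b) do
    z[i] = {a[i], b[i]}
end
print(math.pearson(z))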

Draw a perpendicular line to a line in OpenCV

I can best explain my problem with an image.
I have a contour and a line passing through that contour.
At the intersection point of the contour and the line, I want to draw a line perpendicular to the original line, extending up to a particular distance.
I know the intersection point as well as the slope of the line.
For reference I am attaching this image.
If the blue line in your picture goes from point A to point B, and you want to draw the red line at point B, you can do the following:
Get the direction vector going from A to B:
v.x = B.x - A.x; v.y = B.y - A.y;
Normalize the vector:
mag = sqrt (v.x*v.x + v.y*v.y); v.x = v.x / mag; v.y = v.y / mag;
Rotate the vector 90 degrees by swapping x and y and negating one of them. A note about the rotation direction: in OpenCV, and in image processing in general, the axes are not oriented the Euclidean way; in particular the y axis points down, not up. In Euclidean coordinates, negating the final x (the initial y) rotates counterclockwise (the standard direction), while negating the final y rotates clockwise. In OpenCV it's the opposite. So, for example, to get a clockwise rotation in OpenCV: temp = v.x; v.x = -v.y; v.y = temp;
Create a new line at B pointing in the direction of v:
C.x = B.x + v.x * length; C.y = B.y + v.y * length;
(Note that you can make it extend in both directions by creating a point D in the opposite direction, simply by negating length.)
This is my version of the function:

import math

def getPerpCoord(aX, aY, bX, bY, length):
    vX = bX - aX
    vY = bY - aY
    if vX == 0 and vY == 0:
        # A and B coincide: there is no direction to be perpendicular to.
        return 0, 0, 0, 0
    mag = math.sqrt(vX * vX + vY * vY)
    vX = vX / mag
    vY = vY / mag
    # Rotate 90 degrees: (x, y) -> (-y, x).
    temp = vX
    vX = 0 - vY
    vY = temp
    # Extend from B in both directions.
    cX = bX + vX * length
    cY = bY + vY * length
    dX = bX - vX * length
    dY = bY - vY * length
    return int(cX), int(cY), int(dX), int(dY)
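To actually draw the segment, a sketch (assumes an OpenCV image img and the line endpoints aX, aY, bX, bY already computed; the names and the length of 50 are placeholders for your own setup):

import cv2
cX, cY, dX, dY = getPerpCoord(aX, aY, bX, bY, 50)
cv2.line(img, (cX, cY), (dX, dY), (0, 0, 255), 2)  # red perpendicular segment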
