In the last few days I put this together:
https://github.com/PerduGames/SoftNoise-GDScript-
and now I can generate my "infinite" maps. However, I have trouble generating only parts of the map as the player moves in a 2D scene in Godot (GDScript).
I'm trying to paint an area around the player in the TileMap. This function takes the player's position:
func check_posChunk(var _posChunk, var _posPlayer):
    var pos = $"../TileMap".world_to_map(_posPlayer)
    for i in range(0, mapSize, 16):
        if pos >= Vector2(i, i) && pos <= Vector2(i + 16, i + 16):
            if pos.x > pos.y:
                _posChunk = Vector2(i, i) - Vector2(32, 48)
            else:
                _posChunk = Vector2(i, i) - Vector2(16, 16)
            break
    return _posChunk
where I store the position in the variable "posChunk", and then I paint here:
func redor(var posPlayer):
    posChunk = check_posChunk(posChunk, posPlayer)
    for x in range(64):
        for y in range(64):
            $"../TileMap".set_cell(posChunk.x + x, posChunk.y + y, biomes(elevation_array[posChunk.x + x][posChunk.y + y], umidade_array[posChunk.x + x][posChunk.y + y]))
I can paint around the player when x < y and when x == y, but when x > y complications occur because of the issue linked below; even though I check that case in the if above, there are situations where it does not paint as expected:
https://github.com/godotengine/godot/issues/9284
How to handle the Vector2 comparison correctly?
I was able to find the answer to this, answered in another forum: comparing Vector2s is not the best way to do this. Using Rect2 (which takes two Vector2s, the first being the position and the second the size), you can check whether the player is inside a box, which leads to the code below:
https://godotengine.org/qa/17982/how-to-compare-two-rect2?show=17994#c17994
# Verify that pos, the player's position,
# is inside the rect_chunk rectangle using Rect2's has_point function.
var rect_chunk = Rect2(Vector2(i, i), Vector2(16, 16))
if rect_chunk.has_point(pos):
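For illustration, here is the same rectangle-containment idea applied to the diagonal chunk lookup from the question, sketched in Python (the chunk size of 16 is taken from the question; rect_has_point mimics the semantics of Godot's Rect2.has_point, which excludes the far edges):

```python
def check_pos_chunk(pos, map_size, chunk_size=16):
    """Return the origin of the diagonal chunk containing pos, using an
    axis-aligned rectangle test instead of Vector2 comparisons."""
    def rect_has_point(rx, ry, rw, rh, px, py):
        # same semantics as Godot's Rect2.has_point (far edges excluded)
        return rx <= px < rx + rw and ry <= py < ry + rh

    for i in range(0, map_size, chunk_size):
        if rect_has_point(i, i, chunk_size, chunk_size, pos[0], pos[1]):
            return (i, i)
    return None  # pos lies outside the diagonal chunks checked by the loop
```

In GDScript this is exactly the `Rect2(Vector2(i, i), Vector2(16, 16)).has_point(pos)` test from the snippet above; note the sketch also makes visible that the loop only checks chunks along the diagonal.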
Problem Statement: I have generated a video from data-to-image renders of an ANSYS simulation of vortices formed by a plunging flat plate. The video contains vortices (in simpler terms, blobs), which are constantly evolving (dissociating and merging).
Sample video
Objective: The vortices need to be identified and labelled such that label consistency is maintained: if a certain vortex was given a label in the previous frame, its label remains the same. If it dissociates, the larger component (parent) should retain the label while the smaller component gets a new one. If two vortices merge, the merged vortex should take the label of the larger of the two.
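These split/merge rules can be written down as a small label-assignment step. The sketch below (Python, with made-up component/overlap representations) only illustrates the rules themselves, not the tracker used later:

```python
def relabel(components, prev_areas, next_label):
    """One labelling step per frame.
    components: list of dicts {"area": ..., "parents": [labels of the
    previous-frame vortices this component overlaps]} -- a hypothetical
    representation, not part of the attempt below.
    prev_areas: {label: area of that vortex in the previous frame}.
    Returns (labels, next_label)."""
    labels = [None] * len(components)
    by_parent = {}
    for i, c in enumerate(components):
        if c["parents"]:
            # merge rule: among overlapping parents, take the larger one's label
            parent = max(c["parents"], key=lambda l: prev_areas[l])
            by_parent.setdefault(parent, []).append(i)
    for parent, idxs in by_parent.items():
        # split rule: the largest child keeps the parent's label...
        idxs.sort(key=lambda i: components[i]["area"], reverse=True)
        labels[idxs[0]] = parent
        # ...and the smaller children get fresh labels
        for i in idxs[1:]:
            labels[i], next_label = next_label, next_label + 1
    for i in range(len(components)):
        if labels[i] is None:  # a brand-new vortex
            labels[i], next_label = next_label, next_label + 1
    return labels, next_label
```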
Attempt: I have written a function which detects object boundaries (contour detection) and then finds the center of each identified contour, then maps each centroid to the closest centroid in the next frame, provided the distance is below a certain threshold.
Attempted tracking video
Tracking algorithm:
import math

class EuclideanDistTracker:
    def __init__(self):
        # Store the center positions of the objects
        self.center_points = {}
        # Keep count of the IDs;
        # each time a new object is detected, the count increases by one
        self.id_count = 0

    def update(self, objects_rect):
        # Object boxes and ids
        objects_bbs_ids = []

        # Get the center point of each new object
        for rect in objects_rect:
            x, y, w, h = rect
            cx = (x + x + w) // 2
            cy = (y + y + h) // 2

            # Find out if that object was detected already
            same_object_detected = False
            for id, pt in self.center_points.items():
                dist = math.hypot(cx - pt[0], cy - pt[1])
                if dist < 20:  # Threshold
                    self.center_points[id] = (cx, cy)
                    print(self.center_points)
                    objects_bbs_ids.append([x, y, w, h, id])
                    same_object_detected = True
                    break

            # A new object was detected; assign it a new ID
            if same_object_detected is False:
                self.center_points[self.id_count] = (cx, cy)
                objects_bbs_ids.append([x, y, w, h, self.id_count])
                self.id_count += 1

        # Clean the center-points dictionary to remove IDs no longer in use
        new_center_points = {}
        for obj_bb_id in objects_bbs_ids:
            _, _, _, _, object_id = obj_bb_id
            new_center_points[object_id] = self.center_points[object_id]

        # Update the dictionary with unused IDs removed
        self.center_points = new_center_points.copy()
        return objects_bbs_ids
Implementing tracking algorithm to the sample video:
import cv2
import numpy as np
from tracker import *

# Create tracker object
tracker = EuclideanDistTracker()

cap = cv2.VideoCapture("Video Source")
count = 0

while True:
    ret, frame = cap.read()
    if not ret:  # stop when the video ends
        break
    print("\n")
    if count != 0:
        print("Frame Count: ", count)
    frame = cv2.resize(frame, (0, 0), fx=1.5, fy=1.5)
    height, width, channels = frame.shape

    # 1. Object Detection: threshold near-white regions in HSV
    hsvFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    sensitivity = 20
    white_lower = np.array([0, 0, 255 - sensitivity])
    white_upper = np.array([255, sensitivity, 255])
    white_mask = cv2.inRange(hsvFrame, white_lower, white_upper)

    contours_w, hierarchy_w = cv2.findContours(white_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    detections = []
    for contour_w in contours_w:
        area = cv2.contourArea(contour_w)
        if 200 < area < 100000:
            cv2.drawContours(frame, [contour_w], -1, (0, 0, 0), 1)
            x, y, w, h = cv2.boundingRect(contour_w)
            detections.append([x, y, w, h])

    # 2. Object Tracking
    boxes_ids = tracker.update(detections)
    for box_id in boxes_ids:
        x, y, w, h, id = box_id
        cv2.putText(frame, str(id), (x - 8, y + 8), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 0), 2)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 0), 1)

    cv2.imshow("Frame", frame)
    count += 1
    key = cv2.waitKey(0)
    if key == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
Problem: I was able to implement continuous labelling, but the objective of retaining the label of the parent vortex is not met. (From time t = 0 s to 9 s the largest vortex is given label 3, whereas from t = 9 s onward it is given label 9; I want it to remain label 3 in the attempted tracking video.) Any suggestions would be helpful, as well as guidance on whether I am on the right track or should use some other standard tracking algorithm or a deep learning approach.
PS: Sorry for the excessive text, but Stack Overflow would not let me post only links to the code.
I used surface.DrawTexturedRectRotated() to make a filled circle, but it rotates around its center and I want it to rotate around its left edge.
I tried to rotate it, but it already forms a full circle when the angle reaches 180 degrees:
function draw.FilledCircle( x, y, w, h, ang, color )
    for i = 1, ang do
        draw.NoTexture()
        surface.SetDrawColor( color or color_white )
        surface.DrawTexturedRectRotated( x, y, w, h, i )
    end
end
How do I make it rotate from the left?
If you want a function that lets you create a pie-chart-like filled circle by specifying the ang parameter, your best bet is probably surface.DrawPoly( table vertices ). You should be able to use it like so:
function draw.FilledCircle(x, y, r, ang, color) -- x, y being the center of the circle, r being the radius
    local verts = {{x = x, y = y}} -- add the center point
    for i = 0, ang do
        local xx = x + math.cos(math.rad(i)) * r
        local yy = y - math.sin(math.rad(i)) * r
        table.insert(verts, {x = xx, y = yy})
    end
    -- the resulting table is a list of counter-clockwise vertices;
    -- surface.DrawPoly() needs a clockwise list
    verts = table.Reverse(verts) -- should do the job
    surface.SetDrawColor(color or color_white)
    draw.NoTexture()
    surface.DrawPoly(verts)
end
I have put surface.SetDrawColor() before draw.NoTexture(), as this example suggests.
You may want to use for i = 0, ang, angleStep do instead to reduce the number of vertices and therefore the hardware load. However, that is only viable for small circles (like the one in your example), so the angle step should be some function of the radius to cover every situation. Additional work is also needed to handle angles that are not evenly divisible by the angle step:
-- after the for loop
if ang % angleStep ~= 0 then -- non-zero remainder; 0 is truthy in Lua, so compare explicitly
    local xx = x + math.cos(math.rad(ang)) * r
    local yy = y - math.sin(math.rad(ang)) * r
    table.insert(verts, {x = xx, y = yy})
end
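The same arc generation with an angle step and the remainder fix can be sketched in Python for clarity (the default step of 10 degrees is an arbitrary assumption):

```python
import math

def pie_vertices(x, y, r, ang, step=10):
    """Vertices of a pie slice spanning 0..ang degrees, walking the arc
    in step-degree increments and closing any remainder exactly."""
    verts = [(x, y)]  # center point first
    for a in range(0, ang + 1, step):
        verts.append((x + math.cos(math.radians(a)) * r,
                      y - math.sin(math.radians(a)) * r))
    if ang % step != 0:  # angle does not divide evenly by the step
        verts.append((x + math.cos(math.radians(ang)) * r,
                      y - math.sin(math.radians(ang)) * r))
    return verts  # counter-clockwise; reverse before handing to DrawPoly
```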
As for the texturing, this will be very different from a rectangle if your texture is anything other than a solid color, but a quick look at the library did not reveal a better way to achieve this.
An almost (?) working demo: https://ellie-app.com/4h9F8FNcRPya1/1
For the demo: click to draw a ray, and rotate the camera left and right to see it. (Since the ray originates at the camera, you cannot see it from the position where it is created.)
Context
I am working on an elm & elm-webgl project where I would like to know whether the mouse is over an object when clicked. To do this, I tried to implement a simple ray cast. I need two things:
1) The coordinate of the camera (This one is easy)
2) The coordinate/direction in 3D space of where was clicked
Problem
The steps to get from 2D view space to 3D world space, as I understand them, are:
a) Map the mouse coordinates to the range -1 to 1 relative to the viewport
b) Invert the projection matrix and the perspective matrix
c) Multiply the projection and perspective matrices
d) Create a Vector4 from the normalised mouse coordinates
e) Multiply the combined matrix with the Vector4
f) Normalise the result
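Sketched in Python with numpy (the perspective helper and the 1000×1000 viewport in the test are assumptions to keep the sketch self-contained), the steps above amount to:

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (an assumption here;
    substitute your own projection)."""
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def pick_ray(mouse_x, mouse_y, width, height, proj, view, cam_pos):
    # a) map the mouse position to [-1, 1]; screen y grows downward, so flip it
    nx = 2.0 * mouse_x / width - 1.0
    ny = 1.0 - 2.0 * mouse_y / height
    # d) homogeneous clip coordinates on the near plane, pointing into the screen
    clip = np.array([nx, ny, -1.0, 1.0])
    # b, c) invert the combined (projection * view) matrix
    inv = np.linalg.inv(proj @ view)
    # e) unproject into world space and apply the perspective divide
    world = inv @ clip
    world = world[:3] / world[3]
    # f) normalised ray direction from the camera position
    d = world - cam_pos
    return d / np.linalg.norm(d)
```

As a sanity check, unprojecting the viewport center with an identity view matrix should yield a ray pointing straight down the negative z axis.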
What I have tried so far
I have made a function that transforms a Mouse.Position into a coordinate to draw a line to:
getClickPosition : Model -> Mouse.Position -> Vec3
getClickPosition model pos =
    let
        x =
            toFloat pos.x

        y =
            toFloat pos.y

        normalizedPosition =
            ( (x * 2) / 1000 - 1, (1 - y / 1000 * 2) )

        homogeneousClipCoordinates =
            Vec4.vec4
                (Tuple.first normalizedPosition)
                (Tuple.second normalizedPosition)
                -1
                1

        inversedProjectionMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse (camera model))

        inversedPerspectiveMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse perspective)

        inversedMatrix2 =
            Mat4.mul inversedProjectionMatrix inversedPerspectiveMatrix

        to =
            Vec4.vec4
                (Tuple.first normalizedPosition)
                (Tuple.second normalizedPosition)
                1
                1

        toInversed =
            mulVector inversedMatrix2 to

        toNorm =
            Vec4.normalize toInversed

        toVec3 =
            vec3 (Vec4.getX toNorm) (Vec4.getY toNorm) (Vec4.getZ toNorm)
    in
    toVec3
Result
The result of this function is that the rays land too close to the center relative to where I click. I added a screenshot where I clicked on all four corners of the top face of the cube. If I click in the center of the viewport, the ray is positioned correctly.
It feels close, but not quite there yet and I can't figure out what I am doing wrong!
After trying other approaches I found a solution:
getClickPosition : Model -> Mouse.Position -> Vec3
getClickPosition model pos =
    let
        x =
            toFloat pos.x

        y =
            toFloat pos.y

        normalizedPosition =
            ( (x * 2) / 1000 - 1, (1 - y / 1000 * 2) )

        homogeneousClipCoordinates =
            Vec4.vec4
                (Tuple.first normalizedPosition)
                (Tuple.second normalizedPosition)
                -1
                1

        inversedViewMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse (camera model))

        inversedProjectionMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse perspective)

        vec4CameraCoordinates =
            mulVector inversedProjectionMatrix homogeneousClipCoordinates

        direction =
            Vec4.vec4 (Vec4.getX vec4CameraCoordinates) (Vec4.getY vec4CameraCoordinates) -1 0

        vec4WorldCoordinates =
            mulVector inversedViewMatrix direction

        vec3WorldCoordinates =
            vec3 (Vec4.getX vec4WorldCoordinates) (Vec4.getY vec4WorldCoordinates) (Vec4.getZ vec4WorldCoordinates)

        normalizedVec3WorldCoordinates =
            Vec3.normalize vec3WorldCoordinates

        origin =
            model.cameraPos

        scaledDirection =
            Vec3.scale 20 normalizedVec3WorldCoordinates

        destination =
            Vec3.add origin scaledDirection
    in
    destination
I left it as verbose as possible; if someone finds I used incorrect terminology, please leave a comment and I will update the answer.
I am sure there are many possible optimisations (multiplying the matrices before inverting, or combining some of the steps).
Updated the ellie app here: https://ellie-app.com/4hZ9s8S92PSa1/0
I'm using the OpenCV method
triangulatePoints(P1, P2, x1, x2)
to get the 3D coordinates of a point from its image points x1/x2 in the left/right image and the projection matrices P1/P2.
I've already studied epipolar geometry and know most of the maths behind it. But how does this algorithm mathematically arrive at the 3D coordinates?
Here are some ideas that, to the best of my knowledge, should at least work in theory.
Using the camera equation ax = PX, we can express the two image point correspondences as
ap = PX
bq = QX
where p = [p1 p2 1]' and q = [q1 q2 1]' are the matching image points to the 3D point X = [X Y Z 1]' and P and Q are the two projection matrices.
We can expand these two equations and rearrange the terms to form an Ax = b system, as shown below:
p11.X + p12.Y + p13.Z - a.p1 + b.0 = -p14
p21.X + p22.Y + p23.Z - a.p2 + b.0 = -p24
p31.X + p32.Y + p33.Z - a.1 + b.0 = -p34
q11.X + q12.Y + q13.Z + a.0 - b.q1 = -q14
q21.X + q22.Y + q23.Z + a.0 - b.q2 = -q24
q31.X + q32.Y + q33.Z + a.0 - b.1 = -q34
from which we get
A = [p11 p12 p13 -p1   0;
     p21 p22 p23 -p2   0;
     p31 p32 p33  -1   0;
     q11 q12 q13   0 -q1;
     q21 q22 q23   0 -q2;
     q31 q32 q33   0  -1],

x = [X Y Z a b]', and b = -[p14 p24 p34 q14 q24 q34]'. Now we can solve for x to find the 3D coordinates.
Another approach is to use the fact, from the camera equation ax = PX, that x and PX are parallel, so their cross product must be the zero vector. So using
p x PX = 0
q x QX = 0
we can construct a system of the form Ax = 0 and solve for x.
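The second (cross-product) approach can be sketched in Python with numpy. This is an illustration of the idea rather than OpenCV's exact implementation, and the calibration matrix and point in the test are made up:

```python
import numpy as np

def triangulate_dlt(P, Q, p, q):
    """Triangulate one 3D point from two views using the cross-product
    formulation x x PX = 0; p and q are (u, v) image points."""
    # Each view contributes two independent linear equations in X:
    #   u * P[2] - P[0]   and   v * P[2] - P[1]
    A = np.array([
        p[0] * P[2] - P[0],
        p[1] * P[2] - P[1],
        q[0] * Q[2] - Q[0],
        q[1] * Q[2] - Q[1],
    ])
    # Solve A X = 0: take the right singular vector with the smallest
    # singular value, then dehomogenise.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noise-free correspondences this recovers the original point exactly; with noise, the SVD gives the least-squares solution.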
I'm trying to find the orientation of a binary image (where orientation is defined as the axis of least moment of inertia, i.e. least second moment of area). I'm using Dr. Horn's book (MIT), Robot Vision, which can be found here, as a reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the PDF above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10 / m.m00; // centers are right
    double cen_y = m.m01 / m.m00;

    double a = m.m20 - m.m00 * cen_x * cen_x;
    double b = 2 * m.m11 - m.m00 * (cen_x * cen_x + cen_y * cen_y);
    double c = m.m02 - m.m00 * cen_y * cen_y;

    double theta = a == c ? 0 : atan2(b, a - c) / 2.0;
    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments about the origin (0, 0), so I have to use the parallel axis theorem (mr²) to move the axis to the center of the shape.
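For reference, the same orientation formula can be sketched directly from central moments in Python (numpy only, no OpenCV); working in centroid-relative coordinates applies the parallel axis theorem implicitly:

```python
import numpy as np

def orientation(img):
    """Axis of least second moment of a binary image, computed from
    central moments: theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    ys, xs = np.nonzero(img)
    cx, cy = xs.mean(), ys.mean()
    # central second moments: subtracting the centroid first is the
    # parallel axis theorem applied directly
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    mu11 = ((xs - cx) * (ys - cy)).sum()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```

A horizontal bar should give theta near 0 and a vertical bar theta near ±pi/2 (in image coordinates, where y grows downward).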
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with the lowest moment of inertia using this function, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and wrote the following code. It returns the exact orientation of the object; largest_contour is the detected shape.
CvMoments moments1, cenmoments1;
double M00, M01, M10;

cvMoments(largest_contour, &moments1);
M00 = cvGetSpatialMoment(&moments1, 0, 0);
M10 = cvGetSpatialMoment(&moments1, 1, 0);
M01 = cvGetSpatialMoment(&moments1, 0, 1);
posX_Yellow = (int)(M10 / M00);
posY_Yellow = (int)(M01 / M00);

double theta = 0.5 * atan(
    (2 * cvGetCentralMoment(&moments1, 1, 1)) /
    (cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;

// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit
                                 // if we have > 6 points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}