How can I use turtle graphics in MAXScript to align the bent boxes?

struct Turtle (
    position_ = [0, 0, 0],
    heading_ = [0, 1, 0],
    rotationQuat_ = quat 0 0 0 1,
    amount = 200,
    turnAngle = 45,

    -- draw a straight segment and step forward
    fn forward = (
        c = box pos:position_ wirecolor:red width:40 length:amount height:2 lengthSegs:10
        rotate c rotationQuat_
        position_ = position_ + amount * heading_
    ),

    -- draw a bent segment and turn left by turnAngle
    fn left = (
        q = quat -turnAngle [0, 0, 1]
        rotationQuat_ = q * rotationQuat_
        invq = inverse q
        heading_ = heading_ * invq
        c = box pos:position_ wirecolor:red width:40 length:amount height:2 lengthSegs:10
        addModifier c (Bend())
        c.bend.bendAngle = 45
        c.bend.bendAxis = 1
        rotate c rotationQuat_
        position_ = position_ + amount * heading_
    ),

    -- draw a bent segment and turn right by turnAngle
    fn right = (
        q = quat turnAngle [0, 0, 1]
        rotationQuat_ = q * rotationQuat_
        invq = inverse q
        heading_ = heading_ * invq
        c = box pos:position_ wirecolor:red width:40 length:amount height:2 lengthSegs:10
        addModifier c (Bend())
        c.bend.bendAngle = -45
        c.bend.bendAxis = 1
        rotate c rotationQuat_
        position_ = position_ + amount * heading_
    )
)
fn main = (
    delete objects
    t = Turtle()
    t.left()
    t.left()
    t.left()
    t.left()
    t.left()
    t.left()
    t.left()
    t.right()
    t.forward()
    t.forward()
    t.right()
)
main()
I'm trying to create a path using turtle graphics, but because the boxes don't line up properly once they are bent, I can't get them to align. I managed to get it temporarily working by changing the pivot point on each shape, but that was hard-coded, so it only worked in certain scenarios. How can I get each of the boxes to align?

I'd make use of the fact that the pivot of a box is at zero in local Z, and switch the orientation of both the box and the bend gizmo so that it bends around its base and not around its center. Since you're bending the box, you also cannot simply move by the box length; you have to calculate an offset along the bent arc instead. All in all, this is how I'd do that:
struct Turtle
(
    private transform = arbAxis y_axis,
    public stepLength = 200,
    public turnAngle = 45,

    private fn updateTransform angle distance =
        if angle == 0 then transform *= transMatrix (transform.row3 * distance)
        else transform = rotateYMatrix angle * transMatrix (distance / degToRad angle * [1 - cos angle, 0, sin angle]) * transform,

    private fn stride distance bendAngle =
    (
        local stripe = Box width:40 length:2 height:stepLength widthSegs:1 lengthSegs:1 heightSegs:10 transform:transform wirecolor:red
        if bendAngle != 0 do addModifier stripe (Bend bendAxis:2 bendAngle:bendAngle)
        updateTransform bendAngle distance
    ),

    public fn forward = stride stepLength 0,
    public fn left = stride stepLength turnAngle,
    public fn right = stride stepLength -turnAngle
)
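To see why the offset in updateTransform is what it is: a strip of length L bent by a total angle theta follows a circular arc of radius r = L / theta (theta in radians), so its far end sits at r * (1 - cos theta) sideways and r * sin theta forward of its base. A quick numeric sanity check (a Python sketch, not part of the MAXScript answer):

import numpy as np

# Closed-form endpoint of a strip of length L bent along a circular arc by a
# total angle theta -- the same expression updateTransform uses above.
L = 200.0                  # stepLength
theta = np.radians(45.0)   # turnAngle
r = L / theta

closed_form = np.array([r * (1 - np.cos(theta)), r * np.sin(theta)])

# Numeric check: approximate the arc with many short straight segments.
n = 100000
angles = np.linspace(0.0, theta, n, endpoint=False)
steps = (L / n) * np.stack([np.sin(angles), np.cos(angles)], axis=1)
numeric = steps.sum(axis=0)

print(closed_form, numeric)  # the two should agree closely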

Related

how to project a point defined in real world coordinates to image plane and vice-versa?

I prepared a toy experiment to project a point defined in the world frame onto the image plane. I'm trying to recover the 3D point (inverse projection) from the calculated pixel coordinates. I used the same coordinate frames as the figure below [https://www.researchgate.net/figure/World-and-camera-frame-definition_fig1_224318711] (world x, y, z -> camera z, -x, -y). Then I project the same point for different image frames. The problem is that the 3D point defined at (8, -2, 1) is recovered at (6.29, -1.60, 0.7). Since there is no loss of information, I expect to get the same point back. I believe there is a problem with the depth, but I couldn't find where I went wrong.
import numpy as np
# 3D point at (8, -2, 1) w.r.t. the world frame
P_world = np.array([8, -2, 1, 1]).reshape((4, 1))
T_wc = np.array([
    [0, -1,  0, 0],
    [0,  0, -1, 0],
    [1,  0,  0, 0],
    [0,  0,  0, 1]])
pose0 = np.eye(4)
pose0[:3,-1] = [1, 0, .6]
pose0 = np.matmul(T_wc, pose0)
pose1 = np.eye(4)
pose1[:3,-1] = [3, 0, .6]
pose1 = np.matmul(T_wc, pose1)
depth0 = np.linalg.norm(P_world[:3].flatten() - np.array([1, 0, .6]).flatten())
depth1 = np.linalg.norm(P_world[:3].flatten() - np.array([3, 0, .6]).flatten())
K = np.array([
    [173.0,   0.0, 173.0],
    [  0.0, 173.0, 130.0],
    [  0.0,   0.0,   1.0]])
uv1 = np.matmul(np.matmul(K, pose0[:3]), P_world)
uv1 = (uv1 / uv1[-1])[:2]
uv2 = np.matmul(np.matmul(K, pose1[:3]), P_world)
uv2 = (uv2 / uv2[-1])[:2]
img0 = np.zeros((260, 346))
img0[int(uv1[1]), int(uv1[0])] = 1  # row index is v, column index is u
img1 = np.zeros((260, 346))
img1[int(uv2[1]), int(uv2[0])] = 1
#%% Inverse projection
pix_coord = np.array([int(uv1[0]), int(uv1[1]), 1])
pt_infilm = np.matmul(np.linalg.inv(K), pix_coord.reshape(3,1))
pt_incam = depth0*pt_infilm
pt_incam_hom = np.append(pt_incam, 1)
pt_inworld = np.matmul(np.linalg.inv(pose0), pt_incam_hom)
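A likely culprit (an assumption on my part, not from the original thread): pt_infilm = inv(K) @ [u, v, 1] is a ray normalized to z = 1 in the camera frame, so it has to be scaled by the point's z-coordinate in camera coordinates, not by the Euclidean distance to the camera center. A minimal sketch of that fix, reusing the variables above:

# depth must be the z-coordinate of the point in the camera frame
# (distance along the optical axis), not the Euclidean distance
P_cam0 = np.matmul(pose0, P_world)  # point in camera-0 coordinates
depth0 = P_cam0[2, 0]               # z-depth along the optical axis

pix_coord = np.array([int(uv1[0]), int(uv1[1]), 1])
pt_infilm = np.matmul(np.linalg.inv(K), pix_coord.reshape(3, 1))  # ray with z = 1
pt_incam = depth0 * pt_infilm
pt_incam_hom = np.append(pt_incam, 1)
pt_inworld = np.matmul(np.linalg.inv(pose0), pt_incam_hom)
# any remaining mismatch comes from rounding uv1 to integer pixels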

How to extract the area of interest in an image when the boundary is not obvious

Are there ways to extract just the area of interest (the square light part in the red circle in the original image)? That means I need to get the coordinates of the edge and then mask the image outside the boundary. I don't know how to do that. Could anyone help? Thanks!
import numpy as np
import matplotlib.pyplot as plt

# define horizontal and vertical Sobel kernels
Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
Gy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

# define kernel convolution function
# with image X and filter F
def convolve(X, F):
    # height and width of the image
    X_height = X.shape[0]
    X_width = X.shape[1]
    # height and width of the filter
    F_height = F.shape[0]
    F_width = F.shape[1]
    H = (F_height - 1) // 2
    W = (F_width - 1) // 2
    # output numpy matrix with height and width
    out = np.zeros((X_height, X_width))
    # iterate over all the pixels of image X
    for i in np.arange(H, X_height - H):
        for j in np.arange(W, X_width - W):
            acc = 0
            # iterate over the filter
            for k in np.arange(-H, H + 1):
                for l in np.arange(-W, W + 1):
                    # get the corresponding values from image and filter
                    a = X[i + k, j + l]
                    w = F[H + k, W + l]
                    acc += w * a
            out[i, j] = acc
    # return the convolution
    return out

# normalize the gradients
sob_x = convolve(image, Gx) / 8.0
sob_y = convolve(image, Gy) / 8.0
# calculate the gradient magnitude
sob_out = np.sqrt(np.power(sob_x, 2) + np.power(sob_y, 2))
# map values to the range 0 to 255
sob_out = (sob_out / np.max(sob_out)) * 255
plt.imshow(sob_out, cmap='gray', interpolation='bicubic')
plt.show()
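As for masking the area of interest, a possible follow-up (a hedged OpenCV sketch, not from the original thread; the file name and the assumption that the bright square is the largest blob after thresholding are mine): threshold the image, keep the largest bright contour, and black out everything outside it.

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # assumed file name
# Otsu thresholding separates the bright region from the background
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# OpenCV 4.x returns (contours, hierarchy)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# assume the region of interest is the largest bright blob
largest = max(contours, key=cv2.contourArea)
mask = np.zeros_like(img)
cv2.drawContours(mask, [largest], -1, 255, cv2.FILLED)
roi = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('roi.png', roi)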

Is there a way to extract a single line from the Hough Transform in OpenCV?

I am using the Hough Transform to detect straight lines in an image. The transformation is done after Canny edge detection, and I am able to get the lines; however, I need to display only the leftmost line. Here is the section of code:
cv::Mat Final, Canned;
std::vector<cv::Vec2f> lines;  // (rho, theta) pairs returned by HoughLines
HoughLines(Canned, lines, 1, CV_PI / 180, 150, 0, 0);
for (size_t i = 0; i < lines.size(); i++)
{
    float rho = lines[i][0], theta = lines[i][1];
    cv::Point pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a * rho, y0 = b * rho;
    pt1.x = cvRound(x0 + 1000 * (-b));
    pt1.y = cvRound(y0 + 1000 * (a));
    pt2.x = cvRound(x0 - 1000 * (-b));
    pt2.y = cvRound(y0 - 1000 * (a));
    line(Final, pt1, pt2, Scalar(0, 0, 255), 1, CV_AA);
}
imshow("detected lines", Final);
imshow("detected lines", Final);
I am attaching the image generated after applying the Hough Transform.
I need to display only the leftmost line.
Here are the elements of the lines vector:
[-386, 3.12414]
[-332, 3.08923]
[-381, 3.12414]
[-337, 3.10669]
[386, 0]
[-323, 3.05433]
[-339, 3.10669]
[-335, 3.08923]
[-330, 3.07178]
[383, 0]
[-317, 3.08923]
You can iterate through the lines and find the leftmost line by averaging the x-coordinates of the endpoints of each line. Note that comparing rho alone won't work here: pairs like [-386, 3.12414] and [386, 0] describe nearly the same vertical line, since a negative rho with theta near pi corresponds to a positive x-intercept.
# Iterate through lines and find the x-position of each
xPositions = []
for line in lines:
    cdst, pt1, pt2 = draw_line(line, cdst, (0, 0, 255))
    xPositions.append((pt1[0] + pt2[0]) / 2)
Here is the output image:
Here is the source image:
Here is the complete code:
import math
import cv2 as cv
import numpy as np

src = cv.imread('/home/stephen/Desktop/lines.png', cv.IMREAD_GRAYSCALE)
dst = cv.Canny(src, 50, 200, None, 3)

# Copy edges to the image that will display the results in BGR
cdst = cv.cvtColor(dst, cv.COLOR_GRAY2BGR)
cdstP = np.copy(cdst)

# Find lines
lines = cv.HoughLines(dst, 1, np.pi / 180, 100, None, 0, 0)

# Function that draws a line given in (rho, theta) form
def draw_line(line, img, color):
    rho = line[0][0]
    theta = line[0][1]
    a = math.cos(theta)
    b = math.sin(theta)
    x0 = a * rho
    y0 = b * rho
    pt1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * (a)))
    pt2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * (a)))
    cv.line(img, pt1, pt2, color, 3, cv.LINE_AA)
    return img, pt1, pt2

# Iterate through lines and find the x-position of each
xPositions = []
for line in lines:
    cdst, pt1, pt2 = draw_line(line, cdst, (0, 0, 255))
    xPositions.append((pt1[0] + pt2[0]) / 2)

# Find the leftmost line
leftMost = xPositions.index(min(xPositions))

# Draw only the leftmost line
cdst, pt1, pt2 = draw_line(lines[leftMost], cdst, (0, 255, 0))
cv.imshow('lines', cdst)
cv.waitKey()
cv.destroyAllWindows()

How to anchor an object inside a rectangle in Corona SDK?

I am trying to add a little rectangle inside of a big rectangle, as seen in the images below, but nothing seems to be working. I want to use anchors but I do not know how to proceed. I am trying to put the little rectangle in the top right corner of the bigger rectangle. Any advice would be extremely helpful!
local bigRectangle = display.newRect(200, 200, 320, 400)
bigRectangle:setFillColor(0, 0, 1)
bigRectangle.x = _X
bigRectangle.y = _Y
local smallRectangle = display.newRect(200, 200, 20, 20)
smallRectangle:setFillColor(255/255, 255/255, 0/255)
What I am trying to accomplish:
It can be accomplished in many ways. The simplest is to change the anchor point to (1, 0). This requires that both objects have the same x and y coordinates:
local bigRectangle = display.newRect( 200, 200, 320, 400 )
bigRectangle.anchorX, bigRectangle.anchorY = 1, 0
bigRectangle:setFillColor( 0, 0, 1 )
local smallRectangle = display.newRect( 200, 200, 20, 20 )
smallRectangle.anchorX, smallRectangle.anchorY = 1, 0
smallRectangle:setFillColor( 255 / 255, 255 / 255, 0 / 255 )
A more universal method uses the contentBounds property of display objects (the offset math below accounts for the small rectangle's default center anchor of 0.5):
local bigRectangle = display.newRect( 200, 200, 320, 400 )
bigRectangle:setFillColor( 0, 0, 1 )
bigRectangle.x = _X
bigRectangle.y = _Y
local smallRectangle = display.newRect( 200, 200, 20, 20 )
smallRectangle:setFillColor( 255 / 255, 255 / 255, 0 / 255 )
local bounds = bigRectangle.contentBounds
smallRectangle.x = bounds.xMax - smallRectangle.width * smallRectangle.anchorX
smallRectangle.y = bounds.yMin + smallRectangle.height * smallRectangle.anchorY

Rotate image around x, y, z axis in OpenCV

I need to calculate a warp matrix in OpenCV, representing the rotation around a given axis.
Around Z axis -> straightforward: I use the standard rotation matrix
[cos(a) -sin(a) 0]
[sin(a)  cos(a) 0]
[  0       0    1]
This is not so obvious for other rotations, so I tried building a homography, as explained on Wikipedia:
H = R - t n^T / d
I tried with a simple rotation around the X axis, and assuming the distance between the camera and the image is twice the image height.
R is the standard rotation matrix
[1 0 0]
[0 cos(a) -sin(a)]
[0 sin(a) cos(a)]
n is [0 0 1] because the camera is looking directly at the image from (0, 0, z_cam)
t is the translation, which should be [0 -2h*sin(a) -2h*(1-cos(a))]
d is the distance, which is 2h by definition.
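Substituting t, n and d into H = R - t n^T / d, the correction term works out to (a worked check of the algebra):

t n^T / d = [0 0  0          ]
            [0 0 -sin(a)     ]
            [0 0 -(1-cos(a)) ]

so subtracting it from R cancels the -sin(a) in the second row (-sin(a) - (-sin(a)) = 0) and turns the bottom-right entry into cos(a) + 1 - cos(a) = 1.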
So, the final matrix is:
[1 0 0]
[0 cos(a) 0]
[0 sin(a) 1]
which looks quite good: when a = 0 it's the identity, and when a = pi the image is mirrored around the x axis.
And yet, using this matrix for a perspective warp doesn't yield the expected result; the image is just "too warped" for small values of a, and disappears very quickly.
So, what am I doing wrong?
(note: I've read many questions and answers about this subject, but all are going in the opposite direction: I don't want to decompose a homography matrix, but rather to build one, given a 3d transformation, and a "fixed" camera or a fixed image and a moving camera).
Thank you.
Finally found a way, thanks to this post: https://plus.google.com/103190342755104432973/posts/NoXQtYQThgQ
I let OpenCV calculate the matrix for me, but I do the perspective projection of the corner points myself (I found it easier than putting everything into a cv::Mat):
float rotx, roty, rotz; // set these first
int f = 2;              // this is also configurable, f = 2 should be about 50mm focal length
int h = img.rows;
int w = img.cols;

float cx = cosf(rotx), sx = sinf(rotx);
float cy = cosf(roty), sy = sinf(roty);
float cz = cosf(rotz), sz = sinf(rotz);

float roto[3][2] = { // last column not needed, our vector has z = 0
    { cz * cy, cz * sy * sx - sz * cx },
    { sz * cy, sz * sy * sx + cz * cx },
    { -sy, cy * sx }
};

float pt[4][2] = {{ -w / 2, -h / 2 }, { w / 2, -h / 2 }, { w / 2, h / 2 }, { -w / 2, h / 2 }};
float ptt[4][2];
for (int i = 0; i < 4; i++) {
    float pz = pt[i][0] * roto[2][0] + pt[i][1] * roto[2][1];
    ptt[i][0] = w / 2 + (pt[i][0] * roto[0][0] + pt[i][1] * roto[0][1]) * f * h / (f * h + pz);
    ptt[i][1] = h / 2 + (pt[i][0] * roto[1][0] + pt[i][1] * roto[1][1]) * f * h / (f * h + pz);
}

cv::Mat in_pt = (cv::Mat_<float>(4, 2) << 0, 0, w, 0, w, h, 0, h);
cv::Mat out_pt = (cv::Mat_<float>(4, 2) << ptt[0][0], ptt[0][1],
    ptt[1][0], ptt[1][1], ptt[2][0], ptt[2][1], ptt[3][0], ptt[3][1]);
cv::Mat transform = cv::getPerspectiveTransform(in_pt, out_pt);
cv::Mat img_in = img.clone();
cv::warpPerspective(img_in, img, transform, img_in.size());
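For reference, here is the same construction as a Python sketch (my own port, not from the original post; the rotation angles and file names are placeholder assumptions, and only cv2.getPerspectiveTransform and cv2.warpPerspective come from OpenCV):

import cv2
import numpy as np

# assumed inputs: a small rotation around X and a placeholder image file
rotx, roty, rotz = 0.2, 0.0, 0.0
f = 2.0  # focal factor, roughly a 50mm lens as in the C++ version
img = cv2.imread('input.png')
h, w = img.shape[:2]

# first two columns of the combined rotation matrix; the third is not needed
# because the source corners lie in the z = 0 plane
cx, sx = np.cos(rotx), np.sin(rotx)
cy, sy = np.cos(roty), np.sin(roty)
cz, sz = np.cos(rotz), np.sin(rotz)
roto = np.array([[cz * cy, cz * sy * sx - sz * cx],
                 [sz * cy, sz * sy * sx + cz * cx],
                 [-sy,     cy * sx]], dtype=np.float32)

# rotate the four corners and apply a simple pinhole projection
corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                    [w / 2, h / 2], [-w / 2, h / 2]], dtype=np.float32)
dst = np.empty_like(corners)
for i, (x, y) in enumerate(corners):
    pz = x * roto[2, 0] + y * roto[2, 1]
    s = f * h / (f * h + pz)
    dst[i] = [w / 2 + (x * roto[0, 0] + y * roto[0, 1]) * s,
              h / 2 + (x * roto[1, 0] + y * roto[1, 1]) * s]

in_pt = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
M = cv2.getPerspectiveTransform(in_pt, dst)
out = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite('warped.png', out)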
