ROS gmapping laser scan wrong range problem

I found that the requirements for a laser scan are: angles from -pi to pi, a positive step, and 0 degrees pointing in the car's forward direction, i.e.
angle_min = -135 * (pi/180); // angle corresponding to FIRST beam in scan (in rad)
angle_max = 135 * (pi/180); // angle corresponding to LAST beam in scan (in rad)
angle_increment = 0.25 * (pi/180); // angular resolution, i.e. the angle between 2 beams
https://answers.ros.org/question/198843/need-explanation-on-sensor_msgslaserscanmsg/
My lidar, however, covers 0 to 180 degrees with a negative step, and 90 degrees is the car's forward direction:
angle_min = 0;
angle_max = 180 * (pi/180);
angle_increment = -0.25 * (pi/180);
and it did not work!
I scanned a map, then used AMCL to locate the robot.
Even though the laser scans match in the picture below, the direction is wrong: the yellow arrow is the correct pose; the red arrow was wrongly estimated by AMCL.
How can I resolve this direction conflict?
Thank you.

You are setting the scan angles incorrectly. They should be:
angle_min = 0;
angle_max = pi;
angle_increment = 0.25 * (pi/180);
For your robot base to laser tf, the laser scan rays must line up as below:
angle of ray 0 = 0 // positive y-axis of robot base
angle of ray 1 = angle_min + angle_increment * 1
angle of ray 2 = angle_min + angle_increment * 2
...
angle of ray 360 = angle_min + angle_increment * 360 // positive x-axis of robot base
...
angle of ray 720 = angle_min + angle_increment * 720 // negative y-axis of robot base
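In C++ this amounts to filling the sensor_msgs::LaserScan fields accordingly. A minimal sketch (the helper name and the hardwareCountsDown flag are mine, not from the answer):

#include <algorithm>
#include <cmath>
#include <sensor_msgs/LaserScan.h>

// Set the angle fields so that ray 0 is the 0 rad beam and the step is positive.
void fillScanAngles(sensor_msgs::LaserScan& scan, bool hardwareCountsDown)
{
    scan.angle_min = 0.0;                       // angle of the FIRST beam (rad)
    scan.angle_max = M_PI;                      // angle of the LAST beam (rad)
    scan.angle_increment = 0.25 * M_PI / 180.0; // positive step between beams
    if (hardwareCountsDown)                     // device reports 180 deg down to 0 deg
        std::reverse(scan.ranges.begin(), scan.ranges.end());
}

The key point: if your hardware fills the ranges array from 180 degrees down to 0, reverse the array rather than publishing a negative angle_increment.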

I don't know why you need to explicitly match the moving direction in Rviz, but from what I can see on the screenshot, a TF node is used. It might affect your robot base frame orientation. You can experiment with its parameters so that you can rotate your robot base frame, as #MRFalcon mentioned. Here are a couple of tutorials; they should be enough to solve your problem.
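For example, if the mismatch turns out to be a fixed 90 degree yaw between the base and laser frames (an assumption; the frame names and angle below are illustrative only), a static transform publisher can apply the rotation:

rosrun tf static_transform_publisher 0 0 0 1.5708 0 0 base_link laser 100

The arguments are x y z yaw pitch roll frame_id child_frame_id period_in_ms.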

Related

How do I find specific points on an image, regardless of the rotation?

I have such a picture.
These are the numbers of the dots on the picture. I want the dot numbers to stay the same even if I turn the picture upside down; the rotation can be any angle. For example: head up, upside down, rotated 90 degrees, rotated 45 degrees.
I tried a few things like OpenCV camera calibration and pose estimation, but I couldn't get the point numbers to stay fixed.
I don't think this is a programming problem, but a basic geometry one.
You've got 7 points in that configuration, and you want to find some geometric properties which are robust to rotation. The first thing I can think of is distance; a code sketch of this follows the steps below.
Compute the squared distance (SD) between all points, then find the one which maximizes the sum of squared distances (SSD) to all others. That point is P7.
Find the one with maximum SD from P7. That point is P1.
Find the one with minimum SD from P7. That point is P6.
Find the one with second minimum SD from P7. That point is P5.
Find the one with minimum SD from P1. That point is P2.
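These steps all reduce to argmax/argmin over pairwise squared distances. A minimal C++ sketch of the first step, finding P7 (the Pt struct and function names are mine):

#include <cstddef>
#include <vector>

struct Pt { double x, y; };

double sqDist(const Pt& a, const Pt& b)
{
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// The point maximizing the sum of squared distances to all others is P7.
std::size_t findP7(const std::vector<Pt>& pts)
{
    std::size_t best = 0;
    double bestSum = -1.0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        double sum = 0.0;
        for (std::size_t j = 0; j < pts.size(); ++j)
            sum += sqDist(pts[i], pts[j]);
        if (sum > bestSum) { bestSum = sum; best = i; }
    }
    return best; // the min/max lookups for P1, P2, P5, P6 follow the same pattern
}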
In order to find P3 and P4 you need to detect the side (left/right) with respect to a known direction. Let's call Q and R the two unknown points, and pick a direction we are already sure of, such as the one from P1 to P7. Now look at the sign of the cross product between the vector from P1 to P7 and the one from P1 to Q: if Q is on the right, the value will be positive, so we are able to find its identity.
if ( (Q.x - P1.x) * (P7.y - P1.y) - (Q.y - P1.y) * (P7.x - P1.x) > 0) { // z-component of the cross product
P3 = Q;
P4 = R;
}
else {
P3 = R;
P4 = Q;
}

Delphi GDI+ find point on an arc using the known rectangle and angle

Using GDI+ in Delphi 10.2.3: I have an elliptical (not circular) arc drawn from a rectangular RectF with defined start and sweep angles using DrawArcF. I need to be able to find any point along the centerline of the arc (regardless of pen width) based just on the degrees of the point, e.g., if the arc starts at 210 degrees and sweeps 120 degrees, I need to find the point at, say, 284 degrees, relative to the RectF.
In this case, the aspect ratio of the rectangle remains constant regardless of its size, so the shape of the arc should remain consistent as well, if that makes a difference.
Any ideas on how to go about this?
The parametric equation for an axis-aligned ellipse centered at (cx, cy) with semiaxes a, b against true angle Fi is:
t = ArcTan2(a * Sin(Fi), b * Cos(Fi))
x = cx + a * Cos(t)
y = cy + b * Sin(t)
(I used atan2 to get rid of atan's range limitation/sign issues.)
Note that the parameter t runs through the same range 0..2*Pi but differs from the true angle Fi (they coincide at angles k*Pi/2).
Picture of the Fi/t ratio for b/a = 0.6, from MathWorld (near formula 58).
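The same formulas as a small C++ sketch (the names are mine; Delphi's Math unit offers the equivalent ArcTan2/Sin/Cos). Here cx, cy is the center of your RectF and a, b are half its width and height:

#include <cmath>

struct PointF { float x, y; };

// Point on the centerline of an axis-aligned ellipse at true angle fi (radians).
PointF pointOnEllipse(float cx, float cy, float a, float b, float fi)
{
    // atan2 avoids the range/sign issues of plain atan, as noted above
    float t = std::atan2(a * std::sin(fi), b * std::cos(fi));
    return { cx + a * std::cos(t), cy + b * std::sin(t) };
}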

Getting a point in the local frame of a simulated robot

This is for a robot simulation that I'm currently working on.
If you look at the diagram provided, you'll see two co-ordinate frames, frame A and frame B, and a point p.
Co-ordinate frame A is the world frame, and frame B is the local frame of my robot (where the x-axis is the heading direction of the robot, as per convention). The robot is able to rotate and drive around in the world.
My goal here is to take the point p, expressed in terms of frame A, and re-express it in terms of frame B.
The standard equation that I would use to implement this would be as follows:
point_in_frameB_x = point_in_frameA_x * cos(theta) - point_in_frameA_y * sin(theta) + t_x
point_in_frameB_y = point_in_frameA_x * sin(theta) + point_in_frameA_y * cos(theta) + t_y
Where t_x and t_y make up the translation from frame B to frame A.
However, there are some complications here that prevent me from getting my desired results:
Since the robot can rotate (its default pose has a heading direction pointing north, corresponding to a rotation of 0 radians), I don't know how to define t_x and t_y in the above code: if the robot's heading (i.e. its x-axis) is parallel to the y-axis of frame A, the translation vector will be different from the case where the heading is parallel to the x-axis of frame A.
You will notice that the transformation from frame A to frame B isn't straightforward. I'm using this convention simply because the simulator I'm implementing uses it for its image frame.
Any help would be greatly appreciated. Thanks in advance.
Follow the right-hand rule for assigning coordinate frames. In your picture, the axes of either frame A or frame B must be changed.
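Once both frames follow the same handedness, the world-to-local transform is the inverse of the robot's pose: subtract the robot's world position first, then rotate by -theta. A minimal C++ sketch (names are mine, assuming right-handed frames):

#include <cmath>

struct Vec2 { double x, y; };

// Express a world-frame point p (frame A) in the robot's local frame B,
// where (tx, ty) is the robot's position in A and theta its heading in A.
Vec2 worldToLocal(Vec2 p, double tx, double ty, double theta)
{
    double dx = p.x - tx, dy = p.y - ty;                    // translate first...
    return {  dx * std::cos(theta) + dy * std::sin(theta),  // ...then rotate by -theta
             -dx * std::sin(theta) + dy * std::cos(theta) };
}

Note that, unlike in the question's equations, the translation is applied before the rotation.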

Moving multiple sprites in elliptical path with uniform speed

I'm trying to move multiple sprites (images) along an elliptical path such that the distance (arc length) between them remains uniform.
I have tried:
Moving each sprite angle by angle; the problem is that the distance covered per unit angle near the major axis differs from that near the minor axis, hence different distances moved.
Moving the sprites by changing the x-coordinate uniformly; again they move more near the major axis.
So, any ideas how to move the sprites uniformly without them catching up with/overlapping each other?
Other info:
It will be called in onMouseMove/onTouchMoved, so I guess it shouldn't be too CPU-intensive.
Although it's a general algorithm question, if it helps, I'm using cocos2d-x.
So this is what I ended up doing (which solved it for me):
I moved along the equation of a circle, increasing the angle by 1 degree and calculating x and y using cos/sin(angle) * radius. To turn the circle into an ellipse, I multiplied y by a factor, where the factor was yIntercept/xIntercept.
So it looked like this in the end:
FACTOR = Y_INTERCEPT / X_INTERCEPT;
// recover the previous angle from the previous position (atan2 keeps the quadrant)
angle = atan2(prev_y / FACTOR, prev_x);
// increase the angle by 1 degree (make sure it's not radians in your case)
angle++;
// new x and y
x = cos(angle) * X_INTERCEPT;
y = sin(angle) * X_INTERCEPT * FACTOR;
I have written a function named getPointOnEllipse that allows you to move your sprites pixel by pixel along an elliptical path. The function determines the coordinates of a particular point on the elliptical path, given the coordinates of the center of the ellipse, the lengths of the semi-major and semi-minor axes, and finally the offset of the point into the elliptical path, all in pixels.
Note: to be honest, the getPointOnEllipse function unfortunately skips (does not detect) a few of the points on the elliptical path, so the arc distance is not exactly uniform: sometimes it is one pixel and sometimes two, but never three or more! In spite of this fault, the changes in speed are really faint, and IMO your sprites will move pretty smoothly.
Below is the getPointOnEllipse function, along with another function named getEllipsePerimeter, which is used to determine an ellipse's perimeter through Euler's formula. The code is written in JScript.
function getEllipsePerimeter(rx, ry)
{
    with (Math)
    {
        // You'll need to floor the return value to obtain the ellipse perimeter in pixels.
        return PI * sqrt(2 * (rx * rx + ry * ry));
    }
}

function getPointOnEllipse(cx, cy, rx, ry, d)
{
    with (Math)
    {
        // Note: theta expresses an angle in radians!
        var theta = d * sqrt(2 / (rx * rx + ry * ry));
        //var theta = 2 * PI * d / getEllipsePerimeter(rx, ry);
        return {x: floor(cx + cos(theta) * rx),
                y: floor(cy - sin(theta) * ry)};
    }
}
The following figure illustrates the parameters of this function:
cx - the x-coordinate of the center of the ellipse
cy - the y-coordinate of the center of the ellipse
rx - the length of the semi-major axis
ry - the length of the semi-minor axis
d - the offset of the point into the elliptical path (i.e. the arc length from the vertex to the point)
The unit of all parameters is pixel.
The function returns an object containing the x- and y-coordinate of the point of interest, which is represented by a purple ball in the figure.
d is the most important parameter of the getPointOnEllipse function. You should call this function multiple times: in the first call, set d to 0 and place the sprite at the returned point, which positions it on the vertex. Then wait a short period (e.g. 50 milliseconds) and call the function again with d set to 1; placing the sprite at the returned point moves it 1 pixel forward along the ellipse path. Repeat this (wait a short period, call the function with an increased d value, reposition the sprite) until d reaches the perimeter of the ellipse. You can also increase d by more than one, so that the sprite moves more pixels forward in each step, resulting in faster movement.
Moreover, you can modify the getEllipsePerimeter function to use a more precise formula (like Ramanujan's) for the ellipse perimeter. In that case, be sure to modify the getPointOnEllipse function as well to use the second version of the theta variable (commented out in the code). Note that the first version of theta is just a simplified form of the second, for the sake of optimization.
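Since the question mentions cocos2d-x, here is the same idea as a C++ sketch, a direct translation of the JScript above (the struct name is mine):

#include <cmath>

struct PixelPoint { int x, y; };

// Perimeter via the Euler approximation used above; floor it for pixels.
double getEllipsePerimeter(double rx, double ry)
{
    return M_PI * std::sqrt(2.0 * (rx * rx + ry * ry));
}

// Point at arc offset d (pixels) along the ellipse centered at (cx, cy).
PixelPoint getPointOnEllipse(double cx, double cy, double rx, double ry, double d)
{
    double theta = d * std::sqrt(2.0 / (rx * rx + ry * ry));
    return { static_cast<int>(std::floor(cx + std::cos(theta) * rx)),
             static_cast<int>(std::floor(cy - std::sin(theta) * ry)) };
}

On each tick you would advance d from 0 up to floor(getEllipsePerimeter(rx, ry)) and set the sprite's position to the returned point.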

Determining camera motion with fundamental matrix opencv

I tried determining camera motion from the fundamental matrix using OpenCV. I'm currently using optical flow to track the movement of points across every other frame. The essential matrix is derived from the fundamental matrix and the camera matrix. My algorithm is as follows:
1. Use the goodFeaturesToTrack function to detect feature points in the frame.
2. Track the points over the next two or three frames (LK optical flow), calculating the translation and rotation vectors from the corresponding points.
3. Refresh the points after two or three frames (goodFeaturesToTrack again), and once more find the translation and rotation vectors.
I understand that I cannot simply add the translation vectors to find the total movement from the beginning, since the axes keep changing when I refresh the points and restart tracking. Can anyone suggest how to calculate the accumulated movement from the origin?
What you are asking about is a typical visual odometry problem: concatenate the SE(3) transformation matrices of the Lie group.
You just multiply T_1 * T_2 * T_3 until you get T_1to3.
You can try this code: https://github.com/avisingh599/mono-vo/blob/master/src/visodo.cpp
for (int numFrame = 2; numFrame < MAX_FRAME; numFrame++)
    if ((scale > 0.1) && (t.at<double>(2) > t.at<double>(0)) && (t.at<double>(2) > t.at<double>(1))) {
        t_f = t_f + scale * (R_f * t);
        R_f = R * R_f;
    }
It's a simple math concept. If you find it difficult, just look at forward kinematics in robotics for an easier understanding; just the concatenation part, not the DH algorithm.
https://en.wikipedia.org/wiki/Forward_kinematics
Write each of your relative camera poses as a 4x4 transformation matrix and then multiply the matrices one after another. For example:
Frame 1 location with respect to the origin coordinate system = [R1 T1]
Frame 2 location with respect to the Frame 1 coordinate system = [R2 T2]
Frame 3 location with respect to the Frame 2 coordinate system = [R3 T3]
Frame 3 location with respect to the origin coordinate system = [R1 T1] * [R2 T2] * [R3 T3]
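With OpenCV this chaining is a few lines; a minimal sketch, assuming each step yields a 3x3 rotation R and a 3x1 translation t as CV_64F cv::Mat (the helper name is mine):

#include <opencv2/core.hpp>

// Build the 4x4 homogeneous transform [R t; 0 0 0 1] from R (3x3) and t (3x1).
cv::Mat makeT(const cv::Mat& R, const cv::Mat& t)
{
    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T(cv::Rect(0, 0, 3, 3))); // top-left 3x3 block
    t.copyTo(T(cv::Rect(3, 0, 1, 3))); // last column, top 3 rows
    return T;
}

// Pose of frame 3 with respect to the origin:
// cv::Mat T_0to3 = makeT(R1, t1) * makeT(R2, t2) * makeT(R3, t3);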
