Jacobian of a planar manipulator with 4 links - robotics

I have a mobile manipulator that moves in the plane. The base moves in SE(2) and the manipulator is also planar, with 4 links. The mobile base is a differential drive robot, so its motion is the standard unicycle model:
[xdot; ydot; thetadot] = [cos(theta), 0; sin(theta), 0; 0, 1] * [v; omega]
The manipulator is attached to the center of our mobile base. How do I compute the Jacobian of the mobile manipulator?
I know that for the mobile manipulator I have something of the form:
The middle matrix is what I need to compute.

The Jacobian is the linearization of the forward kinematics with respect to the degrees of freedom of your system. Therefore, you should think of it as describing the motion of your end effector with respect to the world. In that case, the motion of the end effector is just the combination of the motion of your body w.r.t. the world and the motion of the end effector w.r.t. the body. So J_b corresponds to your first equation and J_m is the arm Jacobian rotated from the base frame into the world frame.
With the way you have this set up, that would be a 3x6 matrix, by the way. Just use v and omega as in your first equation and keep the first 3x2 block as is.
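As a hedged sketch of that composition (using NumPy, with made-up link lengths; not the poster's actual code), the 3x6 Jacobian for a diff-drive base carrying a planar 4-link arm at its centre can be assembled as follows. Note that if you keep track of the end effector's offset from the base centre, the omega column also picks up a lever-arm term on top of the pure base rotation:

import numpy as np

def arm_fk(q, lengths):
    # End-effector pose (x, y, phi) of the planar arm, expressed in the base frame.
    x = y = phi = 0.0
    for qi, li in zip(q, lengths):
        phi += qi
        x += li * np.cos(phi)
        y += li * np.sin(phi)
    return np.array([x, y, phi])

def arm_jacobian(q, lengths):
    # 3x4 Jacobian of the arm's end-effector pose w.r.t. joint rates, in the base frame.
    n = len(q)
    J = np.zeros((3, n))
    phis = np.cumsum(q)
    for j in range(n):
        for k in range(j, n):  # joint j moves every link k >= j
            J[0, j] -= lengths[k] * np.sin(phis[k])
            J[1, j] += lengths[k] * np.cos(phis[k])
        J[2, j] = 1.0
    return J

def mobile_manipulator_jacobian(theta, q, lengths):
    # 3x6 Jacobian mapping [v, omega, qdot_1..qdot_4] to the end-effector velocity
    # (xdot, ydot, phidot) in the world frame.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    p_b = arm_fk(q, lengths)[:2]          # end-effector position in the base frame
    J_arm = arm_jacobian(q, lengths)
    J = np.zeros((3, 2 + len(q)))
    J[:, 0] = [c, s, 0.0]                 # v drives the base along its heading
    lever = R @ p_b                       # end-effector offset in world coordinates
    J[:, 1] = [-lever[1], lever[0], 1.0]  # omega rotates the whole arm about the base centre
    J[:2, 2:] = R @ J_arm[:2, :]          # arm Jacobian rotated into the world frame
    J[2, 2:] = J_arm[2, :]
    return J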

Related

Using PyDrake AutoDiff to compute the Jacobian of a vector in some frame

Suppose I have a vector n_F which is fixed in a frame F (for example, a normal vector on the surface of a fingertip, fixed in the local fingertip frame). The corresponding world-frame vector n_W(q) depends on the configuration via the expression n_W = R_WF * n_F, where the rotation matrix R_WF depends on the configuration via the forward kinematics map.
My question is how to recover the Jacobian Dn_W of n_W with respect to q (which will be some 3 by n matrix) using Drake's AutoDiff. I figure there must be some internal implementation of this since constraints between two angles in different frames are enforceable during IK, and I'm assuming IK is solved with a gradient-based solver. However, I'm having trouble working with this since AutoDiffXd seems to only work on scalar functions.
Alternatively, if there's some simple way to express this Jacobian using available Drake functions, that would also suffice for my application - I haven't been able to work out a clean expression for this by hand.
I have a (non-AutoDiff) solution currently implemented that does not involve rotations.
Let J0 and J1 denote the translational velocity Jacobians of two points: the origin of frame F, and the point at position n_F from that origin (both specified in frame F), where the velocities are measured and expressed in the world frame. The desired Jacobian Dn_W is then simply J1 - J0.
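A hedged sketch of that difference-of-Jacobians idea in pydrake (assuming you already have a MultibodyPlant plant, its context, the fingertip frame frame_F, and n_F as a length-3 NumPy array):

import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

def normal_vector_jacobian(plant, context, frame_F, n_F):
    W = plant.world_frame()
    # J1: translational Jacobian of the point located at n_F in frame F
    J1 = plant.CalcJacobianTranslationalVelocity(
        context, JacobianWrtVariable.kQDot, frame_F, n_F.reshape(3, 1), W, W)
    # J0: translational Jacobian of the origin of frame F
    J0 = plant.CalcJacobianTranslationalVelocity(
        context, JacobianWrtVariable.kQDot, frame_F, np.zeros((3, 1)), W, W)
    # p_W(point at n_F) - p_W(origin of F) = R_WF * n_F = n_W, so the difference
    # of the two Jacobians is Dn_W (a 3 x n matrix).
    return J1 - J0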

Base component of Spatial Velocity Jacobian for a Free-Floating Base Robot is all Zeros

I am trying to find the end-effector spatial velocity Jacobian for a robot with a free-floating base. Due to the free-floating base, the Jacobian should contain a base component and a manipulator component (see https://spart.readthedocs.io/en/latest/Tutorial_Kinematics.html#jacobians):
V_ee = end-effector spatial velocity
J_b = base jacobian component
J_m = manipulator jacobian component
v = generalized velocities
V_ee = [J_b, J_m] v
Until now I was using the SPART toolbox to do this in Matlab (https://github.com/NPS-SRL/SPART) and now I am moving to Drake. I tried using CalcJacobianSpatialVelocity in the MultibodyPlant, and the manipulator Jacobian is correct when compared to SPART. However, the base component of the Jacobian is all zeros. This differs from what I expected and from SPART, since for a free-floating base the base velocities contribute to the end-effector spatial velocity.
An example reproduction of this issue can be found here: https://colab.research.google.com/github/vyas-shubham/DrakeTests/blob/main/freeFloating/computeJacobian.ipynb
I think I'm doing one of these things wrong in Drake:
-Using CalcJacobianSpatialVelocity incorrectly. This is unlikely, as the manipulator Jacobian is correct and the base frame is also correct (only 1 frame in the URDF).
-Writing the URDF incorrectly for a free-floating base. Maybe I need to specify the floating base differently in the URDF for Drake to include it in the Jacobian computation?
Your code is taking the Jacobian of the chaser relative to the target; there is no floating base between them (so the Jacobian with respect to the floating base should, indeed, be zero). I think, perhaps, that you want to make frame_A=world_frame?
postCaptureTargetJacobian = postContactPlant.CalcJacobianSpatialVelocity(
    context=postCaptureContext,
    with_respect_to=JacobianWrtVariable.kV,
    frame_B=target_frame,
    p_BP=np.zeros(3),
    frame_A=world_frame,
    frame_E=world_frame)
PS - I added this cell to your notebook to quickly inspect your kinematic tree
from IPython.display import display, SVG
import pydot
display(SVG(pydot.graph_from_dot_data(postContactPlant.GetTopologyGraphvizString())[0].create_svg()))
PPS - Thanks for curating the example in the notebook. That made it easy for me to take a look.

How do I determine if a ROS robot is going frontwards or backwards?

ROS provides an odometry message, which tells me the following in reference to an xy plane.
The x component of the robot’s speed in (m/sec).
The y component of the robot’s speed in (m/sec).
The robot's orientation, represented as a quaternion (for planar motion only the z and w components are nonzero).
ROS provides the following additional C++ libraries:
Quaternion API: http://docs.ros.org/jade/api/tf/html/c++/classtf_1_1Quaternion.html
Vector3 API: http://docs.ros.org/jade/api/tf/html/c++/classtf_1_1Vector3.html
I have an inelegant giant if-statement that uses the quadrant the robot is facing together with the direction of the x and y components. I would rather learn how to leverage quaternions.
I do not see a need to work with quaternions here. The problem seems much simpler.
From nav_msgs/Odometry: "The twist in this message should be specified in the coordinate frame given by the child_frame_id".
So the twist expresses the velocity (both linear and angular) with respect to child_frame_id. In most robot setups, child_frame_id will be a coordinate frame fixed to the robot, for example "base_link". Since the velocity in the twist is given in a frame fixed to the robot, you can simply check whether the linear velocity points into the forward half plane, i.e. whether its x component is non-negative, something like:
if (odom_msg.twist.twist.linear.x >= 0.0)
  // robot going forward
else
  // robot going backward
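For completeness, a minimal sketch of the same check as a standalone Python node (assuming ROS 1 with rospy and a standard /odom topic; the node and topic names are just illustrative):

import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    # The twist is expressed in child_frame_id (usually base_link), so the sign
    # of linear.x alone distinguishes forward from backward motion.
    if msg.twist.twist.linear.x >= 0.0:
        rospy.loginfo("robot moving forward")
    else:
        rospy.loginfo("robot moving backward")

rospy.init_node("direction_monitor")
rospy.Subscriber("/odom", Odometry, odom_callback)
rospy.spin()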

Measure distance to object with a single camera in a static scene

Let's say I am placing a small object on a flat floor inside a room.
First step: Take a picture of the room floor from a known, static position in the world coordinate system.
Second step: Detect the bottom edge of the object in the image and map the pixel coordinate to the object position in the world coordinate system.
Third step: Using a measuring tape, measure the real distance to the object.
I could move the small object, repeat these three steps for every pixel coordinate, and create a lookup table (key: pixel coordinate; value: distance). This procedure is accurate enough for my use case. I know it becomes problematic if there are multiple objects (one object could occlude another).
My question: Is there an easier way to create this lookup table? Accidentally changing the camera angle by a few degrees destroys the hard work. ;)
Maybe it is possible to execute the three steps for a few specific pixel coordinates or positions in the world coordinate system and perform some "calibration" to calculate the distances with the computed parameters?
If the floor is flat, its equation is that of a plane, let
a.x + b.y + c.z = 1
in the camera coordinates (the origin is the optical center of the camera, XY forms the focal plane and Z the viewing direction).
Then a ray from the camera center to a point on the image at pixel coordinates (u, v) is given by
(u, v, f).t
where f is the focal length.
The ray hits the plane when
(a.u + b.v + c.f) t = 1,
i.e. at the point
(u, v, f) / (a.u + b.v + c.f)
Finally, the distance from the camera to the point is
p = √(u² + v² + f²) / (a.u + b.v + c.f)
This is the function that you need to tabulate. Assuming that f is known, you can determine the unknown coefficients a, b, c by taking three non-aligned points, measuring the image coordinates (u, v) and the distances, and solving a 3x3 system of linear equations.
From the last equation, you can then estimate the distance for any point of the image.
The focal distance can be measured (in pixels) by looking at a target of known size, at a known distance. By proportionality, the ratio of the distance over the size is f over the length in the image.
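A hedged Python sketch of this calibration (the focal length and the three calibration measurements below are made-up placeholders; pixel coordinates are taken relative to the principal point):

import numpy as np

f = 800.0  # focal length in pixels (assumed already measured)

# Three non-collinear calibration points: pixel coordinates (u, v) relative to
# the principal point, and the tape-measured distance p to each.
calib = [((120.0, -40.0), 2.50),
         ((-200.0, 80.0), 3.10),
         ((60.0, 150.0), 1.95)]

# From p = sqrt(u^2 + v^2 + f^2) / (a*u + b*v + c*f), each measurement gives one
# linear equation in (a, b, c):  a*u + b*v + c*f = sqrt(u^2 + v^2 + f^2) / p
A = np.array([[u, v, f] for (u, v), _ in calib])
rhs = np.array([np.sqrt(u * u + v * v + f * f) / p for (u, v), p in calib])
a, b, c = np.linalg.solve(A, rhs)

def distance(u, v):
    # Distance from the camera to the floor point seen at pixel (u, v).
    return np.sqrt(u * u + v * v + f * f) / (a * u + b * v + c * f)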
Most vision libraries (including OpenCV) have built-in functions that take a few points from the camera reference frame and the corresponding points on a Cartesian plane and generate the warp matrix (an affine or perspective transformation) for you. (Some are fancy enough to include non-linear mappings given enough input points, but that brings you back to your calibration-time issue.)
A final note: most vision libraries calibrate off some type of grid, e.g. a checkerboard pattern. If you write your calibration to work off such a sheet, you only need to measure the distance to one target object, as the transformation is computed from the sheet and the target just provides the world offsets.
I believe what you are after is called a Projective Transformation. The link below should guide you through exactly what you need.
Demonstration of calculating a projective transformation with proper math typesetting on the Math SE.
Although you can solve this by hand and write the result into your code, I strongly recommend using a matrix math library (or even writing your own matrix math functions) rather than hand-deriving the equations: you would have to solve them symbolically to turn them into code, which is lengthy and prone to miscalculation.
Here are just a few tips that may help you with clarification (applying it to your problem):
-Your A matrix (source) is built from the 4 xy points in your camera image (pixel locations).
-Your B matrix (destination) is built from your measurements in the real world.
-For fast recalibration, I suggest marking points on the ground so you can quickly place the cube at the 4 locations (and subsequently get the altered pixel locations in the camera) without having to remeasure.
-You will only have to do steps 1-5 once, during calibration; after that, whenever you want to know the position of something, just take its coordinates in your image and run them through steps 6 and 7.
-You will want your calibration points to be as far away from each other as possible (within reason: at extreme distances, near the vanishing point, you rapidly lose pixel density and therefore source-image accuracy). Make sure that no 3 points are collinear (simply put, make your 4 points approximately a square spanning almost the full camera FOV in the real world).
ps I apologize for not writing this out here, but they have fancy math editing and it looks way cleaner!
Final steps for applying this method to this situation:
In order to perform this calibration, you will have to set a global home position (probably easiest to pick arbitrarily on the floor and measure your camera position relative to that point). From this position, you will need to measure your object's distance in both x and y coordinates on the floor. Although a more tightly packed calibration set will give you more error, the easiest solution may simply be to use a sheet of known dimensions (I am thinking a piece of printer paper, a large board, or something similar). The reason this is easier is that it has built-in axes (i.e. the two sides are orthogonal), so you can just use the four corners of the sheet and canned distances in your calibration. EX: for a piece of paper your points would be (0,0), (0,8.5), (11,8.5), (11,0).
So using those points and the pixels you get will create your transform matrix, but that still just gives you a global x,y position on axes that may be hard to measure on (they may be skew depending on how you measured/calibrated). So you will need to calculate your camera offset:
object in real world coords (from steps above): x1, y1
camera coords (Xc, Yc)
dist = sqrt( pow(x1-Xc,2) + pow(y1-Yc,2) )
If it is too cumbersome to try to measure the position of the camera from global origin by hand, you can instead measure the distance to 2 different points and feed those values into the above equation to calculate your camera offset, which you will then store and use anytime you want to get final distance.
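A hedged sketch of that two-measurement idea (the reference points and distances below are made up; note that two distance circles generally intersect in two points, so the initial guess should be on the physically sensible side):

import numpy as np
from scipy.optimize import fsolve

# Floor (world) coordinates of two reference points, e.g. two corners of the
# calibration sheet, and the tape-measured distance from the camera to each.
p1, d1 = np.array([0.0, 0.0]), 60.0
p2, d2 = np.array([11.0, 8.5]), 55.0

def residual(cam):
    # The camera offset (Xc, Yc) must satisfy both distance equations.
    return [np.linalg.norm(cam - p1) - d1,
            np.linalg.norm(cam - p2) - d2]

Xc, Yc = fsolve(residual, x0=[5.0, -50.0])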
As already mentioned in the previous answers, you'll need a projective transformation, or simply a homography. However, I'll consider it from a more practical point of view and try to keep the summary short and simple.
So, given the proper homography you can warp your picture of the plane so that it looks as if you took it from above (like here). Even simpler, you can transform a pixel coordinate of your image into world coordinates of the plane (the same is done during the warping for each pixel).
A homography is basically a 3x3 matrix, and you transform a coordinate by multiplying it with the matrix. You may now think: wait, a 3x3 matrix and 2D coordinates? You'll need to use homogeneous coordinates.
However, most frameworks and libraries will do this handling for you. What you need to do is find (at least) four points (x/y coordinates) on your world plane/floor (preferably the corners of a rectangle aligned with your desired world coordinate system), take a picture of them, measure the pixel coordinates, and pass both to the "find homography" function of your computer vision or math library.
In OpenCV that would be findHomography; here is an example (the method perspectiveTransform then performs the actual transformation).
In Matlab you can use something from here. Make sure you are using a projective transformation as transform type. The result is a projective tform, which can be used in combination with this method, in order to transform your points from one coordinate system to another.
To transform in the other direction, you just have to invert your homography and use the result instead.
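A hedged sketch of the OpenCV route in Python (the pixel and world coordinates below are placeholders; the world units are whatever you measured in, here the corners of a letter-size sheet in inches):

import numpy as np
import cv2

# Pixel coordinates of the four marked floor points in the image (measured once)
pixel_pts = np.array([[412, 610], [980, 598], [1105, 455], [330, 470]], dtype=np.float32)
# Their known world coordinates on the floor plane
world_pts = np.array([[0, 0], [11, 0], [11, 8.5], [0, 8.5]], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, world_pts)

def pixel_to_floor(u, v):
    # Map an image pixel to floor-plane coordinates via the homography.
    p = cv2.perspectiveTransform(np.array([[[u, v]]], dtype=np.float32), H)
    return p[0, 0]  # (x, y) on the floor

def distance_from_camera(u, v, cam_xy):
    # Planar distance from the camera's (known) floor position to the object.
    x, y = pixel_to_floor(u, v)
    return float(np.hypot(x - cam_xy[0], y - cam_xy[1]))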

Compose Rotation Matrix from XYZ (Gravity/Acceleration)

I've been playing around with both Matlab and Apple's documentation regarding CMRotationMatrix for weeks.
I've found that I can easily re-create CMRotationMatrix by calculating it from roll, yaw and pitch.
However, I've found no resources/documentation on how to create a Rotation Matrix from XYZ rotations from either gravity or userAcceleration.
All I found was how they create a 4x4 matrix in their VideoSnake demo.
So my question is: does anyone have any input on how to create a 3x3 matrix from XYZ rotations?
To begin with, the rotation matrix has broad applications in physics, geometry and computer graphics. Since you mention gravity and userAcceleration, the physics side is what matters here: those readings are what tie the device's axes to the outside world.
The key point about XYZ rotations in relation to a rotation matrix is that the XYZ axes by themselves are abstract: they describe directions relative to the device, with no particular angle as a starting point.
This is the part you have to understand: since the figures are abstract and arbitrary, you need to convert the XYZ readings into direction vectors that can be understood in real-world coordinates.
Only then will you be able to relate the rotation matrix to the XYZ coordinate values.
To conclude: the point of using a direction vector is to express the measured direction in a form consistent with the rotation matrix, so it can then be used effectively in platform-local coordinates.
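A hedged sketch of one common way to do this (not tied to CoreMotion's exact conventions): estimate roll and pitch from a gravity reading expressed in the device frame, then compose a 3x3 rotation matrix from them. Yaw cannot be recovered from gravity alone, so it is taken as zero here:

import numpy as np

def rotation_from_gravity(gx, gy, gz):
    # Roll and pitch from the gravity direction measured in the device frame.
    roll = np.arctan2(gy, gz)
    pitch = np.arctan2(-gx, np.hypot(gy, gz))
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # rotation about x by roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # rotation about y by pitch
    return Ry @ Rx  # 3x3 rotation matrix, yaw assumed zero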
