Question about the composite body algorithm implementation in Drake - drake

I am trying to relate algorithm 9.3 in Jain to the implementation of the composite body algorithm in Drake.
The documentation mentions that the hinge matrix is the transpose of that used in Jain. Looking at Featherstone (2008), it seems this implementation is more in the spirit of his section 6.3, i.e. S* = H, with the matrix calculated column by column rather than blockwise. I wanted to double-check whether this is the case.

Yes, that's right. Although we use Jain's formulation and terminology (mostly), we like to think of the joint motion matrix as a small Jacobian, as Featherstone does with his "S" (∂V/∂v, i.e. the partial of the spatial velocity V across the joint w.r.t. the generalized velocities v of that joint). That way our H matrix follows the same orientation convention as our full robotics Jacobian (where J maps velocities and Jᵀ maps forces).
We don't necessarily write down the H matrix for a given joint but rather have that joint (technically, mobilizer) provide functions for multiplying by H and Hᵀ.
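To make that orientation convention concrete, here is a small NumPy sketch (illustrative only, not Drake code; the 6x2 H below is a made-up motion matrix for a hypothetical 2-dof joint):

import numpy as np

# Hypothetical joint motion matrix H for a 2-dof joint: each column is
# the partial of the across-joint spatial velocity V w.r.t. one
# generalized velocity (Featherstone's S), so H is 6 x n, the same
# orientation as a full robotics Jacobian.
H = np.array([
    [0.0, 0.0],  # angular x
    [0.0, 1.0],  # angular y
    [1.0, 0.0],  # angular z
    [0.0, 0.0],  # linear x
    [0.0, 0.0],  # linear y
    [0.0, 0.0],  # linear z
])

v = np.array([0.5, -0.2])   # generalized velocities of this joint
V = H @ v                   # spatial velocity across the joint (6-vector)

F = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # spatial force (6-vector)
tau = H.T @ F               # generalized forces, mirroring tau = J^T F

In Jain's convention the same matrix appears transposed (n x 6), which is why the documentation describes Drake's H as the transpose of Jain's hinge matrix.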

Related

Eigenvalues of symmetric band matrix using Accelerate framework

In a macOS/iOS code base, I've got a real symmetric band matrix that can be anywhere from 10 × 10 to about 500 × 500, and I need to compute whether all its eigenvalues are greater than (or equal to) a certain threshold. So I only strictly need to know the lowest eigenvalue, in case that helps.
Is there any function or set of functions in Apple's Accelerate framework that can provide a full or partial solution to this? Ideally with a cost proportional to the number of non-zero entries.
Based on this, it appears there's a set of LAPACK functions that compute eigenvalues efficiently for banded symmetric matrices. (LAPACK is implemented as part of the Accelerate framework.)
As I understand it, ssbtrd followed by ssterf should do the trick.
SSBTRD reduces a real symmetric band matrix A to symmetric
tridiagonal form T by an orthogonal similarity transformation:
Q**T * A * Q = T.
SSTERF computes all eigenvalues of a symmetric tridiagonal matrix
using the Pal-Walker-Kahan variant of the QL or QR algorithm.

Composing multiple Essential matrices

I have computed the Essential matrices between the frame pairs [a,b], [b,c] and [c,d], so I now have E_ab, E_bc and E_cd. Is it possible to compute E_ad directly, without matching? I am thinking of something like composing 3D transformations, where:
T04 = T01 * T12 * T23 * T34
The solution I found is to decompose each one into R|T, construct a 4x4 T matrix from R|T, compose the T matrices using the notation above, and then convert the final R|T back to an Essential matrix. However, I was wondering if there is a direct way, without decomposing, for two reasons:
Decomposing and re-composing (at least in OpenCV) changes the essential matrix a bit
Performance
So, how can I combine two (or more) essential matrices directly?
Before answering the original question, I think it might be important to discuss this point from your comment:
Ultimate goal is Bundle Adjustment on the Essential Matrices directly instead of on RT. I am using Single Calibrated Camera
It might be that I'm missing critical details about your project, but there seem to be a few issues with that method:
Minimal representation. Usually it's better (for both performance and accuracy) to use minimal parametrisations in your estimations. The Essential matrix has only five degrees of freedom: three for rotation plus three for translation in its decomposition [T]_x R, minus one for the overall scale ambiguity. If you estimate the Essential matrix directly, you're optimizing eight parameters (the nine entries, defined up to scale) instead of the five or six of a minimal pose parametrisation.
Less constrained problem. The Essential matrix only imposes a coplanarity constraint, so it fails to penalize some wrong configurations that would be penalized by the reprojection errors used in usual bundle adjustment. Consider two cameras C_1 and C_2 such that a point X_2 expressed in C_2 can be expressed in C_1 via RX_2+T, and denote the corresponding point in C_1 as X_1. The constraint X_1.transpose()*E*X_2=0 only imposes that the three vectors T, X_1 and RX_2+T remain coplanar. So the point X_2 is free to move as long as it reprojects onto the epipolar line, even if its ray diverges from X_1 (see the sketch after this list):
As you can see in the figure, the red and green points would respect the Essential matrix constraint; however, they would have different reprojection errors.
Time complexity. Estimating Essential matrices is really slow compared to the alternatives, and so is recovering the rotation and translation from an Essential matrix compared to the inverse problem of building E from R and T.
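Here is a small NumPy sketch of that second point, with a made-up camera pair: any image point lying on the epipolar line satisfies the Essential constraint exactly, even when it is far from the true correspondence and would therefore have a large reprojection error.

import numpy as np

def skew(t):
    # Cross-product matrix [t]_x, so skew(t) @ y == np.cross(t, y).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Made-up relative pose between C_1 and C_2: X_1 = R @ X_2 + T.
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
T = np.array([1.0, 0.0, 0.0])
E = skew(T) @ R

X2 = np.array([0.2, -0.1, 4.0])      # a 3D point in C_2's frame
X1 = R @ X2 + T                      # the same point in C_1's frame
x1, x2 = X1 / X1[2], X2 / X2[2]      # normalized image coordinates
print(x1 @ E @ x2)                   # ~0: true correspondence

# Slide x1 along the epipolar line l1 = E @ x2: the constraint is
# still satisfied, yet the point no longer matches the true x1.
l1 = E @ x2
d = np.array([-l1[1], l1[0], 0.0])   # direction within the line (l1 @ d == 0)
x1_wrong = x1 + 0.5 * d
print(x1_wrong @ E @ x2)             # still ~0, despite a large 2D offset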
Now, to come back to the initial problem from the question: consider three cameras C_1, C_2, C_3, denote the relative poses by (R_1, T_1) between C_1 and C_2 and (R_2, T_2) between C_2 and C_3, and denote the corresponding Essential matrices E_1 and E_2. Then you can easily see that the Essential from C_3 to C_1, which I'll denote E_3, is given by
E_3=[R_1T_2+T_1]_x R_1R_2
which with a little bit of reorganization leads to
E_3=R_1E_2 + E_1R_2
This relation, which links the three Essential matrices, indicates that you will still need to extract R_1 and R_2 from E_1 and E_2.
Edit: I derived the relationship above like this: assume y is a 3x1 vector, then
E_3 y = [R_1T_2 + T_1]_x R_1R_2 y
      = (R_1T_2 + T_1) x (R_1R_2 y)
      = (R_1T_2) x (R_1R_2 y) + T_1 x (R_1R_2 y)   # for the next line, note that the cross product is invariant under rotation: (Ra) x (Rb) = R(a x b)
      = R_1 (T_2 x (R_2 y)) + T_1 x (R_1R_2 y)
      = R_1 [T_2]_x R_2 y + [T_1]_x R_1R_2 y
      = R_1 E_2 y + E_1 R_2 y
      = (R_1 E_2 + E_1 R_2) y
Since this holds for any arbitrary y, this means that E_3=R_1E_2+E_1R_2.
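For what it's worth, the relation is easy to verify numerically. A minimal NumPy check, using the same conventions as above (X_1 = R_1 X_2 + T_1 between C_1 and C_2, X_2 = R_2 X_3 + T_2 between C_2 and C_3, and E = [T]_x R):

import numpy as np

def skew(t):
    # Cross-product matrix [t]_x, so skew(t) @ y == np.cross(t, y).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def random_rotation(rng):
    # Orthonormalize a random matrix and fix the sign of the determinant.
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q if np.linalg.det(q) > 0 else -q

rng = np.random.default_rng(1)
R1, R2 = random_rotation(rng), random_rotation(rng)
T1, T2 = rng.standard_normal(3), rng.standard_normal(3)

E1 = skew(T1) @ R1                    # Essential between C_1 and C_2
E2 = skew(T2) @ R2                    # Essential between C_2 and C_3
E3 = skew(R1 @ T2 + T1) @ R1 @ R2     # composed Essential, C_3 to C_1

assert np.allclose(E3, R1 @ E2 + E1 @ R2)

Note that the composed formula still consumes R_1 and R_2, which is exactly why the decomposition step can't be avoided.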

Drake Rigid_body_tree calculating the jacobian question

I am currently trying to calculate the Jacobian of a Kuka arm using the "rigid_body_tree.cc" file for the equation Tau = J^T*F, where Tau is the vector of the Kuka arm's 7 joint torques, F is the Cartesian forces and torques at the end-effector, and J^T is the Jacobian transposed.
There exists a function in Drake called transformPointsJacobian which takes in a cache, points, from_body_or_frame_ind, to_body_or_frame_ind, and in_terms_of_qdot.
The function first calculates the geometric Jacobian, which outputs a 6x7 matrix (the Kuka has 7 joints).
Then it uses that matrix to compute a 3x7 Jacobian, as calculated below:
// Copy this column's translational (linear-velocity) part.
J.template block<kSpaceDimension, 1>(row_start, *it) = Jv.col(col);
// Add the lever-arm term: this column of Jomega crossed with the
// point's position in the base frame.
J.template block<kSpaceDimension, 1>(row_start, *it).noalias() +=
    Jomega.col(col).cross(points_base.col(i));
This shrinks the 6x7 geometric Jacobian down to a 3x7 Jacobian whose columns are computed as Jv + Jω × p, where p is the point's position in the base frame.
This code definitely works, but I don't seem to understand why this step works. Also, since I will need the torques in the cartesian end-effector space, I will need the full 6x7 jacobian.
In order to get the last 3 rows of the jacobian, how can I use the output of the geometric jacobian so that it will be valid in the equation, Tau = J^T*F?
Thanks!
Please consider switching to the supported class MultibodyPlant rather than the soon-to-be-deprecated RigidBodyTree in the Drake attic. The Jacobian documentation is much better there -- the group of methods is here. An example (with lots of documentation) is here; that one produces the 6x7.
Is there a reason you need to use the old code?
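For reference, the per-column relation described in the question, and the way the full 6x7 matrix enters Tau = J^T*F, can be sketched in NumPy (a stand-in random Jacobian, not Drake API; the row ordering with the angular block on top is an assumption to check against your convention):

import numpy as np

nq = 7                              # the Kuka has 7 joints
rng = np.random.default_rng(0)
Jg = rng.standard_normal((6, nq))   # stand-in geometric Jacobian
Jw, Jv = Jg[:3], Jg[3:]             # assumed: angular rows on top, linear below
p = np.array([0.0, 0.0, 0.1])       # point fixed to the end-effector, in base frame

# Point (translational) Jacobian, built column by column exactly
# like the Drake snippet: Jv column plus Jw column crossed with p.
Jp = np.empty((3, nq))
for col in range(nq):
    Jp[:, col] = Jv[:, col] + np.cross(Jw[:, col], p)

# For joint torques from a full spatial force F = [torque; force]
# at the end-effector, use the full 6x7 Jacobian directly:
F = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 10.0])
tau = Jg.T @ F                      # 7 joint torques
print(tau.shape)                    # (7,)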

Dimensions of LSTM variant in Deep Mind's Differentiable Neural Computer (DNC)

I'm trying to implement DeepMind's DNC (Nature paper) with PyTorch 0.4.0.
When implementing the LSTM variant they used, I ran into some trouble with dimensions.
To simplify, suppose BATCH=1.
The equations they list in the paper are these (single-layer form):
i_t = σ(W_i [x_t; h_{t-1}] + b_i)
f_t = σ(W_f [x_t; h_{t-1}] + b_f)
s_t = f_t s_{t-1} + i_t tanh(W_s [x_t; h_{t-1}] + b_s)
o_t = σ(W_o [x_t; h_{t-1}] + b_o)
h_t = o_t tanh(s_t)
where [x;h] means a concatenation of x and h into one single vector, and i, f and o are column vectors.
My question is about how the state s_t is computed.
The second summand is obtained by multiplying i with a column vector, so the result is either a scalar (transpose i first, then take the scalar product) or ill-defined (two column vectors multiplied).
So the state would come out as a single scalar...
By the same reasoning the hidden state h_t would be a scalar too, but it has to be a column vector.
Obviously I'm wrong somewhere, but I can't figure out where.
By looking at the Wikipedia LSTM article I think I figured it out.
This is the formal statement of the standard LSTM found in the article:
f_t = σ(W_f x_t + U_f h_{t-1} + b_f)
i_t = σ(W_i x_t + U_i h_{t-1} + b_i)
o_t = σ(W_o x_t + U_o h_{t-1} + b_o)
c_t = f_t ∘ c_{t-1} + i_t ∘ tanh(W_c x_t + U_c h_{t-1} + b_c)
h_t = o_t ∘ tanh(c_t)
Here ∘ (the circle) represents the element-by-element (Hadamard) product.
By using this element-wise product in the corresponding parts of the DNC equations (s_t and h_t), the dimensions work out.
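As a concrete check, here is a minimal PyTorch sketch of one such step (batch of 1; the shapes and the gate ordering inside the stacked weight matrix are my own assumptions): with Hadamard products, every gate and state stays a length-H column vector.

import torch

def lstm_step(x, h_prev, s_prev, W, b):
    # x: (X,) input; h_prev, s_prev: (H,) hidden and cell state.
    # W: (4H, X+H) stacked gate weights; b: (4H,) stacked biases.
    H = h_prev.shape[0]
    z = W @ torch.cat([x, h_prev]) + b   # all gate pre-activations, (4H,)
    i = torch.sigmoid(z[0:H])            # input gate,      (H,)
    f = torch.sigmoid(z[H:2 * H])        # forget gate,     (H,)
    g = torch.tanh(z[2 * H:3 * H])       # candidate state, (H,)
    o = torch.sigmoid(z[3 * H:4 * H])    # output gate,     (H,)
    s = f * s_prev + i * g               # element-wise: stays (H,)
    h = o * torch.tanh(s)                # element-wise: stays (H,)
    return h, s

X, H = 5, 8
W = torch.randn(4 * H, X + H)
b = torch.zeros(4 * H)
h, s = lstm_step(torch.randn(X), torch.zeros(H), torch.zeros(H), W, b)
print(h.shape, s.shape)  # torch.Size([8]) torch.Size([8])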

SVM - Can I normalize W vector?

In SVM, is there something wrong with normalizing the W vector like this:
for each i: W_i = W_i / norm(W)
I'm confused. At first sight it seems that the resulting sign(<W, x>) will be the same. But if so, then in the loss function norm(W)^2 + C*Sum(hinge_loss) we could make the first term arbitrarily small just by setting W = W / (large number).
So, where am I wrong?
I suggest you read either my minimal 5 ideas of SVMs or, better:
[Bur98] C. J. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
To answer your question: SVMs define a hyperplane to separate the data. A hyperplane is defined by a normal vector w and a bias b: it is the set of points x with w^T x + b = 0.
If you change only w, this will give another hyperplane. However, SVMs do more tricks (see my 5 ideas), and the weight vector is actually scaled so that it stands in a fixed relationship to the margin between the two classes.
I think you are missing the constraint that r(w^T x + w_0) >= 1 for all examples; normalizing the weight vector will violate this constraint.
In fact, this constraint is introduced in the SVM in the first place to achieve a unique solution; otherwise, as you mention, infinitely many solutions would be possible just by scaling the weight vector.
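A tiny NumPy demonstration of that point, with made-up data and an illustrative C: shrinking w leaves sign(<W, x>) unchanged and reduces norm(W)^2, but as soon as the margins y*(w^T x) drop below 1 the hinge term grows, so the objective cannot be gamed by rescaling.

import numpy as np

rng = np.random.default_rng(0)
n, d, C = 100, 2, 1.0
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0])
y = np.sign(X @ w_true)                 # labels in {-1, +1}

def svm_objective(w):
    margins = y * (X @ w)               # functional margins
    hinge = np.maximum(0.0, 1.0 - margins)
    return w @ w + C * hinge.sum()

w = np.array([2.0, -4.0])               # a separating direction
for scale in [1.0, 0.5, 0.1, 0.01]:
    print(scale, svm_objective(scale * w))
# The regularizer shrinks with the scale, but the hinge sum grows,
# so the total objective blows up once the margin constraint bites.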
