TrajectorySource with PiecewisePose does not give expected rotation result - drake

I am playing with PiecewisePose. I want to create a linear interpolation trajectory from an initial pose to a goal pose (that is, my interpolation trajectory only contains one segment). After creating the trajectory, I add it to a TrajectorySource and use LogVectorOutput to save the log. Here is my code:
import numpy as np
from pydrake.all import (
    AddMultibodyPlantSceneGraph, DiagramBuilder, LogVectorOutput, PiecewisePose,
    RigidTransform, RotationMatrix, Simulator, TrajectorySource,
)

sample_times = [0., 3.0]
X_0 = RigidTransform()
X_1 = RigidTransform(RotationMatrix.MakeXRotation(np.pi/2), [1, 1, 1])
Xs = [X_0, X_1]
traj = PiecewisePose.MakeLinear(sample_times, Xs)

builder = DiagramBuilder()
# The plant is only here so that its 0.1 s discrete time step drives the logging.
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, 0.1)
plant.Finalize()

traj_R = traj.get_orientation_trajectory()
traj_p = traj.get_position_trajectory()
R_source = builder.AddSystem(TrajectorySource(traj_R))
p_source = builder.AddSystem(TrajectorySource(traj_p))
logger_R = LogVectorOutput(R_source.get_output_port(), builder)
logger_p = LogVectorOutput(p_source.get_output_port(), builder)

diagram = builder.Build()
simulator = Simulator(diagram)
context = simulator.get_context()
simulator.AdvanceTo(3.0)
To see the result, I print the log:
log_R = logger_R.FindLog(context)
print(log_R.data())
However, the result is quite surprising:
[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0.]]
The quaternion is always (1, 0, 0, 0)!
I then check the translation log:
log_p = logger_p.FindLog(context)
print(log_p.data())
which is correct:
[[0. 0.03333333 0.06666667 0.1 0.13333333 0.16666667
0.2 0.23333333 0.26666667 0.3 0.33333333 0.36666667
0.4 0.43333333 0.46666667 0.5 0.53333333 0.56666667
0.6 0.63333333 0.66666667 0.7 0.73333333 0.76666667
0.8 0.83333333 0.86666667 0.9 0.93333333 0.96666667
1. ]
[0. 0.03333333 0.06666667 0.1 0.13333333 0.16666667
0.2 0.23333333 0.26666667 0.3 0.33333333 0.36666667
0.4 0.43333333 0.46666667 0.5 0.53333333 0.56666667
0.6 0.63333333 0.66666667 0.7 0.73333333 0.76666667
0.8 0.83333333 0.86666667 0.9 0.93333333 0.96666667
1. ]
[0. 0.03333333 0.06666667 0.1 0.13333333 0.16666667
0.2 0.23333333 0.26666667 0.3 0.33333333 0.36666667
0.4 0.43333333 0.46666667 0.5 0.53333333 0.56666667
0.6 0.63333333 0.66666667 0.7 0.73333333 0.76666667
0.8 0.83333333 0.86666667 0.9 0.93333333 0.96666667
1. ]]
After I change the goal pose to X_1 = RigidTransform(RollPitchYaw(np.pi/3, np.pi/3, 0), [1, 1, 1]), I seem to get a time-varying trajectory for the orientation:
[[ 1. 0.99933683 0.99734887 0.99404073 0.98942009 0.98349767
0.97628722 0.96780548 0.95807213 0.94710977 0.93494383 0.92160256
0.90711693 0.89152055 0.87484963 0.85714286 0.83844134 0.81878847
0.79822988 0.77681329 0.75458839 0.73160678 0.7079218 0.68358843
0.65866314 0.63320379 0.60726947 0.58092037 0.55421767 0.52722332
0.5 ]
[ 0. -0.01770677 -0.03437769 -0.04997407 -0.06445971 -0.07780097
-0.08996691 -0.10092927 -0.11066262 -0.11914437 -0.12635481 -0.13227723
-0.13689787 -0.14020602 -0.14219398 -0.14285714 -0.14219398 -0.14020602
-0.13689787 -0.13227723 -0.12635481 -0.11914437 -0.11066262 -0.10092927
-0.08996691 -0.07780097 -0.06445971 -0.04997407 -0.03437769 -0.01770677
0. ]
[ 0. -0.03181766 -0.0641358 -0.09687939 -0.12997243 -0.16333811
-0.19689899 -0.23057717 -0.26429449 -0.29797267 -0.33153355 -0.36489923
-0.39799227 -0.43073586 -0.46305399 -0.49487166 -0.52611501 -0.55671151
-0.58659017 -0.61568162 -0.64391833 -0.67123478 -0.69756756 -0.72285555
-0.74704004 -0.77006491 -0.79187672 -0.81242483 -0.83166156 -0.84954225
-0.8660254 ]
[ 0. 0.01870152 0.03835438 0.05891297 0.08032957 0.10255447
0.12553607 0.14922105 0.17355443 0.19847972 0.22393907 0.24987339
0.27622248 0.30292519 0.32991953 0.35714286 0.38453197 0.41202331
0.43955305 0.4670573 0.49447223 0.52173419 0.54877992 0.57554663
0.6019722 0.62799529 0.6535555 0.67859351 0.70305119 0.72687179
0.75 ]]
However, the returned quaternion is not right. To check the expected value,
print(X_1.rotation().ToQuaternion())
returns
Quaternion_[float](w=0.8660254037844388, x=0.0, y=0.0, z=0.5)
while the last column of the result above is (0.5, 0, -0.8660254, 0.75), which is not even a unit quaternion.
To find out what happens when
X_1 = RigidTransform(RotationMatrix.MakeXRotation(np.pi/2), [1, 1, 1])
I checked the quaternion slerp trajectory traj_R, and it seems correct:
print(traj_R.orientation(0.0))
print(traj_R.orientation(1.0))
print(traj_R.orientation(2.0))
print(traj_R.orientation(3.0))
Quaternion_[float](w=1.0, x=0.0, y=0.0, z=0.0)
Quaternion_[float](w=0.9659258262890684, x=0.2588190451025208, y=0.0, z=0.0)
Quaternion_[float](w=0.8660254037844387, x=0.5, y=0.0, z=0.0)
Quaternion_[float](w=0.7071067811865476, x=0.7071067811865476, y=0.0, z=0.0)
So it seems that PiecewisePose returns a correct trajectory, but something goes wrong inside TrajectorySource. Is this a bug? Thank you!
Another question I have: in order to control the number of data points logged, I have to add a MultibodyPlant that serves no purpose other than defining the time step. Is there another way to control how often the vector logger samples the TrajectorySource? Thank you.

Seems like a bug in PiecewiseQuaternionSlerp to me. TrajectorySource (with zero-order derivatives) just evaluates the trajectory at the sample times by invoking Trajectory::value(). However, PiecewiseQuaternionSlerp spits out rotation matrices as values instead of quaternions. Therefore the first column of the rotation matrix plus the first entry of the second column gets copied into the TrajectorySource output, which is garbage.
The documentation of Trajectory says the output returned by value() should be of size rows() by cols(). For PiecewiseQuaternionSlerp, rows() and cols() are 4 and 1 respectively, but value() spits out a 3x3 rotation matrix.
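A quick way to see the mismatch with the traj_R from the question, using only the generic Trajectory API rows(), cols(), and value() (what exactly gets printed depends on your Drake version):
# rows()/cols() advertise the output size that TrajectorySource allocates,
# while value(t) is the matrix that actually gets copied into that output.
print(traj_R.rows(), traj_R.cols())   # advertised output size
print(traj_R.value(1.5))              # what value() actually returns at t = 1.5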
Please consider posting an issue.
To your second question: LogVectorOutput has an optional argument that lets you set the publish period directly.
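If I remember the binding correctly, the argument is publish_period (treat the exact name as an assumption and check the pydrake docs for your version); for example:
# Assumed overload: LogVectorOutput(port, builder, publish_period) -- log at 10 Hz
# without needing a MultibodyPlant just for its discrete time step.
logger_R = LogVectorOutput(R_source.get_output_port(), builder, publish_period=0.1)
logger_p = LogVectorOutput(p_source.get_output_port(), builder, publish_period=0.1)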

Related

Different results in cv2.omnidir.projectPoints when projecting target-to-image and camera-to-image

I am using a checkerboard target to extrinsically calibrate an omnidirectional camera with another sensor. For that purpose, I call the function cv2.omnidir.projectPoints twice, and it unexpectedly returns different results.
Here I show a projection of the outer corners of the checkerboard into the image. In the first case the projection goes directly from the board coordinate system (CS) to the image CS. In the second case, I manually transform the corners from the board CS to the camera CS and then project them to the image using rvec = tvec = (vector of zeros).
import numpy as np
import cv2

# --- Camera parameters
K = np.array([[939.265,   0.   , 965.693],
              [  0.   , 942.402, 645.578],
              [  0.   ,   0.   ,   1.   ]])
D = np.array([-0.156, -0.03, 0., 0.001], dtype=np.float32)

# --- Corners in board coordinate system
outer_corners_bcs = np.array([[-0.175, -0.17, 0.],
                              [ 0.8  , -0.17, 0.],
                              [-0.175,  0.67, 0.],
                              [ 0.8  ,  0.67, 0.]])

# --- Rotation & translation vectors from checkerboard target to camera
rvec = np.array([[-1.138, -1.421, 2.827]])
tvec = np.array([[-6.852, -5.473, 4.549]])

# --- Project from board to image
uv, _ = cv2.omnidir.projectPoints(outer_corners_bcs.reshape(1, -1, 3).astype('float64'),
                                  rvec.reshape(1, 3), tvec.reshape(1, 3), K, 1, D)
print(uv)
[[[556.417 320.275]
[504.347 320.397]
[562.968 272.863]
[509.031 272.395]]]
# --- Convert to camera coordinate system and project to image
R = eulerAnglesToRotationMatrix(rvec)
outer_corners_ccs = (R @ outer_corners_bcs.T + tvec.T).T
uv, _ = cv2.omnidir.projectPoints(outer_corners_ccs.reshape(1, -1, 3),
                                  np.zeros(3).reshape(1, 3),
                                  np.zeros(3).reshape(1, 3), K, 1, D)
print(uv)
[[[699.384 304.698]
[639.438 180.463]
[876.787 527.013]
[752.581 334.091]]]
My expectation is that the results will be identical.
Thanks for any help!
I figured this out. Instead of:
R = eulerAnglesToRotationMatrix(rvec)
(a helper function taken from an external tutorial), it should be written:
R, _ = cv2.Rodrigues(rvec)
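The reason is that the rvec used by OpenCV's calibration and projection functions is an axis-angle (Rodrigues) vector, not a set of Euler angles, so the matching rotation matrix must come from cv2.Rodrigues. A minimal sketch of the corrected conversion:
import numpy as np
import cv2

rvec = np.array([-1.138, -1.421, 2.827])

# cv2.Rodrigues converts the axis-angle vector into a 3x3 rotation matrix
# (the second return value is the Jacobian, which we ignore here).
R, _ = cv2.Rodrigues(rvec)
print(R.shape)   # (3, 3)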

Matrix Transformation for image

I am working on an image processing project in Python in which I am required to change the coordinate system.
I thought this was analogous to a matrix transformation and tried that, but it is not working. I have taken the coordinates of the red dots.
Simply subtract 256 and divide by 512. The connection is that values of 256 get mapped to 0; therefore 0 gets mapped to -256, 256 gets mapped to 0, and 512 gets mapped to 256. However, you further need the values to be in the range [-0.5, 0.5], and dividing everything by 512 finishes this off.
Therefore the relationship is:
out = (in - 256) / 512 = (in / 512) - 0.5
Try some values from your example input above to convince yourself that this is the correct relationship.
If you want to form this as a matrix multiplication, this can be interpreted as an affine transform with scale and translation, but no rotation:
        [ 1/512    0     -0.5 ]
    K = [   0    1/512   -0.5 ]
        [   0      0       1  ]
Take note that you will need to use homogeneous coordinates to achieve the desired result.
For example:
(x, y) = (384, 256)
    [X]   [ 1/512    0     -0.5 ] [384]
    [Y] = [   0    1/512   -0.5 ] [256]
    [1]   [   0      0       1  ] [  1]

    [X]   [384/512 - 0.5]   [ 0.25 ]
    [Y] = [256/512 - 0.5] = [  0   ]
    [1]   [      1      ]   [  1   ]
Simply remove the last coordinate to get the final answer of (0.25, 0).
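A quick numerical check of the same affine map in NumPy (the point (384, 256) is the example above):
import numpy as np

# Homogeneous affine transform: (x, y) -> ((x - 256) / 512, (y - 256) / 512)
K = np.array([[1/512,   0.0, -0.5],
              [  0.0, 1/512, -0.5],
              [  0.0,   0.0,  1.0]])

pt = np.array([384, 256, 1])   # input point in homogeneous coordinates
X, Y, _ = K @ pt
print(X, Y)                    # 0.25 0.0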

pytorch LSTM to map series of feature vectors to their labels

Currently I have input X with shape (50, 25): there are 50 feature vectors and each vector has 25 dimensions. The data of X looks, for example, like this:
X = [[0. 0. 0. ... 1. 1. 1.]
[0. 0. 0. ... 1. 1. 1.]
[0. 0. 0. ... 1. 1. 1.]
...
[0. 0. 0. ... 1. 1. 1.]
[0. 0. 0. ... 1. 1. 1.]
[0. 0. 0. ... 1. 1. 1.]]
And the output label y is [0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0], of length 50, i.e. each feature vector has a label which corresponds to an element in y.
How can I construct a PyTorch LSTM, reshape the input to 3 dimensions, and properly interpret the output? Thanks so much for the help in advance.
Currently I have a template for an LSTM like this. Since my input is already numerical, I was thinking of getting rid of the encoder / decoder part; is that correct?
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNModel(nn.Module):
    """Container module with an encoder, a recurrent module, and a decoder."""

    def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers, dropout=0, tie_weights=False):
        super(RNNModel, self).__init__()
        self.drop = nn.Dropout(dropout)
        self.ntoken = ntoken
        self.decoder = nn.Linear(nhid, self.ntoken)
        if rnn_type in ['LSTM', 'GRU']:
            self.rnn = getattr(nn, rnn_type)(ninp, nhid, nlayers, dropout=dropout)
        else:
            try:
                nonlinearity = {'RNN_TANH': 'tanh', 'RNN_RELU': 'relu'}[rnn_type]
            except KeyError:
                raise ValueError("""An invalid option for `--model` was supplied,
                                 options are ['LSTM', 'GRU', 'RNN_TANH' or 'RNN_RELU']""")
            self.rnn = nn.RNN(ninp, nhid, nlayers, nonlinearity=nonlinearity, dropout=dropout)
        self.init_weights()
        self.rnn_type = rnn_type
        self.nhid = nhid
        self.nlayers = nlayers

    def init_weights(self):
        initrange = 0.1
        nn.init.zeros_(self.decoder.bias)  # zero the bias, then initialize the weight
        nn.init.uniform_(self.decoder.weight, -initrange, initrange)

    def forward(self, input, hidden):
        emb = self.drop(input)
        emb = emb.transpose(1, 0)               # (batchsize, length, ninp) -> (length, batchsize, ninp)
        output, hidden = self.rnn(emb, hidden)  # output of shape (length, batchsize, nhid)
        output = self.drop(output)
        output = output[-1, :, :]               # shape (batchsize, nhid)
        decoded = self.decoder(output)          # shape (batchsize, ntoken)
        return F.log_softmax(decoded, dim=1), hidden

    def init_hidden(self, bsz):
        weight = next(self.parameters())
        if self.rnn_type == 'LSTM':
            return (weight.new_zeros(self.nlayers, bsz, self.nhid),
                    weight.new_zeros(self.nlayers, bsz, self.nhid))
        else:
            return weight.new_zeros(self.nlayers, bsz, self.nhid)
Currently the training loop I wrote is:
X = X.reshape((1, 50, 25))
hidden = self.model.init_hidden(1)
for iter in range(0, self.epochs):
    data = torch.from_numpy(X)
    target = torch.LongTensor(y.reshape((1, torch.LongTensor(y).size(0))))
    self.model.zero_grad()
    self.optimizer.zero_grad()
    hidden = self.repackage_hidden(hidden)
    output, hidden = self.model(data.float(), hidden)
    loss = self.criterion(output, target)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(self.model.parameters(), 0.25)
    self.optimizer.step()
    self.model.train()
But I got the error: RuntimeError: multi-target not supported at /tmp/pip-req-build-4baxydiv/aten/src/THNN/generic/ClassNLLCriterion.c:22
The output of the RNN has shape (length, batchsize, nhid). Based on your label (one number per sample) I assume you are doing classification, so usually we give the classifier (self.decoder) the output features of the last timestep. Here I changed your forward method as below and got an output of shape (batchsize, ntoken), which fits the shape of your label.
def forward(self, input, hidden):
    emb = self.drop(self.encoder(input))
    emb = emb.transpose(1, 0)               # (batchsize, length, ninp) => (length, batchsize, ninp)
    output, hidden = self.rnn(emb, hidden)  # output of shape (length, batchsize, nhid)
    output = self.drop(output)
    output = output[-1, :, :]               # shape (batchsize, nhid)
    decoded = self.decoder(output)          # shape (batchsize, ntoken)
    return F.log_softmax(decoded, dim=1), hidden
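As a minimal standalone shape check of this flow (the layer sizes here are assumptions, not the asker's real hyperparameters):
import torch
import torch.nn as nn

batchsize, length, ninp, nhid, ntoken = 1, 50, 25, 64, 4   # assumed sizes
rnn = nn.LSTM(ninp, nhid, num_layers=1)
decoder = nn.Linear(nhid, ntoken)

x = torch.randn(length, batchsize, ninp)   # (length, batchsize, ninp)
output, hidden = rnn(x)                    # output: (length, batchsize, nhid)
last = output[-1, :, :]                    # (batchsize, nhid), last timestep only
logits = decoder(last)                     # (batchsize, ntoken)
print(output.shape, last.shape, logits.shape)
With an output of shape (batchsize, ntoken), the target passed to NLLLoss should be a 1-D tensor of shape (batchsize,), which is why the (1, 50) target in the question triggers the multi-target error.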
About getting rid of self.encoder: it is an embedding layer which takes an array of indices and replaces each one with a vector. If your input consists of indices (int/long) of something, you may keep it; otherwise (if the input is not an index but some float-valued quantity, like a temperature), you should get rid of it, because it would be wrong there.

Increase SEQUENCE by decimal values

Explanation
I'm trying to automatically fill a column based on a variable step value, but whenever the step is a decimal number, it rounds down.
Example
If I write =SEQUENCE(4, 1, 0, A1) where A1 = 0.5, it should return:
Desired values
0.0
0.5
1.0
1.5
But it rounds down the step to 0 and returns:
Return
0.0
0.0
0.0
0.0
Is this the intended behaviour? If it is, is there another way to accomplish this?
Perhaps you can use:
=ARRAYFORMULA(SEQUENCE(4, 1, 0, 1)*A1)
Or slightly shorter:
=ARRAYFORMULA(SEQUENCE(4, 1, 0)*A1)
How about:
=arrayformula(A1*(sequence(4)-1))
Decimals are not supported by SEQUENCE. Use:
=INDEX(SEQUENCE(4, 1, 0, 1)/2)

How to normalize Pearson Correlation between 0 and 1?

I came across the formula for the Pearson Correlation but it gives values between -1 and 1. How would I modify the formula so that it gives values between 0 and 1?
To normalize any set of numbers to be between 0 and 1, subtract the minimum and divide by the range.
In your case the min is -1 and the range is 2, so if a value in your set is -0.5, that value becomes:
(-0.5 - (-1)) / 2
= 0.25
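The same mapping as a small Python sketch (the function name is just illustrative):
def normalize_pearson(r):
    """Map a Pearson correlation r from [-1, 1] to [0, 1]."""
    return (r - (-1)) / 2   # equivalently (r + 1) / 2

print(normalize_pearson(-0.5))   # 0.25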
