How to add JointActuator for PlanarJoint? - drake

I am working on a simple 2D peg-in-hole example in which I want to directly control the peg (which is just a box) in 2D. In order to do that, I add a planar joint to attach the peg to the world as in the Manipulation course's example.
planar_joint_frame = plant.AddFrame(FixedOffsetFrame(
    "planar_joint_frame", plant.world_frame(),
    RigidTransform(RotationMatrix.MakeXRotation(np.pi / 2))))
peg_planar_joint_frame = plant.AddFrame(FixedOffsetFrame(
    "peg_planar_joint_frame", peg.body_frame(),
    RigidTransform(RotationMatrix.MakeXRotation(np.pi / 2), [0, 0, 0.025])))
peg_planar_joint = plant.AddJoint(PlanarJoint(
    "peg_planar_joint", planar_joint_frame, peg_planar_joint_frame,
    damping=[0, 0, 0]))
Then I want to add an actuator to peg_planar_joint so that I can directly apply the planar forces fx, fy and torque tau_z.
plant.AddJointActuator("peg_planar_joint", peg_planar_joint)
However, I encounter the following error:
SystemExit: Failure at bazel-out/k8-opt/bin/multibody/plant/_virtual_includes/multibody_plant_core/drake/multibody/plant/multibody_plant.h:1131 in AddJointActuator(): condition 'joint.num_velocities() == 1' failed.
It seems that a JointActuator can only be added to joints with a single degree of freedom, such as PrismaticJoint or RevoluteJoint. So my question is: can I add a JointActuator separately to the x, y and RotZ of a PlanarJoint? If not, what is the workaround? Thank you!

Right now you have to add the prismatic and revolute joints yourself in order to add actuators. Here is a snippet from https://github.com/RussTedrake/manipulation/blob/f868cd684a35ada15d6063c70daf1c9d61000941/force.ipynb
# Add a planar joint the old fashioned way (so that I can have three actuators):
gripper_false_body1 = plant.AddRigidBody(
    "false_body1", gripper,
    SpatialInertia(0, [0, 0, 0], UnitInertia(0, 0, 0)))
gripper_false_body2 = plant.AddRigidBody(
    "false_body2", gripper,
    SpatialInertia(0, [0, 0, 0], UnitInertia(0, 0, 0)))
gripper_x = plant.AddJoint(
    PrismaticJoint("gripper_x", plant.world_frame(),
                   plant.GetFrameByName("false_body1"), [1, 0, 0], -.3, .3))
plant.AddJointActuator("gripper_x", gripper_x)
gripper_z = plant.AddJoint(
    PrismaticJoint("gripper_z", plant.GetFrameByName("false_body1"),
                   plant.GetFrameByName("false_body2"), [0, 0, 1], 0.0, 0.5))
gripper_z.set_default_translation(0.3)
plant.AddJointActuator("gripper_z", gripper_z)
gripper_frame = plant.AddFrame(
    FixedOffsetFrame(
        "gripper_frame", plant.GetFrameByName("body", gripper),
        RigidTransform(RotationMatrix.MakeXRotation(np.pi / 2))))
gripper_theta = plant.AddJoint(
    RevoluteJoint("gripper_theta", plant.GetFrameByName("false_body2"),
                  gripper_frame, [0, -1, 0], -np.pi, np.pi))
plant.AddJointActuator("gripper_theta", gripper_theta)
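With the three single-dof joints actuated, commands can then be sent through the plant's actuation input port. A minimal sketch of that wiring (assuming a DiagramBuilder named builder, a finalized plant, and that these are its only actuators; fx, fz and tau are hypothetical command values):

from pydrake.systems.primitives import ConstantVectorSource

fx, fz, tau = 1.0, 0.0, 0.1  # hypothetical commands for the three actuators
# One value per actuator, in the order the actuators were added.
u = builder.AddSystem(ConstantVectorSource([fx, fz, tau]))
builder.Connect(u.get_output_port(), plant.get_actuation_input_port())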

It really depends on what you want to accomplish. If all you need is to simulate a 2D system with externally applied forces, then I'd suggest looking into MultibodyPlant::get_applied_generalized_force_input_port().
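For instance, a minimal sketch of feeding a constant generalized force into that port (assuming the plant above is finalized inside a DiagramBuilder named builder, and that the planar joint's three velocities come first in the plant's generalized velocity vector):

import numpy as np
from pydrake.systems.primitives import ConstantVectorSource

# Generalized forces are ordered like the plant's velocities; here the first
# three are assumed to correspond to the planar joint's [fx, fy, tau_z].
tau = np.zeros(plant.num_velocities())
tau[:3] = [1.0, 0.0, 0.1]  # hypothetical values for fx, fy, tau_z
force = builder.AddSystem(ConstantVectorSource(tau))
builder.Connect(force.get_output_port(),
                plant.get_applied_generalized_force_input_port())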

Related

How to set a condition in the objective function in cvxpy

I have a brute force optimization algorithm with an objective function of the form:
np.clip(x @ M, a_min=0, a_max=1) @ P
where x is a Boolean decision vector, M is a Boolean matrix/tensor and P is a probability vector. As you can guess, x @ M as an inner product can have values higher than 1, which is not allowed, since the objective value should be a probability scalar or vector (if M is a tensor) between 0 and 1. So I have used numpy.clip to clamp x @ M to the range 0 to 1. How can I set up a mechanism like clip in cvxpy to achieve the same result? I have spent hours on the internet with no luck, so I appreciate any hint. I have been trying to use this to replicate clip, but it raises Exception: Cannot evaluate the truth value of a constraint or chain constraints, e.g., 1 >= x >= 0. As a side note, since cvxpy cannot handle tensors, I loop through tensor slices with M[s].
n = M.shape[0]
m = M.shape[1]
w = M.shape[2]
max_budget_of_decision_variable = 7
x = cp.Variable(n, boolean=True)
obj = 0
for s in range(m):
    for w in range(w):
        if (x @ M[s])[w] >= 1:
            (x @ M[s])[w] = 1
    obj += x @ M[s] @ P
objective = cp.Maximize(obj)
cst = []
cst += [cp.sum(x) <= max_budget_of_decision_variable]
prob = cp.Problem(objective, constraints=cst)
As an example, consider M = np.array([ [1, 0, 0, 1, 1, 0], [0, 0, 1, 0, 1, 0], [1, 1, 1, 0, 1, 0]]) and P = np.array([0.05, 0.15, 0.1, 0.15, 0.5, 0.05]).
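For what it's worth, one DCP-friendly way to emulate the upper clip is cvxpy's cp.minimum: since x @ M is affine in x, cp.minimum(x @ M, 1) is concave and may appear inside Maximize. A sketch under that assumption, using the example M and P above (the lower clip at 0 is unnecessary here because x and M are nonnegative):

import cvxpy as cp
import numpy as np

M = np.array([[1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0],
              [1, 1, 1, 0, 1, 0]])
P = np.array([0.05, 0.15, 0.1, 0.15, 0.5, 0.05])

x = cp.Variable(M.shape[0], boolean=True)
# cp.minimum(affine, constant) is concave, so maximizing it is valid DCP.
objective = cp.Maximize(cp.minimum(x @ M, 1) @ P)
constraints = [cp.sum(x) <= 7]
prob = cp.Problem(objective, constraints)
prob.solve()  # needs a mixed-integer-capable solver because x is boolean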

How can I implement Max Pooling in Arrayfire in rust without resorting to writing my own cuda code

I'm trying to figure out how to implement max pooling in ArrayFire. My current best approach involves iterating over each convolved output and applying a function which applies four kernels, [1 0 0 0], [0 1 0 0], [0 0 1 0], [0 0 0 1], and produces four outputs, which I can then compare for the maximum value at each pixel.
My issue with that is it seems terribly slow and incorrect to be looping like that in a tensor library, but I haven't been able to come up with a better solution.
You can use wrap and unwrap to perform this, perhaps more efficiently.
Here is the logic:
1. Unwrap each 2x2 window into a column
2. Perform max along the columns
3. Wrap back to the original image shape
I would think this could be faster than indexing offset locations, which can result in less-than-ideal memory reads.
Here are the links to the relevant documentation for the above-mentioned functions:
unwrap - https://arrayfire.org/arrayfire-rust/arrayfire/fn.unwrap.html
wrap - https://arrayfire.org/arrayfire-rust/arrayfire/fn.wrap.html
Although I did write an example in the Rust documentation, I think the image illustrations in the C++ documentation are far better for understanding what's happening. Given below are those links:
unwrap - https://arrayfire.org/docs/group__image__func__unwrap.htm
wrap - https://arrayfire.org/docs/group__image__func__wrap.htm
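To make those steps concrete, here is a rough sketch using the Python ArrayFire binding for brevity (the Rust unwrap, max and moddims functions take analogous arguments). Note that with a stride of 2 the pooled image is smaller than the input, so the row of per-window maxima is reshaped with moddims rather than wrapped back to the full size:

import arrayfire as af

img = af.randu(8, 8)  # hypothetical single-channel input

# 1. Unwrap non-overlapping 2x2 windows into 4-element columns.
cols = af.unwrap(img, 2, 2, 2, 2)
# 2. Reduce each column (one window) to its maximum.
maxima = af.max(cols, dim=0)
# 3. Reshape the maxima to the pooled image size (half along each side).
pooled = af.moddims(maxima, img.dims()[0] // 2, img.dims()[1] // 2)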
Hopefully, this helps
I have settled on the following:
index the quadrants with seq, then grab the maximums
#[test]
fn maxfilt____() {
    let fourxfour = Array::new(&(0..16).into_iter().collect::<Vec<_>>(), dim4!(4, 4, 1, 1));
    let dim0 = fourxfour.dims()[0] as i32;
    let dim1 = fourxfour.dims()[1] as i32;
    let q1_indices = &[seq!(0, dim0 - 1, 2), seq!(0, dim1 - 1, 2), seq!(), seq!()];
    let q2_indices = &[seq!(0, dim0 - 1, 2), seq!(1, dim1 - 1, 2), seq!(), seq!()];
    let q3_indices = &[seq!(1, dim0 - 1, 2), seq!(0, dim1 - 1, 2), seq!(), seq!()];
    let q4_indices = &[seq!(1, dim0 - 1, 2), seq!(1, dim1 - 1, 2), seq!(), seq!()];
    let q1s = index(&fourxfour, q1_indices);
    let q2s = index(&fourxfour, q2_indices);
    let q3s = index(&fourxfour, q3_indices);
    let q4s = index(&fourxfour, q4_indices);
    let max = maxof(&q1s, &maxof(&q2s, &maxof(&q3s, &q4s, false), false), false);
    af_print!("max", max);
}

How can I combine blocks of the same type into rectangular prisms in an efficient manner?

I'm making a block-based city building game and I'm trying to figure out an efficient way of merging multiple blocks of the same type into boxes for optimization purposes. So let's say there's a wall of a building and it's 16x30x16 blocks of brick. Rather than draw the 7,680 blocks, I can draw them as one giant flat rectangular prism with a repeating texture, which would be eons more efficient.
I started on this by creating strips, which I intended to further combine into panes, but it seems that this method is already too slow, as it has to loop through every block in a plot (chunk), check if it can be merged into the current strip, and then merge it if so.
Thanks in advance
local function Draw(plot)
    local blocks = plot.blocks
    local slabs = NewAutotable(3)
    local slabList = {}
    for y = 1, 32 do
        for x = 1, 16 do
            local currentSlab = NewSlab(x, y, 1, 1, 1, 1, blocks[x][y][1])
            slabs[x][y][1] = currentSlab
            slabList[#slabList + 1] = currentSlab
            for z = 2, 16 do
                if currentSlab[7] == blocks[x][y][z] then
                    GrowSlab(currentSlab, 0, 0, 1)
                else
                    -- start a new strip at z (blocks[x][y][z], not z + 1)
                    currentSlab = NewSlab(x, y, z, 1, 1, 1, blocks[x][y][z])
                    slabs[x][y][z] = currentSlab
                    slabList[#slabList + 1] = currentSlab
                end
            end
        end
    end
end
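For reference, the usual next step is exactly the strips-into-panes merge described above, done layer by layer: run-length encode each column along z, then widen a run into its neighbour when the type and extent match. A rough sketch in Python (the function name, 0-based indexing and the blocks[x][y][z] layout are hypothetical), which stays linear in the number of blocks per layer:

def greedy_mesh_layer(blocks, nx, nz, y):
    """Merge one y-layer into slabs: runs along z, widened along x."""
    slabs = []       # finished slabs: [x, y, z, sx, sy, sz, block_type]
    open_runs = {}   # (z_start, z_len, block_type) -> slab still growable in x
    for x in range(nx):
        new_runs = {}
        z = 0
        while z < nz:
            t = blocks[x][y][z]
            z0 = z
            while z < nz and blocks[x][y][z] == t:
                z += 1
            key = (z0, z - z0, t)
            if key in open_runs:
                slab = open_runs[key]  # same run as in column x-1: widen it
                slab[3] += 1
            else:
                slab = [x, y, z0, 1, 1, z - z0, t]
            new_runs[key] = slab
        # runs that did not continue into this column are finished
        slabs.extend(s for k, s in open_runs.items() if k not in new_runs)
        open_runs = new_runs
    slabs.extend(open_runs.values())
    return slabs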

Plt Plot to OpenCV Image/Numpy Array

I have a piece of code which I use to visualize a graph:
if (visualize == True):
    # Black removed and is used for noise instead.
    unique_labels = set(db.labels_)
    colors = [plt.cm.Spectral(each)
              for each in np.linspace(0, 1, len(unique_labels))]
    for k, col in zip(unique_labels, colors):
        if k == -1:
            # Black used for noise.
            col = [0, 0, 0, 1]
        class_member_mask = (db.labels_ == k)
        xy = scaled_points[class_member_mask & core_samples_mask]
        plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
                 markeredgecolor='k', markersize=14)
        xy = scaled_points[class_member_mask & ~core_samples_mask]
        plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
                 markeredgecolor='k', markersize=6)
    # display the graph
    plt.title('Estimated number of clusters: %d' % n_clusters_)
    plt.show()
    # get the image into a variable that OpenCV likes
    # uh, how?
While this works, I want to have the end result (whatever is being shown) as an OpenCV image.
Since I don't even have an image variable, I have no idea how to achieve this.
Did anyone do something similar?
EDIT: I am actually getting close. Now I can create an OpenCV image out of a fig, but the contents are not right. The fig is empty. I wonder where I went wrong? Why doesn't it pick up the plt content from above and draw the actual plot?
fig = plt.figure()
canvas = FigureCanvas(fig)
canvas.draw()
# convert canvas to image
graph_image = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
graph_image = graph_image.reshape(fig.canvas.get_width_height()[::-1] + (3,))
# it still is rgb, convert to opencv's default bgr
graph_image = cv2.cvtColor(graph_image,cv2.COLOR_RGB2BGR)
Okay, I finally got it! One has to create the fig object at the very beginning, then use the necessary plotting functions, then convert to canvas and then to an OpenCV image.
EDIT: Thanks to the suggestion of @ImportanceOfBeingErnest, the code is now even more straightforward!
Here is the full code:
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas

if (visualize == True):
    # create a figure
    fig = plt.figure()
    # Black removed and is used for noise instead.
    unique_labels = set(db.labels_)
    colors = [plt.cm.Spectral(each)
              for each in np.linspace(0, 1, len(unique_labels))]
    for k, col in zip(unique_labels, colors):
        if k == -1:
            # Black used for noise.
            col = [0, 0, 0, 1]
        class_member_mask = (db.labels_ == k)
        xy = scaled_points[class_member_mask & core_samples_mask]
        plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
                 markeredgecolor='k', markersize=14)
        xy = scaled_points[class_member_mask & ~core_samples_mask]
        plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
                 markeredgecolor='k', markersize=6)
    # convert it to an OpenCV image/numpy array
    canvas = FigureCanvas(fig)
    canvas.draw()
    # convert canvas to image
    graph_image = np.array(fig.canvas.get_renderer()._renderer)
    # it still is rgb, convert to opencv's default bgr
    graph_image = cv2.cvtColor(graph_image, cv2.COLOR_RGB2BGR)
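As a side note, on recent matplotlib versions with an Agg-backed canvas, the same conversion can be done without the private _renderer attribute, via the public buffer_rgba() method (a sketch, assuming the canvas above has already been drawn):

# Grab the RGBA buffer from the canvas, then convert to OpenCV's BGR layout.
graph_image = np.asarray(canvas.buffer_rgba())
graph_image = cv2.cvtColor(graph_image, cv2.COLOR_RGBA2BGR)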

Resultant weight vector has same values in SAS/IML

I'm trying to create a binary perceptron classifier in SAS to develop my skills with it. The data has been cleaned and split into training and test sets. Due to my inexperience, I expanded the label vector into a table of seven identical columns to correspond to the seven weights, to make the calculations more straightforward; given my limited experience, this seemed like a usable method. Anyway, I run the following:
PROC IML;
W = {0, 0, 0, 0, 0, 0, 0};
USE Work.X_train;
XVarNames = {"Pclass" "Sex" "Age" "FamSize" "EmbC" "EmbQ" "EmbS"};
READ ALL VAR XVarNames INTO X_trn;
USE Work.y_train;
YVarNames = {"S1" "S2" "S3" "S4" "S5" "S6" "S7"};
READ ALL VAR YVarNames INTO y_trn;
DO i = 1 TO 668;
    IF W`*X_trn[i] > 0 THEN Z = {1, 1, 1, 1, 1, 1, 1};
    ELSE Z = {0, 0, 0, 0, 0, 0, 0};
    W = W + (y_trn[i]` - Z)#X_trn[i]`;
END;
PRINT W;
RUN;
and the result is a column vector with seven entries, each having the value -2.373. The particular value isn't important, but clearly a weight vector comprised of identical values is not useful. The question, then, is: what error in the code am I making that produces this result?
My intuition is that something about how I am calling each row of observations from X_trn and y_trn into the equation is causing this. Otherwise, it might be the matrix arithmetic in the W = line, but the orientation of all of the vectors seems to be appropriate.
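For comparison, here is the intended per-sample perceptron update sketched in numpy (the shapes are assumptions: X_trn is n x 7 and y_trn a plain length-n 0/1 label vector, without the seven duplicated columns). The key point is that the update uses the whole i-th row of the feature matrix; in IML that is written X_trn[i, ], whereas a bare X_trn[i] addresses a single element, which is worth double-checking in the loop above.

import numpy as np

def train_perceptron(X_trn, y_trn):
    """X_trn: (n, 7) feature matrix; y_trn: (n,) labels in {0, 1}."""
    W = np.zeros(X_trn.shape[1])
    for i in range(X_trn.shape[0]):
        z = 1 if W @ X_trn[i, :] > 0 else 0  # activation on row i
        W += (y_trn[i] - z) * X_trn[i, :]    # standard perceptron update
    return W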
