PositionConstraint goal for robot arm: Unable to construct goal representation - ros

I have a setup with ROS Indigo and Gazebo under Ubuntu 14.04. Under ROS, a MoveIt node (move_group) is running. A robot arm (IRB120) is simulated and standing in Gazebo. I have a node that uses MoveIt (the move_group node) to plan a path (trajectory) to the destination that Bob wants. The planned trajectory is then sent to Gazebo to be displayed.
There are two approaches that Bob can use to describe the destination:
Angles of each joint of the arm: an array of six numbers (one for each of the six joints) defines the position of every joint and link. This approach works fine. It uses the JointConstraint class:
double goal_poses [] = {0.52, 0.50, 0.73, -0.02, 0.31, 6.83};
for(int i = 0 ; i < 6; i++) // iterate over the joints of the arm
{
    moveit_msgs::JointConstraint jc;
    jc.weight = 1.0;
    jc.tolerance_above = 0.0001;
    jc.tolerance_below = 0.0001;
    jc.position = goal_poses[i];
    jc.joint_name = names[i];
    goal_constraint.joint_constraints.push_back(jc);
}
Location and orientation of the end effector only: I cannot get this approach to work. I have used the PositionConstraint class.
Problem in short: I can describe a destination using the JointConstraint class, but I don't know how to describe it with the PositionConstraint class. How do I describe a goal by just pointing out where the end effector should be?
How I describe the goal in PositionConstraint format (I point out where the end effector should be and what its orientation should be):
moveit_msgs::PositionConstraint pc;
pc.weight = 1.0;
geometry_msgs::Pose p;
p.position.x = 0.3; // not sure if feasible position
p.position.y = 0.3; // not sure if feasible position
p.position.z = 0.3; // not sure if feasible position
pc.link_name="tool0";
p.orientation.x = 0;
p.orientation.y = 0;
p.orientation.z = 0;
p.orientation.w = 1;
pc.constraint_region.mesh_poses.push_back(p);
goal_constraint.position_constraints.push_back(pc);
But when the request is sent, the server responds with:
[ERROR] [1527689581.951677797, 295.242000000]: Unable to construct goal representation
Note:
In both cases, I add the goal_constraint to the trajectory_request:
trajectory_request.goal.request.goal_constraints.push_back(goal_constraint);
// add other details to trajectory_request here...
trajectory_request is then sent to move_group (by publishing trajectory_request on the /move_group/goal topic).

A slightly different solution solved the problem of describing the goal with the end-effector position and orientation:
Instead of publishing the goal on a topic for another node to parse, we can use the MoveIt library function computeCartesianPath. (In this example the code to publish the trajectory is commented out and partially missing.)
void planTo(std::vector<double> coordinate, std::vector<double> orientation){
    geometry_msgs::Pose p;
    p.orientation.w = 1.0;
    p.position.x = coordinate[0];
    p.position.y = coordinate[1];
    p.position.z = coordinate[2];
    tf::Quaternion q = tf::createQuaternionFromRPY(
        orientation[0], orientation[1], orientation[2]);
    p.orientation.x = q.getX();
    p.orientation.y = q.getY();
    p.orientation.z = q.getZ();
    p.orientation.w = q.getW();
    std::vector<geometry_msgs::Pose> goals;
    goals.push_back(p);
    moveit::planning_interface::MoveGroup mg("manipulator");
    mg.setStartStateToCurrentState();
    // load the path into the `trajectory` variable:
    moveit_msgs::RobotTrajectory trajectory;
    mg.computeCartesianPath(goals, 0.01, 0.0, trajectory);
    // publish to gazebo:
    // trajectory.joint_trajectory.header.stamp = ros::Time::now();
    // publisher.publish(trajectory.joint_trajectory);
}
I solved this a few months ago and unfortunately I do not remember the exact source/tutorial.
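As for the original PositionConstraint attempt: the "Unable to construct goal representation" error is most likely caused by pushing a pose into constraint_region.mesh_poses without providing any mesh (and without filling in the header). A position constraint is normally given a primitive bounding region, and the desired orientation goes into a separate OrientationConstraint. Below is a minimal, untested sketch based on the moveit_msgs definitions; the frame name base_link and the tolerance values are assumptions, not something confirmed for this robot:
moveit_msgs::Constraints goal_constraint;

// Position: require the end-effector link to lie inside a small sphere around the target point.
moveit_msgs::PositionConstraint pc;
pc.header.frame_id = "base_link"; // assumed planning frame
pc.link_name = "tool0";
pc.weight = 1.0;

shape_msgs::SolidPrimitive sphere;
sphere.type = shape_msgs::SolidPrimitive::SPHERE;
sphere.dimensions.push_back(0.001); // 1 mm tolerance radius

geometry_msgs::Pose sphere_pose;
sphere_pose.position.x = 0.3;
sphere_pose.position.y = 0.3;
sphere_pose.position.z = 0.3;
sphere_pose.orientation.w = 1.0;

pc.constraint_region.primitives.push_back(sphere);
pc.constraint_region.primitive_poses.push_back(sphere_pose);
goal_constraint.position_constraints.push_back(pc);

// Orientation: constrain the same link's orientation separately.
moveit_msgs::OrientationConstraint oc;
oc.header.frame_id = "base_link";
oc.link_name = "tool0";
oc.orientation.w = 1.0; // identity orientation, as in the question
oc.absolute_x_axis_tolerance = 0.01;
oc.absolute_y_axis_tolerance = 0.01;
oc.absolute_z_axis_tolerance = 0.01;
oc.weight = 1.0;
goal_constraint.orientation_constraints.push_back(oc);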

Related

Autodiff for Jacobian derivative with respect to individual joint angles

I am trying to compute $\partial J / \partial q_i$ in Drake C++ for a manipulator, and as per my search the best approach seems to be using autodiff. I was not able to fully understand the autodiff approach from the resources that I found, so I apologize if my approach is not clear enough. I have used my understanding from some already-asked questions on the forum regarding autodiff, as well as https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_multibody_plant.html, as reference.
As I want to calculate $\partial J / \partial q_i$, the return type will be a tensor, i.e. 3 × 7 × 7 (or 6 × 7 × 7, depending on whether the spatial Jacobian is used). I can think of using std::vector<Eigen::MatrixXd> to allocate the output, or alternatively doing one $q_i$ at a time and computing the respective Jacobian through autodiff. In either case, I was struggling with how to pass this into the Jacobian computation.
I did the following to initialize autodiff
std::unique_ptr<multibody::MultibodyPlant<AutoDiffXd>> mplant_autodiff =
    systems::System<double>::ToAutoDiffXd(mplant);
std::unique_ptr<systems::Context<AutoDiffXd>> mContext_autodiff =
    mplant_autodiff->CreateDefaultContext();
mContext_autodiff->SetTimeStateAndParametersFrom(*mContext);
const multibody::Frame<AutoDiffXd>* mFrame_EE_autodiff =
    &mplant_autodiff->GetBodyByName(mEE_link).body_frame();
const multibody::Frame<AutoDiffXd>* mWorld_Frame_autodiff =
    &(mplant_autodiff->world_frame());
// Initialize q as an autodiff vector.
drake::AutoDiffVecXd q_autodiff = drake::math::InitializeAutoDiff(mq_robot);
// Linear (translational) Jacobian; sized to 3 x num_positions up front.
MatrixX<AutoDiffXd> mJacobian_autodiff(3, mplant_autodiff->num_positions());
mplant_autodiff->SetPositions(mContext_autodiff.get(), q_autodiff);
mplant_autodiff->CalcJacobianTranslationalVelocity(*mContext_autodiff,
    multibody::JacobianWrtVariable::kQDot,
    *mFrame_EE_autodiff,
    Vector3<AutoDiffXd>::Zero(),  // point offset must use the AutoDiffXd scalar
    *mWorld_Frame_autodiff,
    *mWorld_Frame_autodiff,
    &mJacobian_autodiff);
However, as far as I understand, InitializeAutoDiff initializes the derivatives to the identity matrix, whereas I want $\partial J / \partial q_i$, so is there a better way to do it? In addition, I get error messages when I try to compute the Jacobian matrix. Is there a way to address this problem, both for computing $\partial J / \partial q_i$ for each $q_i$ (changing $q_i$ in a for loop) and for getting the result directly as a tensor? My apologies if I am doing something totally tangent to the correct approach. I thank you in anticipation.
However, as far as I understand, InitializeAutoDiff initializes the derivatives to the identity matrix, whereas I want $\partial J / \partial q_i$, so is there a better way to do it?
That is correct. When you call InitializeAutoDiff and compute mJacobian_autodiff, you get a matrix of AutoDiffXd. Each AutoDiffXd has a value() function that stores the double value, and a derivatives() function that stores the gradient as an Eigen::VectorXd. We have
mJacobian_autodiff(i, j).value() = J(i, j)
mJacobian_autodiff(i, j).derivatives()(k) = ∂J(i, j)/∂q(k)
So if you want to create a std::vector<Eigen::MatrixXd> such that the k'th entry of this vector stores the matrix ∂J/∂q(k), here is the code:
std::vector<Eigen::MatrixXd> dJdq(q_autodiff.rows());
for (int i = 0; i < q_autodiff.rows(); ++i) {
  dJdq[i].resize(mJacobian_autodiff.rows(), mJacobian_autodiff.cols());
}
for (int i = 0; i < q_autodiff.rows(); ++i) {
  // dJidq stores the gradient of J.col(i) w.r.t. q, namely dJidq(j, k) = ∂J(j, i)/∂q(k)
  auto dJidq = ExtractGradient(mJacobian_autodiff.col(i));
  for (int j = 0; j < static_cast<int>(dJdq.size()); ++j) {
    dJdq[j].col(i) = dJidq.col(j);
  }
}
Compute ∂J/∂q(i) for a single i
If you do not want to compute ∂J/∂q(i) for all i, but only for one specific i, you can change the initialization of q_autodiff from InitializeAutoDiff to this
AutoDiffVecXd q_autodiff(q.rows());
for (int k = 0; k < q_autodiff.rows(); ++k) {
  q_autodiff(k).value() = q(k);
  q_autodiff(k).derivatives() = Vector1d::Zero();
  if (k == i) {
    q_autodiff(k).derivatives()(0) = 1;
  }
}
namely q_autodiff stores the gradient ∂q/∂q(i), which is 0 for all k != i and 1 when k == i. Then you can compute mJacobian_autodiff using your current code. Now mJacobian_autodiff(m, n).derivatives() stores the gradient ∂J(m, n)/∂q(i) for that specific i. You can extract this gradient as
Eigen::MatrixXd dJdqi(mJacobian_autodiff.rows(), mJacobian_autodiff.cols());
for (int m = 0; m < dJdqi.rows(); ++m) {
  for (int n = 0; n < dJdqi.cols(); ++n) {
    dJdqi(m, n) = mJacobian_autodiff(m, n).derivatives()(0);
  }
}

Visualizing a line in drake visualizer with C++

This question is related to "Is there a way of visualising a line in drake visualizer", where I asked about how to visualize a line in the drake visualizer (about 3 years ago; that approach worked fine with v0.10.0). I am trying to achieve the same with the new API and was wondering if there is any example/documentation that can guide me on how to publish a line onto the visualizer. My previous method for publishing a line looks like:
void publishLine(const std::vector<std::vector<double>>& pts,
                 const std::vector<std::string>& path, lcm::DrakeLcm& lcm,
                 std::vector<double> color) {
  long long int now = getUnixTime() * 1000 * 1000;
  nlohmann::json j = {{"timestamp", now},
                      {
                          "setgeometry",
                          {{{"path", path},
                            {"geometry",
                             {
                                 {"type", "line"},
                                 {"points", pts},
                                 {"color", color},
                                 {"radius", 0.1},
                             }}}},
                      },
                      {"settransform", nlohmann::json({})},
                      {"delete", nlohmann::json({})}};
  auto msg = robotlocomotion::viewer2_comms_t();
  msg.utime = now;
  msg.format = "treeviewer_json";
  msg.format_version_major = 1;
  msg.format_version_minor = 0;
  msg.data.clear();
  for (auto& c : j.dump()) msg.data.push_back(c);
  msg.num_bytes = j.dump().size();
  // Use channel 0 for remote viewer communications.
  lcm.get_lcm_instance()->publish("DIRECTOR_TREE_VIEWER_REQUEST_<0>", &msg);
}
You can use Meshcat::SetLine or Meshcat::SetLineSegments https://drake.mit.edu/doxygen_cxx/classdrake_1_1geometry_1_1_meshcat.html#aa5b082d79e267c040cbd066a11cdcb54
One caveat is that many browsers/webGL implementations do not support the linewidth property in ThreeJS. For thick lines, consider adding a cylinder using SetObject.
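For reference, here is a rough, untested sketch of calling SetLine with a standalone Meshcat instance; the path name, points, line width, and color below are arbitrary placeholders:
#include <iostream>

#include <Eigen/Dense>
#include "drake/geometry/meshcat.h"
#include "drake/geometry/rgba.h"

int main() {
  drake::geometry::Meshcat meshcat;

  // Vertices are passed as a 3 x N matrix, one column per point of the polyline.
  Eigen::Matrix3Xd vertices(3, 3);
  vertices.col(0) << 0.0, 0.0, 0.0;
  vertices.col(1) << 0.5, 0.0, 0.5;
  vertices.col(2) << 1.0, 0.0, 0.0;

  // "my_line" is just a path in the Meshcat scene tree.
  meshcat.SetLine("my_line", vertices, /* line_width = */ 2.0,
                  drake::geometry::Rgba(1.0, 0.0, 0.0, 1.0));

  // Keep the process alive so the line stays visible in the browser.
  std::cin.get();
  return 0;
}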

no method matching logpdf when sampling from uniform distribution

I am trying to use reinforcement learning in Julia to teach a car that is constantly being accelerated backwards (but with a positive initial velocity) to apply its brakes so that it gets as close to a target distance as possible before moving backwards.
To do this, I am making use of POMDPs.jl and Crux.jl, which has many solvers (I'm using DQN). I will list what I believe to be the relevant parts of the script first, and then more of it towards the end.
To define the MDP, I set the initial position, velocity, and force from the brakes as a uniform distribution over some values.
@with_kw struct SliderMDP <: MDP{Array{Float32}, Array{Float32}}
    x0 = Distributions.Uniform(0., 80.) # Distribution to sample initial position
    v0 = Distributions.Uniform(0., 25.) # Distribution to sample initial velocity
    d0 = Distributions.Uniform(0., 2.) # Distribution to sample brake force
    ...
end
My state holds the values of (position, velocity, brake force), and the initial state is given as:
function POMDPs.initialstate(mdp::SliderMDP)
    ImplicitDistribution((rng) -> Float32.([rand(rng, mdp.x0), rand(rng, mdp.v0), rand(rng, mdp.d0)]))
end
Then, I set up my DQN solver using crux.jl and called a function to solve for the policy
solver_dqn = DQN(π=Q_network(), S=s, N=30000)
policy_dqn = solve(solver_dqn, mdp)
Calling solve() gives me the error MethodError: no method matching logpdf(::Distributions.Categorical{Float64, Vector{Float64}}, ::Nothing). I am quite sure that this comes from the initial state sampling, but I am not sure why or how to fix it. I have only been learning RL from various books and online lectures for a very short time, so any help regarding the error or the model I set up (or anything else I'm oblivious to) would be appreciated.
More comprehensive code:
Packages:
using POMDPs
using POMDPModelTools
using POMDPPolicies
using POMDPSimulators
using Parameters
using Random
using Crux
using Flux
using Distributions
Rest of it:
@with_kw struct SliderMDP <: MDP{Array{Float32}, Array{Float32}}
    x0 = Distributions.Uniform(0., 80.) # Distribution to sample initial position
    v0 = Distributions.Uniform(0., 25.) # Distribution to sample initial velocity
    d0 = Distributions.Uniform(0., 2.) # Distribution to sample brake force
    m::Float64 = 1.
    tension::Float64 = 3.
    dmax::Float64 = 2.
    target::Float64 = 80.
    dt::Float64 = .05
    γ::Float32 = 1.
    actions::Vector{Float64} = [-.1, 0., .1]
end
function POMDPs.gen(env::SliderMDP, s, a, rng::AbstractRNG = Random.GLOBAL_RNG)
    x, ẋ, d = s
    if x >= env.target
        a = .1
    end
    if d+a >= env.dmax || d+a <= 0
        a = 0.
    end
    force = (d + env.tension) * -1
    ẍ = force/env.m
    # Simulation
    x_ = x + env.dt * ẋ
    ẋ_ = ẋ + env.dt * ẍ
    d_ = d + a
    sp = vcat(x_, ẋ_, d_)
    reward = abs(env.target - x) * -1
    return (sp=sp, r=reward)
end
function POMDPs.initialstate(mdp::SliderMDP)
    ImplicitDistribution((rng) -> Float32.([rand(rng, mdp.x0), rand(rng, mdp.v0), rand(rng, mdp.d0)]))
end
POMDPs.isterminal(mdp::SliderMDP, s) = s[2] <= 0
POMDPs.discount(mdp::SliderMDP) = mdp.γ
mdp = SliderMDP();
s = state_space(mdp); # Using Crux.jl
function Q_network()
    layer1 = Dense(3, 64, relu)
    layer2 = Dense(64, 64, relu)
    layer3 = Dense(64, length(3))
    return DiscreteNetwork(Chain(layer1, layer2, layer3), [-.1, 0, .1])
end
solver_dqn = DQN(π=Q_network(), S=s, N=30000) # Using Crux.jl
policy_dqn = solve(solver_dqn, mdp) # Error comes here
Stacktrace:
policy_dqn
MethodError: no method matching logpdf(::Distributions.Categorical{Float64, Vector{Float64}}, ::Nothing)
Closest candidates are:
logpdf(::Distributions.DiscreteNonParametric, !Matched::Real) at C:\Users\name\.julia\packages\Distributions\Xrm9e\src\univariate\discrete\discretenonparametric.jl:106
logpdf(::Distributions.UnivariateDistribution{S} where S<:Distributions.ValueSupport, !Matched::AbstractArray) at deprecated.jl:70
logpdf(!Matched::POMDPPolicies.PlaybackPolicy, ::Any) at C:\Users\name\.julia\packages\POMDPPolicies\wMOK3\src\playback.jl:34
...
logpdf(::Crux.ObjectCategorical, ::Float32)#utils.jl:16
logpdf(::Crux.DistributionPolicy, ::Vector{Float64}, ::Float32)#policies.jl:305
var"#exploration#133"(::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, ::typeof(Crux.exploration), ::Crux.DistributionPolicy, ::Vector{Float64})#policies.jl:302
exploration#policies.jl:297[inlined]
action(::Crux.DistributionPolicy, ::Vector{Float64})#policies.jl:294
var"#exploration#136"(::Crux.DiscreteNetwork, ::Int64, ::typeof(Crux.exploration), ::Crux.MixedPolicy, ::Vector{Float64})#policies.jl:326
var"#step!#173"(::Bool, ::Int64, ::typeof(Crux.step!), ::Dict{Symbol, Array}, ::Int64, ::Crux.Sampler{Main.workspace#2.SliderMDP, Vector{Float32}, Crux.DiscreteNetwork, Crux.ContinuousSpace{Tuple{Int64}}, Crux.DiscreteSpace})#sampler.jl:55
var"#steps!#174"(::Int64, ::Bool, ::Int64, ::Bool, ::Bool, ::Bool, ::typeof(Crux.steps!), ::Crux.Sampler{Main.workspace#2.SliderMDP, Vector{Float32}, Crux.DiscreteNetwork, Crux.ContinuousSpace{Tuple{Int64}}, Crux.DiscreteSpace})#sampler.jl:108
var"#fillto!#177"(::Int64, ::Bool, ::typeof(Crux.fillto!), ::Crux.ExperienceBuffer{Array}, ::Crux.Sampler{Main.workspace#2.SliderMDP, Vector{Float32}, Crux.DiscreteNetwork, Crux.ContinuousSpace{Tuple{Int64}}, Crux.DiscreteSpace}, ::Int64)#sampler.jl:156
solve(::Crux.OffPolicySolver, ::Main.workspace#2.SliderMDP)#off_policy.jl:86
top-level scope#Local: 1[inlined]
Short answer:
Change your output vector to Float32 i.e. Float32[-.1, 0, .1].
Long answer:
Crux creates a Distribution over your network's output values and at some point (policies.jl:298) samples a random value from it. It then converts this value to a Float32. Later (utils.jl:15) it does a findfirst to find the index of this value in the original output array (stored as objs within the distribution), but because the original array is still Float64, this fails and returns nothing. Hence the error.
I believe this (converting the sampled value but not the objs array, and/or not using an approximate equality check, i.e. findfirst(isapprox(x), d.objs)) to be a bug in the package, and would encourage you to raise it as an issue on GitHub.

In box2d for iOS, how to find the distance between two circular bodies?

I am using the following code to get the distance between two circular bodies of different radius:
distance = b2Distance(body1->GetPosition(), body2->GetPosition());
I have realized that the variable distance stores the distance between the two centers of the bodies, not the distance between their borders. What I want is distance = 0 when the two bodies are touching.
How can I do that? I've been trying this code but it fails:
b2DistanceInput *distanceInput;
distanceInput->transformA = body1->GetTransform();
distanceInput->transformB = body2->GetTransform();
b2DistanceProxy *proxyA;
proxyA->Set(fixtureBody1->GetShape(), 1);
b2DistanceProxy *proxyB;
proxyB->Set(fixtureBody2->GetShape(), 1);
distanceInput->proxyA = *proxyA;
distanceInput->proxyB = *proxyB;
b2DistanceOutput *theDistance;
b2SimplexCache *cache;
cache->count = 0;
b2Distance(theDistance, cache, distanceInput);
The GetShape method gives a bad access error inside the Box2D code.
Any ideas?
Thanks,
GA
Try this code - it works for me:
b2DistanceInput *distanceInput = new b2DistanceInput();
b2DistanceProxy *proxyA = new b2DistanceProxy();
b2DistanceProxy *proxyB = new b2DistanceProxy();
b2SimplexCache *cache = new b2SimplexCache();
b2DistanceOutput *theDistance = new b2DistanceOutput();
proxyA->Set(fixtureBody1->GetShape(),1);
proxyB->Set(fixtureBody2->GetShape(),1);
distanceInput->transformA = body1->GetTransform();
distanceInput->transformB = body2->GetTransform();
distanceInput->proxyA = *proxyA;
distanceInput->proxyB = *proxyB;
distanceInput->useRadii = true;
cache->count = 0;
b2Distance(theDistance, cache, distanceInput);
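The result then ends up in theDistance->distance; with useRadii = true the circle radii are subtracted from the center distance, so it reads 0 when the bodies are touching (or overlapping). As a side note, here is an untested sketch of the same call with stack-allocated structs, which avoids both the uninitialized pointers from the original attempt and the leaks from new (the body/fixture variable names are assumed to match the question):
// Sketch: border-to-border distance between two circular bodies.
b2DistanceInput input;
input.transformA = body1->GetTransform();
input.transformB = body2->GetTransform();
input.proxyA.Set(fixtureBody1->GetShape(), 0); // 0 = child index (a circle has a single child)
input.proxyB.Set(fixtureBody2->GetShape(), 0);
input.useRadii = true; // subtract the radii so touching bodies give 0

b2SimplexCache cache;
cache.count = 0;

b2DistanceOutput output;
b2Distance(&output, &cache, &input);

float gap = output.distance; // 0 when the two circles touch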

How to clone an object3d in Three.js?

I want to clone a model loaded with a loader. I found this issue on GitHub, but the solution doesn't work. It seems there has been a structural change in Object3D.
How can I clone an Object3D in current stable version of Three.js?
In this new version of Three.js you have a clone method.
For example, I used a queen from a chess set that I had to duplicate multiple times:
// queen is a mesh
var newQueen = queen.clone();
// make sure to re position to be able to see the new queen!
newQueen.position.set(100,100,100); // or any other coordinates
It will work with any mesh.
I used three.js r61.
Actually mrdoob's answer is your answer...
The loader outputs a geometry that you use to create a mesh.
You need to create a new mesh with the loader-created mesh's geometry and material.
for ( var i = 0; i < 10; i ++ ) {
    var mesh = new THREE.Mesh( loadedMesh.geometry, loadedMesh.material );
    mesh.position.set( i * 100, 0, 0 );
    scene.add( mesh );
}
You want to clone a Mesh and not an Object3D because the output of the loader is a Mesh.
I found one fast solution (not the most efficient):
The GLTFLoader uses the THREE.FileLoader() method internally, which allows you to cache files.
To do so, you need to add this line before you create an instance of the GLTFLoader:
THREE.Cache.enabled = true;
Then you can load the same model multiple times, but only the first load will take longer. For example:
THREE.Cache.enabled = true;
var loader = new GLTFLoader();
var deeplyClonedModels = [];

for( var i = 0; i < 10; i++ ){
    loader.load('foo.gltf', function ( gltf ) {
        deeplyClonedModels.push(gltf.scene);
    });
}
