My XML file is test.xml; see below:
<?xml version="1.0" ?>
<Output>
    <partialstore />   <!-- The code writes the spectrum at the Sun position for each species to a FITS file (optional) -->
    <fullstore />      <!-- The code writes the complete (r,z,p) grid of propagated particles for each species to a FITS file (optional) -->
    <feedback value="2" />
</Output>
<Diffusion type="Constant">   <!-- Spatial distribution of the diffusion coefficient; options: Constant, Exp, Qtau -->
    <!-- In Constant mode: D(Rigidity) = beta^etaT * D0 * (Rigidity/4GV)^delta -->
    <!-- In Exp mode: D(Rigidity,z) = beta^etaT * D0 * (Rigidity/4GV)^delta * exp(z/zt) -->
    <D0_1e28 value="2.7" />     <!-- Normalization of the diffusion coefficient at reference rigidity DiffRefRig; unit: 10^28 cm^2/s -->
    <DiffRefRig value="4" />    <!-- Reference rigidity for the normalization of the diffusion coefficient -->
    <!-- NOTE: the reference rigidity 4 GV is stored in the const D_ref_rig defined in include/constants.h -->
    <Delta value="0.6" />       <!-- Slope of the diffusion coefficient spectrum -->
    <zt value="4" />            <!-- Scale height of the diffusion coefficient, useful in Exp mode: D(z) \propto exp(z/zt) (optional) -->
    <etaT value="1." />         <!-- Low-energy correction factor of the diffusion coefficient: D \propto beta^etaT -->
</Diffusion>
I want to parse this XML with Python. There are two top-level elements, Output and Diffusion. I want to add 0.6 to the D0_1e28 value and to the Delta value, so that they become 3.3 and 1.2 respectively, and save test.xml as test_mod.xml with the modification. How can I do this with Python code?
If I understand you correctly, this should work, using xpath with lxml:
import lxml.etree as et

xml = """
<root>
[your xml above; note that it was an invalid xml file, so I added a root]
</root>
"""

tree = et.fromstring(xml)

# those are the items you are changing
to_change = ['//Diffusion/D0_1e28', '//Delta']

for path in to_change:
    # xpath returns a list with one match, so [0] selects it
    item = tree.xpath(path)[0]
    # the "value" attribute holds a string, so cast it to float to add 0.6,
    # then back to a string to update the attribute
    item.attrib['value'] = str(float(item.attrib['value']) + 0.6)

mydata = et.tostring(tree)
with open("test_mod.xml", "wb") as myfile:
    myfile.write(mydata)
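If you prefer to read test.xml from disk instead of pasting it into the script, a variant along these lines should also work (a sketch: it drops the XML declaration and wraps the file content in a temporary <root>, since the file has two top-level elements; note that test_mod.xml will then contain that wrapper element as well):

import lxml.etree as et

# read the raw file and wrap it in a temporary root element, because
# test.xml has two top-level elements (Output and Diffusion)
with open("test.xml") as f:
    raw = f.read()
# the XML declaration is only allowed at the very start of a document,
# so strip it before wrapping
raw = raw.split("?>", 1)[-1]
tree = et.fromstring("<root>" + raw + "</root>")

for path in ('//Diffusion/D0_1e28', '//Delta'):
    item = tree.xpath(path)[0]
    item.attrib['value'] = str(float(item.attrib['value']) + 0.6)

with open("test_mod.xml", "wb") as f:
    f.write(et.tostring(tree, pretty_print=True))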
Running ROS Melodic, Gazebo 9.11.0, and the official Gazebo/PR2 plugin.
I am using a PR2 robot simulated in Gazebo and sending control commands through ROS. However, the robot moves at no more than about 0.25 m/s, while the maximum speed is 1 m/s (per the specification). I'm using the teleop application provided by the PR2/Gazebo plugin.
The PR2 teleop launch file teleop_keyboard.launch contains the correct values:
<launch>
    <node pkg="pr2_teleop" type="teleop_pr2_keyboard" name="spawn_teleop_keyboard" output="screen">
        <remap from="cmd_vel" to="robot1/base_controller/command" />
        <param name="walk_vel" value="0.5" />
        <param name="run_vel" value="1.0" />
        <param name="yaw_rate" value="1.0" />
        <param name="yaw_run_rate" value="1.5" />
    </node>
</launch>
Reading the robot1 odometry directly reveals the problem: the maximum speed is somehow capped at 0.261 m/s, even though the walking velocity in pr2_teleop is set to 0.5 m/s!
twist:
  twist:
    linear:
      x: 0.261318236589
      y: 4.47927095593e-06
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: -1.72120462594e-06
  covariance: <removed>
According to teleop_pr2_keyboard.cpp, the geometry_msgs::Twist values should be:
0.5 for twist.twist.linear.x -> moving forward
-0.5 for twist.twist.linear.x -> moving backward
and so on; see http://docs.ros.org/en/melodic/api/pr2_teleop/html/teleop__pr2__keyboard_8cpp_source.html.
My question is: how do you get such a small deviation in linear.y and angular.z, and did you change something in the above-mentioned file?
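To check whether the cap comes from pr2_teleop or from the base controller itself, one thing I would try is publishing a Twist directly on the command topic, bypassing teleop entirely (a minimal rospy sketch; the topic name is taken from the remap in the launch file above):

#!/usr/bin/env python
# Minimal sketch: command a constant 0.5 m/s forward velocity directly,
# bypassing pr2_teleop, to see whether the ~0.26 m/s cap is still there.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('speed_test')
pub = rospy.Publisher('/robot1/base_controller/command', Twist, queue_size=1)
cmd = Twist()
cmd.linear.x = 0.5  # requested forward speed in m/s

rate = rospy.Rate(10)  # the base controller expects a steady command stream
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()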
I am trying to model an elastic bouncing ball in Drake. However, I have not figured out how to set something like a coefficient of restitution for the URDF model I load. Does Drake support elastic collisions for the point contact model? If yes, how can I set the respective parameters?
Edit: I already tried setting the penetration allowance with plant.set_penetration_allowance(0.0001), but I got the following error: AttributeError: 'MultibodyPlant_[float]' object has no attribute 'set_penetration_allowance'. But since that models a critically damped contact, I assume it would not help with my problem anyway.
My current code looks as follows:
import numpy as np
# pydrake imports for the code below (module paths assumed for the Drake
# version used here; they may differ in newer releases)
from pydrake.geometry import ConnectDrakeVisualizer, HalfSpace
from pydrake.lcm import DrakeLcm
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph, CoulombFriction
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import LogOutput

# gravity_vec, obj_file_path, gen_pos, recording_rate, sim_time and the
# xyz_rpy_deg helper (builds a pose from xyz and roll-pitch-yaw in degrees)
# are defined elsewhere in my script
plane_friction_coef = CoulombFriction(static_friction=1.0, dynamic_friction=1.0)

# generate the diagram of the system
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
parser = Parser(plant=plant)

# connect to Drake Visualizer
lcm = DrakeLcm()
ConnectDrakeVisualizer(builder, scene_graph, lcm=lcm)

# add plane to plant
X_WP = xyz_rpy_deg(xyz=[0, 0, 0], rpy_deg=[0, 0, 0])  # offset and orientation of plane wrt. world frame
plant.RegisterVisualGeometry(plant.world_body(), X_BG=X_WP, shape=HalfSpace(),
                             name='InclinedPlaneVisualGeometry',
                             diffuse_color=np.array([1, 1, 1, 0.999]))
plant.RegisterCollisionGeometry(plant.world_body(), X_BG=X_WP, shape=HalfSpace(),
                                name='InclinedPlaneCollisionGeometry',
                                coulomb_friction=plane_friction_coef)

# set gravity in world
plant.mutable_gravity_field().set_gravity_vector(gravity_vec)

# add object from sdf or urdf file
my_object = parser.AddModelFromFile(obj_file_path, model_name='my_object')
plant.Finalize()

# add a logger
logger = LogOutput(plant.get_state_output_port(), builder)
logger.set_name('logger')
logger.set_publish_period(1 / recording_rate)

# build diagram and set its context
diagram = builder.Build()
diagram_context = diagram.CreateDefaultContext()
plant_context = diagram.GetMutableSubsystemContext(plant, diagram_context)
plant.SetPositionsAndVelocities(plant_context, gen_pos)

# start simulation
simulator = Simulator(diagram, diagram_context)
simulator.Initialize()
simulator.set_target_realtime_rate(1)
simulator.AdvanceTo(sim_time)
time_log = logger.sample_times()
state_log = logger.data()
The URDF file I load looks like this:
<?xml version="1.0"?>
<robot name="my_ball">
  <material name="Black">
    <color rgba="0.0 0.0 0.0 1.0"/>
  </material>
  <link name="base_link">
    <inertial>
      <origin rpy="0 0 0" xyz="0.0 0.0 0.0"/>
      <mass value="5"/>
      <inertia ixx="0.05" ixy="0" ixz="0" iyy="0.05" iyz="0" izz="0.05"/>
    </inertial>
    <visual>
      <geometry>
        <sphere radius="0.2"/>
      </geometry>
      <material name="Black"/>
    </visual>
    <collision name='collision'>
      <geometry>
        <sphere radius="0.2"/>
      </geometry>
      <drake:proximity_properties>
        <drake:mu_dynamic value="1.0" />
        <drake:mu_static value="1.0" />
      </drake:proximity_properties>
    </collision>
  </link>
</robot>
Nicolas,
No, currently we do not support elastic collisions, given that we have focused our efforts on slowly approaching contact surfaces, as is the case in manipulation applications. We will definitely support this as our contact solver matures.
That being said, there is currently no way to specify a coefficient of restitution for your model.
The best solution will depend on your particular problem. Is this a vertically bouncing ball? Is friction important (i.e. can the ball also move horizontally)? Is it a 2D or a 3D case?
From simpler to more complex, I'd suggest:
If it is a vertically bouncing ball with 1 DOF, then I'd suggest writing the dynamics by hand in a LeafSystem (see the sketch after this list).
An "event-driven" hybrid model is also possible, and you have an example in Drake here, though it is probably an advanced use of Drake.
You can create your own LeafSystem that, given the state of an MBP as input, computes a contact force as its output (for instance using something like a Hertz model with Hunt-Crossley dissipation). You'd then wire the applied force into the MBP through the port MBP::get_applied_spatial_force_input_port().
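For the first option, a minimal sketch of such a hand-written LeafSystem could look like the one below (the compliant ground law and the parameter values are illustrative assumptions, not something Drake provides):

from pydrake.systems.framework import BasicVector, LeafSystem

class BouncingBall1D(LeafSystem):
    # 1-DOF vertically bouncing ball over a compliant ground; the stiffness
    # and dissipation values below are illustrative, not Drake defaults.
    def __init__(self, mass=5.0, radius=0.2, stiffness=1e4, dissipation=1.0, g=9.81):
        LeafSystem.__init__(self)
        self.mass, self.radius = mass, radius
        self.k, self.d, self.g = stiffness, dissipation, g
        # continuous state: one position (z) and one velocity (zdot)
        self.DeclareContinuousState(1, 1, 0)
        self.DeclareVectorOutputPort(
            "state", BasicVector(2),
            lambda context, out: out.SetFromVector(
                context.get_continuous_state_vector().CopyToVector()))

    def DoCalcTimeDerivatives(self, context, derivatives):
        z, zdot = context.get_continuous_state_vector().CopyToVector()
        penetration = self.radius - z
        f_contact = 0.0
        if penetration > 0.0:
            # Hunt-Crossley-like law: spring force scaled by a dissipation
            # term, clamped so the ground never pulls the ball downward
            f_contact = max(0.0, self.k * penetration * (1.0 - self.d * zdot))
        zddot = -self.g + f_contact / self.mass
        derivatives.get_mutable_vector().SetFromVector([zdot, zddot])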
I hope this helps you out.
In many deep learning models for shape analysis, the input image/shape first goes through a Spatial Transformer Network (STN) that aligns the input to a canonical space for better learning and performance. I am considering including an STN in my application, which takes 3D point clouds as input, so I am building an STN for point clouds.
To do that, I am referencing some existing models.
For example, here is the beginning part (the localization network part) of the STN in PointNet:
def input_transform_net(point_cloud, is_training, bn_decay=None, K=3):
    """ Input (XYZ) Transform Net, input is BxNx3 gray image
        Return:
            Transformation matrix of size 3xK """
    batch_size = point_cloud.get_shape()[0].value
    num_point = point_cloud.get_shape()[1].value

    input_image = tf.expand_dims(point_cloud, -1)
    net = tf_util.conv2d(input_image, 64, [1, 3],
                         padding='VALID', stride=[1, 1],
                         bn=True, is_training=is_training,
                         scope='tconv1', bn_decay=bn_decay)
    net = tf_util.conv2d(net, 128, [1, 1],
                         padding='VALID', stride=[1, 1],
                         bn=True, is_training=is_training,
                         scope='tconv2', bn_decay=bn_decay)
    net = tf_util.conv2d(net, 1024, [1, 1],
                         padding='VALID', stride=[1, 1],
                         bn=True, is_training=is_training,
                         scope='tconv3', bn_decay=bn_decay)
    net = tf_util.max_pool2d(net, [num_point, 1],
                             padding='VALID', scope='tmaxpool')
    # and more...
Let me try to summarize the steps taken here (I hope I am getting this right):
1. Expand the input image, so its dimensions change from BxNx3 --> BxNx3x1.
2. The 1st conv. op. has kernel size [1,3], so after it the dimensions should be BxNx3x1 --> BxNx1x64.
3. The 2nd conv. op. has kernel size [1,1], so after it the dimensions should be BxNx1x64 --> BxNx1x128.
4. ... and it goes on.
Just out of curiosity, I am wondering whether the following settings would be equivalent to the one adopted in PointNet (shown above), as well as in many other models.
First, would it be equivalent if I replaced Conv2d with Conv1d and changed it to something like this:
# Input dimension: BxNx3
# not expanding dims this time
net = input_image
# After 1st conv. dims: BxNx3 --> BxNx64
net = conv1d(net, 64, kernel_size=[1], stride=[1], ...)
# After 2nd conv. dims: BxNx64 --> BxNx128
net = conv1d(net, 128, kernel_size=[1], stride=[1], ...)
# ... and it goes on.
Coming from the first setting, I am also wondering whether the conv. op. here is actually equivalent to a matrix multiplication / fully connected layer (as in TensorFlow) / linear layer (as in PyTorch):
# Reshape input dims: BxNx3 --> (B*N)x3
# so data entries in the batch are "stacked together".
net = input_image.reshape(-1, 3)
# I use torch.nn.Linear from PyTorch here for easy explanation,
# which basically does the following:
# After 1st linear: (B*N)x3 --> (B*N)x64
net = torch.nn.Linear(in_features=3, out_features=64)(net)
# After 2nd linear: (B*N)x64 --> (B*N)x128
net = torch.nn.Linear(in_features=64, out_features=128)(net)
# ... and it goes on.
I actually prefer the second setting, if it is correct, because the first dimension of the input can then be something like (N_1+N_2+...+N_B) instead of B*N. This means I would not have to force every data entry to have the same size. Of course, that only works if this setting is correct.
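For reference, here is a small check I put together (an illustrative sketch, not taken from PointNet) that compares a Conv1d with kernel size 1 against a point-wise Linear layer with shared weights:

import torch

# Compare a Conv1d with kernel_size=1 against a Linear layer applied
# point-wise; with shared weights they should produce identical outputs.
B, N = 4, 100
points = torch.randn(B, N, 3)

conv = torch.nn.Conv1d(in_channels=3, out_channels=64, kernel_size=1)
lin = torch.nn.Linear(in_features=3, out_features=64)

# copy the conv weights into the linear layer so the outputs are comparable
with torch.no_grad():
    lin.weight.copy_(conv.weight.squeeze(-1))
    lin.bias.copy_(conv.bias)

out_conv = conv(points.transpose(1, 2)).transpose(1, 2)  # B x N x 64
out_lin = lin(points)                                     # B x N x 64
print(torch.allclose(out_conv, out_lin, atol=1e-6))       # expect True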
I have a 5-dimensional matrix in an HDF5 data file. I would like to plot this data using ParaView. The solution I have in mind is to describe the data via the Xdmf format.
The 5 dimensional matrix is structured as follows:
matrix[time][type][x][y][z]
The 'time' index specifies a time step, and 'type' selects the matrices for different particle types; x, y, z are the spatial coordinates of a grid. The value of the matrix is a scalar that I would like to plot.
My question is: how can I select a specific 3-dimensional matrix for a given time step and type to plot, using the Xdmf format? Ideally the time step would be represented by the <Time> functionality of Xdmf.
I tried the 'HyperSlab' functionality of Xdmf, but it does not seem to reduce the dimensionality, which I need in order to plot the grid.
I also had a look at the 'SubSet' functionality, but I could not figure out how to use it from the official Xdmf documentation.
With the help of the Xdmf mailing list I found a solution that works for me.
My input matrix is 5-dimensional with shape (1,2,12,6,6) in the HDF5 file "ana.h5", and I select time step 0 and type 1.
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf xmlns:xi="http://www.w3.org/2003/XInclude" Version="2.2">
  <Domain>
    <Topology name="topo" TopologyType="3DCoRectMesh" Dimensions="12 6 6"></Topology>
    <Geometry name="geo" Type="ORIGIN_DXDYDZ">
      <!-- Origin -->
      <DataItem Format="XML" Dimensions="3">
        0.0 0.0 0.0
      </DataItem>
      <!-- DxDyDz -->
      <DataItem Format="XML" Dimensions="3">
        1 1 1
      </DataItem>
    </Geometry>
    <Grid Name="TimeStep_0" GridType="Uniform">
      <Topology Reference="/Xdmf/Domain/Topology[1]"/>
      <Geometry Reference="/Xdmf/Domain/Geometry[1]"/>
      <Time Value="64"/>
      <Attribute Type="Scalar" Center="Cell" Name="Type1">
        <!-- The result will be 3-dimensional -->
        <DataItem ItemType="HyperSlab" Dimensions="12 6 6">
          <!-- The source is 5-dimensional -->
          <!-- Origin=0,1,0,0,0 Stride=1,1,1,1,1 Count=1,1,12,6,6 -->
          <DataItem Dimensions="3 5" Format="XML">
            0 1 0 0 0
            1 1 1 1 1
            1 1 12 6 6
          </DataItem>
          <DataItem Format="HDF" NumberType="UInt" Precision="2" Dimensions="1 2 12 6 6">
            ana.h5:/density_field
          </DataItem>
        </DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
The resulting matrix is 3-dimensional (12,6,6) and can be plotted with ParaView.
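If you want to test the XDMF description without your real data, a small h5py sketch like this writes a matching ana.h5 (the dataset name, shape, and dtype are taken from the XDMF above; the values are dummies):

import numpy as np
import h5py

# Write a dummy dataset with the name, shape, and type (uint16, 1x2x12x6x6)
# that the XDMF file above references, so the hyperslab selection can be
# tested in ParaView.
data = np.arange(1 * 2 * 12 * 6 * 6, dtype=np.uint16).reshape(1, 2, 12, 6, 6)
with h5py.File("ana.h5", "w") as f:
    f.create_dataset("density_field", data=data)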
I know that it is possible to save a trained ANN to a file using CvFileStorage, but I really don't like the way CvFileStorage saves the training, so I was wondering: is it possible to retrieve the information of a training and save it in a custom way?
Thanks in advance.
Just look at the XML structure; it's very simple.
The names of the objects are the same as in the ANN class.
Here is an XOR-solving network:
<?xml version="1.0"?>
<opencv_storage>
<my_nn type_id="opencv-ml-ann-mlp">
<layer_sizes type_id="opencv-matrix">
<rows>1</rows>
<cols>3</cols>
<dt>i</dt>
<data>
2 3 1</data></layer_sizes>
<activation_function>SIGMOID_SYM</activation_function>
<f_param1>1.</f_param1>
<f_param2>1.</f_param2>
<min_val>-9.4999999999999996e-001</min_val>
<max_val>9.4999999999999996e-001</max_val>
<min_val1>-9.7999999999999998e-001</min_val1>
<max_val1>9.7999999999999998e-001</max_val1>
<training_params>
<train_method>RPROP</train_method>
<dw0>1.0000000000000001e-001</dw0>
<dw_plus>1.2000000000000000e+000</dw_plus>
<dw_minus>5.0000000000000000e-001</dw_minus>
<dw_min>1.1920928955078125e-007</dw_min>
<dw_max>50.</dw_max>
<term_criteria><epsilon>9.9999997764825821e-003</epsilon>
<iterations>1000</iterations></term_criteria></training_params>
<input_scale>
2. -1. 2. -1.</input_scale>
<output_scale>
5.2631578947368418e-001 4.9999999999999994e-001</output_scale>
<inv_output_scale>
1.8999999999999999e+000 -9.4999999999999996e-001</inv_output_scale>
<weights>
<_>
-3.8878915951440729e+000 -3.7728173427563569e+000
-1.9587678786875042e+000 3.7898767378369680e+000
3.0354324494246829e+000 1.9757881693499044e+000
-3.5862527376978406e+000 -3.2701446005792296e+000
1.3000011629911392e+000</_>
<_>
3.1017381376627204e+000 1.1052842857439200e+000
-4.6739037571329822e+000 3.2282702769334666e+000</_></weights></my_nn>
</opencv_storage>
You can save the same parameters to your own formatted file. Some of the fields are protected, but you can derive a child class from CvANN_MLP and write your own file saver.
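Alternatively, if you only need the numbers out of that dump, a small sketch like the following (assuming the network was saved as my_nn.xml with the structure shown above) parses the XML directly and re-saves the pieces in a custom format:

import numpy as np
import lxml.etree as et

# Parse the XML written by CvANN_MLP::save() (structure as in the dump above)
# and pull out the layer sizes and weight vectors; the file name is an assumption.
tree = et.parse("my_nn.xml")
net = tree.getroot().find("my_nn")

layer_sizes = np.array(net.findtext("layer_sizes/data").split(), dtype=int)
weights = [np.array(w.text.split(), dtype=float) for w in net.findall("weights/_")]

# re-save in a custom format of your choice, e.g. a compressed NumPy archive
np.savez("my_nn_custom.npz", layer_sizes=layer_sizes,
         **{"w%d" % i: w for i, w in enumerate(weights)})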