I'm trying to set up a simulated Pepper in Gazebo for navigation, but I can't visualize any topic in the map or odom frame because RViz can't find any transformation between links, even though my tf tree looks normal. I think the problem might be with the frame names: in the tf tree they all have the prefix pepper_robot, but in RViz they are missing that prefix.
Thanks
Yes, the problem is that the tf frames being published (by robot_state_publisher) don't match the frame ids being published by the sensors.
I assume you're using the pepper-ros-stack repo (if not, it serves as a good example of what changes to make).
Somewhere, a launch file you're using probably contains something like this:
<node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher">
<param name="tf_prefix" value="pepper_robot"/>
</node>
This is where the tf_prefix is being set and prepended to all of your tf frames. Remove it and the frames should match the ones being published by the sensors:
<node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher" />
I have a system consisting of a MultibodyPlant (and scene graph), which loads an iiwa manipulator from a URDF file, and a LeafSystem that is creating ROS TF messages corresponding to the geometry stored in the MultibodyPlant.
The LeafSystem has an input port that receives a graph query object, and an output port that produces ROS TF messages. The input port is connected to the scene graph's graph query port. The output port is connected elsewhere, which is not relevant to my problem.
The LeafSystem's output port's value is calculated by this callback. It inspects the scene graph and creates a TF frame for every frame in the scene graph.
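That callback is roughly along these lines (a simplified sketch: the class and port names are mine, a plain dict stands in for the ROS TFMessage type, and the pydrake calls used are my assumption of the bindings involved):

from pydrake.common.value import AbstractValue
from pydrake.geometry import QueryObject
from pydrake.systems.framework import LeafSystem

class SceneTfSystem(LeafSystem):
    # Placeholder names; a plain dict stands in for the ROS TFMessage
    # to keep the sketch self-contained.
    def __init__(self):
        super().__init__()
        self._query_port = self.DeclareAbstractInputPort(
            "query_object", AbstractValue.Make(QueryObject()))
        self.DeclareAbstractOutputPort(
            "tf_frames",
            lambda: AbstractValue.Make(dict()),
            self._calc_tf_frames)

    def _calc_tf_frames(self, context, output):
        query = self._query_port.Eval(context)
        inspector = query.inspector()
        frames = {}
        for frame_id in inspector.GetAllFrameIds():
            # Every pose comes back expressed in the world frame,
            # which is exactly the flat hierarchy described below.
            frames[inspector.GetName(frame_id)] = query.GetPoseInWorld(frame_id)
        output.set_value(frames)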
The problem I'm having is that the scene graph has no knowledge of the hierarchical relationship between bodies in the MultibodyPlant. This means that the TF messages produced contain a set of frames that are all rooted in the world frame. I would like to instead have them reflect the actual hierarchy of joints and links that is in the URDF. The MultibodyPlant has this information. What architecture or ports should I use to make the information available in my LeafSystem's output port value-calculating callback?
Can the solution to this be generic to any system that contains geometry, or does it only make sense for a MultibodyPlant? (I may not be understanding systems correctly here - I'm relatively new to Drake.)
I am new to pybullet and I was just trying to render a table.
I used the one given as an example with the KUKA arm. What I wanted to do here is resize it, so I edited the v values in the .obj file, but that didn't work, and scaling the mesh in the URDF isn't giving me any results either. Is there any other way to scale it?
I suggest referring to the soccerball example in the pybullet repo:
https://github.com/bulletphysics/bullet3/blob/master/examples/pybullet/examples/soccerball.py
When loading a URDF you can set the globalScaling parameter.
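A minimal sketch (the factor 2.0 is just an example):

import pybullet as p
import pybullet_data

p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
# globalScaling uniformly scales all visual and collision geometry
# in the URDF, so there is no need to edit the .obj file.
table = p.loadURDF("table/table.urdf", globalScaling=2.0)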
I'm trying to get Autoware to send out a tracked-object list for the LGSVL simulation. I turn on YOLOv3, Euclidean cluster detection, then pixel_cloud_fusion. When I do, it constantly states that it's looking for TF and intrinsics data. Looking further, this seems to be a missing "camera_info" topic. So I made one up just to get it working (I'm not sure whether LGSVL has any kind of native support for this?). I used a bunch of 1s for the matrices, "plumb bob" for the distortion model, and matched the width/height to the published camera images. Once I publish it, however, I get the error:
[pixel_cloud_fusion] "camera" passed to lookupTransform argument target_frame does not exist
I have no idea what this means, and the text doesn't appear anywhere in the Autoware source. Am I doing something wrong? Is there another topic I'm missing?
P.S. Maybe someone with 1500 rep should create an Autoware tag.
It seems like this might be an issue with the TF tree being incomplete. For lookupTransform to work, it needs a well-defined TF tree connecting the requested frames to some common fixed frame. To add your camera to the TF tree, you should be able to use the static transform publisher, e.g. as below.
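For example (a sketch: the identity transform and the base_link parent frame are placeholders; fill in your camera's actual extrinsics):

<node pkg="tf2_ros" type="static_transform_publisher" name="camera_tf_publisher"
      args="0 0 0 0 0 0 base_link camera" />

The args are x y z yaw pitch roll parent_frame child_frame; the child frame must be "camera" to match the frame id in the error message.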
I'm new to this. I am working on a mapping project and I need help filtering LIDAR point-cloud data. I have two questions and I hope you can help with them:
1) Can you use VeloView to filter point-cloud data? Can you crop the captured data? If so, what's the process?
2) Can you export/convert a PCAP file to .csv?
Thanks
Have a look at the VeloView User Guide.
For manipulation and analysis purposes, you can use VeloView's Python console interface, described in section III - Python Console.
For cropping the captured data:
File --> Save As --> [preferred option] --> Frames from.
I have this .gpx file that is formatted as follows:
<trk>
  <name>Area1</name>
  <extensions>
    <gpxx:TrackExtension>
      <gpxx:DisplayColor>Magenta</gpxx:DisplayColor>
    </gpxx:TrackExtension>
  </extensions>
  <trkseg>
    <trkpt lat="52.806521779700120" lon="5.795177063346190"/>
    ...
    <trkpt lat="52.806521779700120" lon="5.795177063346190"/>
  </trkseg>
</trk>
<trk>
  <name>Area2</name>
  <extensions>
    <gpxx:TrackExtension>
      <gpxx:DisplayColor>Magenta</gpxx:DisplayColor>
    </gpxx:TrackExtension>
  </extensions>
  <trkseg>
    <trkpt lat="52.764805147811629" lon="5.377259838276261"/>
    ...
This file, which contains well over 18,000 coordinates, describes several (adjacent) areas.
My ultimate goal is to check whether a given GPS coordinate lies within the borders of any of these areas. What would be the best approach for that?
Related, intermediate question:
Will any method be reasonably fast when running through all these coordinates?
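For context, here is roughly how the coordinates can be read into one polygon per area (a sketch using Python's standard xml.etree; the namespace URI and file name are assumptions, to be adjusted to the actual file):

import xml.etree.ElementTree as ET

# The GPX namespace URI is an assumption; use the xmlns declared in the file.
NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

tree = ET.parse("areas.gpx")  # hypothetical file name
areas = {}
for trk in tree.getroot().findall("gpx:trk", NS):
    name = trk.findtext("gpx:name", namespaces=NS)
    # One (lon, lat) vertex list per named area.
    areas[name] = [(float(p.get("lon")), float(p.get("lat")))
                   for p in trk.findall("gpx:trkseg/gpx:trkpt", NS)]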
Solution to check whether a point is inside a polygonal area without holes
Just use a point-in-polygon function, with x for longitude and y for latitude. You can pass the lon/lat coordinates in directly.
Such a function is easily found online, e.g. here, in 7 lines of code: https://wrf.ecse.rpi.edu/Research/Short_Notes/pnpoly.html
The only case in which this function fails is when the region crosses the antimeridian (longitude ±180). However, since that line runs mostly through water, it is unlikely that your region overlaps it.
18,000 points are no problem for this task; there is no need for advanced algorithms.
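A direct Python port of that pnpoly routine (the function and variable names are my own; x is longitude and y is latitude, as suggested above):

def point_in_polygon(x, y, xs, ys):
    # Ray casting (pnpoly): count how many polygon edges a horizontal
    # ray through (x, y) crosses; an odd count means "inside".
    inside = False
    j = len(xs) - 1
    for i in range(len(xs)):
        if (ys[i] > y) != (ys[j] > y) and \
           x < (xs[j] - xs[i]) * (y - ys[i]) / (ys[j] - ys[i]) + xs[i]:
            inside = not inside
        j = i
    return inside

# Example: a square spanning lon 5..6, lat 52..53 contains the sample point.
lons = [5.0, 6.0, 6.0, 5.0]
lats = [52.0, 52.0, 53.0, 53.0]
print(point_in_polygon(5.795177, 52.806522, lons, lats))  # True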