Is there a way of visualising a line in drake visualizer?

I am currently working with the compass-gait example. I am trying to visualise a planned path for a robot. Is there a way to visualise a few lines dynamically on the visualizer as the robot traces a path during the simulation?
I am looking into publishing LCM messages directly from here, but I am finding it a little hard to get the right sequence. Is there a sub-module I can check to get a better idea of how this works?
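One possible route, in case the drake-visualizer LCM path stays awkward: Drake also supports meshcat as a visualizer backend, and the meshcat-python package can draw polylines directly from Python. A minimal sketch, assuming meshcat-python is installed; the path points below are made up for illustration:

```python
import numpy as np
import meshcat
import meshcat.geometry as g

# Starts (or connects to) a meshcat server; open the printed URL in a browser.
vis = meshcat.Visualizer()

# Hypothetical planned path: a 3xN array of x, y, z vertices.
t = np.linspace(0.0, 6.0, 50)
path = np.vstack([t / 3.0, 0.3 * np.sin(t), np.zeros_like(t)])

# Draw the path as a polyline under its own scene-tree name.
# Calling set_object again with an updated vertex array redraws the
# line, so the traced path can grow as the simulation advances.
vis["planned_path"].set_object(
    g.Line(g.PointsGeometry(path.astype(np.float32)),
           g.LineBasicMaterial(color=0xff0000)))
```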

Related

How can I find the parameters needed to steer an ultrasonic beam?

To be honest, I don't know if this question is more appropriate for Stack Overflow, Software Engineering, or Physics, so apologies in advance if I guessed wrong.
I'm trying to build a 3D ultrasound machine out of commercially available parts (just to see if I can). In order to do that, I need to be able to scan an ultrasonic beam from a fixed array of transducers.
I've tried Googling beam steering techniques, and everything I find is big on theory and noticeably short on practicals.
So, is there an algorithm that will tell me the phase angle, intensity, etc. to use for each transducer to point the beam in a certain direction? If that's not possible (I rather strongly suspect that problem is NP-complete), is there an algorithm that will predict the beam angle for a given set of parameters, which I could use as a fitness function?
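For a uniform linear array in a homogeneous medium, at least, the steering part has a closed-form textbook answer rather than being a search problem: to steer a plane-wave beam to angle theta from broadside, element n is delayed by n * d * sin(theta) / c, which at a single frequency f corresponds to a phase shift of 2 * pi * f * n * d * sin(theta) / c. A small sketch of that formula; the element count, pitch and frequency below are made-up example numbers:

```python
import numpy as np

def steering_phases(num_elements, pitch_m, steer_angle_deg,
                    freq_hz=2.5e6, c=1540.0):
    """Per-element phase (radians) for a uniform linear array steered
    to steer_angle_deg from broadside.

    Classic delay-and-sum law: delay_n = n * d * sin(theta) / c.
    c defaults to the speed of sound in soft tissue (~1540 m/s).
    """
    n = np.arange(num_elements)
    theta = np.radians(steer_angle_deg)
    delays = n * pitch_m * np.sin(theta) / c      # seconds
    phases = 2.0 * np.pi * freq_hz * delays       # radians
    return np.mod(phases, 2.0 * np.pi)

# Example: 16 elements, 0.3 mm pitch, steer 20 degrees off broadside.
print(steering_phases(16, 0.3e-3, 20.0))
```

The forward prediction you mention as a fallback is also cheap: the array factor, the sum over n of exp(j * (k * n * d * sin(theta) - phase_n)), predicts the far-field beam pattern for any set of phases, so it would serve as the fitness function you describe.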

ROS Human-Robot mapping (Baxter)

I'm having some difficulties understanding the concept of teleoperation in ROS, so I'm hoping someone can clear some things up.
I am trying to control a Baxter robot (in simulation) using an HTC Vive device. I have a node (publisher) which successfully extracts PoseStamped data (containing pose data in reference to the lighthouse base stations) from the controllers and publishes this on separate topics for the right and left controllers.
So now I wish to create the subscribers which receive the pose data from the controllers and convert it to a pose for the robot. What I'm confused about is the mapping: after reading documentation on Baxter and robotics transformations, I don't really understand how to map human poses to Baxter.
I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?
Human and robot anatomy are quite different, so I'm not sure if I'm missing a vital step here.
Looking at other people's example code for the same task, I see that some have created a 'base'/'human' pose which hard-codes co-ordinates for the limbs to mimic a human. Is this essentially what I need?
Sorry if my question is quite broad but I've been having trouble finding an explanation that I understand... Any insight is very much appreciated!
You might find my former student's work on motion mapping using a Kinect sensor with a PR2 informative. It shows two methods:
Direct joint angle mapping (e.g. if the human's arm is bent at a right angle, then the robot's arm should also be at a right angle).
An IK method that controls the robot's end effector based on the human's hand position.
"I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?"
Yes, indeed, this is a fairly involved process! In both cases, we took advantage of the Kinect's API to access the human's joint angle values and the position of the hand. You can read about how Microsoft Research implemented the human skeleton tracking algorithm here:
https://www.microsoft.com/en-us/research/publication/real-time-human-pose-recognition-in-parts-from-a-single-depth-image/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2F%3Fid%3D145347
I am not familiar with the Vive device. You should see if it offers a similar API for accessing skeleton-tracking information, since reverse engineering Microsoft's algorithm would be challenging.
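For the second method, here is a rough sketch of what the subscriber-to-IK pipeline could look like, using the IK service from the standard Baxter SDK examples. The Vive topic name is made up, and the transform from the lighthouse frame into Baxter's base frame is only indicated by a comment, since it depends on how you want to scale and offset the human workspace:

```python
import rospy
import baxter_interface
from geometry_msgs.msg import PoseStamped
from baxter_core_msgs.srv import SolvePositionIK, SolvePositionIKRequest

LIMB = "right"
IK_SERVICE = "ExternalTools/" + LIMB + "/PositionKinematicsNode/IKService"

rospy.init_node("vive_to_baxter_ik")
rospy.wait_for_service(IK_SERVICE)
ik_solver = rospy.ServiceProxy(IK_SERVICE, SolvePositionIK)
arm = baxter_interface.Limb(LIMB)

def controller_callback(pose_msg):
    # In practice the Vive pose (lighthouse frame) must first be
    # transformed, scaled and offset into Baxter's "base" frame so the
    # target actually lies inside the arm's workspace.
    pose_msg.header.frame_id = "base"
    request = SolvePositionIKRequest()
    request.pose_stamp.append(pose_msg)
    try:
        response = ik_solver(request)
    except rospy.ServiceException as exc:
        rospy.logwarn("IK service call failed: %s", exc)
        return
    if response.isValid[0]:
        joint_command = dict(zip(response.joints[0].name,
                                 response.joints[0].position))
        arm.set_joint_positions(joint_command)
    else:
        rospy.logwarn("No IK solution for this controller pose")

# Hypothetical topic name for the publisher node described in the question.
rospy.Subscriber("/vive/right_controller/pose", PoseStamped,
                 controller_callback)
rospy.spin()
```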

Path planning: ways from goal to initial state?

The problem: is it true that finding a path from the goal to the start point is much more efficient than finding a path from the start to the goal?
If it is true, can someone help me out and explain why?
My opinion:
It shouldn't make a difference, because finding a way from the goal to the start is just like renaming the goal to the start and the start to the goal.
The answer to your question all depends on the path finding algorithm you use.
One of the most well-known path-finding algorithms, A-Star (or A*), is commonly used in the reverse sense. It all comes down to the heuristic. Since we usually use proximity to the target as the heuristic, the search can get stuck behind obstacles, and those obstacles may be easier to get around when approached from the other direction. A great explanation with examples can be found here. Just for clarity: if there is no prior knowledge of obstacles, there is no predictable difference between forward and backward path finding with A*.
Another reason why you might want to reverse the path finding is if you have multiple actors trying to reach the same goal. Instead of executing A* (or another path-finding algorithm) once per actor, you can combine them into a single execution of a graph-exploration algorithm. For example, a variation on Dijkstra's algorithm run from the goal can find the shortest distances to all actors in one graph exploration, as sketched below.
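A minimal sketch of that single-pass, goal-rooted search; the graph, edge costs and actor names below are made up for illustration:

```python
import heapq

def dijkstra_from_goal(graph, goal):
    """Shortest distance from every node TO the goal, in one pass.

    graph: dict node -> list of (neighbour, edge_cost); edges are
    assumed symmetric here, otherwise expand over reversed edges.
    """
    dist = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(frontier, (nd, neighbour))
    return dist

# Toy example: three actors share one goal, one search serves them all.
graph = {"goal": [("a", 1), ("b", 4)],
         "a": [("goal", 1), ("b", 2), ("c", 5)],
         "b": [("goal", 4), ("a", 2)],
         "c": [("a", 5)]}
dist = dijkstra_from_goal(graph, "goal")
for actor in ("a", "b", "c"):
    print(actor, "->", dist[actor])
```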

Using encoders and RobotC to map a line circuit

I am looking for a way to use the encoder information from the motors that drive the wheels of my robot to map a line circuit. The robot navigates around using a single light sensor following a line, and on its second lap I want it to recognise where it is in the circuit. I've read a lot about SLAM, but I'm not sure I could implement it with RobotC and only the encoder information.
Any help and advice on the best way to tackle this would be greatly appreciated.
You can use an odometry model to make a prediction of the movement of your robot. Assuming a vehicle with a preferred forward direction on a plane, you would have (x, y, theta) as your state, and a state transition that depends on your encoder values. What that function looks like really depends on the configuration of your robot. I remember that Introduction to Autonomous Mobile Robots had good coverage of the subject, and you'll find lots of examples on the net.

Simultaneous Localization and Mapping (SLAM) would mean using a probabilistic odometry model and then performing a correction based on your sensor. At first I thought this wasn't very feasible with your setup, but I actually think it is. Using an occupancy-grid-based Rao-Blackwellized particle filter might give you some good results. I haven't used the CAS Toolbox, but have a look, as it seems a good place to start.
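To make the state-transition function concrete, here is a minimal differential-drive odometry update sketched in Python (the same arithmetic ports directly to RobotC). The wheel radius, track width and ticks-per-revolution values are placeholders for your robot's actual numbers:

```python
import math

# Placeholder robot constants - substitute your own.
WHEEL_RADIUS = 0.03      # metres
TRACK_WIDTH = 0.12       # distance between the wheels, metres
TICKS_PER_REV = 360.0    # encoder ticks per wheel revolution

def odometry_step(x, y, theta, d_ticks_left, d_ticks_right):
    """Advance the (x, y, theta) pose estimate by one encoder reading."""
    ds_left = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    ds_right = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    ds = (ds_right + ds_left) / 2.0              # distance travelled
    dtheta = (ds_right - ds_left) / TRACK_WIDTH  # change in heading
    # Integrate assuming the heading changes halfway through the step.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

# Example: replay a short log of (left, right) tick deltas from a lap.
pose = (0.0, 0.0, 0.0)
for d_left, d_right in [(10, 10), (10, 12), (10, 12), (10, 10)]:
    pose = odometry_step(*pose, d_left, d_right)
print(pose)
```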

Are there any good non-predictive path following algorithms?

All the path following steering algorithms (e.g. for robots steering to follow a colored terrain) that I can find are predictive, so they rely on the robot being able to sense some distance beyond its body.
I need path following behavior on a robot with a light sensor on its underside. It can only see terrain it is directly over and so can't make any predictions; are there any standard examples of good techniques to use for this?
I think the technique you are looking for will most likely depend on the environment you will be operating in, as well as on the resources your robot will have access to. I have used NXT robots in the past, so you might find this video interesting (the video is not mine).
Assuming that you will be working on a flat, non-glossy surface, you can let your robot wander around until it finds a predefined colour. The robot can then kick in a 'path following' mechanism and keep tracking the line. If it does not sense the line any more, it can try turning right and/or left (the line might no longer be under the robot because it has reached a bend).
In this case, though, the robot will need to know in advance the colour of the line it needs to follow.
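One common way to implement that with a single downward-facing light sensor is to follow the edge of the line with a simple proportional controller and fall back to an in-place sweep when the line is lost. A rough sketch, with the sensor and motor interface reduced to placeholder functions and made-up calibration values:

```python
# Edge-following with one downward light sensor: steer so the reading
# stays near the threshold between line and background, sweep if lost.
# read_light(), set_motors() and the constants below are placeholders
# for whatever your robot's API actually provides.

LINE = 20        # calibration: sensor reading on the line (dark)
FLOOR = 60       # calibration: sensor reading off the line (light)
TARGET = (LINE + FLOOR) / 2.0   # aim for the edge of the line
BASE_SPEED = 30
GAIN = 0.8       # sign of GAIN selects which edge of the line is followed
LOST_LIMIT = 50  # sweep steps before giving up in one direction

def follow_line_step(reading):
    """Proportional edge-follower: steer so the reading stays near TARGET."""
    error = reading - TARGET
    turn = GAIN * error
    return BASE_SPEED - turn, BASE_SPEED + turn   # (left, right) motor speeds

def recover_line(read_light, set_motors):
    """Line lost: spin in place, first one way then the other, until the
    sensor sees the line colour again."""
    for direction in (-1, 1):
        for _ in range(LOST_LIMIT):
            set_motors(direction * BASE_SPEED, -direction * BASE_SPEED)
            if abs(read_light() - LINE) < 10:
                return True
    return False
```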
The reason the path-following algorithms you are seeing are predictive is that the robot needs to be able to interpret what it is "seeing" in context.
For instance, consider a coloured path in the form of a straight line. Even in this simple example, how is the robot to know:
Whether there is a coloured square in front of it, and hence whether it should advance
Which direction it is even travelling in.
These two questions are the fundamental goals the algorithm you are looking for would answer (and things would get more complex as you add more difficult terrain and paths).
The first can only be answered with suitable forward-looking ability (hence a predictive algorithm), and the second can only be answered with some memory of the previous state.
Based solely on the details you provided in your question, you wouldn't be able to implement an appropriate solution. That said, I would imagine that your sensor input and on-board memory are in fact suitable for a predictive solution; you may just need to investigate further what your hardware's capabilities allow for.
