I am interested in using the Direct Collocation Method to generate a walking trajectory for a 2D 7-link biped robot (torso, plus left and right upper legs, lower legs, and feet).
Specifically,
input: torque at each joint
state: position of the waist and angle of each joint
I have parameters of each link and equations of motion.
However, I couldn't understand how to write the "system" (and "context") from reading the API documentation.
Is there a good way to describe the "system" from this information, or a similar example somewhere?
I'm going to use pydrake.
I have a number of relevant examples in my course notes. I would recommend the compass gait limit cycle exercise from the chapter on planning through contact (which uses a URDF to specify the dynamics), or the SLIP model example in the notebook associated with the "Simple models of legged robots" chapter for an example of writing the equations of motion out manually.
Please understand that DirectCollocation, by itself, is not ideal for planning through collisions. Those chapters describe the "hybrid trajectory optimization" approach that is likely what you will want.
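If you prefer to write the equations of motion out manually (the second route above), the usual pydrake pattern is to subclass LeafSystem; the Context then carries the continuous state for you. Below is a minimal sketch - the state dimensions, actuation matrix, and dynamics are placeholders for illustration, not your actual 7-link model:

```python
import numpy as np
from pydrake.systems.framework import LeafSystem, BasicVector

class PlanarBiped(LeafSystem):
    """Sketch of a hand-written continuous-time system (placeholder dynamics)."""

    def __init__(self):
        LeafSystem.__init__(self)
        self.nq = 9   # e.g. waist x, z, pitch + 6 joint angles (assumption)
        self.nu = 6   # one torque per actuated joint (assumption)
        self.DeclareVectorInputPort("tau", BasicVector(self.nu))
        # Continuous state: nq positions followed by nq velocities.
        self.DeclareContinuousState(self.nq, self.nq, 0)
        self.DeclareVectorOutputPort("state", BasicVector(2 * self.nq),
                                     self.CopyStateOut)

    def DoCalcTimeDerivatives(self, context, derivatives):
        x = context.get_continuous_state_vector().CopyToVector()
        q, v = x[:self.nq], x[self.nq:]
        tau = self.get_input_port(0).Eval(context)
        # Replace with your manipulator equations:
        #   M(q) vdot + C(q, v) v = tau_g(q) + B @ tau
        M = np.eye(self.nq)                     # placeholder mass matrix
        B = np.vstack([np.zeros((3, self.nu)),  # floating base is unactuated
                       np.eye(self.nu)])
        vdot = np.linalg.solve(M, B @ tau)      # placeholder dynamics
        derivatives.get_mutable_vector().SetFromVector(
            np.concatenate((v, vdot)))

    def CopyStateOut(self, context, output):
        output.SetFromVector(
            context.get_continuous_state_vector().CopyToVector())
```

Note that DirectCollocation requires the system to support automatic differentiation; a Python LeafSystem needs pydrake's TemplateSystem scalar-conversion machinery for that, whereas a MultibodyPlant parsed from a URDF (as in the course examples) supports it out of the box.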
Related
There is no API for changing the friction coefficient or the moment of inertia in CompassGait (https://drake.mit.edu/pydrake/pydrake.examples.compass_gait.html). What's the best way to deal with this?
It's not a missing API -- those concepts are explicitly missing from the mathematical model. The assumption of infinite friction (no slip) allows us to capture the dynamics in minimal coordinates with a single mode. The point mass assumptions could be replaced with inertias without much additional complexity, but that's not how we have derived these equations.
The "searching for limit cycles" exercise in my course notes derives the equations with friction explicitly: (currently the first exercise in this chapter http://underactuated.csail.mit.edu/simple_legs.html)
I'm trying to estimate a human pose from body joints.
There are 14 body-joint coordinates for one person (e.g. ankle, knee, hip, etc.).
I need to give the connectivity between the coordinates (e.g. ankle-knee, knee-hip) as an input to a DNN model.
I used to use relative coordinates (e.g. x1-x2, y1-y2) to give the direction between joints, but they limited the prediction performance.
I'm looking for fresh and creative ideas.
If you have any, please share.
Thanks
A commonly used convention for selecting frames of reference in robotics applications is the Denavit–Hartenberg (D–H) convention, introduced by Jacques Denavit and Richard S. Hartenberg. In this convention, coordinate frames are attached to the joints between two links such that one transformation is associated with the joint, [Z], and the second is associated with the link, [X].
The coordinate transformations along a serial robot consisting of n links form the kinematics equations of the robot.
You can check out the wiki article here: DH-Parameter
Now, to implement this in Python, lots of open-source modules are available. One such is uw-biorobotics/IKBT; you can also refer to Python_robotics.
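For a concrete illustration, here is a minimal numpy sketch of the classic D–H homogeneous transform, T = Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha), chained along a serial arm. The D–H table below is made up for illustration (a 2-link planar arm):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one classic D-H parameter row."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chain the per-link transforms to get the kinematics of a serial robot.
dh_table = [(np.pi / 4, 0.0, 1.0, 0.0),   # (theta, d, a, alpha) per joint
            (np.pi / 6, 0.0, 0.8, 0.0)]
T = np.eye(4)
for params in dh_table:
    T = T @ dh_transform(*params)
print(T[:3, 3])  # end-effector position in the base frame
```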
Can anyone recommend a reinforcement learning library or framework that can handle large state spaces by abstracting them?
I'm attempting to implement the intelligence for a small agent in a game world. The agent is represented by a small two-wheeled robot that can move forwards and backwards, and turn left and right. It has a couple of sensors for detecting a boundary on the ground, a couple of ultrasonic sensors for detecting objects far away, and a couple of bump sensors for detecting contact with an object or opponent. It can also do some simple dead reckoning to estimate its position in the world using its starting position as a reference. So all the state features available to it are:
edge_detected=0|1
edge_left=0|1
edge_right=0|1
edge_both=0|1
sonar_detected=0|1
sonar_left=0|1
sonar_left_dist=near|far|very_far
sonar_right=0|1
sonar_right_dist=near|far|very_far
sonar_both=0|1
contact_detected=0|1
contact_left=0|1
contact_right=0|1
contact_both=0|1
estimated_distance_from_edge_in_front=near|far|very_far
estimated_distance_from_edge_in_back=near|far|very_far
estimated_distance_from_edge_to_left=near|far|very_far
estimated_distance_from_edge_to_right=near|far|very_far
The goal is to identify the state where the reward signal is received and learn a policy to acquire that reward as quickly as possible. In a traditional Markov model, this state space, represented discretely, would have 2^12 * 3^6 = 4096 * 729 = 2,985,984 possible values, which is far too many to explore exhaustively using something like Q-learning or SARSA.
Can anyone recommend a reinforcement learning library appropriate for this domain (preferably with Python bindings), or an unimplemented algorithm that I could potentially implement myself?
Your actual state is the robot's position and orientation in the world. Using these sensor readings is an approximation, since it is likely to render many states indistinguishable.
Now, if you go down this road, you could use linear function approximation. Then this is just 24 binary features (the 12 binary features, plus two indicator features for each of the six near|far|very_far features). This is such a small number that you could even use all pairs of features for learning. Farther down this road is online discovery of feature dependencies (see Alborz Geramifard's work, for example). This is directly related to your interest in hierarchical learning.
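As a sketch of what linear function approximation looks like here - the action set, step sizes, and constants below are assumptions for illustration, not from your question:

```python
import numpy as np

# SARSA(0) with linear function approximation over the ~24 binary
# indicator features described above.
N_FEATURES = 24
N_ACTIONS = 4          # forward, backward, turn-left, turn-right (assumed)
alpha, gamma, epsilon = 0.1, 0.95, 0.1

w = np.zeros((N_ACTIONS, N_FEATURES))   # one weight vector per action

def q(phi, a):
    # Approximate action-value: linear in the binary features.
    return w[a] @ phi

def epsilon_greedy(phi):
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax([q(phi, a) for a in range(N_ACTIONS)]))

def sarsa_update(phi, a, reward, phi_next, a_next):
    # TD(0) update; for binary features the gradient is just phi itself.
    td_error = reward + gamma * q(phi_next, a_next) - q(phi, a)
    w[a] += alpha * td_error * phi
```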
An alternative is to use a conventional algorithm to track the robot's position and use the position as input to RL.
I need to know the definition of "integration points" in Abaqus subroutines.
I'm new to the Abaqus software and would appreciate your help.
It is now 2.5 years after the OP asked this question, so my answer is probably more for anyone who has followed a link here, hoping for some insight. On the grounds that FEM programming is special,[0] I will try to answer this question rather than flag it as off-topic. Anyway, some of my answer is applicable to FEM in general, some is specific to Abaqus.
Quick check:
If you're only asking for the specific numerical value to use for the (usual or standard) location of integration points, then the answer is that it depends. Luckily, standard values are widely available for a variety of elements (see resources below).
However, I assume you're asking about writing a User-Element (UEL) subroutine but are not yet familiar with how elements are formulated, or what an integration point is.
The answer: In standard displacement-based FEM, the constitutive response of an individual finite element is usually obtained by numerical integration (aka quadrature) at one or more points on or within the element. The number and location of these points depend on the element type, certain performance tradeoffs, and the particular integration technique being used. Integration techniques that I have seen used for continuum (solid) finite elements include:
More common: Gauss integration -- the number and position of the sampling points are determined by the Gauss quadrature rule used; the points lie strictly inside the domain (-1, 1), so the nodes are not included.
Less common: Newton-Cotes integration -- evenly spaced sampling points that include the nodes at the ends of the domain [-1, 1].
In my experience, the standard practice by far is to use Gauss quadrature or reduced-integration methods (which are often variations of Gauss quadrature). In Gauss quadrature, the integration points are taken at special ("optimal") points within the element, known as Gauss points, which have been shown to provide reliably accurate solutions for a given level of computational expense - at least for the typical polynomial functions used in many isoparametric finite elements. Other integration techniques have been found to be competitive in some cases,[1] but Gauss quadrature is certainly the gold standard. There are other techniques that I'm not familiar with.
Practical advice: Assuming an isoparametric formulation, in the UEL you use "element shape functions" and the primary field variables defined by the nodal degrees of freedom (with a solid mechanics focus, these are typically the displacements) to calculate the element strains, stresses, etc. at each integration point. If this doesn't make sense to you, see resources below.
Note that if you need the stresses at the nodes (or at any other point) you must extrapolate them from the integration points, again using the shape functions, or calculate/integrate directly at the nodes.
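As a concrete (non-Abaqus) illustration of what happens at integration points, here is a small Python sketch for a 4-node isoparametric quad with 2x2 Gauss integration; the nodal data are made up:

```python
import numpy as np

# The 2x2 Gauss points sit at +/- 1/sqrt(3) in the (xi, eta) parent
# domain -- strictly inside (-1, 1), not at the nodes.
gp = 1.0 / np.sqrt(3.0)
gauss_points = [(-gp, -gp), (gp, -gp), (gp, gp), (-gp, gp)]
weights = [1.0, 1.0, 1.0, 1.0]

def shape_functions(xi, eta):
    """Bilinear shape functions N_i of the 4-node quad."""
    return 0.25 * np.array([(1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta)])

# Interpolate a nodal field (e.g. a displacement component) to each
# integration point -- this is where strains/stresses would be evaluated.
nodal_values = np.array([0.0, 1.0, 2.0, 1.0])  # made-up example data
for (xi, eta), wgt in zip(gauss_points, weights):
    N = shape_functions(xi, eta)
    print(f"({xi:+.4f}, {eta:+.4f}) weight {wgt}: value {N @ nodal_values:.4f}")
```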
Suggested resources:
Please: If you're writing a user subroutine you should already know what an integration point is. I'm sorry, but that's just how it is. You have to know at least the basics before you attempt to write a UEL.
That said, I think it's great that you're interested in programming for FEA/FEM. If you're motivated but not at university where you can enroll in an FEM course or two, then there are a number of resources available, from Massive Open Online Courses (MOOCs), to a plethora of textbooks - I generally recommend anything written by Zienkiewicz. For a readable yet "solid" introduction with an emphasis on solid mechanics, I like Concepts and Applications of Finite Element Analysis, 4th Edition, by Cook et al (aka the "Cook Book"). Good luck!
[0] You typically need a lot of background before you can even ask the right questions.
[1] Trefethen, 2008, "Is Gauss Quadrature Better than Clenshaw-Curtis?", DOI 10.1137/060659831
Your question is not really clear.
Do you mean in the Python environment? For shell elements there are section points through the thickness; you set these through your shell section definition. The number of integration points depends on your element type.
You can find a lot of info in the Abaqus scripting manual. For example
http://www.tu-chemnitz.de/projekt/abq_hilfe/docs/v6.12/books/cmd/default.htm
In FEM, an integration point is a point within an element at which the element's response is evaluated. Just keep that in mind. In Abaqus user subroutines, the calculation takes place at each integration point. Remember that and go forward. If you are unsatisfied, take a look at any FEM book for the definition/explanation of an integration point; it is not specific to subroutines.
An integration point is a sampling point within an element at which element quantities are evaluated; it is generally not at the nodes. For example, a fully integrated eight-node C3D8 continuum brick element has eight integration points arranged in a 2 x 2 x 2 pattern inside the brick, while the reduced-integration C3D8R has a single integration point at its centroid.
Also, within a subroutine, other variables such as the state variables (SVARS) are stored at the integration points, so if your element has eight integration points and, say, 4 SVARS you need to keep track of at each one, there will be 8 * 4 = 32 SVARS in the entire element.
I hope this answers your question.
I have spent the last couple of days searching for curve reconstruction implementations, and found none - not as a library nor as a tool.
To describe my problem:
My main concern is contours with gaps:
From the papers I've read in the meantime, I guess the solution will require Delaunay triangulation, and the method referenced most seems to be the one described in the 1997 paper "The Crust and the β-Skeleton: Combinatorial Curve Reconstruction".
Can someone point me to a curve reconstruction implementation that can help me solve this problem?
The algorithm is implemented in CGAL. An example implementation in C++ can be seen in the CGAL ipelets demo package. Better yet, compiling the demo allows the user to apply the algorithm in the Ipe GUI application:
In the above example I selected just part of my image, as the bottom lines did not meet the necessary requirements, so the crust can't be applied to that part until it is corrected. Also, as can be noticed, the image has to be sampled.
If no one provides another implementation example, I'll mark my answer as correct after a couple of days.
Delaunay triangulation uses a discretized curve, and in doing so loses information. That can cause strange problems where you don't expect them. In your example, the middle part of the lower boundary would probably cause a problem.
In situations like this it may be better to collect the relevant information from the model and try to compute a matching.
Something like this: for each endpoint, estimate the contour derivative in a neighbourhood. Then find all endpoints to which that endpoint could be connected - those roughly along the derivative direction whose joining segment doesn't cross another line. Each possible connection can be weighted by the joint distance and the deviation from the local derivative. The weights define a weighted graph of possible endpoint connections, and a maximum-weight edge matching in that graph would be a good solution to the problem.
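A rough sketch of that idea in Python, using networkx for the matching. The endpoint data and the scoring function are made up for illustration, and the line-crossing test is omitted:

```python
import networkx as nx
import numpy as np

# Each endpoint: (position, unit tangent pointing out of the curve end).
# These values are invented; yours would come from contour extraction.
endpoints = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0])),
    (np.array([3.0, 0.2]), np.array([-1.0, 0.0])),
    (np.array([0.0, 5.0]), np.array([0.0, 1.0])),
    (np.array([0.3, 8.0]), np.array([0.0, -1.0])),
]

def score(i, j):
    """Higher score = better candidate joint (closer, better aligned)."""
    p_i, t_i = endpoints[i]
    p_j, t_j = endpoints[j]
    d = p_j - p_i
    dist = np.linalg.norm(d)
    if dist == 0:
        return None
    # The joint should roughly follow t_i and run against t_j.
    alignment = t_i @ (d / dist) + t_j @ (-d / dist)
    if alignment <= 0:
        return None  # endpoints point away from each other
    return alignment / (1.0 + dist)  # penalize long joints
    # A real implementation would also reject joints crossing other lines.

G = nx.Graph()
for i in range(len(endpoints)):
    for j in range(i + 1, len(endpoints)):
        s = score(i, j)
        if s is not None:
            G.add_edge(i, j, weight=s)

# Maximum-weight matching picks a non-conflicting set of connections.
print(nx.max_weight_matching(G))  # here: {(0, 1), (2, 3)}
```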
There are quite a few ways to solve this:
You could simply write a "worm" that follows the curves; when you reach the end of one, take your current direction vector along with the gradient and extrapolate it forward. Find all the other endpoints that would best fit, score them, and reconnect with the one with the highest score. Simple, but prone to problems if it's more than a simple break.
A hierarchical waterfall method might also be interesting.
There are threshold methods in waterfall (and level-set) methods that can be used to detect these gaps and fill them in.