p_WC vs p_WCa and p_WCb - drake

It makes sense that we have two contact points, p_WCa and p_WCb, which come from PenetrationAsPointPair. The bottom-right corner of the green body is p_WCb, and the top-left corner of the blue body is p_WCa.
But what is p_WC, which comes from contact_results.point_pair_contact_info(cidx).contact_point()?
Which contact point is most appropriate for calculating the torque (using the body center of mass as the torque reference point), for purposes of static equilibrium calculation? I'm inclined to say F_AB_W should be associated with p_WCa.

Ultimately, to satisfy Newton's laws, equal and opposite forces must be applied to both bodies at the same point. Neither p_WCa nor p_WCb is necessarily that point. They are what we call "witness" points: they are intimately related to the penetration depth and contact normal, but they aren't the contact point per se. The displacement between the two points, measured along the contact normal direction, is the penetration depth. However, we do use them to compute the contact point.
The contact point is essentially a linear interpolation of those points. Remember that the point contact model is a compliant model. It allows for small deformations of the body (relative to its volume and mass). But the two bodies don't necessarily deform the same amount. If one object, let's say object A, is much stiffer than object B, it deforms less and the effective contact point will be close to the stiff body's surface -- close to p_WCa. The interpolation factor is, in fact, a function of the two bodies' elasticity values. If they are equally compliant, it is the mid-point, etc.
So, at the end of the day, the geometric contact characterization produces contact normal, depth, and witness points. The contact solver produces forces and the point to apply the force, and that's what you see in the contact result's contact_point.
You can read more about it in Drake's documentation.
As a footnote: the effective (combined) elasticity is also what defines the magnitude of the normal force.
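To make the interpolation and the combined-elasticity footnote concrete, here is a minimal numeric sketch. Everything in it is an illustrative assumption: the witness points and stiffness values are made up, and the series-spring weighting reflects the idea described above rather than Drake's exact internal formula.

```python
import numpy as np

# Made-up witness points (world frame) and per-body point-contact stiffnesses.
p_WCa = np.array([0.10, 0.00, 0.02])   # witness point on body A's surface
p_WCb = np.array([0.10, 0.00, 0.01])   # witness point on body B's surface
k_A, k_B = 1.0e6, 1.0e4                # here A is much stiffer than B

# Series-spring style weighting: the stiffer body deforms less, so the
# effective contact point lands closer to its surface (here, near p_WCa).
w_A = k_A / (k_A + k_B)
w_B = k_B / (k_A + k_B)
p_WC = w_A * p_WCa + w_B * p_WCb       # equal stiffnesses -> the midpoint

# The combined (effective) stiffness that sets the normal force magnitude
# is the series combination of the two individual stiffnesses.
k_eff = k_A * k_B / (k_A + k_B)
penetration_depth = np.linalg.norm(p_WCa - p_WCb)
f_normal = k_eff * penetration_depth   # ignoring any dissipation term

print(p_WC, f_normal)
```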

Following on Sean's excellent answer, which may already have been more than sufficient to answer the original question: the set of forces exerted on a body A by contact with a body B can be summed (added together) to create a net/resultant force. That resultant force is applied at a certain point Ap of (fixed to) body A. There is an equal and opposite force applied at a point Bp of (fixed to) body B. Although points Ap and Bp are at the same location (as Sean describes above), they are not the same point. So there are two contact points, namely Ap (fixed to body A) and Bp (fixed to body B). They share a common location, but may have different velocities and/or accelerations.

Related

FEM Integrating Close to Integration Points

I am working on a program that can essentially determine the electrostatic field of some arbitrarily shaped mesh with some surface charge. To test my program I make use of a cube whose left and right faces are oppositely charged.
I use a finite element method (FEM) that discretizes the object's surface into triangles and assigns three integration points to each triangle (see the figure below, bottom left and right). To obtain the field I then simply sum over all these points, taking into account a weight factor (because not all triangles have the same size).
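For concreteness, the kind of weighted summation I mean looks roughly like the sketch below. The array names are placeholders, and the Coulomb constant is folded into the effective charges.

```python
import numpy as np

def field_at(r, points, charges):
    """Naive Coulomb sum over all integration points.

    r       : (3,) evaluation point
    points  : (N, 3) integration-point locations
    charges : (N,) effective charge per point (surface charge density times
              triangle area times quadrature weight, constants folded in)
    """
    d = r - points                       # (N, 3) displacement vectors
    dist = np.linalg.norm(d, axis=1)     # (N,) distances
    # 1/r^2 falloff along each unit displacement; this diverges as r
    # approaches any single integration point, which is exactly where the
    # artifacts between neighbouring points come from.
    return np.sum(charges[:, None] * d / dist[:, None] ** 3, axis=0)
```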
In principle this all works fine, until I get too close to a triangle. Since three individual points are not the same as a triangular surface, the program breaks down and gives these weird dots (black spots precisely between two integration points).
Below you see a figure showing the simulation of the field (top left) and the discretized surface mesh (bottom left). The picture in the middle depicts what you see when you zoom in on the surface of the cube. The right-most picture shows qualitatively how the integration points are distributed on a triangle.
Because the electric field of one integration point always points away from that point, two neighbouring points will cancel each other out, since their vectors aim in exactly opposite directions. Of course, what I need is for both vectors to point away from the surface instead.
I have tried many solutions, mostly around the following points:
Patching the regions near an integration point with a theoretically correct uniform field pointing away from the surface.
Reorienting the vectors only nearby the integration point to manually put them in the right direction.
Applying a sigmoid or other decay function to make the above look smoother.
However, none of the methods above allows me to properly connect the nearby and faraway regions.
I guess what might work is some method to extrapolate the correct value from the surroundings. However, because of the large number of computations, I moved the simulation to my GPU, which means I have to be careful about letting two pixels write to each other.
Either way, my question here is as follows:
What would be a good way to smooth out my results? That is, I need a more accurate description of my model when I get closer to a triangle.
As a final note I want to add that it is not my goal to simply obtain a smooth image. Later in the program I need this data to determine the response of a conducting material, which is where these black dots internally become a real pain...
Thank you for your help!!!

Graph where edges represent vector (force and direction) between nodes

Is there any domain (/ dedicated keyword) of graph theory that covers graphs where the edges represent forces?
Force is a vector. Thus, it has two attributes: weight, and direction.
weight represents the magnitude of the force.
direction represents the direction in which the force is acting. This is different from directed graphs, where only the head and tail nodes matter.
The sense of direction can be better understood by the following examples:
Example 1:
Consider a network of inelastic strings under tension. Let's say the network is in equilibrium. If we pull a node, all the other nodes will be pulled. Please note that the lengths of the strings (~ weights) won't change. But the locations of the nodes, and thereby the directions of the strings, may change to bring all the nodes back to equilibrium after the pull.
Example 2: Consider all the planets (~nodes) in the universe in the form of a graph. All of them impart gravitational forces (~edges) on each other and are in equilibrium. If we dislodge a planet/sun (or increase its size), the others are likely to be disturbed.
The edge weight/length can represent the magnitude of the force (but what about direction?).
In both examples, the direction component differentiates them from the traditional sense of edge weights, where the edges are just scalars and have no direction.
Such scalars can be analogous to a sense of distance (shortest distance, eccentricity, closeness centrality) or flow (betweenness centrality, etc.), but not force.
The question is: how do we incorporate the direction of edges (in addition to length/weight) in network analysis? Is there any domain that focuses on graphs where edges have weights as well as directions?
Note: The direction of the edge can be an additional parameter like angle; or be specified by the location of the connecting nodes.
What you're describing sounds like force-directed graph drawing algorithms as discussed here. Since you tagged this with networkx, the spring_layout method uses the Fruchterman-Reingold force-directed algorithm.
The networkx documentation doesn't list an actual reference to the algorithm, but the R igraph package lists this as the reference for their layout_with_fr function:
Fruchterman, T.M.J. and Reingold, E.M. (1991). Graph Drawing by Force-directed Placement. Software - Practice and Experience, 21(11):1129-1164.
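As a quick illustration (the toy graph below is made up), spring_layout treats edge weights as spring strengths and returns 2D positions, so each edge's direction can be read off from the node coordinates:

```python
import networkx as nx

# Toy graph: edge weights act as spring strengths in the
# Fruchterman-Reingold layout.
G = nx.Graph()
G.add_edge("a", "b", weight=2.0)
G.add_edge("b", "c", weight=0.5)
G.add_edge("a", "c", weight=1.0)

pos = nx.spring_layout(G, weight="weight", seed=0)  # node -> (x, y) array
for u, v in G.edges():
    print(u, v, pos[v] - pos[u])  # displacement vector along each edge
```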

Human pose estimation - efficient linking of body parts

Problem description:
I am working on a project whose goal is to identify people's body parts in images (torso, head, left and right arms, etc.). The approach is based on finding parts of the human body (hypotheses) and then searching for the best pose configuration (= all the parts that really form a human body). The idea is described better at this link: http://www.di.ens.fr/willow/events/cvml2010/materials/INRIA_summer_school_2010_Andrew_human_pose.pdf.
The hypotheses are obtained by running a detection algorithm (here I am using a classifier from the machine learning field) for each body part separately. So, the type of each hypothesis is known. Also, each hypothesis has a location (x and y coordinates in the image) and an orientation.
To determine the cost of linking two parts together, one can consider that each hypothesis of type head can be linked with each hypothesis of type torso (for example). But a head hypothesis located in the top right of the image cannot be linked (from a human point of view) with a torso hypothesis located in the bottom left of the image. I am trying to avoid these kinds of links, both for that reason and for the sake of execution time.
Question: I am planning to reduce the search space by considering a distance to the farthest hypothesis that can still be a linking candidate. What is the fastest way of solving this search problem?
For similar problems I have resorted to splitting the source image into 16 (or more, depending on the relative size of the parts you're trying to link) smaller images, doing the detection and linking step in each of these separately, plus an extra step where you do only a linking step for each subimage and its (up to 8) neighbours.
In this case you will never even try to link one part in the upper left corner with the lower right one, and as an added bonus the first part of your problem is now extremely parallel.
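A rough sketch of that bucketing idea, assuming each hypothesis is an (x, y, part_type) tuple; the cell size and the part-type names are placeholders:

```python
from collections import defaultdict
from itertools import product

def bucket(hypotheses, cell_size):
    """Group hypotheses by the grid cell their (x, y) location falls into."""
    cells = defaultdict(list)
    for h in hypotheses:                           # h = (x, y, part_type)
        cells[(int(h[0] // cell_size), int(h[1] // cell_size))].append(h)
    return cells

def candidate_pairs(cells):
    """Yield head/torso pairs only within a cell and its 8 neighbours."""
    for (cx, cy), members in cells.items():
        neighbourhood = []
        for dx, dy in product((-1, 0, 1), repeat=2):
            neighbourhood.extend(cells.get((cx + dx, cy + dy), []))
        for head in (h for h in members if h[2] == "head"):
            for torso in (t for t in neighbourhood if t[2] == "torso"):
                yield head, torso
```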
Update:
You could do an edge detection on the image first, and never cut the image in two when that would mean cutting an edge in two. Recursively doing this would give you a lot of small images with body parts on them that you can then process separately.
This kind of discrete assignment problem can be solved using the Hungarian algorithm.
In the computation of the cost (= distance) matrix, you can set an entry to an infinite or very high value when the distance is greater than a predefined threshold.
This will prevent the algorithm from assigning a head to a torso which is too far away.
This last technique is also called gating in tracking lectures.
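A minimal sketch using SciPy's linear_sum_assignment; the coordinates, gate distance, and the large sentinel cost are placeholders (SciPy rejects actual infinities, so a very high finite value stands in):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up detections: rows are head hypotheses, columns are torso hypotheses.
heads = np.array([[10.0, 12.0], [200.0, 30.0]])
torsos = np.array([[14.0, 40.0], [205.0, 60.0], [400.0, 400.0]])

# Cost matrix = Euclidean distance between every head/torso pair.
cost = np.linalg.norm(heads[:, None, :] - torsos[None, :, :], axis=2)

# Gating: anything farther than the threshold gets a prohibitively high
# (but finite) cost so the solver never picks it.
GATE, BIG = 100.0, 1e9
cost[cost > GATE] = BIG

rows, cols = linear_sum_assignment(cost)
links = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < BIG]
print(links)   # gated pairs are dropped from the final linking
```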

How to use the A* pathfinding algorithm on a gridless 2D plane?

How can I implement the A* algorithm on a gridless 2D plane with no nodes or cells? I need the object to maneuver around a relatively high number of static and moving obstacles in the way of the goal.
My current implementation is to create eight points around the object and treat them as the centers of imaginary adjacent squares that might be potential positions for the object. Then I calculate the heuristic function for each and select the best one. I calculate the distances between the starting point and the movement point, and between the movement point and the goal, the normal way with the Pythagorean theorem. The problem is that this way the object often ignores all obstacles and, even more often, gets stuck moving back and forth between two positions.
I realize how silly my question might seem, but any help is appreciated.
Create an imaginary grid at whatever resolution is suitable for your problem: As coarse grained as possible for good performance but fine-grained enough to find (desirable) gaps between obstacles. Your grid might relate to a quadtree with your obstacle objects as well.
Execute A* over the grid. The grid may even be pre-populated with useful information like proximity to static obstacles. Once you have a path along the grid squares, post-process that path into a sequence of waypoints wherever there's an inflection in the path. Then travel along the lines between the waypoints.
By the way, you do not need the actual distance (c.f. your mention of Pythagorean theorem): A* works fine with an estimate of the distance. Manhattan distance is a popular choice: |dx| + |dy|. If your grid game allows diagonal movement (or the grid is "fake"), simply max(|dx|, |dy|) is probably sufficient.
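A small sketch of the two pieces above, the grid heuristics and the inflection-based waypoint post-processing; the A* search itself is assumed to exist elsewhere:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def chebyshev(a, b):            # admissible when diagonal moves are allowed
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def waypoints(path):
    """Keep only the cells where the grid path changes direction."""
    if len(path) < 3:
        return list(path)
    out = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        d1 = (cur[0] - prev[0], cur[1] - prev[1])
        d2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d1 != d2:            # inflection: the direction changed here
            out.append(cur)
    out.append(path[-1])
    return out

print(waypoints([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))
# -> [(0, 0), (2, 0), (2, 2)]: travel along straight lines between these
```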
Uh. The first thing that comes to my mind is that at each point you need to calculate the gradient or vector to find the direction to go in the next step. Then you move by a small epsilon and redo.
This basically creates a grid for you, and you can vary the cell size by choosing a small epsilon. By doing this instead of using a fixed grid, you should be able to step at finer angles -- smaller than the 45° of your 8-point example.
Theoretically you might be able to solve the formulas symbolically (eps against 0), which could lead to an optimal solution... just a thought.
How are the obstacles represented? Are they polygons? You could then use the polygon vertices as nodes. If the obstacles are not represented as polygons, you could generate some sort of convex hull around them and use its vertices for navigation. EDIT: I just realized you mentioned that you have to navigate around a relatively high number of obstacles. Using the obstacle vertices might be infeasible with too many obstacles.
I do not know about moving obstacles; I believe A* doesn't find an optimal path with moving obstacles.
You mention that your object moves back and forth -- A* should not do this. A* visits each movement point only once. This could be an artifact of generating movement points on the fly, or of the moving obstacles.
I remember encountering this problem in college, but we didn't use an A* search. I can't remember the exact details of the math but I can give you the basic idea. Maybe someone else can be more detailed.
We're going to create a potential field out of your playing area that an object can follow.
Take your playing field and tilt or warp it so that the start point is at the highest point, and the goal is at the lowest point.
Poke a potential well down into the goal, to reinforce that it's a destination.
For every obstacle, create a potential hill. For non-point obstacles, which yours are, the potential field can increase asymptotically at the edges of the obstacle.
Now imagine your object as a marble. If you placed it at the starting point, it should roll down the playing field, around obstacles, and fall into the goal.
The hard part, the math I don't remember, is the equations that represent each of these bumps and wells. If you figure that out, add them together to get your final field, then do some vector calculus to find the gradient (just like towi said) and that's the direction you want to go at any step. Hopefully this method is fast enough that you can recalculate it at every step, since your obstacles move.
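A rough sketch of the idea, with made-up constants, point obstacles, and a numerical gradient standing in for the math I can't remember exactly:

```python
import numpy as np

def potential(p, goal, obstacles, k_goal=1.0, k_obs=10.0):
    """A bowl toward the goal plus a hill on every (point) obstacle."""
    u = k_goal * np.linalg.norm(p - goal) ** 2
    for o in obstacles:
        u += k_obs / (np.linalg.norm(p - o) ** 2 + 1e-6)
    return u

def step(p, goal, obstacles, eps=1e-4, rate=0.01):
    """Roll 'downhill' one step using a numerically estimated gradient."""
    grad = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = eps
        grad[i] = (potential(p + dp, goal, obstacles) -
                   potential(p - dp, goal, obstacles)) / (2 * eps)
    return p - rate * grad

p, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.0])]          # kept off the straight line
for _ in range(1000):
    p = step(p, goal, obstacles)
print(p)   # ends near the goal; symmetric setups can stall in local minima
```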
Sounds like you're implementing the Wumpus game based on Norvig and Russell's discussion of A* in Artificial Intelligence: A Modern Approach, or something very similar.
If so, you'll probably need to incorporate obstacle detection as part of your heuristic function (hence you'll need to have sensors that alert your agent to the signs of obstacles, as seen here).
To solve the back-and-forth issue, you may need to store the traveled path so you can tell if you've already been to a location, and have the heuristic function examine the past N moves (say 4) and use that as a tie-breaker (i.e. if I can go north and east from here, and my last 4 moves have been east, west, east, west, go north this time).

Basic pathfinding with obstacle avoidance in a continuous 2D space

I'm writing a simulation in which a creature object should be able to move towards some other arbitrary object in the environment, sliding around obstacles rather than doing any intelligent pathfinding. I'm not trying to have it plan a path -- just to move in one general direction, and bounce around obstacles.
It's a 2D environment (overhead view), and every object has a bounding rectangle for collision detection. There is no grid, and I am not looking for an A* solution.
I haven't been able to find any tutorials on this kind of "dumb" collision-based pathfinding, so I might not be describing this using the most common terms.
Any recommendations on how to implement this (or links to tutorials)?
Expanding on what Guillaume said about obstacle avoidance, a technique that would work well for you is anti-gravity movement. You treat local obstacles as point sources of antigravity, the destination as gravity, and your computer controlled character will slip (like soap!) around the obstacles to get to the destination.
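A tiny sketch of that anti-gravity idea (the constants and the point-obstacle assumption are illustrative, not from any particular engine):

```python
import numpy as np

def antigravity_direction(pos, goal, obstacles, pull=1.0, push=5.0):
    """Unit direction for this tick: attraction to the goal plus a 1/d^2
    repulsion from each nearby obstacle (constants need per-game tuning)."""
    v = pull * (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    for o in obstacles:
        away = pos - o
        dist = np.linalg.norm(away) + 1e-9
        v += push * away / dist ** 3      # (push / dist^2) along the unit vector
    return v / (np.linalg.norm(v) + 1e-9)
```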
You can combine two steering algorithms:
Seek: you apply a steering force in the direction given by the difference between the desired velocity towards the target and the current velocity.
Obstacle avoidance: you anticipate the vehicle's future position using a box whose length is a constant time multiplied by the current velocity of the vehicle. Any obstacle that intersects this box is a potential collision threat. The nearest such threat is chosen for avoidance. To avoid an obstacle, a lateral steering force is applied opposite to the obstacle's center. In addition, a braking (deceleration) force is applied. These forces vary with urgency (the distance from the tip of the box to the point of potential collision): steering varies linearly, braking varies quadratically.
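A rough Python transcription of the two behaviours; the circular-obstacle model, the constants, and the function names are my own placeholders rather than anything from the original article:

```python
import numpy as np

def seek(pos, vel, target, max_speed=5.0):
    """Steering = desired velocity towards the target minus current velocity."""
    to_target = target - pos
    desired = max_speed * to_target / (np.linalg.norm(to_target) + 1e-9)
    return desired - vel

def avoid(pos, vel, obstacles, radii, lookahead_time=1.0):
    """Probe a box ahead of the vehicle (length ~ speed); steer laterally away
    from the nearest threatening obstacle and brake, scaled by urgency."""
    speed = np.linalg.norm(vel)
    if speed < 1e-9:
        return np.zeros(2)
    forward = vel / speed
    box_len = lookahead_time * speed
    nearest_t, nearest_lat = np.inf, None
    for o, r in zip(obstacles, radii):
        rel = o - pos
        t = np.dot(rel, forward)               # distance ahead along the box
        lateral = rel - t * forward            # offset from the box's axis
        if 0.0 < t < box_len and np.linalg.norm(lateral) < r and t < nearest_t:
            nearest_t, nearest_lat = t, lateral
    if nearest_lat is None:
        return np.zeros(2)
    urgency = 1.0 - nearest_t / box_len        # closer threat -> stronger response
    side = -nearest_lat / (np.linalg.norm(nearest_lat) + 1e-9)
    return urgency * side * speed - (urgency ** 2) * vel  # linear steer, quadratic brake
```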
You can find more on the website "Steering Behaviors For Autonomous Characters"
regards
Guillaume
PS: this assumes you're using a point/velocity/acceleration method for the object's movement.
Maybe you could use Pledge's algorithm.
Whenever your creature, travelling in vector direction v, collides with a wall whose direction is represented by vector w, the direction that you need to "slide" is given by the vector that is the projection of v onto w. This can be found using
((v . w) / (|w| * |w|)) w
where . is the vector dot product and |w| is the magnitude of vector w ( = sqrt(w . w)). If w is a unit vector, this becomes simply
(v . w) w
Using the resulting vector as your creature's speed will mean your creature travels quickly when it just "grazes" the wall, and slowly when it hits the wall nearly dead-on. (This is how most first-person shooter games manage collisions for the human player.)
If instead you want your creature to always travel at full speed, you only need the sign of v . w -- you will always be travelling either in the direction the wall faces (w) or the opposite direction (-w).
The issue that you will have is when your creature hits the wall dead-on. In that case your projected vector will be (0, 0), and you need some other technique to decide which way (w or -w) to go. The usual approach here is A*, although this may be unnecessary if your environment possesses enough structure.
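In code (2D, with NumPy, as a minimal sketch of the projection above):

```python
import numpy as np

def slide(v, w):
    """Project velocity v onto wall direction w; a zero result means a
    dead-on hit and needs a separate tie-break (w or -w)."""
    denom = np.dot(w, w)
    if denom < 1e-12:
        return np.zeros_like(v)
    return (np.dot(v, w) / denom) * w

v = np.array([1.0, -1.0])                # moving down-right
w = np.array([1.0, 0.0])                 # horizontal wall
print(slide(v, w))                       # [1. 0.] -> grazes along the wall
print(slide(np.array([0.0, -1.0]), w))   # [0. 0.] -> dead-on hit
```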
I posted a pathfinding algorithm in C# a while back
Here's the link
You can try to use it as a starting point, i.e., you could modify the function that checks whether the next cell is valid so that it checks for the obstacles, and you could feed it small intervals instead of the starting and end points, kind of like multiple mini-pathfinding routes.
(The text is in Spanish, but you can download the application from the link at the top.)
