Abaqus: how can boundary conditions affect the prescribed material response?

I'm simulating a Representative Volume Element (RVE) to estimate its homogenised properties under pure compression in the X direction.
After some attempts I decided to try just an isotropic material model in a one-element simulation, but with similar boundary conditions, which emulate Periodic Boundary Conditions (PBC) for my loading case: symmetry in X on one side, symmetry in Y on both sides, and symmetry in Z on both sides as well. The opposite X face is subjected to a uniform displacement in the X direction. I tested the Extended Drucker-Prager model with perfect plasticity after 120 MPa.
What I found in the results is that after yield had begun, the stresses (Mises, pressure, principal stresses and S11) kept rising up to the end of the simulation in the same "straight" manner they had been rising before yield. It doesn't even look like plastic behaviour.
If I change my boundary conditions to a simple support on one side perpendicular to X and retain the displacement on the opposite side, the resulting stress picture begins to look more "plastic-like".
Can anyone explain the behaviour of the material model in the first case (PBC)? Thanks in advance.
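For illustration, a rough Abaqus/Python sketch of this kind of single-element setup might look like the following. The model, instance and set names, the step name 'Load', and the -0.01 displacement are all assumed for the example, not taken from the actual model.

from abaqus import mdb

model = mdb.models['Model-1']          # assumed model name
asm = model.rootAssembly
sets = asm.instances['CUBE-1'].sets    # assumed instance with pre-defined face sets

# PBC-like case: X-symmetry on one face, Y- and Z-symmetry on both opposite faces.
model.XsymmBC(name='XSYM', createStepName='Initial', region=sets['X0'])
for face in ('Y0', 'Y1'):
    model.YsymmBC(name='YSYM-' + face, createStepName='Initial', region=sets[face])
for face in ('Z0', 'Z1'):
    model.ZsymmBC(name='ZSYM-' + face, createStepName='Initial', region=sets[face])

# Uniform compressive displacement on the remaining X face (value is illustrative).
model.DisplacementBC(name='PUSH', createStepName='Load', region=sets['X1'], u1=-0.01)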

Related

How to model a four-point bending reinforced concrete beam with ABAQUS?

I have some questions about how to define boundary conditions for a four-point bending reinforced concrete beam in Abaqus. I have modelled the beam, but I don't know how to define the boundary conditions and the step. Can you please help me? The model is 1/4 of the beam because the beam is symmetrical. In the results the force is 0 over the whole model, and I don't know how to solve that. I have another question about modelling the rebars: isn't a beam element better than a truss element?
Four point bending is an interesting test configuration.
Here are a few ideas to consider:
Symmetry at x = 0 means the out-of-plane displacement ux and the in-plane rotations txy and txz (i.e. rotations about the z and y axes) are zero for all points on the symmetry plane.
Four point bending doesn't mean no force on the beam. There are vertical reaction forces at each of the four bending points. They sum to zero, but their effect on the beam should be clear. You'll need to constrain the support points to have zero vertical displacement and apply either a displacement or load to the two loading points. You'll calculate how shear and bending vary along the length of the beam.
The fact that the beam is reinforced concrete is immaterial (no pun intended) to the boundary conditions applied.
You'll need beam elements at a minimum. They include bending effects. Truss elements only model axial loads.
Concrete is strong in compression but weak in shear and bending.
Make sure you have a good understanding of the failure model for concrete before you begin.
How are you modeling the concrete? Is this a continuum model with steel rebar elements surrounded by concrete, or are you smearing them out using an anisotropic composite model and beam elements? Are you modeling large aggregate using something like the Eshelby tensor? Do you have to be concerned about the concrete/steel interface? What about fracture?
Great problem. Good luck.

Human pose estimation - efficient linking of body parts

Problem's description:
I am working on a project whose goal is to identify people's body parts in images (torso, head, left and right arms, etc.). The approach is based on finding parts of the human body (hypotheses) and then searching for the best pose configuration (= all the parts that really form a human body). The idea is described in more detail at this link: http://www.di.ens.fr/willow/events/cvml2010/materials/INRIA_summer_school_2010_Andrew_human_pose.pdf.
The hypotheses are obtained by running a detection algorithm (here I am using a classifier from the machine learning field) for each body part separately. So the type of each hypothesis is known. Also, each hypothesis has a location (x and y coordinates in the image) and an orientation.
To determine the cost of linking two parts together, one can consider that each hypothesis of type head can be linked with each hypothesis of type torso (for example). But a head hypothesis in the top right of the image cannot (from a human point of view) be linked with a torso hypothesis in the bottom left of the image. I am trying to avoid these kinds of links, based on the last statement and also because of the execution time.
Question: I am planning to reduce the search space by considering a distance to the farthest hypothesis which can still be a linking candidate. What is the fastest way of solving this search problem?
For similar problems I have resorted to splitting the source image into 16 (or more, depending on the relative size of the parts you're trying to link) smaller images, doing the detection and linking steps in each of these separately, and adding an extra step where you do only a linking step between each subimage and its (possibly 8) neighbours.
In this case you will never even try to link one part in the upper left corner with one in the lower right, and as an added bonus the first part of your problem is now extremely parallel.
Update:
You could do an edge detection on the image first, and never cut the image in two when that would mean cutting an edge in two. Doing this recursively would give you a lot of small images containing body parts that you can then process separately.
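Just to make the bucketing idea concrete, here is a rough Python sketch. It assumes hypotheses are plain (x, y) tuples and uses a hypothetical 4 x 4 split, so all names and numbers are illustrative only.

from collections import defaultdict
from itertools import product

CELLS = 4  # 4 x 4 = 16 subimages, as in the answer

def bucket(hypotheses, width, height):
    """Group hypotheses by the grid cell they fall into."""
    grid = defaultdict(list)
    for x, y in hypotheses:
        cx = min(int(x * CELLS / width), CELLS - 1)
        cy = min(int(y * CELLS / height), CELLS - 1)
        grid[(cx, cy)].append((x, y))
    return grid

def candidate_pairs(heads, torsos, width, height):
    """Only consider head/torso pairs in the same or neighbouring cells."""
    head_grid = bucket(heads, width, height)
    torso_grid = bucket(torsos, width, height)
    pairs = []
    for (cx, cy), cell_heads in head_grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):      # the cell plus its 8 neighbours
            for t in torso_grid.get((cx + dx, cy + dy), []):
                pairs.extend((h, t) for h in cell_heads)
    return pairs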
This kind of discrete assignment problem can be solved using the Hungarian algorithm.
In the computation of the cost (= distance) matrix, you can set an entry to some infinite or very high value when the distance is greater than a predefined threshold.
This will prevent the algorithm from assigning a head to a torso which is too far away.
This last technique is also called gating in tracking lectures.
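As a minimal sketch of this gated assignment, assuming SciPy is available and each hypothesis is just an (x, y) location (the threshold and cost values below are made up):

import numpy as np
from scipy.optimize import linear_sum_assignment

MAX_LINK_DIST = 150.0   # pixels; hypothetical gate
BIG_COST = 1e9          # stands in for "infinite" cost

def gated_assignment(heads, torsos):
    """heads, torsos: arrays of shape (N, 2) and (M, 2) with (x, y) positions."""
    heads = np.asarray(heads, dtype=float)
    torsos = np.asarray(torsos, dtype=float)
    # Pairwise Euclidean distances as the linking cost.
    cost = np.linalg.norm(heads[:, None, :] - torsos[None, :, :], axis=2)
    # Gate: links longer than the threshold get a prohibitive cost.
    cost[cost > MAX_LINK_DIST] = BIG_COST
    rows, cols = linear_sum_assignment(cost)
    # Drop assignments that only happened because every alternative was gated out.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < BIG_COST]

links = gated_assignment([(10, 20), (300, 400)], [(15, 60), (900, 50)])
print(links)   # expect [(0, 0)]; the second head has no torso within the gate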

motion reconstruction from a single camera

I have a single calibrated camera (known intrinsic parameters, i.e. camera matrix K is known, as well as the distortion coefficients).
I would like to reconstruct the camera's 3D trajectory. There is no a priori knowledge about the scene.
To simplify the problem, I take two images looking at the same scene and extract two sets of corresponding matched feature points from them (SIFT, SURF, ORB, etc.).
My problem is: how can I calculate the camera extrinsic parameters (i.e. the rotation matrix R and the translation vector t) between the two viewpoints?
I have managed to calculate the fundamental matrix and, since K is known, the essential matrix as well. Using David Nister's efficient solution to the Five-Point Relative Pose Problem I've managed to get 4 possible solutions, but:
the constraint on the essential matrix E ~ U * diag(s, s, 0) * V' doesn't always hold, causing incorrect results.
[EDIT]: taking the average singular value seems to correct the results :) one down
How can I tell which one of the four is the correct one?
Thanks
Your solution to point 1 is correct: diag( (s1 + s2)/2, (s1 + s2)/2, 0).
As for telling which one of the four solutions is correct, only one will give positive depths for all points with respect to the camera frame. That's the one you want.
Code for checking which solution is correct can be found here: http://cs.gmu.edu/%7Ekosecka/examples-code/essentialDiscrete.m from http://cs.gmu.edu/%7Ekosecka/bookcode.html
They use the determinants of U and V to determine the solution with the correct orientation. Look for the comment "then four possibilities are". Since you're only estimating the essential matrix, it's susceptible to noise and does not behave well at all if all of the points are coplanar.
Also, the translation is only recovered to within a constant scaling factor, so the fact that you're seeing a normalized translation vector of unit magnitude is exactly correct. The reason is that the depth is unknown and estimated to be 1. You'll have to find some way to recover the depth as in the code for the eight-point algorithm + 3d reconstruction (Algorithm 5.1 in the bookcode link.)
The book the sample code above is taken from is also a very good reference. http://vision.ucla.edu/MASKS/ Chapter 5, the one you're interested in, is available on the Sample Chapters link.
Congrats on your hard work, sounds like you've tried hard to learn these techniques.
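If you end up using OpenCV rather than the MATLAB code above, a minimal sketch of the same selection step might look like this. It assumes pts1 and pts2 are matched Nx2 point arrays and K is your calibration matrix, and it leans on cv2.recoverPose, which performs exactly this positive-depth (cheirality) test over the four candidates:

import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    # Estimate E robustly; the RANSAC threshold of 1.0 px is an arbitrary choice.
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC, threshold=1.0)
    # recoverPose resolves the fourfold ambiguity by keeping the (R, t) that puts
    # the triangulated points in front of both cameras.
    n_good, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t   # t is only defined up to scale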
For actual production-strength code, I'd advise downloading libmv and Ceres, and re-coding your solution using them.
Your two questions are really one: invalid solutions are rejected based on the data you have collected. In particular, Nister's (as well as Stewenius's) algorithm is normally used in the inner loop of a RANSAC-like solver, which selects for the solution with the best fit / max number of inliers.

How to use the A* path finding algorithm on a grid less 2D plane?

How can I implement the A* algorithm on a gridless 2D plane with no nodes or cells? I need the object to maneuver around a relatively high number of static and moving obstacles in the way of the goal.
My current implementation is to create eight points around the object and treat them as the centers of imaginary adjacent squares that might be a potential position for the object. Then I calculate the heuristic function for each and select the best one. The distances between the starting point and the movement point, and between the movement point and the goal, I calculate the normal way with the Pythagorean theorem. The problem is that this way the object often ignores all obstacles and, even more often, gets stuck moving back and forth between two positions.
I realize how silly my question might seem, but any help is appreciated.
Create an imaginary grid at whatever resolution is suitable for your problem: As coarse grained as possible for good performance but fine-grained enough to find (desirable) gaps between obstacles. Your grid might relate to a quadtree with your obstacle objects as well.
Execute A* over the grid. The grid may even be pre-populated with useful information like proximity to static obstacles. Once you have a path along the grid squares, post-process that path into a sequence of waypoints wherever there's an inflection in the path. Then travel along the lines between the waypoints.
By the way, you do not need the actual distance (cf. your mention of the Pythagorean theorem): A* works fine with an estimate of the distance. Manhattan distance is a popular choice: |dx| + |dy|. If your grid game allows diagonal movement (or the grid is "fake"), simply max(|dx|, |dy|) is probably sufficient.
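For concreteness, the two heuristics mentioned above might look like this in Python (assuming integer grid coordinates):

def manhattan(a, b):
    """Admissible heuristic for 4-directional movement: |dx| + |dy|."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def chebyshev(a, b):
    """Heuristic for grids that allow diagonal moves: max(|dx|, |dy|)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))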
Uh. The first thing that comes to my mind is that at each point you need to calculate the gradient, or a vector, to find the direction to go in the next step. Then you move by a small epsilon and repeat.
This basically creates a grid for you; you could vary the cell size by choosing a small epsilon. By doing this instead of using a fixed grid you should be able to move at finer angles in each step, smaller than the 45° of your 8-point example.
Theoretically you might be able to solve the formulas symbolically (eps against 0), which could lead to an optimal solution... just a thought.
How are the obstacles represented? Are they polygons? You could then use the polygon vertices as nodes. If the obstacles are not represented as polygons, you could generate some sort of convex hull around them and use its vertices for navigation. EDIT: I just realized you mentioned that you have to navigate around a relatively high number of obstacles. Using the obstacle vertices might be infeasible with too many obstacles.
I do not know about moving obstacles; I believe A* doesn't find an optimal path with moving obstacles.
You mention that your object moves back and forth; A* should not do this. A* visits each movement point only once. This could be an artifact of generating movement points on the fly, or of the moving obstacles.
I remember encountering this problem in college, but we didn't use an A* search. I can't remember the exact details of the math but I can give you the basic idea. Maybe someone else can be more detailed.
We're going to create a potential field out of your playing area that an object can follow.
Take your playing field and tilt or warp it so that the start point is at the highest point, and the goal is at the lowest point.
Poke a potential well down into the goal, to reinforce that it's a destination.
For every obstacle, create a potential hill. For non-point obstacles, which yours are, the potential field can increase asymptotically at the edges of the obstacle.
Now imagine your object as a marble. If you placed it at the starting point, it should roll down the playing field, around obstacles, and fall into the goal.
The hard part, the math I don't remember, is the equations that represent each of these bumps and wells. If you figure that out, add them together to get your final field, then do some vector calculus to find the gradient (just like towi said) and that's the direction you want to go at any step. Hopefully this method is fast enough that you can recalculate it at every step, since your obstacles move.
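A rough Python sketch of this potential-field idea, assuming point obstacles and an arbitrary inverse-square repulsion (all constants are made up), could look like:

import math

ATTRACT = 1.0     # pull toward the goal
REPULSE = 50.0    # push away from each obstacle
EPS = 1e-6

def step_direction(pos, goal, obstacles):
    """Return a unit vector pointing 'downhill' in the combined potential field."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist_goal = math.hypot(gx, gy) + EPS
    # Attractive component: straight toward the goal.
    fx, fy = ATTRACT * gx / dist_goal, ATTRACT * gy / dist_goal
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) + EPS
        # Repulsive component: grows rapidly near the obstacle.
        fx += REPULSE * dx / d**3
        fy += REPULSE * dy / d**3
    norm = math.hypot(fx, fy) + EPS
    return fx / norm, fy / norm

# The object moves a small step along this direction and recomputes, which also
# handles moving obstacles, since the field is rebuilt every step.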
Sounds like you're implementing the Wumpus game based on Norvig and Russell's discussion of A* in Artificial Intelligence: A Modern Approach, or something very similar.
If so, you'll probably need to incorporate obstacle detection as part of your heuristic function (hence you'll need sensors that alert your agent to the signs of obstacles, as seen here).
To solve the back-and-forth issue, you may need to store the traveled path, so you can tell if you've already been to a location, and have the heuristic function examine the past N moves (say 4) and use that as a tie-breaker (i.e. if I can go north and east from here, and my last 4 moves have been east, west, east, west, go north this time).
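A tiny sketch of such a tie-breaker, assuming moves are stored as compass strings and only the last four are kept (all names are hypothetical):

from collections import deque

recent = deque(maxlen=4)  # the "past N moves" window

def break_tie(candidates):
    """Among equally scored moves, prefer a direction we haven't just oscillated in."""
    for move in candidates:
        if move not in recent:   # e.g. avoid the east/west ping-pong, go north instead
            return move
    return candidates[0]

def record(move):
    recent.append(move)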

Basic pathfinding with obstacle avoidance in a continuous 2D space

I'm writing a simulation in which a creature object should be able to move towards some other arbitrary object in the environment, sliding around obstacles rather than doing any intelligent pathfinding. I'm not trying to have it plan a path -- just to move in one general direction, and bounce around obstacles.
It's a 2D environment (overhead view), and every object has a bounding rectangle for collision detection. There is no grid, and I am not looking for an A* solution.
I haven't been able to find any tutorials on this kind of "dumb" collision-based pathfinding, so I might not be describing this using the most common terms.
Any recommendations on how to implement this (or links to tutorials)?
Expanding on what Guillaume said about obstacle avoidance, a technique that would work well for you is anti-gravity movement. You treat local obstacles as point sources of antigravity, the destination as gravity, and your computer controlled character will slip (like soap!) around the obstacles to get to the destination.
You can combine two steering algorithms (see the sketch below):
Seek: you apply a steering force along the difference between the desired velocity towards the target and the current velocity.
Obstacle avoidance: you anticipate the vehicle's future position using a box whose length is a constant time multiplied by the current velocity of the vehicle. Any obstacle that intersects this box is a potential collision threat. The nearest such threat is chosen for avoidance. To avoid an obstacle, a lateral steering force is applied opposite to the obstacle's center. In addition, a braking (deceleration) force is applied. These forces vary with urgency (the distance from the tip of the box to the point of potential collision): steering varies linearly, braking varies quadratically.
You can find more on the website "Steering Behaviors For Autonomous Characters"
regards
Guillaume
PS: this assumes you're using a point/velocity/acceleration model for the object's movement.
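Here is a minimal sketch of the seek force under that point/velocity/acceleration assumption (the speed and force limits are arbitrary); obstacle avoidance would add a second force computed from the look-ahead box:

import math

MAX_SPEED = 4.0
MAX_FORCE = 0.2

def seek_force(pos, vel, target):
    """Steering force = desired velocity (toward the target, at max speed) - current velocity."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    desired = (dx / d * MAX_SPEED, dy / d * MAX_SPEED)
    fx, fy = desired[0] - vel[0], desired[1] - vel[1]
    # Clamp the steering force so the turn rate stays bounded.
    f = math.hypot(fx, fy)
    if f > MAX_FORCE:
        fx, fy = fx / f * MAX_FORCE, fy / f * MAX_FORCE
    return fx, fy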
Maybe you could use Pledge's algorithm
Whenever your creature, travelling in direction v, collides with a wall whose direction is represented by vector w, the direction in which you need to "slide" is given by the vector that is the projection of v onto w. This can be found using
v . w
--------- w
|w|*|w|
where . is the vector dot product and |w| is the magnitude of vector w ( = sqrt(w . w)). If w is a unit vector, this becomes simply
(v . w) w
Using the resulting vector as your creature's speed will mean your creature travels quickly when it just "grazes" the wall, and slowly when it hits the wall nearly dead-on. (This is how most first-person shooter games manage collisions for the human player.)
If instead you want your creature to always travel at full speed, you only need the sign of v . w -- you will always be travelling either in the direction the wall faces (w) or the opposite direction (-w).
The issue that you will have is when your creature hits the wall dead-on. In that case your projected vector will be (0, 0), and you need some other technique to decide which way (w or -w) to go. The usual approach here is A*, although this may be unnecessary if your environment possesses enough structure.
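A small Python sketch of this sliding projection, assuming v and w are 2D tuples with w pointing along the wall:

import math

def slide_velocity(v, w):
    """Project velocity v onto the wall direction w: (v . w / |w|^2) * w."""
    dot = v[0] * w[0] + v[1] * w[1]
    w_sq = w[0] * w[0] + w[1] * w[1]
    if w_sq == 0:
        return (0.0, 0.0)
    return (dot / w_sq * w[0], dot / w_sq * w[1])

def slide_full_speed(v, w):
    """Keep full speed: move along w or -w depending on the sign of v . w."""
    dot = v[0] * w[0] + v[1] * w[1]
    mag = math.hypot(w[0], w[1]) or 1.0
    speed = math.hypot(v[0], v[1])
    sign = 1.0 if dot >= 0 else -1.0
    return (sign * speed * w[0] / mag, sign * speed * w[1] / mag)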
I posted a pathfinding algorithm in C# a while back.
Here's the link.
You can try to use it as a starting point, i.e. you could modify the function that checks whether the next cell is valid so that it also checks for obstacles, and you could feed it small intervals instead of the start and end points, kind of like multiple mini-pathfinding routes.
(The text is in Spanish, but you can download the application from the link at the top.)
