1/4 Symmetry Model in Abaqus

I have a full 3D model of a test specimen. I want to use symmetry conditions and model only 1/4 of it so that I can save calculation time. I apply displacements to the nodes at the centre of the specimen and calculate the forces acting on the model from a reference point in space using an MPC constraint; I calculate the forces from the right-hand side. The other end of the specimen is fixed. I want to ask what boundary conditions I should apply to the quarter model. The original distance between the two nodes where I apply the displacements is 30 mm. Should I also change that distance between the nodes when modelling the 1/4 model? Thanks in advance for your help.

Related

p_WC vs p_WCa and p_WCb

It makes sense that we have two contact points p_WCa and p_WCb, that come from PenetrationAsPointPair. The bottom right corner of the green body is p_WCb, and the top left of the blue body is p_WCa.
But what is p_WC, which comes from contact_results.point_pair_contact_info(cidx).contact_point()?
Which contact point is most appropriate for calculating the torque (using the body center of mass as the torque reference point), for purposes of static equilibrium calculation? I'm inclined to say F_AB_W should be associated with p_WCa.
Ultimately, to satisfy Newton's laws, equal and opposite forces must be applied to both bodies at the same point. Neither p_WCa nor p_WCb is necessarily that point. They are what we call "witness" points. They are intimately related to the penetration depth and contact normal, but they aren't the contact point, per se. The displacement between the two points, projected onto the contact normal direction, is the penetration depth. However, we do use them to compute the contact point.
The contact point is essentially a linear interpolation of those points. Remember that the point contact model is a compliant model. It allows for small deformations of the body (relative to its volume and mass). But the two bodies don't necessarily deform the same amount. If one object, let's say object A, is much stiffer than object B, it deforms less and the effective contact point will be close to the stiff body's surface -- close to p_WCa. The interpolation factor is, in fact, a function of the two bodies' elasticity values. If they are equally compliant, it is the mid-point, etc.
So, at the end of the day, the geometric contact characterization produces contact normal, depth, and witness points. The contact solver produces forces and the point to apply the force, and that's what you see in the contact result's contact_point.
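To make the idea concrete, here is a minimal sketch (plain NumPy, not Drake's internal code) of a stiffness-weighted blend of the two witness points, plus the torque that the contact force produces about a body's centre of mass, which is what the static-equilibrium question above needs. The blending formula, names, and numbers are illustrative assumptions; Drake's actual interpolation factor comes from its documented combination of the two bodies' elasticity values.

    import numpy as np

    def blended_contact_point(p_WCa, p_WCb, k_A, k_B):
        """Illustrative stiffness-weighted blend of the two witness points.

        If body A is much stiffer (k_A >> k_B) the result stays near p_WCa, the
        witness point on A's surface; equal stiffnesses give the midpoint. This
        mimics the behaviour described above, not Drake's exact formula.
        """
        w = k_A / (k_A + k_B)                       # interpolation factor in [0, 1]
        return w * np.asarray(p_WCa) + (1.0 - w) * np.asarray(p_WCb)

    def torque_about_com(p_WC, p_WCoM, f_W):
        """Torque of a contact force f_W applied at p_WC, taken about the body's
        centre of mass p_WCoM; all quantities expressed in the world frame."""
        return np.cross(np.asarray(p_WC) - np.asarray(p_WCoM), np.asarray(f_W))

    # Hypothetical numbers: witness points 2 mm apart, body A ten times stiffer than B.
    p_WC = blended_contact_point([0.0, 0.0, 0.001], [0.0, 0.0, -0.001], k_A=1e7, k_B=1e6)
    tau = torque_about_com(p_WC, p_WCoM=[0.05, 0.0, 0.0], f_W=[0.0, 0.0, 10.0])
    print(p_WC, tau)

For the equilibrium check in the question, the same force negated and applied at the coincident point on the other body contributes the equal-and-opposite torque.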
You can read more about it in Drake's documentation.
As a footnote: the effective (combined) elasticity is also what defines the magnitude of the normal force.
Following on Sean's excellent answer (which may have been more than sufficient to answer the original question): the set of forces exerted on a body A by contact with a body B can be summed (added together) to create a net/resultant force. That resultant force is applied to a certain point Ap of (fixed to) body A. There is an equal/opposite force applied to a point Bp of (fixed to) body B. Although points Ap and Bp are at the same location (as Sean describes above), points Ap and Bp are not the same point. So there are two contact points, namely Ap (fixed to body A) and Bp (fixed to body B). They share a common location, but may have different velocities and/or accelerations.

Fast way to find corresponding objects across stereo views

Thanks for taking the time to read this.
We have fixed stereo pairs of cameras looking into a closed volume. We know the dimensions of the volume and have the intrinsic and extrinsic calibration values for the camera pairs. The objective is to identify the 3D positions of multiple duplicate objects accurately.
This naturally leads to what is described as the correspondence problem in the literature. We need a fast technique to match ball A from image 1 with ball A from image 2, and so on.
At the moment we use the properties of epipolar geometry (the fundamental matrix) to match the balls from different views in a crude way. It works OK when the objects are sparse, but gives a lot of false positives if the objects are densely scattered. Since ball A in image 1 can lie anywhere on the epipolar line going across image 2, this leads to mismatches when multiple objects lie on that line and look similar.
Is there a way to re-model this into a 3D line intersection problem or something similar? Since ball A in image 1 can only take a bounded range of 3D positions, is there a way to represent it as a line in 3D and do an intersection test to find the closest matching ball in image 2?
Or is there a way to generate a sparse list of 3D values corresponding to each 2D grid of pixels in images 1 and 2, and do an intersection test on these values to find the matching objects across the two cameras?
Because the objects can be identical, OpenCV feature-matching approaches like FLANN and ORB don't work.
Any ideas in the form of formulae or code are welcome.
Thanks!
Sak
You've set yourself quite a difficult task. Because one point can occlude another in a view, it's not generally possible even to count the number of points. If each view has two points, but those points fall on the same epipolar line on the other view, then you can count anywhere between 2 and 4 points.
Assuming you want to minimize the points, this starts to look like Minimum Vertex Cover in a dense bipartite graph, with each edge representing the association of a point from each view, and the weight of each edge taken from the registration error of associating the corresponding points (vertices) from each view. MVC is, of course, NP-hard, and if you treat the problem as a general MVC problem then you'll never do better than O(n^2) because that's how many edges there are to examine.
Your particular MVC problem might have structure that can be exploited to perform a more efficient approximation. In particular, I might suggest calculating the epipolar lines in one view, ordering them by angle from the epipole, and similarly sorting the points in that view from the epipole. You can then iterate over the two sorted lists roughly in parallel, greedily associating each point with a nearby epipolar line. Then you can do the same in the other view, but only looking at points in that view which had not yet been associated during the previous pass. I think that a more regimented and provably optimal approach might be possible with dynamic programming (particularly if you strictly bound the registration error) which wouldn't require the second pass, but I can't sketch it out offhand.
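For what it's worth, here is a minimal sketch of the epipolar-distance association in Python/OpenCV, assuming you already have the fundamental matrix F and the detected ball centres in each image. It uses a globally optimal one-to-one assignment (SciPy's Hungarian solver) instead of the greedy angular sweep described above; the pixel threshold and all names are illustrative assumptions.

    import cv2
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_by_epipolar_distance(pts1, pts2, F, max_dist_px=3.0):
        """Associate detections across a stereo pair by point-to-epipolar-line distance.

        pts1, pts2: (N, 2) and (M, 2) arrays of ball centres in image 1 and image 2.
        F:          3x3 fundamental matrix (points in image 1 -> lines in image 2).
        Returns a list of (i, j) index pairs whose epipolar distance is below the threshold.
        """
        # Epipolar line in image 2 for every point of image 1: (a, b, c) with a*x + b*y + c = 0.
        lines2 = cv2.computeCorrespondEpilines(
            pts1.reshape(-1, 1, 2).astype(np.float32), 1, F).reshape(-1, 3)

        # Cost matrix: distance of every candidate point in image 2 to every epipolar line.
        a, b, c = lines2[:, 0:1], lines2[:, 1:2], lines2[:, 2:3]
        x, y = pts2[:, 0][None, :], pts2[:, 1][None, :]
        cost = np.abs(a * x + b * y + c) / np.sqrt(a**2 + b**2)

        # Globally optimal one-to-one assignment, then drop pairs that are still too far apart.
        rows, cols = linear_sum_assignment(cost)
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_dist_px]

This is still purely 2D reasoning, so identical balls sharing an epipolar line remain ambiguous; it only guarantees that each detection is used at most once.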
For different types of objects it's easy to find the match using sum-of-absolute-differences. For similar objects, the idea(s) could lead to a good paper. Anyway, here's one quick algorithm:
1. Detect the two balls in the first image (using object detection methods).
2. Divide the image into two segments containing the two balls.
3. Repeat steps 1 & 2 for the second image.
4. The direction of the segments in the two images should give the correspondence of the two balls.
Try this, it should work for two balls.
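Coming back to the question's idea of turning this into a 3D intersection test: since the rig is fully calibrated, one cheap consistency check is to triangulate every candidate pair and reject pairs whose 3D point reprojects badly or falls outside the known volume. A rough sketch, where the projection matrices P1, P2 come from your calibration and everything else is an illustrative assumption:

    import cv2
    import numpy as np

    def triangulate_and_score(P1, P2, pt1, pt2):
        """Triangulate one candidate correspondence and return the 3D point plus
        its mean reprojection error (pixels) over the two views."""
        X_h = cv2.triangulatePoints(P1, P2,
                                    np.float64(pt1).reshape(2, 1),
                                    np.float64(pt2).reshape(2, 1))
        X = (X_h[:3] / X_h[3]).ravel()              # homogeneous -> Euclidean 3D point

        def reproj_error(P, pt):
            x = P @ np.append(X, 1.0)
            return np.linalg.norm(x[:2] / x[2] - np.float64(pt))

        return X, 0.5 * (reproj_error(P1, pt1) + reproj_error(P2, pt2))

Pairs whose triangulated point lands outside the box you already know, or whose reprojection error is large, can be discarded before (or instead of) the epipolar assignment sketched above.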

Abaqus, how can boundary conditions affect prescribed material response?

I'm simulating a Representative Volume Element (RVE) to estimate its homogenised properties under pure compression in the X direction.
After some attempts I decided to try just an isotropic material model with a one-element simulation, but with similar boundary conditions that emulate Periodic Boundary Conditions (PBC) in my loading case: symmetry on X on one side, symmetry on Y on both sides, and symmetry on Z on both sides as well. One side is subjected to a uniform displacement in the X direction. I tested the Extended Drucker-Prager model with perfect plasticity after 120 MPa.
What I found in the results is that after yield had begun, the stresses (Mises, pressure, principal stresses and S11) kept rising up to the end of the simulation in the same "straight" manner they had been rising before yield. It doesn't even look like plastic behaviour.
If I change my boundary conditions to a simple support on one side perpendicular to X and retain the displacement on the opposite side, the resulting stress picture begins to look more "plastic-like".
Can anyone explain the behaviour of the material model in the first case (PBC)? Thanks in advance.

How do I combine two electromagnetic readings to predict the position of a sensor?

I have an electromagnetic sensor and electromagnetic field emitter.
The sensor will read power from the emitter. I want to predict the position of the sensor using the reading.
Let me simplify the problem: suppose the sensor and the emitter are in a 1-dimensional world where there is only a position X (not X, Y, Z), and the emitter emits power as a function of distance squared.
From the painted image below, you will see that the emitter is drawn as a circle and the sensor is drawn as a cross.
E.g. if the sensor is 5 meters away from the emitter, the reading you get on the sensor will be 5^2 = 25. So the correct position will be either 0 or 10, because the emitter is at position 5.
So, with one emitter, I cannot know the exact position of the sensor. I only know that there is a 50% chance it's at 0 and a 50% chance it's at 10.
So if I have two emitters like the following image:
I will get two readings. And I can know exactly where the sensor is. If the reading is 25 and 16, I know the sensor is at 10.
So from this fact, I want to use 2 emitters to locate the sensor.
Now that I've explained the situation, my problems are as follows:
1. The emitter has a more complicated function of distance: it's not just distance squared, and it also has noise. So I'm trying to model it using machine learning.
2. In some ranges the emitter doesn't work well. E.g. if you are between 3 and 4 meters away, the emitter will always give you a fixed reading of 9 instead of going from 9 to 16.
3. When I train the machine learning model with 2 inputs, the prediction is very accurate. E.g. if the input is 25, 36, the output will be position 0. But it means that after training, I cannot move the emitters at all. If I move one of the emitters further apart, the prediction breaks immediately, because the reading will be something like 25, 49 when the right emitter moves 1 meter to the right, and the prediction can be anything because the model has not seen this input pair before. And I cannot afford to train the model on all possible distances between the 2 emitters.
4. The emitters can be slightly non-identical. The difference will be in the scale, e.g. one of the emitters can give a 10% bigger reading. But you can ignore this problem for now.
My question is: how do I make the model work when the emitters are allowed to move? Give me some ideas.
Some of my ideas:
1. I think I have to figure out the positions of the two emitters relative to each other dynamically. But after knowing the positions of both emitters, how do I tell that to the model?
2. I have tried training each emitter separately instead of pairing them as input. But that means there are many positions that cause conflicts: when you get a reading of 25, the model will predict the average of 0 and 10, because both are valid positions for reading = 25. You might suggest training to predict distance instead of position; that would be possible if there were no problem number 2. But because of problem number 2, the prediction between 3 and 4 meters away will be wrong: the model will get 9 as input, and the output will be the average distance of 3.5 meters, or somewhere between 3 and 4 meters.
3. Use the model to predict a position probability density function instead of predicting the position. E.g. when the reading is 9, the model should predict a uniform density function from 3 to 4 meters, and then you can combine the 2 density functions from the 2 readings somehow. But I think it's not going to be as accurate as modelling the 2 emitters together, because the density function can be quite complicated; we cannot assume a normal distribution or even a uniform distribution.
4. Use some kind of optimizer to predict the position separately for each emitter, based on the assumption that both predictions must be the same. If the predictions are not the same, the optimizer must try to move them so that they end up at exactly the same point. Maybe reinforcement learning, where the actions are "move left", "move right", etc.
I'm telling you my ideas in the hope that they evoke some ideas in you, because this is my best so far and it doesn't solve the issue elegantly yet.
So ideally, I would want an end-to-end model that is fed the 2 readings and gives me the position even when the emitters have been moved. How would I go about that?
PS. The emitters are only allowed to move before usage. During usage or prediction, the model can assume that the emitters will not be moved anymore. This gives you time to run an emitter-position calibration algorithm before usage. Maybe this will be helpful for you to know.
You're confusing memoizing a function with training a model; the former is merely recalling previous results, while the latter is the province of AI. To train with two emitters, you need to give the model useful input data and appropriate labels (right answers), and design your model topology such that it can be trained to a useful functional response for cases it has never seen.
Let the first emitter be at position 0 by definition. Your data then consists of the position of the second emitter and the two readings. The label is the sensor's position. Your given examples would look like this:
emit2   read1   read2   sensor
1       25      36      0
1       25      16      5
2       25      49      0
1.5     25      9       5       (a distance of 3 < d < 4 always reads as 3^2 = 9)
Since you know that you have a squared relationship in the underlying physics, you need to include quadratic capability in your model. To handle noise, you'll want some damping capability, such as an extra node or two in a hidden layer after the first. For more complex relationships, you'll need other topologies, non-linear activation functions, etc.
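To make that concrete, here is a rough sketch in Python/scikit-learn of the three-input model described above (emitter-2 position plus the two readings, with emitter 1 fixed at 0). The synthetic data generator, the dead-zone clipping, the layer sizes, and every name are illustrative assumptions, not part of the original post:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    def reading(distance):
        """Toy emitter response: distance squared, with the 3-4 m dead zone from
        the question (it always reads 3^2 = 9 there) and a little noise."""
        d = np.where((distance > 3) & (distance < 4), 3.0, distance)
        return d**2 + rng.normal(0.0, 0.1, size=np.shape(distance))

    # Emitter 1 sits at 0 by definition; emitter 2 and the sensor move.
    n = 20000
    emit2 = rng.uniform(0.5, 10.0, n)                 # position of the second emitter
    sensor = rng.uniform(-5.0, 15.0, n)               # true sensor position (the label)
    X = np.column_stack([emit2,
                         reading(np.abs(sensor)),           # read1: distance to emitter 1
                         reading(np.abs(sensor - emit2))])  # read2: distance to emitter 2
    y = sensor

    # Two small hidden layers give enough non-linearity to undo the squared
    # response; scaling the inputs helps the network converge.
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=2000, random_state=0))
    model.fit(X, y)

    # At prediction time the (now fixed) emitter-2 position is fed in with the readings.
    print(model.predict([[6.0, reading(5.0), reading(1.0)]]))   # expect roughly 5.0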
Can you take it from there?

Clustering K-means algorithm for elongated data set

I have a question while programming the K-means algorithm in Matlab. Why is the K-means algorithm not suitable for classifying an elongated data set?
In short: draw some thick lines on paper. Can you really represent each one with a single point? How would single points give information about orientation?
K-means assigns each data point to the nearest centroid. That is to say, for each centroid c, all points whose distance from c is smaller (in comparison to all other centroids) will be assigned to c. And since a (hyper)sphere is, in fact, the set of all points with distance less than or equal to some value from a center, I think it is easy to see why the resulting clusters tend to be spherical. (To be exact, k-means practically creates a Voronoi diagram in the vector space.)
Elongated clusters however, don't necessarily satisfy the requirement that all their points are closer to their "center of mass" than to some other cluster's center.
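A quick way to see this (scikit-learn, purely as an illustration; the data generator and all numbers are assumptions): stretch two isotropic blobs into long, parallel clusters and watch k-means cut across them, because each centroid's Voronoi cell is convex and roughly round.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Two elongated, parallel clusters: isotropic blobs stretched 10x along x.
    stretch = np.array([[10.0, 0.0],
                        [0.0, 1.0]])
    a = rng.normal(size=(300, 2)) @ stretch + [0.0, 0.0]
    b = rng.normal(size=(300, 2)) @ stretch + [0.0, 8.0]
    X = np.vstack([a, b])
    true_labels = np.repeat([0, 1], 300)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Agreement with the true elongated clusters (ignoring label permutation).
    agreement = max(np.mean(labels == true_labels), np.mean(labels != true_labels))
    print(f"points assigned to the 'right' elongated cluster: {agreement:.2f}")
    # Typically well below 1.0 (around 0.5): k-means prefers to split each long
    # cluster in half across its long axis rather than respect its elongated shape.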
It is also difficult to choose the initial cluster center points for an elongated data set, but they have a powerful effect on the result: you may get different results when you choose different points.
You will get only one result in this case when you choose 3 initial points:
But it is different for an elongated data set.

Resources