Abaqus moving reaction front for a buckling plate

I am trying to simulate the lithiation of a beam in Abaqus, during which a plate buckles. I partitioned the plate into three sections, where the top and bottom represent the lithiated material (with different material properties) and the middle section represents the original material.
My goal is to apply heat (to stand in for the chemical reaction for now), take the stresses and strains from that analysis increment, and calculate a new thickness for the lithiated material (I then change the partition height so that the lithiated section covers more of the plate). This whole process then repeats: in every increment, stress and strain are extracted, the new lithiated-material thickness is calculated, and the lithiated section grows, while the whole plate continues to buckle.
However, I am stuck on whether to use the restart function or to import analysis results from a previous analysis, since with each increment the beam bends further and the material also expands. Any thoughts on how to change the partition height at each increment while incorporating the stresses or displacements of the nodes from the previous analysis? Thank you.
This is a diagram of the beam (image omitted); the lithiated section is labeled T and its thickness is denoted h.

It's an interesting problem. Please correct me if this link isn't a good description.
If you're already using a heat transfer approach to approximate the chemical reaction, I wonder why you can't simulate the deformations by calculating thermal stress and strain? Is there a way to calculate approximate thermal strain properties to stand in for the effect you want?
I'm thinking that you wouldn't need as much manual intervention to step through the transient that way.
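If you do end up driving each step from the previous increment's results, one way to automate the loop is Abaqus's Python scripting interface. Below is a minimal sketch, assuming an ODB called plate_lithiation, a step called Heat-Step, and a purely illustrative thickness-update rule; all three are placeholders for your actual names and growth law:

# Run with Abaqus's bundled interpreter: abaqus python update_thickness.py
from odbAccess import openOdb

odb = openOdb('plate_lithiation.odb')        # hypothetical ODB name
frame = odb.steps['Heat-Step'].frames[-1]    # last increment of the step
stress = frame.fieldOutputs['S']             # Cauchy stress field

# Average the Mises stress over the whole model (restrict this to the
# lithiated section's element set in a real script)
mises = [v.mises for v in stress.values]
avg_mises = sum(mises) / len(mises)

# Placeholder growth law: new thickness from the old one and the stress
h_old = 0.1
h_new = h_old * (1.0 + 1e-6 * avg_mises)
print('new lithiated thickness:', h_new)
odb.close()

The resulting h_new would then feed a script that rebuilds the partition and submits the next job.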

Hard time finding SARIMA parameters from ACF and PACF

I'm a beginner in time series analysis.
I need help finding the SARIMA(p,d,q,P,D,Q,S) parameters.
This is my dataset (not shown): sample time 1 hour, season 24 hours.
S=24
Using the adfuller test I get p = 6.202463523469663e-16, therefore stationary.
d=0 and D=0.
Plotting ACF and PACF (plots omitted):
Using this post:
https://arauto.readthedocs.io/en/latest/how_to_choose_terms.html
I learned to "start counting how many “lollipop” are above or below the confidence interval before the next one enter the blue area."
So looking at the PACF I can see maybe 5 before one is below the confidence interval, therefore non-seasonal p=5 (AR).
But I am having a hard time finding the q (MA) parameter from the ACF.
"To estimate the amount of MA terms, this time you will look at ACF plot. The same logic is applied here: how much lollipops are above or below the confidence interval before the next lollipop enters the blue area?"
But in the ACF plot not a single lollipop is inside the blue area.
Any tips?
There are many different rules of thumb, and everyone has their own views. I would say that in your case you probably do not need the MA component at all. The rule with the lollipops refers to ACF/PACF plots that have a sharp cut-off after a certain lag, for example in your PACF after the second or third lag. Your ACF is trailing off, which can be an indicator for not using the MA component. You do not necessarily have to use it, and sometimes the data is simply not suited to an MA model. A good tip is to always check what pmdarima's auto_arima() function returns for your data:
https://alkaline-ml.com/pmdarima/tips_and_tricks.html
https://alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html
Looking at your autocorrelation plot you can clearly see the seasonality. Just because the ADF test tells you the series is stationary does not mean it necessarily is. You should at least check whether your model works better with seasonal differencing (D).
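A minimal sketch of that check, assuming your hourly series is in a 1-D array y (the keyword values here are illustrative, not tuned):

import pmdarima as pm

model = pm.auto_arima(
    y,                # your hourly series
    seasonal=True,
    m=24,             # season length: 24 hours
    d=None, D=None,   # let auto_arima run its own (seasonal) differencing tests
    stepwise=True,    # faster stepwise search instead of a full grid
    trace=True,       # print each candidate model and its AIC
)
print(model.summary())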

How to Split and Merge Erroneously Segmented Regions

I have performed watershed segmentation on a picture of clustered cells. There seem to be many clusters of cells that have not been segmented enough, or not at all. There are also single cells that have been oversegmented. What methods could I use to merge the oversegmented single cells and further split the undersegmented clusters of cells?
Edit: The criterion for deciding whether a cell has been over- or undersegmented will be whether the area of the cell falls within a certain range around the average area of a normal-sized cell. I'm not sure if this is a good idea, though. Any help will be appreciated, thanks.
You cannot expect to make zero errors and segment everything perfectly. Maybe you have smaller or larger cells, or maybe your image quality was really bad.
If you know that cells have an area in a certain range, just adapt the watershed parameters (the threshold) until on average the estimated area is consistent with your prior knowledge.
If you have really large segments (large area, more than twice the average area or so) let the watershed run again locally with a higher threshold.
If you have locally really small segments, let the watershed run again locally with a smaller threshold.
I wouldn't do much more than that, except perhaps trying another tool such as ilastik, which offers semi-automated segmentation.
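A minimal sketch of the "run the watershed again locally" idea with scikit-image; the 2x-area rule, marker strategy, and min_distance value are illustrative assumptions:

import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.measure import regionprops
from skimage.segmentation import watershed

def resegment_large(labels, expected_area, min_distance=5):
    """Re-run a marker-based watershed inside segments over 2x the expected area."""
    out = labels.copy()
    next_label = labels.max() + 1
    for region in regionprops(labels):
        if region.area <= 2 * expected_area:
            continue
        mask = labels == region.label
        # Peaks of the distance transform become new markers for this segment
        dist = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(dist, min_distance=min_distance, labels=mask)
        if len(peaks) < 2:
            continue  # nothing to split
        markers = np.zeros_like(labels)
        for i, (r, c) in enumerate(peaks):
            markers[r, c] = next_label + i
        out[mask] = watershed(-dist, markers, mask=mask)[mask]
        next_label = out.max() + 1
    return out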
You have to decide what the ideal or expected cell is for you; apparently it is some round shape with no reversal of curvature (i.e. the boundary goes around without reversing direction), in other words a simple shape. For that you can use shape features such as circularity; you need to determine the range of circularities that you accept.
For watershed I think it might be better to aim for oversegmentation; then shapes that are near each other can be merged based on whether the combined shape fulfils the criteria above. Other shape features can be used as well (elongatedness, etc.).
If you go for undersegmentation you have no choice (under the method you are using) but to repeat the segmentation on the remaining shapes.
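A minimal sketch of the circularity test (4πA/P² equals 1 for a perfect disc; the acceptance band is an illustrative assumption, and labels is a watershed label image):

import numpy as np
from skimage.measure import regionprops

def circularity(region):
    # 4*pi*area / perimeter^2: 1.0 for a disc, smaller for ragged shapes
    return 4 * np.pi * region.area / region.perimeter ** 2 if region.perimeter else 0.0

acceptable = [r for r in regionprops(labels) if 0.6 < circularity(r) <= 1.2]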

Algorithm for real-time tracking of several simple objects

I'm trying to write a program to track relative position of certain objects while I'm playing the popular game, League of Legends. Specifically, I want to track the x,y screen coordinates of any "minions" currently on the screen (The "minions" are the little guys in the center of the picture with little red and green bars over their heads).
I'm currently using the Java Robot class to send screen captures to my program while I'm playing, and am trying to figure out the best algorithm for locating the minions and tracking them as long as they stay on the screen.
My current thinking is to use a convolutional neural network to identify and locate the minions by the colored bars over their heads. However, I'd have to re-identify and locate the minions on every new frame, and this seems like it would be computationally expensive if I want to do it in real time (~10-60 fps).
These sorts of computer vision algorithms aren't really my specialization, but it seems reasonable that algorithms exist that exploit the fact that objects in videos move in a continuous manner (i.e. they don't jump around from frame to frame).
So, is there an easily implementable algorithm for accomplishing this task?
Since this is a computer game, the color of the bars should be constant. That might only fail to hold if dynamic illumination affects the health bars, which is highly unlikely.
Thus, just find all of the pixels with these specific colors. Then do some morphological operations and segment the image into blobs. By selecting only the blobs that fit some criteria, you can find the locations of the units.
I know that my answer does not involve video, but the operations are so simple that it should be very quick.
As for the tracking, for each point just find the closest one in the next frame.
Since the HUD location is constant, there should be no problem removing it.
Here is my quick and not-so-robust implementation in Matlab; it has a few limitations:
Units must be quite healthy (at least 40 pixels wide).
The bars do not overlap.
function FindUnits()
% Detect health bars by their (nearly) constant green color.
img = double(imread('c:\1.jpg'));
green = cat(3,149,194,151);                 % reference bar color (RGB)
% Per-pixel mean absolute difference from the reference color
d = abs(img - repmat(green,[size(img,1) size(img,2)]));
d = mean(d,3);
mask = d < 30;                              % color-distance threshold
% Morphological opening to remove speckles (1x1 is a no-op; enlarge if needed)
mask = imopen(mask,strel('square',1));
rp = regionprops(mask,'Centroid','MajorAxisLength','MinorAxisLength','Orientation');
% Keep only very elongated blobs -- the bars are long and thin
elongation = [rp.MajorAxisLength]./[rp.MinorAxisLength];
rp(elongation < 20) = [];
% Split the interleaved centroid coordinates into x and y vectors
xy = [rp.Centroid];
x = xy(1:2:end);
y = xy(2:2:end);
figure; imshow('c:\1.jpg'); hold on; scatter(x,y,'g');
end
And the results: (screenshot with the detected bars marked in green omitted)
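The "closest point in the next frame" matching can be as simple as the following sketch (in Python rather than Matlab; the gating distance max_dist is an assumed parameter):

import math

def match_nearest(prev_pts, curr_pts, max_dist=50.0):
    """Greedily pair each previous centroid with its nearest unclaimed current one."""
    pairs, taken = [], set()
    for p in prev_pts:
        best, best_d = None, max_dist
        for j, c in enumerate(curr_pts):
            if j in taken:
                continue
            d = math.hypot(c[0] - p[0], c[1] - p[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            taken.add(best)
            pairs.append((p, curr_pts[best]))
    return pairs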
You should use a model which includes a dynamic structure. For your object tracking purpose, Hidden Markov Models (HMMs), or more generally Dynamic Bayesian Networks, are very well suited, and you can find a lot of resources on HMMs online. The issues you are going to face, however, depend on your system model. If your system dynamics can easily be represented as a linear Gauss-Markov model, then a simple Kalman filter will do fine. In the case of nonlinear, non-Gaussian dynamics, you should use particle filtering, which is a Sequential Monte Carlo method. Both the Kalman filter and the particle filter are sequential methods, so you use the results at the current step to compute the result at the next time step. I suggest you check some online tutorials and papers on multiple object tracking via particle filters.
The main difficulty, in my opinion, will be the number of objects: you won't know how many objects you need to track, an object you are tracking can simply disappear (you may kill those little guys, or they may just leave the screen), and some other guy can enter the screen. Hope this helps.
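For the linear Gauss-Markov case, here is a minimal constant-velocity Kalman filter sketch for a single tracked point; the frame interval and noise covariances are illustrative assumptions to tune:

import numpy as np

dt = 1.0 / 30.0                      # assumed frame interval (30 fps)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float) # constant-velocity dynamics
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)  # we observe position only
Q = np.eye(4) * 1e-2                 # process noise (tune)
R = np.eye(2) * 4.0                  # measurement noise (tune)

def kalman_step(x, P, z):
    """One predict/update cycle; x is [px, py, vx, vy], z is the measured (px, py)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.asarray(z, float) - H @ x         # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P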

How to divide a runtime procedural generated world into chunks

I've been thinking of making a top-down 2D game with a pseudo-infinite world procedurally generated at runtime. I've read several articles about procedural generation and, maybe I've misread or misunderstood them, but I have yet to come across one explaining how to divide the world into chunks (as Minecraft apparently does).
Obviously, I need to generate only the part of the world that the player can currently see. If my game is tile-based, for example, I could divide the world into n*n chunks. If the player were at the border of such a chunk, I would also generate the adjacent chunk(s).
What I can't figure out is how exactly to take a procedural world generation algorithm and use it on one chunk at a time. For example, if I have an algorithm that generates a big structure (e.g. a castle, forest, or river) that would spread across many chunks, how can I adjust it to generate only one chunk, and afterwards the adjacent chunks?
I apologize if I completely missed something obvious. Thank you in advance!
Study the Midpoint displacement algorithm. Note that the points all along one side are based on the starting values of the corners. You can calculate them without knowing the rest of the grid.
I used this approach to generate terrain. I needed the edges of each 'chunk' of terrain to line up with the adjacent chunks. Using a variation of the Midpoint displacement algorithm I made it so that the height of each point along the edge of a chunk was calculated based only on values at the two corners. If I needed to add randomness, I seeded a random number generator with data from the two corners. This way, any two adjacent chunks could be generated independently and the edges were sure to match.
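A minimal sketch of that corner-seeding trick (the world seed and value ranges are illustrative assumptions):

import random

WORLD_SEED = 1234  # illustrative

def corner_height(cx, cy):
    """Height at a grid corner, derived only from its global coordinates."""
    return random.Random(hash((WORLD_SEED, cx, cy))).uniform(0.0, 1.0)

def edge_heights(c0, c1, n, roughness=0.5):
    """1-D midpoint displacement along the chunk edge between corners c0 and c1.

    Both chunks sharing this edge call it with the same corners, so their
    borders always match; n (number of segments) must be a power of two.
    """
    h = {0: corner_height(*c0), n: corner_height(*c1)}
    step, scale = n, roughness
    while step > 1:
        half = step // 2
        for i in range(half, n, step):
            # Seed only from globally shared data, so the result is reproducible
            rng = random.Random(hash((WORLD_SEED, c0, c1, i)))
            h[i] = (h[i - half] + h[i + half]) / 2 + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return [h[i] for i in range(n + 1)]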
You can use height-map approaches for other things, too. Instead of height, the data could determine vegetation type, population density, etc. Instead of chunks of a height map where the hills and valleys match up, you can have a vegetation map where the forests match up.
It certainly takes some creative programming for any kind of complex world.

Guiding a Robot Through a Path

I have a field filled with obstacles, I know where they are located, and I know the robot's position. Using a path-finding algorithm, I calculate a path for the robot to follow.
Now my problem is that guiding the robot from grid to grid creates a not-so-smooth motion: I start at A, turn the nose to point B, move straight until I reach point B, rinse and repeat until the final point is reached.
So my question is: What kind of techniques are used for navigating in such an environment so that I get a smooth motion?
The robot has two wheels and two motors; I change direction by running the motors in reverse.
EDIT: I can vary the speed of the motors. The robot is basically an Arduino plus an Ardumoto, and I can supply values between 0 and 255 to the motors in either direction.
You need feedback linearization for a differentially driven robot. This document explains it in Section 2.2. I've included relevant portions below:
The simulated robot required for the project is a differential drive robot with a bounded velocity. Since differential drive robots are nonholonomic, the students are encouraged to use feedback linearization to convert the kinematic control output from their algorithms to control the differential drive robots. The transformation follows: (equation image omitted) where v, ω, x, y are the linear, angular, and kinematic velocities. L is an offset length proportional to the wheel base dimension of the robot.
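The omitted transformation is the standard feedback-linearization result for a reference point offset a distance L ahead of the axle; reconstructed here from the standard derivation (the linked document's notation may differ slightly):

v = ẋ·cos θ + ẏ·sin θ
ω = (−ẋ·sin θ + ẏ·cos θ) / L

where θ is the robot's heading and (ẋ, ẏ) is the commanded velocity of the offset point.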
One control algorithm I've had pretty good results with is pure pursuit. Basically, the robot attempts to move to a point along the path a fixed distance ahead of it, so as the robot moves along the path, the look-ahead point also advances. The algorithm compensates for non-holonomic constraints by modeling possible paths as arcs.
Larger look-ahead distances create smoother movement, but they also cause the robot to cut corners, which may lead to collisions with obstacles. You can fix this problem by borrowing ideas from a reactive control algorithm called Vector Field Histogram (VFH). VFH essentially pushes the robot away from nearby walls. While it normally uses a range-finding sensor of some sort, you can compute the relative locations of the obstacles directly, since you know the robot pose and the obstacle locations.
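A minimal pure-pursuit sketch for a differential drive; the nominal speed, wheel base, and look-ahead distance are illustrative assumptions:

import math

def pure_pursuit(pose, goal, lookahead, v=1.0, wheel_base=0.3):
    """pose = (x, y, theta); goal = look-ahead point (gx, gy) on the path."""
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Lateral offset of the goal point in the robot frame
    ly = -math.sin(theta) * dx + math.cos(theta) * dy
    # Curvature of the arc through the goal: k = 2 * y_lateral / L^2
    k = 2.0 * ly / (lookahead ** 2)
    omega = v * k
    # Convert (v, omega) into left/right wheel speeds
    return v - omega * wheel_base / 2.0, v + omega * wheel_base / 2.0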
My initial thoughts on this (I'm at work so can't spend too much time):
It depends on how tight you want or need your corners to be (which in turn depends on how much clearance your path finder gives you from the obstacles).
Given the width of the robot, you can calculate the turning radius from the speeds of the two wheels. Assuming you want to go as fast as possible and that skidding isn't an issue, you would always keep the outside wheel at 255 and reduce the inside wheel to the speed that gives you the required turning radius (see the sketch below).
Given the angle of any particular turn on your path and the turning radius you will use, you can work out the distance from that node at which to start slowing down the inside wheel.
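A minimal sketch of that wheel-speed calculation; the track width, radius, and PWM scale are illustrative, and the radius is measured to the robot's center:

def inner_speed(v_out, radius, track_width):
    # R = (W/2) * (v_out + v_in) / (v_out - v_in), solved for v_in
    return v_out * (radius - track_width / 2.0) / (radius + track_width / 2.0)

# Example: outer wheel at full PWM 255, 0.5 m turning radius, 0.2 m track width
print(inner_speed(255, 0.5, 0.2))   # ~170 -> PWM value for the inner wheel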
An optimization approach is a very general way to handle this.
Use your calculated path as input to a generic non-linear optimization algorithm (your choice!) with a cost function made up of the closeness of the output trajectory to the input trajectory, adherence to non-holonomic constraints, and any other constraints you want to enforce (e.g. staying away from the obstacles). The optimizer can also be initialized with the original trajectory.
Marc Toussaint's robotics course notes are a good source for this type of approach. See in particular lecture 7:
http://userpage.fu-berlin.de/mtoussai/teaching/10-robotics/
