I have some questions about how to define the boundary conditions and the step for four-point bending of a reinforced concrete beam in Abaqus. I have modelled the beam but I don't know how to define the boundary conditions and the step. Can you please help me? The model is 1/4 of the beam because the beam is symmetrical. In the results the force is zero everywhere in the model, and I don't know how to solve that. I also have another question about modelling the rebars: isn't a beam element better than a truss element?
Four point bending is an interesting test configuration.
Here are a few ideas to consider:
Symmetry at x = 0 means the out-of-plane displacement ux and the rotations about the two in-plane axes (ry and rz) are zero for all points on the symmetry plane; in Abaqus this is the XSYMM boundary condition.
Four-point bending doesn't mean there is no force on the beam. There are vertical forces at each of the four contact points: reactions at the two supports and applied loads at the two loading points. They sum to zero, but their effect on the beam should be clear. You'll need to constrain the support points to have zero vertical displacement and apply either a displacement or a load at the two loading points. You'll calculate how shear and bending vary along the length of the beam.
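As a quick sanity check on those statics, here is a minimal sketch (plain beam theory, with made-up span, load and load positions) of how shear force and bending moment vary along a simply supported beam in four-point bending; the zero-shear, constant-moment region between the two loading points is the useful feature of the test.

```python
import numpy as np

# Hypothetical four-point bending setup (all numbers are placeholders):
# simply supported span L, equal loads P applied at distance a from each support.
L = 2.0      # span between supports [m]
a = 0.7      # distance from each support to the nearest loading point [m]
P = 10e3     # load at each loading point [N]

x = np.linspace(0.0, L, 201)

# Shear force V(x) and bending moment M(x); reactions are P at each support.
V = np.where(x < a, P, np.where(x <= L - a, 0.0, -P))
M = np.where(x < a, P * x, np.where(x <= L - a, P * a, P * (L - x)))

print("max shear  :", V.max())   # equals P, in the outer segments
print("max moment :", M.max())   # equals P*a, constant between the loading points
```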
The fact that the beam is reinforced concrete is immaterial (no pun intended) to the boundary conditions applied.
You'll need beam elements at a minimum. They include bending effects. Truss elements only model axial loads.
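To make that difference concrete, here is a small sketch (standard two-node element stiffness matrices, illustrative properties only): the truss element only couples the axial displacements, so a transverse load in a truss-only rebar model has nothing to push against, while the Euler-Bernoulli beam element carries transverse displacement and rotation.

```python
import numpy as np

E, A, I, L = 200e9, 1e-3, 1e-6, 1.0   # illustrative properties

# Two-node truss element: axial DOFs (u1, u2) only.
k_truss = (E * A / L) * np.array([[ 1, -1],
                                  [-1,  1]])

# Two-node Euler-Bernoulli beam element: bending DOFs (v1, th1, v2, th2).
k_beam = (E * I / L**3) * np.array([[ 12,     6*L,   -12,     6*L  ],
                                    [  6*L,   4*L**2, -6*L,   2*L**2],
                                    [-12,    -6*L,    12,    -6*L  ],
                                    [  6*L,   2*L**2, -6*L,   4*L**2]])

print(k_truss)
print(k_beam)
```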
Concrete is strong in compression but weak in shear and bending.
Make sure you have a good understanding of the failure model for concrete before you begin.
How are you modeling the concrete? Is this a continuum model with steel rebar elements surrounded by concrete, or are you smearing them out using an anisotropic composite model and beam elements? Are you modeling large aggregate using something like Eshelby tensor? Do you have to be concerned about the concrete/steel interface? What about fracture?
Great problem. Good luck.
I'm simulating a Representative Volume Element (RVE) to estimate its homogenised properties under pure compression in the X direction.
After some attempts I decided to try just an isotropic material model with a one-element simulation, but with similar boundary conditions that emulate periodic boundary conditions (PBC) for my loading case: symmetry in X on one side, symmetry in Y on both sides, and symmetry in Z on both sides as well. One face is subjected to a uniform displacement in the X direction. I tested the Extended Drucker-Prager model with perfect plasticity after 120 MPa.
What I found in the results is that after yield had begun, the stresses (Mises, pressure, principal stresses and S11) kept rising up to the end of the simulation in the same "straight" manner they had been rising before yield. It doesn't even look like plastic behaviour.
If I change my boundary conditions to a simple support on one side perpendicular to X and retain the displacement on the opposite side, the resulting stress picture begins to look more "plastic-like".
Can anyone explain the behaviour of the material model in the first case (PBC)? Thanks in advance.
I am not an expert in finite element models (FEMs); however, I am reading related articles and trying to find the boundary conditions for the knee joint, including the bone and cartilage.
Does anyone have any suggestions? How can I find the boundary conditions for the knee joint?
If there is any platform or atlas providing guidance on different material properties, I would really appreciate it if you could guide me or share any information.
Thanks
This is a question that's impossible to answer without knowing more about the extent of your model.
If you're modeling a whole leg, and just want to see the relative motion of the upper and lower parts, you can fix all degrees of freedom at the hip joint.
The joint itself is complex. There's contact and relative sliding. Do you plan to include those effects?
What kind of loading will you apply? This is potentially a small strain, large rotation problem. Biology is non-linear.
What materials? Bone? Artificial knee? Muscle? Skin? What properties? What material model (e.g. elastic, incompressible, small or large strain, etc.)?
I'm trying to write code that will do a projective transformation, but with more than 4 key points. I found this helpful guide, but it uses 4 points of reference:
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
I know that MATLAB has a function, cp2tform, that handles this, but I haven't found a way to do it myself so far.
Can anyone give me some guidance on how to do so? I can solve the equations using least squares, but I'm stuck since I have a matrix that is larger than 3x3 and I can't multiply the homogeneous coordinates.
Thanks
If you have more than four control points, you have an overdetermined system of equations. There are two possible scenarios. Either your points are all compatible with the same transformation. In that case, any four points can be used, and the rest will match the transformation exactly. At least in theory. For the sake of numeric stability you'd probably want to choose your points so that they are far from being collinear.
Or your points are not all compatible with a single projective transformation. In this case, all you can hope for is an approximation. If you want the best approximation, you'll have to be more specific about what “best” means, i.e. some kind of error measure. Measuring things in a projective setup is inherently tricky, since there are usually a lot of arbitrary decisions involved.
What you can try is fixing one matrix entry (e.g. the lower right one to 1), then writing the conditions for the remaining 8 coordinates as a system of linear equations, and performing a least squares approximation. But the choice of matrix representative (i.e. fixing one entry here) affects the least squares error measure while it has no effect on the geometric meaning, so this is a pretty arbitrary choice. If the lower right entry of the desired matrix should happen to be zero, your computation will run into numeric problems due to overflow.
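As a concrete illustration of that last approach, here is a sketch in Python/NumPy that fixes the lower-right entry to 1 and solves for the remaining 8 entries by least squares. The point data is made up; in practice you would likely also normalize the points, and the SVD-based DLT formulation avoids the arbitrary choice of which entry to fix.

```python
import numpy as np

def fit_projective(src, dst):
    """Least-squares projective transform mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Returns a 3x3 matrix H with H[2, 2] fixed to 1.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        # u * (h31*x + h32*y + 1) = h11*x + h12*y + h13
        A[2 * i]     = [x, y, 1, 0, 0, 0, -u * x, -u * y]
        b[2 * i]     = u
        # v * (h31*x + h32*y + 1) = h21*x + h22*y + h23
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -v * x, -v * y]
        b[2 * i + 1] = v
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_projective(H, pts):
    """Apply H to (N, 2) points via homogeneous coordinates."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:]

# Made-up example: recover a known transform from 6 exact correspondences.
H_true = np.array([[1.1, 0.2, 3.0],
                   [0.1, 0.9, 1.0],
                   [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5], [0.2, 0.8]])
dst = apply_projective(H_true, src)
print(np.round(fit_projective(src, dst), 3))
```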
How exactly is an U-matrix constructed in order to visualise a self-organizing-map? More specifically, suppose that I have an output grid of 3x3 nodes (that have already been trained), how do I construct a U-matrix from this? You can e.g. assume that the neurons (and inputs) have dimension 4.
I have found several resources on the web, but they are not clear or they are contradictory. For example, the original paper is full of typos.
A U-matrix is a visual representation of the distances between neurons in the input data dimension space. Namely you calculate the distance between adjacent neurons, using their trained vector. If your input dimension was 4, then each neuron in the trained map also corresponds to a 4-dimensional vector. Let's say you have a 3x3 hexagonal map.
The U-matrix will be a 5x5 matrix, with an interpolated element inserted for each connection between two adjacent neurons.
The {x,y} elements are the distance between neurons x and y, and the value in an {x} element is the mean of the surrounding values. For example, {4,5} = distance(4,5) and {4} = mean({1,4}, {2,4}, {4,5}, {4,7}). For the calculation of the distances you use the trained 4-dimensional vector of each neuron and the distance formula that you used for training the map (usually the Euclidean distance). So the values of the U-matrix are plain numbers, not vectors. You can then assign a light grey colour to the largest of these values and a dark grey to the smallest, with the other values mapped to corresponding shades of grey. Painting the cells of the U-matrix with these colours gives a visual representation of the distances between neurons.
Also have a look at this web article.
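A rough sketch of this construction, assuming a rectangular 3x3 grid for simplicity (rather than the hexagonal layout described above) and random placeholder codebook vectors of dimension 4:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, dim = 3, 3, 4
codebook = rng.random((rows, cols, dim))   # placeholder for the trained neuron vectors

# U-matrix for a rectangular grid has size (2*rows - 1) x (2*cols - 1): 5x5 here.
U = np.zeros((2 * rows - 1, 2 * cols - 1))

# {x,y} cells: distance between horizontally / vertically adjacent neurons.
for r in range(rows):
    for c in range(cols):
        if c + 1 < cols:
            U[2 * r, 2 * c + 1] = np.linalg.norm(codebook[r, c] - codebook[r, c + 1])
        if r + 1 < rows:
            U[2 * r + 1, 2 * c] = np.linalg.norm(codebook[r, c] - codebook[r + 1, c])

# {x} cells (the neuron positions): mean of the surrounding distance cells.
# The cells with both indices odd (between four neurons) are left at zero here;
# they are often filled with a mean of their neighbours or a diagonal distance.
for r in range(0, 2 * rows - 1, 2):
    for c in range(0, 2 * cols - 1, 2):
        neighbours = []
        if c > 0:
            neighbours.append(U[r, c - 1])
        if c < 2 * cols - 2:
            neighbours.append(U[r, c + 1])
        if r > 0:
            neighbours.append(U[r - 1, c])
        if r < 2 * rows - 2:
            neighbours.append(U[r + 1, c])
        U[r, c] = np.mean(neighbours)

print(np.round(U, 2))   # map these values to grey levels to visualise the map
```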
The original paper cited in the question states:
A naive application of Kohonen's algorithm, although preserving the topology of the input data is not able to show clusters inherent in the input data.
Firstly, that's true; secondly, it is a deep misunderstanding of the SOM; thirdly, it is also a misunderstanding of the purpose of calculating the SOM.
Just take the RGB color space as an example: are there 3 colors (RGB), or 6 (RGBCMY), or 8 (+BW), or more? How would you define that independent of the purpose, ie inherent in the data itself?
My recommendation would be not to use maximum likelihood estimators of cluster boundaries at all - not even such primitive ones as the U-matrix - because the underlying argument is already flawed. No matter which method you then use to determine the clusters, you would inherit that flaw. More precisely, the determination of cluster boundaries is not interesting at all, and it loses information regarding the true intention of building a SOM. So, why do we build SOMs from data?
Let us start with some basics:
Any SOM is a representative model of a data space, since it reduces the dimensionality of the latter. Because it is a model, it can be used as a diagnostic as well as a predictive tool. Yet neither use is justified by some universal objectivity. Instead, models are deeply dependent on the purpose and on the accepted associated risk of errors.
Let us assume for a moment that the U-matrix (or something similar) would be reasonable, so we determine some clusters on the map. It is not only an issue of how to justify the criterion for doing so (outside of the purpose itself); it is also problematic because any further calculation destroys some information (it is a model about a model).
The only interesting thing about a SOM is its accuracy, viz. the classification error, not some estimation of it. Thus, the assessment of the model in terms of validation and robustness is the only thing that is interesting.
Any prediction has a purpose and the acceptance of the prediction is a function of the accuracy, which in turn can be expressed by the classification error. Note that the classification error can be determined for 2-class models as well as for multi-class models. If you don't have a purpose, you should not do anything with your data.
Inversely, the concept of "number of clusters" is completely dependent on the criterion "allowed divergence within clusters", so it masks the most important aspect of the structure of the data. It also depends on the risk and the risk structure (in terms of type I/II errors) you are willing to take.
So, how could we determine the number of classes on a SOM? If there is no exterior a-priori reasoning available, the only feasible way is an a-posteriori check of the goodness of fit. On a given SOM, impose different numbers of classes, measure the deviations in terms of misclassification cost, then choose (subjectively) the most pleasing one (using some fancy heuristics, like Occam's razor).
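A rough sketch of such an a-posteriori check, assuming the SOM codebook is already trained and some labelled validation data is available (both are random placeholders here), with scikit-learn's KMeans standing in for "imposing k classes" on the map:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
codebook = rng.random((9, 4))        # placeholder: 3x3 SOM codebook, 4-d vectors
X_val = rng.random((200, 4))         # placeholder validation samples
y_val = rng.integers(0, 3, 200)      # placeholder "true" labels

def bmu(x, codebook):
    """Index of the best-matching unit for a sample x."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

for k in range(2, 6):
    # Impose k classes on the map by clustering the codebook vectors.
    unit_class = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(codebook)
    pred = np.array([unit_class[bmu(x, codebook)] for x in X_val])
    # Misclassification cost: assign each imposed class its majority true label
    # and count the samples that disagree.
    errors = 0
    for c in range(k):
        mask = pred == c
        if mask.any():
            majority = np.bincount(y_val[mask]).argmax()
            errors += int(np.sum(y_val[mask] != majority))
    print(k, "classes -> misclassification rate:", errors / len(y_val))
```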
Taken together, the U-matrix is pretending objectivity where no objectivity can be. It is a serious misunderstanding of modeling altogether.
IMHO it is one of the greatest advantages of the SOM that all the parameters implied by it are accessible and open for being parameterized. Approaches like the U-matrix destroy just that, by disregarding this transparency and closing it again with opaque statistical reasoning.
I'm trying to read through PCA and saw that the objective was to maximize the variance. I don't quite understand why. Any explanation of other related topics would be helpful
Variance is a measure of the "variability" of the data you have. Potentially the number of components is infinite (actually, after numerization it is at most equal to the rank of the matrix, as #jazibjamil pointed out), so you want to "squeeze" the most information in each component of the finite set you build.
If, to exaggerate, you were to select a single principal component, you would want it to account for the most variability possible: hence the search for maximum variance, so that the one component collects the most "uniqueness" from the data set.
Note that PCA does not actually increase the variance of your data. Rather, it rotates the data set in such a way as to align the directions in which it is spread out the most with the principal axes. This enables you to remove those dimensions along which the data is almost flat. This decreases the dimensionality of the data while keeping the variance (or spread) among the points as close to the original as possible.
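A small numerical check of this point (random correlated 2-d data, NumPy only): the principal axes are just a rotation of the original ones, the total variance is unchanged by that rotation, and the first component carries as much of it as any single direction can.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-d data (placeholder): spread mostly along one direction.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])
X = X - X.mean(axis=0)

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = X @ eigvecs                        # data rotated onto the principal axes

print("total variance before:", X.var(axis=0, ddof=1).sum())
print("total variance after :", scores.var(axis=0, ddof=1).sum())  # unchanged by the rotation
print("variance per component:", eigvals)   # the first is as large as possible
```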
Maximizing the component vector variances is the same as maximizing the 'uniqueness' of those vectors. Thus your vectors are as distant from each other as possible. That way, if you only use the first N component vectors, you're going to capture more of the space with highly varying vectors than with similar ones. Think about what "principal component" actually means.
Take for example a situation where you have 2 lines that are orthogonal in a 3D space. You can capture the environment much more completely with those orthogonal lines than 2 lines that are parallel (or nearly parallel). When applied to very high dimensional states using very few vectors, this becomes a much more important relationship among the vectors to maintain. In a linear algebra sense you want independent rows to be produced by PCA, otherwise some of those rows will be redundant.
See this PDF from Princeton's CS Department for a basic explanation.
Maximizing the variance basically means choosing the axes that capture the maximum spread of the data points. Why? Because the direction of these axes is what really matters, as it essentially explains the correlations, and later on we will compress/project the points onto those axes to get rid of some dimensions.
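A short sketch of that projection step (made-up 2-d data, using scikit-learn's PCA): keeping only the first principal axis drops a dimension while retaining most of the spread.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])  # placeholder data

pca = PCA(n_components=1).fit(X)
X_1d = pca.transform(X)            # points projected onto the max-variance axis
print("fraction of variance kept:", pca.explained_variance_ratio_[0])
```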