Finite element analysis: How can I find the boundary conditions and settings for the knee joints? - abaqus

I am not an expert in finite element models (FEMs); however, I am reading related articles and trying to find the boundary conditions for a knee joint, including the bone and cartilage.
Does anyone have any suggestion? How can I find the boundary conditions for the knee joint?
If there is any platform or atlas that provides guidance on different material properties, I would really appreciate it if you could point me to it or share any information.
Thanks

This is a question that's impossible to answer without knowing more about the extent of your model.
If you're modeling a whole leg, and just want to see the relative motion of the upper and lower parts, you can fix all degrees of freedom at the hip joint.
The joint itself is complex. There's contact and relative sliding. Do you plan to include those effects?
What kind of loading will you apply? This is potentially a small strain, large rotation problem. Biology is non-linear.
What materials? Bone? Artificial knee? Muscle? Skin? What properties? What material model (e.g. elastic, incompressible, small or large strain, etc.)?
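As a concrete starting point, here is a minimal, hedged Abaqus/CAE scripting sketch of the simplest option mentioned above: fixing all degrees of freedom at the hip. The model name and the 'HIP' node set are illustrative assumptions; you would substitute your own.

    # Minimal Abaqus/CAE scripting sketch: fully fix (encastre) a node set at
    # the hip. Assumes a model 'Model-1' and an assembly set named 'HIP'
    # already exist; both names are illustrative placeholders.
    from abaqus import mdb

    model = mdb.models['Model-1']
    hipRegion = model.rootAssembly.sets['HIP']  # hypothetical node set at the hip

    # Constrain all six degrees of freedom (u1-u3, ur1-ur3) from the initial step on.
    model.EncastreBC(name='FixHip', createStepName='Initial', region=hipRegion)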

What do top-down and bottom-up approaches mean?

I came across top-down and bottom-up approaches while reading this paper on image processing:
"https://arxiv.org/abs/1611.08050"
I got a vague idea of the top-down approach from this paragraph:
"top-down approach: apply a separately trained human detector (based on object detection techniques such as the ones we discussed before), find each person, and then run pose estimation on every detection."
But I couldn't understand the bottom-up approach from this:
"bottom-up approaches recognize human poses from pixel-level image evidence directly. They can solve both problems above: when you have information from the entire picture you can distinguish between the people, and you can also decouple the runtime from the number of people on the frameā€¦ at least theoretically."
Please help me understand these concepts. Thank you.
Both paragraphs are from this blog: "https://medium.com/neuromation-blog/neuronuggets-understanding-human-poses-in-real-time-b73cb74b3818"
Suppose there are two people in the picture, and each person has 15 joints (key points).
Top-down approach
find two bounding boxes, one around each person
estimate the human joints (15 key points) within each bounding box
In this example, the top-down approach needs to run pose estimation twice.
Bottom-up approach
estimate all human joints (30 key points) in the picture
group the joints (15 key points each) that belong to the same person
In this example, the pose estimator doesn't care how many people are in the picture; it only has to assign each detected joint to the right person.
In general, the top-down approach takes much more time than the bottom-up approach, because it has to run pose estimation N times, once for each detection returned by the person detector.
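A minimal Python sketch may make the contrast clearer. All of the functions here (detect_people, estimate_pose, detect_all_joints, group_joints) are hypothetical placeholders, not a real library API:

    from typing import List, Tuple

    Box = Tuple[int, int, int, int]    # x, y, width, height
    Joint = Tuple[float, float]        # x, y image coordinates
    Pose = List[Joint]                 # 15 key points for one person

    # Hypothetical stand-ins for real detectors/estimators.
    def detect_people(image) -> List[Box]: return []
    def estimate_pose(image, box: Box) -> Pose: return []
    def detect_all_joints(image) -> List[Joint]: return []
    def group_joints(joints: List[Joint]) -> List[Pose]: return []

    def top_down(image) -> List[Pose]:
        # One detector pass, then one pose-estimation pass PER detected
        # person: runtime grows with the number of people, N.
        return [estimate_pose(image, box) for box in detect_people(image)]

    def bottom_up(image) -> List[Pose]:
        # One pass over the whole image to find every joint, then a grouping
        # step to assign joints to people: runtime is (ideally) independent of N.
        return group_joints(detect_all_joints(image))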

How to model a four-point bending reinforced concrete beam with ABAQUS?

I have some questions about how to define the boundary conditions for a four-point bending test of a reinforced concrete beam in Abaqus. I have modeled the beam, but I don't know how to define the boundary conditions and the step. Can you please help me? The model is 1/4 of the beam because the beam is symmetric. In the results the force is 0 everywhere in the model, and I don't know how to fix that. I also have another question about modeling the rebar: isn't a beam element better than a truss element?
Four point bending is an interesting test configuration.
Here are a few ideas to consider:
Symmetry about the plane x = 0 means the out-of-plane displacement u_x and the rotations about the in-plane axes, theta_y and theta_z, are zero for all points on the symmetry plane (Abaqus calls this an XSYMM boundary condition).
Four-point bending doesn't mean no force on the beam. There are vertical reaction forces at each of the four bending points. They sum to zero, but their effect on the beam should be clear. You'll need to constrain the support points to have zero vertical displacement and apply either a displacement or a load to the two loading points (a scripting sketch follows at the end of this list). You'll then calculate how shear and bending vary along the length of the beam.
The fact that the beam is reinforced concrete is immaterial (no pun intended) to the boundary conditions applied.
You'll need beam elements at a minimum. They include bending effects. Truss elements only model axial loads.
Concrete is strong in compression but weak in tension, which is why it is vulnerable in shear and bending.
Make sure you have a good understanding of the failure model for concrete before you begin.
How are you modeling the concrete? Is this a continuum model with steel rebar elements surrounded by concrete, or are you smearing them out using an anisotropic composite model and beam elements? Are you modeling large aggregate using something like Eshelby tensor? Do you have to be concerned about the concrete/steel interface? What about fracture?
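Here is a minimal, hedged Abaqus/CAE scripting sketch of the boundary conditions in points 1 and 2 above. The model name, the set names ('SYMM-X', 'SUPPORT', 'LOAD'), and the displacement magnitude are all illustrative assumptions:

    # Hedged Abaqus/CAE scripting sketch of the boundary conditions above.
    # Assumes a model 'Model-1' with assembly sets 'SYMM-X' (symmetry plane),
    # 'SUPPORT' (support line), and 'LOAD' (loading line); all names and the
    # displacement magnitude are placeholders.
    from abaqus import mdb
    from abaqusConstants import SET

    model = mdb.models['Model-1']
    a = model.rootAssembly

    # Symmetry about x = 0: u_x = 0 and rotations theta_y, theta_z = 0 (XSYMM).
    model.XsymmBC(name='Symm', createStepName='Initial', region=a.sets['SYMM-X'])

    # Supports: zero vertical displacement only (u2 fixed, everything else free).
    model.DisplacementBC(name='Support', createStepName='Initial',
                         region=a.sets['SUPPORT'], u2=SET)

    # Loading points: prescribe a downward displacement in the analysis step.
    model.DisplacementBC(name='Load', createStepName='Step-1',
                         region=a.sets['LOAD'], u2=-5.0)  # placeholder magnitude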
Great problem. Good luck.

Calculate distance between camera and pixel in image

Can you please suggest ways of determining the distance between the camera and a pixel in an image (in real-world units, i.e. cm/m/...)?
The information I have is: the camera's horizontal (120 degrees) and vertical (90 degrees) field of view, the camera angle (-5 degrees), and the height at which the camera is placed (30 cm).
I'm not sure if this is everything I need. Please tell me what information I should have about the camera and how I can calculate the distance between the camera and one pixel.
Maybe it isn't right to say 'distance between camera and pixel', but I think it is clear what I mean. Please write in the comments if something isn't clear.
Thank you in advance!
What I think you mean is, "how can I calculate the depth at every pixel with a single camera?" Without adding some special hardware this is not feasible, as Rotem mentioned in the comments. There are exceptions, and though I expect you may be limited in time or budget, I'll list a few.
If you want to find depths so that your toy car can avoid collisions, then you needn't assume that depth measurement is required. Google "optical flow collision avoidance" and see if that meets your needs.
If instead you want to measure depth as part of some Simultaneous Localization and Mapping (SLAM) scheme, then that's a different problem to solve. Though difficult to implement, and perhaps not remotely feasible for a toy car project, there are a few ways to measure distance using a single camera:
Project patterns of light, preferably with one or more laser lines or laser spots, and determine depth based on how the dots diverge or converge. The Kinect version 1 operates on this principle of "structured light," though the implementation is much too complicated to reproduce completely. For a simple collision warning you can apply the same principles, only more simply. For example, if the projected light pattern on the right side of the image changes quickly, turn left! Learning how to estimate distance using structured light is a significant project to undertake, but there are plenty of references.
Split the optical path so that one camera sensor can see two different views of the world. I'm not aware of optical splitters for tiny cameras, but they may exist. But even if you find a splitter, the difficult problem of implementing stereovision remains. Stereovision has inherent problems (see below).
Use a different sensor, such as the somewhat iffy but small Intel R200, which will generate depth data. (http://click.intel.com/intel-realsense-developer-kit-r200.html)
Use a time-of-flight camera. These are the types of sensors built into the Kinect version 2 and several gesture-recognition sensors. Several companies have produced or are actively developing tiny time-of-flight sensors. They will generate depth data AND provide full-color images.
Run the car only in controlled environments.
The environment in which your toy car operates is important. If you can limit your toy car's environment to a tightly controlled one, you can limit the need to write complicated algorithms. As is true with many imaging problems, a narrowly defined problem may be straightforward to solve, whereas the general problem may be nearly impossible. If you want your car to run "anywhere" (which you probably don't actually need), assume the problem is NOT solvable.
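As an illustration of how far a controlled environment can get you: if you can assume a flat floor and a pinhole-camera model, the height (30 cm), tilt (-5 degrees), and vertical field of view (90 degrees) given in the question are enough to map each pixel row to a ground distance. This is a geometric sketch under those assumptions (including a sign convention where negative tilt means pitched down), not a general depth measurement:

    import math

    def ground_distance_cm(row, image_height_px,
                           cam_height_cm=30.0, tilt_deg=-5.0, vfov_deg=90.0):
        """Horizontal distance to the ground point seen at a given pixel row,
        assuming a flat floor, a pinhole camera, and that the pixel actually
        shows the floor. Returns None for rays at or above the horizon."""
        # Focal length in pixels from the vertical field of view.
        f_px = (image_height_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
        # Angle of this pixel's ray relative to the optical axis
        # (rows below the image center look further down).
        cy = image_height_px / 2.0
        ray_deg = math.degrees(math.atan((row - cy) / f_px))
        # Total angle below horizontal: camera pitch-down plus the ray angle.
        below_horizon_deg = -tilt_deg + ray_deg
        if below_horizon_deg <= 0:
            return None  # ray points at/above the horizon; never hits the floor
        return cam_height_cm / math.tan(math.radians(below_horizon_deg))

    # Example: the bottom row of a 480-pixel-high image (~25 cm away).
    print(ground_distance_cm(479, 480))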
Even if you have an off-the-shelf depth sensor that represents the best technology available, you would still run into limitations:
Each type of depth sensing has weaknesses. No depth sensors on the market do well with dark, shiny surfaces. (Some spot sensors do okay with dark, shiny surfaces, but area sensors don't.) Stereo sensors have problems with large, featureless regions, and also require a lot of processing power. And so on.
Once you have a depth image, you still need to run calculations, and short of having a lot of onboard processing power this will be difficult to pull off on a toy car.
If you have to make many compromises to use depth sensing, then you might consider just using a simpler ultrasound sensor to avoid collisions.
Good luck!

Detecting "city" background versus "desert" background in images using image processing/computer vision

I'm searching for algorithms/methods that are used to classify or differentiate between two outdoor environments. Given an image with vehicles, I need to be able to detect whether the vehicles are in a natural desert landscape, or whether they're in the city.
I've searched but can't seem to find relevant work on this. Perhaps because I'm new at computer vision, I'm using the wrong search terms.
Any ideas? Is there any work (or related) available in this direction?
I'd suggest reading Prince's Computer Vision: Models, Learning, and Inference (free PDF available). It covers image classification, as well as many other areas of CV. I was fortunate enough to take the Machine Vision course at UCL which the book was designed for and it's an excellent reference.
Addressing your problem specifically, a simple MAP or MLE model on pixel colours will probably provide a reasonable benchmark. From there you could look at more involved models and feature engineering.
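As a hedged sketch of what such a baseline could look like (my own illustration, not from the book): fit a Gaussian to a simple colour feature of each class and classify a new image by maximum likelihood. NumPy only; you would supply real training images:

    import numpy as np

    class GaussianColourClassifier:
        """Per-class multivariate Gaussian over mean RGB colour (an MLE baseline)."""

        def fit(self, images_by_class):
            # images_by_class: {label: list of HxWx3 uint8 arrays}
            self.params = {}
            for label, images in images_by_class.items():
                feats = np.array([img.reshape(-1, 3).mean(axis=0) for img in images])
                mu = feats.mean(axis=0)
                cov = np.cov(feats.T) + 1e-3 * np.eye(3)  # regularize
                self.params[label] = (mu, np.linalg.inv(cov),
                                      np.linalg.slogdet(cov)[1])
            return self

        def predict(self, image):
            x = image.reshape(-1, 3).mean(axis=0)
            def loglik(p):
                mu, inv_cov, logdet = p
                d = x - mu
                return -0.5 * (d @ inv_cov @ d + logdet)
            return max(self.params, key=lambda k: loglik(self.params[k]))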
Seemingly complex classifications like "civilization" vs. "nature" can often be handled with simple heuristics combined with colour-based classification. As Gilevi said, city scenes are sure to contain many straight lines and right angles, while desert scenes are dominated by rolling dunes and so on.
To address this directly, you could use OpenCV's Hough-lines algorithm on the images (tuned for this problem, of course) and look at two things (see the sketch after this list):
a) how many lines are fitted to the image at a given threshold;
b) for the lines that are fitted, how the angles between pairs of them are distributed: if the angles are uniformly distributed, chances are it's nature, but if the angles cluster around multiples of pi/2 (more right angles and straight lines), then it is more likely a cityscape.
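A hedged OpenCV sketch of idea (b): count what fraction of detected Hough line angles fall near horizontal or vertical. The Canny and Hough thresholds here are placeholders that would need the tuning mentioned in (a):

    import cv2
    import numpy as np

    def city_score(image_bgr, canny_lo=50, canny_hi=150, hough_thresh=120):
        """Fraction of detected Hough lines whose angle lies near a multiple
        of pi/2 (horizontal/vertical). Higher values suggest a man-made
        scene. All thresholds are illustrative and need tuning."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, canny_lo, canny_hi)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_thresh)
        if lines is None:
            return 0.0  # no strong lines at all: likely "nature"
        thetas = lines[:, 0, 1]  # line angles in radians, in [0, pi)
        # Distance of each angle from the nearest multiple of pi/2.
        dist = np.minimum(thetas % (np.pi / 2),
                          (np.pi / 2) - (thetas % (np.pi / 2)))
        return float(np.mean(dist < np.radians(10)))  # within 10 degrees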
Colour components, textures, and the degree of smoothness (variation or gradient of the image) may differentiate desert and city backgrounds. You may also try the Hough transform, which is used for line detection; lines can be viewed as a city feature (buildings, roads, bridges, cars, etc.).
I would recommend this research, which is very similar to your project. The article presents a comparison of different classification techniques to obtain a scene classifier (urban, highway, and rural) based on images.
See my answer here: How to match texture similarity in images?
You can use the same method. I have solved problems like the one you describe with this method in the past.
The problem you are describing is that of scene categorization. Search for works that use the SUN database.
However, you are only working with two relatively different categories, so I don't think you need to kill yourself implementing state-of-the-art algorithms. I think taking GIST features plus colour features and training a non-linear SVM would do the trick.
Urban environments are usually characterized by a lot of horizontal and vertical lines, and GIST captures that information.
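A hedged scikit-learn sketch of that recipe. GIST implementations vary (the original MATLAB code, or wrappers such as pyleargist), so the gist_descriptor call below is a hypothetical placeholder you would swap for a real one; the colour-histogram part and the SVM are real:

    import numpy as np
    from sklearn.svm import SVC

    def colour_histogram(image_rgb, bins=8):
        """Joint RGB histogram, normalized; a simple global colour feature."""
        hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                                 bins=(bins,) * 3, range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def gist_descriptor(image_rgb):
        # Hypothetical placeholder: plug in a real GIST implementation here.
        raise NotImplementedError

    def features(image_rgb):
        return np.concatenate([gist_descriptor(image_rgb),
                               colour_histogram(image_rgb)])

    def train(train_images, train_labels):
        # train_images: list of RGB arrays; train_labels: 0 = desert, 1 = city.
        X = np.array([features(img) for img in train_images])
        return SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, train_labels)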

tracking a leaf

I want to track a moving leaf. Its shape changes, and occlusions happen because there are other leaves as well. What features should I use to differentiate this leaf from the other leaves?
Thanks.
You want some type of global/local tracking method that has weights for terms like spatial coherence (how far one object has moved relative to another), shape coherence (how much its shape has changed), and penalties for the merging or division of tracks.
A similar problem is cell tracking in biomedical imaging. Some references from this conference here, for instance, might be useful.
Edit:
bjoernz makes an excellent point in the comments. If you can add some form of fiducials to the scene, the task will be much easier.
It need not even be a visible wavelength signal. You can paint the leaf with IR reflective paint and use an IR camera to pick it up, for example. The IR camera can be bore-sighted with the regular visible wavelength camera.
For a pure regular vision solution, my answer above stands.
"Condensation" might be the algorithm that you're looking for. It is able to track object boundaries in highly cluttered backgrounds. On this page you will find an example of tracking a leaf and one thesis on the intricacies.
