I'm modeling a deformable rectangular storage tank filled with water in ABAQUS, and I need to get the water pressure distribution over the long side wall at a given moment.
The problem is that I can't use the stress output, because the tank has stresses of its own (due to its weight and deformability). Please help me...
I want to get a 3D model of some real-world object.
I have two webcams, and using OpenCV and SBM for stereo correspondence I get a point cloud of the scene; by filtering along z I can get a point cloud of the object only.
I know that ICP is good for this purpose, but it needs the point clouds to be initially well aligned, so it is combined with SAC to achieve better results.
But my SAC fitness score is too big, something like 70 or 40, and ICP doesn't give good results either.
My questions are:
Is it OK for ICP if I just rotate the object in front of the cameras to obtain the point clouds? What angle of rotation is needed to achieve good results?
Or is there a better way of taking pictures of the object for getting a 3D model?
Is it OK if my point clouds have some holes?
What is the maximal acceptable fitness score of SAC for a good ICP, and what is the maximal fitness score of a good ICP?
Example of my point cloud files:
https://drive.google.com/file/d/0B1VdSoFbwNShcmo4ZUhPWjZHWG8/view?usp=sharing
My advice, from experience: you already have RGB (or greyscale) images, so use them. ICP is a good tool for optimising the point cloud alignment, but it has trouble aligning the clouds on its own.
First start with RGB odometry (aligning the point clouds, which are rotated relative to each other, through feature points), then learn how ICP works with the already mentioned Point Cloud Library. Let the RGB features give you a prediction, and then use ICP to optimise it when possible.
When this application works, think about a good fitness-score calculation. If that all works, use the trunk version of ICP and optimise its parameters. After all this has been done, you will have code that is not only fast, but also has a low chance of going wrong.
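To make the "prediction first, then ICP refinement" idea concrete, here is a minimal point-to-point ICP sketch in Python/NumPy. It is only an illustration of the technique, not PCL code (in PCL the pcl::IterativeClosestPoint class plays this role with more robustness); `source`, `target` and `T_init` are assumed to be two (N, 3) point arrays and the 4x4 transform predicted from the RGB features, and the returned inlier fraction is one simple way to define a fitness score.

```python
import numpy as np
from scipy.spatial import cKDTree

# source, target: (N, 3) arrays; T_init: 4x4 prediction from RGB feature matching
def icp_refine(source, target, T_init, iterations=30, max_dist=0.05):
    T = T_init.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        src = source @ T[:3, :3].T + T[:3, 3]      # apply the current estimate
        dist, idx = tree.query(src)                # nearest-neighbor correspondences
        keep = dist < max_dist                     # reject far-away matches
        A, B = src[keep], target[idx[keep]]
        A_c, B_c = A - A.mean(0), B - B.mean(0)
        U, _, Vt = np.linalg.svd(A_c.T @ B_c)      # Kabsch: best rotation A -> B
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = B.mean(0) - R @ A.mean(0)
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T = T_step @ T
    return T, keep.mean()                          # inlier fraction as a crude fitness
```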
The following post explains what can go wrong:
Using ICP, we refine this transformation using only geometric information. However, here ICP decreases the precision. What happens is that ICP tries to match as many corresponding points as it can. Here the background behind the screen has more points than the screen itself on the two scans. ICP will then align the clouds to maximize the correspondences on the background. The screen is then misaligned.
https://github.com/introlab/rtabmap/wiki/ICP
I'm implementing Monte Carlo localization for my robot, which is given a map of the environment and its starting location and orientation. My approach is as follows:
Uniformly create 500 particles around the given position
Then at each step:
motion-update all the particles with odometry (my current approach is newX = oldX + odometryX * (1 + standardGaussianRandom), etc.)
assign a weight to each particle using the sonar data (the formula is, for each sensor, probability *= gaussianPDF(realReading), where the Gaussian has mean predictedReading); see the sketch after this list
return the particle with the biggest probability as the location estimate at this step
then 9/10 of the new particles are resampled from the old ones according to their weights, and 1/10 are sampled uniformly around the predicted position
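For reference, here is a rough sketch of the update steps described above (the function names, noise levels and the `sonar_model` helper are placeholders; `sonar_model(particle)` stands in for the map-based predicted sonar readings, which are not shown):

```python
import numpy as np

# particles is an (N, 3) array of [x, y, theta] states
def motion_update(particles, odom_x, odom_y, odom_theta, noise=0.1):
    n = len(particles)
    # multiplicative Gaussian noise, as in newX = oldX + odometryX * (1 + gaussian)
    particles[:, 0] += odom_x * (1 + noise * np.random.randn(n))
    particles[:, 1] += odom_y * (1 + noise * np.random.randn(n))
    particles[:, 2] += odom_theta * (1 + noise * np.random.randn(n))
    return particles

def gaussian_pdf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def sensor_update(particles, real_readings, sonar_model, sigma=0.2):
    weights = np.ones(len(particles))
    for i, p in enumerate(particles):
        for real, predicted in zip(real_readings, sonar_model(p)):
            weights[i] *= gaussian_pdf(real, predicted, sigma)  # probability *= gaussianPDF
    return weights
```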
Now, I wrote a simulator for the robot's environment and here is how this localization behaves: http://www.youtube.com/watch?v=q7q3cqktwZI
I'm very afraid that over a longer period of time the robot may get lost. If I add particles over a wider area, the robot gets lost even more easily.
I would expect better performance. Any advice?
The biggest mistake is that you assume the particle with the highest weight to be your posterior state. This disagrees with the main idea of the particle filter.
The set of particles you updated with the odometry readings is your proposal distribution. By taking only the particle with the highest weight into account, you completely ignore this distribution. It would be the same as if you just randomly spread particles over the whole state space and then took the one particle that explains the sonar data best. You rely only on the sonar reading, and as sonar data is very noisy, your estimate is very bad. A better approach is to assign a weight to each particle, normalize the weights, multiply each particle's state by its weight and sum them up to obtain your posterior state.
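A minimal sketch of that weighted-mean estimate (the variable names are mine):

```python
import numpy as np

def posterior_estimate(particles, weights):
    w = weights / weights.sum()                   # normalize the weights
    # note: the heading angle should really be averaged via sin/cos to handle wrap-around
    return (particles * w[:, None]).sum(axis=0)   # weight each state and sum
```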
For your resampling step I would recommend removing the random samples around the predicted state, as they corrupt your proposal distribution. It is legitimate to generate random samples in order to recover from failures, but those should be spread over the whole state space and explicitly not around your current prediction.
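Resampling purely in proportion to the normalized weights, with no extra samples injected around the prediction, could look like this (a low-variance/systematic resampler is usually preferred over independent draws, but the idea is the same):

```python
import numpy as np

def resample(particles, weights):
    w = weights / weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return particles[idx].copy()
```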
In an iPad app, we would like to help users visualize the width of an object with their camera in the style of an augmented reality app.
For instance, if we want to help someone visualize the width of a bookshelf against a wall, how could we do that? Are there algorithms to estimate width (i.e., if you're standing 5 feet away and pointing your camera at the wall, 200 pixels in the camera will represent X inches)?
Any good resources to start looking?
To do this you may wish to do a bit of research yourself, as the answer will vary depending upon the camera being used, its resolution and its depth of field.
I would recommend taking a large strip of paper (wallpaper would work fine) and writing measurements on it at specified intervals, e.g. a distance marker for each foot along the paper. Then all you need to do is stand at varying distances from the wall with the paper mounted on it and take photographs. You should then be able to establish how distance, resolution and measurements correlate to each other, and use these findings to form your own algorithm. You've essentially answered your own question.
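As a sketch of how such calibration photos could be turned into an estimator (the numbers below are made-up placeholders; the linear fit reflects the pinhole-camera fact that inches-per-pixel grows roughly in proportion to distance):

```python
import numpy as np

distances_ft = np.array([3.0, 5.0, 8.0, 12.0])           # distance from the wall per photo
pixels_per_foot = np.array([310.0, 186.0, 116.0, 78.0])  # measured from the photos

inches_per_pixel = 12.0 / pixels_per_foot
slope, intercept = np.polyfit(distances_ft, inches_per_pixel, 1)  # linear calibration

def estimate_width_inches(pixel_width, distance_ft):
    return pixel_width * (slope * distance_ft + intercept)

# e.g. an object spanning 200 pixels seen from 5 feet away
print(estimate_width_inches(200, 5.0))
```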
I already have the basics of ambient occlusion down. I have a raycaster and am capable of shooting rays uniformly about a hemisphere. It seems like those are the basics of what is needed for radiosity, but I don't know where to go from there. Do I find how much light comes from each face? (I'm making my game out of cubes, like Minecraft.) After that, what do I do?
Radiosity, in simple terms, is a two stage algorithm to compute illumination.
It works as follows:
first stage: For every pair of polygons in the scene, you compute "how much they can see of each other". E.g. take a cube: none of its faces sees another face of the cube. If you invert the cube into a room: opposite inner walls see each other completely.
second stage: With this 'visibility information', called 'form factors', you can now distribute the light energy progressively throughout the scene. At iteration 0, all the energy is in the light-source faces, and it is then transferred onto the other faces. At subsequent iterations, more faces transmit energy into the scene (indirect illumination).
Drawback: it handles diffuse illumination only.
Strength: once computed, the lighting is viewpoint independent so that static scenes can be "walked through" without recomputing lighting.
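To illustrate the second stage, here is a tiny gathering-style radiosity sketch (a simplified cousin of progressive refinement; the form factors, emission and reflectance values below are made up):

```python
import numpy as np

# F[i, j] = fraction of the energy leaving patch i that arrives at patch j (stage one)
def solve_radiosity(F, emission, reflectance, iterations=50):
    B = emission.copy()                        # at iteration 0 only the light sources carry energy
    for _ in range(iterations):
        B = emission + reflectance * (F @ B)   # each patch gathers light from all the others
    return B                                   # viewpoint-independent radiosity per patch

# toy scene: 3 patches, patch 0 is the light source
F = np.array([[0.0, 0.4, 0.4],
              [0.4, 0.0, 0.4],
              [0.4, 0.4, 0.0]])
B = solve_radiosity(F, emission=np.array([1.0, 0.0, 0.0]),
                    reflectance=np.array([0.0, 0.7, 0.7]))
```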
If you're interested in computer graphics "theory", I'd highly recommend Foley/van Dam:
http://www.amazon.com/Computer-Graphics-Principles-Practice-2nd/dp/0201848406
If you're just interested in what it is, and how it works, Wikipedia has a great article (with visual examples and math equations):
http://en.wikipedia.org/wiki/Radiosity_%283D_computer_graphics%29
And for an over-simplified one-liner, I guess you could say "radiosity is a more sophisticated technique for rendering ambient lighting in a ray-traced image".
IMHO ...
I am currently helping a friend with a geophysical project. I'm not by any means an image processing pro, but it's fun to play around with these kinds of problems. =)
The aim is to estimate the height of small rocks sticking out of the water, from the surface to the top.
The experimental equipment will be a ~10 MP camera mounted on a distance meter with a built-in laser pointer.
The "operator" will point this at a rock and press a trigger, which will register a distance along with a photo of the rock, which will be in the center of the image.
The equipment can be assumed to always be held at a fixed distance above the water.
As I see it there are a number of problems to overcome:
Lighting conditions
Depending on the time of day, etc., the rock might be brighter than the water, or the opposite.
Sometimes the rock will have a color very close to the water.
The position of the shadows will move throughout the day.
Depending on how rough the water is, there might sometimes be a reflection of the rock in the water.
Diversity
The rock is not evenly shaped.
Depending on the rock type, growth of lichen, etc., the look of the rock changes.
Fortunately, there is no shortage of test data; pictures of rocks in water are easy to come by. Here are some sample images:
I've run an edge detector on the images, and especially in the fourth picture the poor contrast makes it hard to see the edges:
Any ideas would be greatly appreciated!
I don't think that edge detection is the best approach to detect the rocks. Other objects, like the mountains or even the reflections in the water, will also produce edges.
I suggest that you try a pixel classification approach to segment the rocks from the background of the image:
For each pixel in the image, extract a set of image descriptors from an NxN neighborhood centered at that pixel.
Select a set of images and manually label the pixels as rock or background.
Use the labeled pixels and the respective image descriptors to train a classifier (e.g. a Naive Bayes classifier).
Since the rocks tend to have a similar texture, I would use texture image descriptors to train the classifier. You could try, for example, extracting a few statistical measures from each color channel (R, G, B), like the mean and standard deviation of the intensity values.
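A minimal sketch of that pipeline, assuming a hand-labeled pixel set and using scikit-learn's GaussianNB (the patch size and names are arbitrary choices):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

N = 15  # neighborhood size (arbitrary)

def patch_descriptor(image, row, col, half=N // 2):
    # assumes the pixel lies at least N//2 away from the image border
    patch = image[row - half:row + half + 1, col - half:col + half + 1, :].reshape(-1, 3)
    return np.concatenate([patch.mean(axis=0), patch.std(axis=0)])  # mean/std per R, G, B

# labeled_pixels: list of (image, row, col, label), label 1 = rock, 0 = background,
# collected by hand as in step 2 above
def train_classifier(labeled_pixels):
    X = np.array([patch_descriptor(img, r, c) for img, r, c, _ in labeled_pixels])
    y = np.array([label for _, _, _, label in labeled_pixels])
    return GaussianNB().fit(X, y)
```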
Pixel classification might work here, but it will never yield 100% accuracy. The variance in the data is really big: rocks have different colours (which are also "corrupted" by lighting) and different textures. So one must account for global information as well.
The problem you deal with is foreground extraction. There are two approaches I am aware of.
Energy minimization via graph cuts, see e.g. http://en.wikipedia.org/wiki/GrabCut (there are links to the paper and an OpenCV implementation). Some initialization ("seeds") is needed, either from a user or from prior knowledge (such as the rock being in the center while the water is on the periphery). Another variant of input is an approximate bounding rectangle; this is what the foreground extraction tool in MS Office 2010 uses.
The energy function over possible foreground/background labellings enforces the foreground to be similar to the foreground seeds, and the boundary to be smooth. So the minimum of the energy corresponds to a good foreground mask. Note that with the pixel classification approach one has to pre-label a lot of images to learn from, after which segmentation is done automatically, while with this approach one has to select seeds on each query image (or they are chosen implicitly).
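For reference, the OpenCV GrabCut call looks roughly like this (the file name and rectangle are placeholders; the rectangle plays the role of the initialization mentioned above):

```python
import numpy as np
import cv2

img = cv2.imread("rock.jpg")
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)   # internal model buffers required by the API
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 50, 400, 300)                   # (x, y, w, h) roughly around the rock
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
# definite + probable foreground pixels form the rock mask
rock_mask = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
segmented = img * rock_mask[:, :, None]
```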
Active contours, a.k.a. snakes, also require some user interaction. They are more like the Photoshop Magic Wand tool. They also try to find a smooth boundary, but do not consider the inner area.
Both methods might have problems with the reflections (pixel classification definitely will). If that is the case, you may try to find an approximate vertical symmetry and delete the lower part, if any. You can also ask a user to mark the reflection as background while collecting statistics for graph cuts.
Color segmentation to find the rock, together with edge detection to find the top.
To find the water level I would try to find all the water-rock boundaries, and the horizon (if possible), then fit a plane to the surface of the water.
That way you don't need to worry about reflections of the rock.
This is easier if you know the pitch angle between the camera and the water, and if the camera is leveled horizontally (no roll).
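If you can recover 3D coordinates for the water-rock boundary points (which, as the PS below admits, is the hard part), the plane fit itself is a small least-squares step, e.g.:

```python
import numpy as np

# points: (N, 3) array of 3D water-rock boundary points (assumed available)
def fit_plane(points):
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # direction of least variance = plane normal
    return centroid, normal         # plane: dot(normal, x - centroid) = 0

def height_above_plane(point, centroid, normal):
    return np.dot(point - centroid, normal)   # signed distance, e.g. a rock top's height
```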
ps. This is a lot harder than I thought - you don't know the distance to all the rocks so fitting a plane is difficult.
It occurs to me that the reflection is actually the ideal way of finding the water level: look for symmetric pairs of edges in the rock edge detection and pick the vertex?