Volume enclosed by shell structures in Abaqus?

This question is directly related to the tutorial at: https://learnsharewithdp.wordpress.com/2017/09/11/cube-on-pillow-fem/
How do I calculate the volume enclosed by a closed thin shell using finite element methods?
Does Abaqus, ANSYS, or any other FEA software do this automatically?
In particular, Abaqus supports Python scripting. Is there a Python script or package that can be used with Abaqus 6.13 for this task?
Please don't confuse this with a 3D solid. This is a hollow, closed shell structure; water bottles, for example, are closed thin shell structures. I need to calculate the volume without converting the model to a solid.
This question is also cross-posted to Quora: https://www.quora.com/How-do-I-calculate-the-volume-enclosed-by-a-close-thin-shell-using-finite-element-methods-Does-the-Abaqus-ANSYS-or-any-other-FEA-software-do-it-automatically
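Absent a built-in feature, the enclosed volume can be computed from the shell mesh itself via the divergence theorem: each surface facet, together with the origin, forms a tetrahedron, and the signed tetrahedron volumes sum to the enclosed volume. Below is a minimal, FEA-agnostic sketch in plain Python. The hand-built unit cube mesh is purely illustrative; with Abaqus you would extract the node coordinates and shell element connectivity from your model through its Python scripting interface (and split quad shell elements into two triangles first).

```python
def enclosed_volume(vertices, triangles):
    """Volume enclosed by a closed, triangulated shell surface.

    By the divergence theorem, each triangle (a, b, c) contributes the
    signed volume of the tetrahedron it forms with the origin,
    dot(a, cross(b, c)) / 6; for a consistently oriented closed surface
    these contributions sum to the enclosed volume (up to sign).
    """
    total = 0.0
    for i0, i1, i2 in triangles:
        ax, ay, az = vertices[i0]
        bx, by, bz = vertices[i1]
        cx, cy, cz = vertices[i2]
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx)) / 6.0
    return abs(total)

# Hand-built unit cube shell: 8 corners, 12 consistently oriented triangles.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
tris = [(0, 1, 3), (0, 3, 2),   # x = 0 face
        (4, 7, 5), (4, 6, 7),   # x = 1 face
        (0, 5, 1), (0, 4, 5),   # y = 0 face
        (2, 3, 7), (2, 7, 6),   # y = 1 face
        (0, 2, 6), (0, 6, 4),   # z = 0 face
        (1, 5, 7), (1, 7, 3)]   # z = 1 face
print(enclosed_volume(verts, tris))  # volume of the unit cube, ≈ 1.0
```

The result is orientation-independent here only because of the final `abs()`; a mesh with mixed facet orientations would need its normals made consistent first.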

Related

Choosing a neural network architecture for Snake AI Agent [closed]

I'm new to machine learning and reinforcement learning, and I'm attempting to create an AI agent that learns to play Snake. I am having trouble choosing / developing a neural network architecture that can work with the shape of my input / output vectors.
My input is a 3x10x10 tensor: three layers of the 10x10 grid the snake moves on. I use only 0s and 1s throughout the tensor, marking the positions of the snake's body parts in the first layer, the apple's position in the second layer, and the snake's head position in the third.
For my output, I'm looking for a vector of 4 values, corresponding to the 4 possible moves a player has available (change direction to up / down / left / right).
I would appreciate any recommendations on how to go about choosing an architecture in this case, as well as any thoughts regarding the way I chose to encode my game state into an input vector for the agent to train.
You could start with a ResNet architecture and see what happens. A ResNet takes as input an image of shape HxWxC, where H is the height, W the width, and C the number of channels. In your case you do not have an actual image, but you still encode your environment in 3 channels with HxW = 10x10, so your encoding should work.
You will also have to change the output head of the ResNet so that it outputs only 4 values, each corresponding to one action.
Given that the input space is not that big, you could start with ResNet-18, the smallest variant, and see what happens. Since you are new to ML and RL, there is a classic paper on solving Atari games with deep reinforcement learning, https://arxiv.org/pdf/1312.5602v1.pdf, whose method is not hard to understand. Snake has similar (or even lower) complexity than Atari games, so this paper may provide more insight.
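Before reaching for a full ResNet, it can help to see how the shapes line up in the simplest possible Q-network. This is a dependency-free sketch (the hidden width of 64 is an arbitrary assumption, and a real agent would use PyTorch or TensorFlow plus a training loop): flatten the 3x10x10 state into 300 inputs and map them to 4 action values.

```python
import random

N_INPUTS = 3 * 10 * 10     # flattened 3-channel 10x10 state
N_ACTIONS = 4              # up / down / left / right
HIDDEN = 64                # hidden width -- an arbitrary starting choice

def init_layer(n_in, n_out):
    # Small random weights and zero biases; a real implementation would
    # use a principled initialisation scheme (e.g. He/Xavier).
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

def forward(state, params):
    # Flatten the 3x10x10 tensor into a 300-long vector.
    x = [v for channel in state for row in channel for v in row]
    (w1, b1), (w2, b2) = params
    # Hidden layer with ReLU, then a linear head: one Q-value per action.
    h = [max(0.0, sum(wi * xi for wi, xi in zip(wrow, x)) + b)
         for wrow, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(wrow, h)) + b
            for wrow, b in zip(w2, b2)]

params = (init_layer(N_INPUTS, HIDDEN), init_layer(HIDDEN, N_ACTIONS))
state = [[[0] * 10 for _ in range(10)] for _ in range(3)]
state[0][4][4] = 1  # a body cell
state[1][2][7] = 1  # the apple
state[2][4][5] = 1  # the head
q_values = forward(state, params)
print(len(q_values))  # 4 -- one value per possible move
```

A convolutional front end (as in ResNet) replaces the flattening step with layers that exploit the grid structure, but the input/output contract stays exactly this: 3x10x10 in, 4 values out.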

Detect hand-drawn circuit components in image, detect text, build connection tree

I want to know how I should process hand-drawn images of circuit diagrams in order to digitize the drawn circuit and eventually simulate it.
The input to my program would be a regular picture (from a smartphone, etc.), and the finished output should be a simulation of all possible values in the circuit (not covered/required here).
Basically, all I need to detect are electrical components with a fixed number of connections (two connections, e.g. R, L, C, diode) and the lines connecting them.
I already have a pretrained neural network for detecting which type of component it is. The part I struggle with is: how do I get bounding boxes around the components so I can classify them with my NN? I tried several approaches using contouring and object detection in OpenCV (e.g. findContours, connectedComponentsWithStats), but I can't seem to get it to detect only the components, and not the text or the connecting lines between components.
Basically what I want is the following:
Given this input image (not hand-drawn, for the sake of readability),
I would like to know:
How many components are there?
Where are the bounding boxes of the components?
Basically, these bounding boxes.
This is used to extract components and classify them with the model I already have.
Furthermore, I would like to extract the text closest to each component so that I can read its value. I have already managed to do OCR with the help of tesseract-ocr, so if I can get bounding boxes around the text, I can easily read the values.
Like this
But the part I struggle with the most is finding out which component is connected to which other component; I am unsure how to approach this. It's really hard to find anything by googling this problem, and I'm not certain how to describe it in general terms. Overall, I need enough information to be able to simulate the circuit with matrix-based methods (basic DC analysis).
I do not explicitly require code; I need general guidance on solving this problem, or maybe even links to research papers attacking similar problems.
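On the connectivity question: once you have component bounding boxes and the wire segments between them, "which component connects to which" reduces to building a graph whose edges are wires whose endpoints land on (or near) component boxes. Here is a toy sketch of that idea; the component names, box format, and tolerance are assumptions, and real wires would be polylines traced by a line-segment detector rather than single segments.

```python
def build_netlist(boxes, wires, tol=2):
    """boxes: {component_name: (x0, y0, x1, y1)} bounding boxes;
    wires: iterable of ((x, y), (x, y)) wire endpoints.
    A wire endpoint belongs to the component whose slightly inflated
    bounding box contains it; each wire then yields one graph edge."""
    def owner(point):
        px, py = point
        for name, (x0, y0, x1, y1) in boxes.items():
            if x0 - tol <= px <= x1 + tol and y0 - tol <= py <= y1 + tol:
                return name
        return None

    edges = set()
    for a, b in wires:
        ca, cb = owner(a), owner(b)
        if ca is not None and cb is not None and ca != cb:
            edges.add(tuple(sorted((ca, cb))))
    return sorted(edges)

boxes = {"R1": (0, 0, 10, 5), "C1": (30, 0, 40, 5)}
wires = [((10, 2), (30, 2))]        # one wire from R1's right edge to C1's left edge
print(build_netlist(boxes, wires))  # [('C1', 'R1')]
```

The resulting edge list is essentially a netlist, which is the input a matrix-based DC analysis needs anyway.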
"Every problem is just a good dataset away from being demolished" source.
There are several electronic symbols that you will need to detect. A modern approach to classifying the symbols would be to use a neural network. To train this model, you will need to create a dataset of hand-drawn electronic symbols. Classifying the electronic symbols will be similar to handwritten digit classification.
Before the symbols can be classified by a neural network model, the image will have to be segmented. Individual components (diode, capacitor, resistor, etc.) will need to be identified and labeled with bounding boxes.
The complexity of this task depends on the quality of your source images. Images that are created using a scanner (instead of a camera) will be much easier to work with.
This task can be accomplished with OpenCV and Python. This is a breakdown of the sub-tasks:
Mobile Document Scanning
Component segmentation
Component classification
OCR
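For the segmentation step, OpenCV's connectedComponentsWithStats already returns labels and per-component bounding boxes. For intuition, here is a dependency-free sketch of the same idea on a binary grid; in practice you would run this on the thresholded image after suppressing the long connecting lines (e.g. with a morphological opening), so that only the component symbols remain as blobs.

```python
from collections import deque

def bounding_boxes(grid):
    """Label 4-connected foreground blobs in a binary grid and return one
    bounding box (x_min, y_min, x_max, y_max) per blob -- the same idea
    as cv2.connectedComponentsWithStats, in miniature."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not seen[y][x]:
                # Breadth-first flood fill of one component.
                queue = deque([(x, y)])
                seen[y][x] = True
                x0 = x1 = x
                y0 = y1 = y
                while queue:
                    cx, cy = queue.popleft()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                boxes.append((x0, y0, x1, y1))
    return boxes

grid = [[1, 1, 0, 0, 0],
        [1, 1, 0, 1, 1],
        [0, 0, 0, 1, 1]]
print(bounding_boxes(grid))  # [(0, 0, 1, 1), (3, 1, 4, 2)]
```

Each returned box can then be cropped out of the original image and fed to the classification network.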

What is an easy way to build CGAL mesh out of DICOM sequence?

I have a sequence of DICOM images constituting a single scan. I would like to build a CGAL mesh representing the 3D volume segmented out of that scan by thresholding. I prefer Windows, and few, easy-to-build dependencies, if any.
I've heard that ITK can be used for this purpose, but it is a large library with a lot of overlap with CGAL. Are there any other options?
The example CGAL-4.9/examples/Mesh_3/mesh_3D_gray_vtk_image.cpp should be a good starting point. As this is not easy to find, we will add a link to it in the CGAL user manual; see the pull request on GitHub.

Detecting geometric shapes in UML diagrams and flowcharts

I am looking for ways to automatically image-process a state-machine diagram (e.g. a finite state machine; see a sample at http://www.artima.com/designtechniques/images/TrafficLight.gif) and turn it into a state-transition table. I am told that this is a solved problem, in that multiple automatic image-processing solutions already exist that process diagrams and turn them into some internal notation. Is this true? Are there similar solutions for flowcharts or other UML diagrams (like message sequence charts)?
FYI, for image processing I am using OpenCV on Windows. For text recognition (i.e. OCR), I have found Tesseract useful. Before building my own mechanism for automatically processing state machines, I would like to know whether this is a solved problem.

Extract points from very large grids

I have 10 grids (currently stored as ASCII grids from a GIS), each about 4.5 GB uncompressed. In addition, I have about 100,000 locations with x and y coordinates. I need to extract the grid value at each of these locations. I am currently doing this with GRASS GIS, which works but is very slow. Can anyone recommend a library or programming language better suited to this task?
Thanks in advance!
Sounds like the classic use-case for Hadoop MapReduce.
Hadoop MapReduce is a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes.
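Before reaching for a cluster, note that a regular grid's header lets you map a coordinate straight to a row/column index, so 100,000 lookups need no search and no heavyweight framework. Here is a sketch assuming the ESRI-style ASCII grid layout (6 "key value" header lines, cell values stored top row first); for a real 4.5 GB file you would not hold all rows in memory as this toy does, but seek within the file or convert it to a binary format first.

```python
def parse_ascii_grid(text):
    """Parse an ESRI-style ASCII grid: 6 'key value' header lines, then
    rows of cell values stored from the top row down."""
    lines = text.strip().splitlines()
    hdr = {k.lower(): float(v) for k, v in (ln.split() for ln in lines[:6])}
    rows = [[float(v) for v in ln.split()] for ln in lines[6:]]
    return hdr, rows

def value_at(hdr, rows, x, y):
    # Map the coordinate straight to a column/row index -- no search needed.
    col = int((x - hdr["xllcorner"]) / hdr["cellsize"])
    row = int((y - hdr["yllcorner"]) / hdr["cellsize"])
    return rows[int(hdr["nrows"]) - 1 - row][col]  # rows are stored top-down

demo = """ncols 3
nrows 2
xllcorner 0
yllcorner 0
cellsize 1
NODATA_value -9999
1 2 3
4 5 6"""
hdr, rows = parse_ascii_grid(demo)
print(value_at(hdr, rows, 0.5, 0.5))  # 4.0 (bottom-left cell)
print(value_at(hdr, rows, 2.5, 1.5))  # 3.0 (top-right cell)
```

With direct indexing, even 10 grids x 100,000 points is a few million array lookups, which a single machine handles in seconds once the data is in a random-access format.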
