Abaqus BoundingBox for nodes

I have made a simple plate in Abaqus/CAE, with dimensions [x, y, z] of [0.1, 0.004, 0.1].
For my project, I need to use the same plate over and over, but with different mesh sizes, and thus different numbers of nodes.
I need to find stress values on the edge of the plate, for every plate. I wanted to do this using the getByBoundingBox feature. My code is as follows; I've also included the error message:
>>> allNodes=mdb.models['Model-1'].parts['PLT 100x100x4'].nodes
>>> allNodes.getByBoundingBox(0.04, 0.001, 0.09, 0.06, 0.003, 0.11)
()
>>> mdb.models['Model-1'].parts['PLT 100x100x4'].Set(name='PLTmiddleNodes', region='PLTmiddleNodes')
TypeError: region; found string, expecting Region
>>>
The standard getByBoundingBox code is:
getByBoundingBox(xmin, ymin, zmin, xmax, ymax, zmax)
I think I did everything right, as my box now encloses just a section of the edge of the plate. However, I became suspicious when the second line returned '()', and on top of that I got a TypeError.
Could somebody help me with this?
PS: I am relatively new to Abaqus and to coding in general. Please forgive me if I've missed something really obvious.

Firstly, there is a problem with the arguments of getByBoundingBox. The bounding box must fully enclose the nodes you want (a node lying exactly on the boundary of the box will not be returned), so choose the arguments accordingly. You can try these commands in the Command Line Interface.
Secondly, you need to store the returned nodes in a variable and then pass that variable when creating the node set:
# Get all nodes container
allNodes=mdb.models['Model-1'].parts['PLT 100x100x4'].nodes
# get a subset of nodes using getByBoundingBox
PLTmiddleNodes = allNodes.getByBoundingBox(0.04, 0.001, 0.09, 0.06, 0.003, 0.11)
# create the node set
mdb.models['Model-1'].parts['PLT 100x100x4'].Set(name='PLTmiddleNodes', nodes=PLTmiddleNodes)
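As a quick sanity check before creating the set, it is worth confirming that the query actually found something (note, as an assumption worth verifying on your model: on a native part, the nodes repository stays empty until the part has been meshed, which would also explain an empty result):
# Verify that the bounding box captured at least one node
print(len(PLTmiddleNodes))  # should be greater than 0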

Related

Return path - Bug Algorithms

I am trying to develop a simple algorithm in an MBSE tool to obtain a path between a start and an end point.
Here is a summary:
Example
I have all the information about the start and end points, and I know everything about each polygon's points (x and y positions, distance to the next point, indexes). What I do at the moment is to check the intersection between the m-line (the Start-End line) and each separate polygon line, one at a time, and to return the intersection points sorted by distance to the start point, together with the index where each intersection is found (see picture).
I think this is similar to the Bug2 algorithm, however I know in advance all the coordinates.
I would like to obtain the green path shown in the picture: [End, E, A, 29, B, C, 53, 52, 51, D, Start]. Ideally this path is the shortest one to the end point. The problem I am having at the moment is that I cannot extract the path or check that it is the shortest.
Any good ideas on how I could do that? I am missing a good logic for it. Any suggestions/articles would be really appreciated. Thanks in advance.
(I am assuming that all points are sorted in counterclockwise order, and that I have to follow the polygon edges; I cannot cross the polygons.)
I am able to find the intersection points and the indexes where the intersections happen. This information is local to each polygon, and the polygons are not ordered. I have tried slicing arrays and merging the coordinates together, but I don't like this approach; I am looking for something cleaner to merge the local information available for each polygon into a path. I have explored Dijkstra's algorithm to solve the issue, but maybe there is also something else.
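Since you already know the intersection points and the polygon vertices, one clean option is to build a weighted graph from them and let a shortest-path search pick the route. Below is a minimal sketch with networkx; polygons, intersections, start, end and the segment_blocked test are placeholders for your own data, not a finished implementation:
# Build a graph whose nodes are polygon vertices plus the m-line points,
# then ask for the shortest path from start to end.
import networkx as nx

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

G = nx.Graph()
# Edges along each polygon boundary (consecutive vertices, CCW order)
for poly in polygons:  # poly: list of (x, y) tuples
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        G.add_edge(a, b, weight=dist(a, b))
# Edges along the m-line between consecutive intersection points, skipping
# segments that cross a polygon interior (segment_blocked is your own test)
m_points = [start] + sorted(intersections, key=lambda p: dist(p, start)) + [end]
for a, b in zip(m_points, m_points[1:]):
    if not segment_blocked(a, b):
        G.add_edge(a, b, weight=dist(a, b))
# Each intersection point must also be spliced into the polygon edge it lies
# on, so the path can switch between the m-line and a boundary there.
path = nx.dijkstra_path(G, start, end, weight='weight')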

How does the center_box parameter of sklearn.datasets.make_blobs() work?

I was searching online for how the center_box parameter of sklearn.datasets.make_blobs() works, but I could not find a good answer about it.
How does this parameter affect the generation of the sample dataset?
From the documentation:
center_box: tuple of float (min, max), default=(-10.0, 10.0)
The bounding box for each cluster center when centers are generated at random.
This means that center_box is the bounding box within which the randomly generated cluster centers are placed; it constrains where the centers can fall, not how big each cluster is (the spread of points around a center is controlled by cluster_std).
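A quick way to see this in action (return_centers requires scikit-learn 0.23 or newer):
# Generate blobs whose centers are confined to [0, 5] on each axis
from sklearn.datasets import make_blobs

X, y, centers = make_blobs(
    n_samples=300,
    centers=3,
    center_box=(0.0, 5.0),  # every center coordinate is drawn from [0, 5]
    cluster_std=0.5,        # spread of the points around each center
    return_centers=True,
    random_state=0,
)
print(centers)  # all coordinates lie within [0, 5]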

Finding common paths in two graphs using python-networkx

I have two DiGraphs, say G and H, and I would like to count how many paths of G are part of H.
For any node pair (src, dst) I can generate the paths between them using the all_simple_paths function, which returns generators:
G_gen = nx.all_simple_paths(G, src, dst)
H_gen = nx.all_simple_paths(H, src, dst)
Since the number of paths is considerably high (the graphs typically have 100 nodes), I cannot resort to building lists (e.g. list(G_gen)), so I am wondering if there is a smarter way to deal with this. In addition, I would also like to distinguish the paths by length.
Or maybe a better solution can be found with a different module?
Thanks in advance for any help on this.
Thierry
I wonder if there is some reason why nx.intersection (see here) wouldn't work here? I'm not sure whether it checks edge direction under the hood, but it doesn't seem to force its output into an undirected Graph either. The following might work:
# Create a couple of random preferential attachment graphs
G = nx.barabasi_albert_graph(100, 5)
H = nx.barabasi_albert_graph(100, 5)
# Convert to directed
G = G.to_directed()
H = H.to_directed()
# Get intersection
intersection = nx.intersection(G, H)
# Print info for each
print(nx.info(G))
print(nx.info(H))
print(nx.info(intersection))
which outputs:
>>> DiGraph with 100 nodes and 950 edges
>>> DiGraph with 100 nodes and 950 edges
>>> DiGraph with 100 nodes and 176 edges
In this example all the nodes are shared, since the node ids are simple integers that follow the same generation index. With real data your node sets might not be equivalent, and you will probably see differences there too.
On the path lengths I'm not quite sure how you would go about that. The intersection just checks which nodes and edges are shared between the two graphs and returns those present in both, unaware of any other conditions, I suspect. There might be a way to impose additional constraints by adapting the source code of the intersection function with some conditional checks.
I realize this counts shared edges rather than shared paths, so you are probably looking for something more specific. But at the very least no shared path can exist outside of the intersection: a path common to both graphs must use edges present in both, and if any of its edges is missing from either graph, the path cannot exist in that graph.
Hope this helps in some way shape or form, though I feel I've oversimplified your question quite a bit.
EDIT: Intuitively, the full solution to your question might be to simply enumerate all possible paths in the intersection.
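As a rough sketch of that idea, reusing src and dst from the question (enumerating simple paths can still be expensive on dense intersections, so treat this as a starting point):
# Count the shared paths between src and dst, grouped by length in edges
from collections import Counter
import networkx as nx

shared = nx.intersection(G, H)  # keeps only the edges present in both graphs
lengths = Counter(
    len(path) - 1  # a path of n nodes has n - 1 edges
    for path in nx.all_simple_paths(shared, src, dst)
)
print(lengths)  # e.g. Counter({3: 12, 4: 5})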

Comsol: Infinite Element Domain module

I want to simulate a 2D heat transfer process in the subsurface on a region that is infinite in the r-direction. The very basic way to model this is to draw a geometry that is very long in the r-direction. I have done this, and the results I obtain are correct, as they match the analytical solution. As you may know, Comsol has a capability called Infinite Element Domain that serves exactly this purpose: you define a bounded geometry on which you want to solve the PDE, and draw a small additional domain acting as the Infinite Element Domain. However, in this case the results are not correct, because they do not match the analytical solution. Is there anything I am missing to correctly use the Infinite Element Domain in Comsol?
Any help or comment would be appreciated.
Edit:
I edited the post to be more specific.
Please consider the following figure where a fluid with high temperature is being injected into a region with lower temperature:
https://i.stack.imgur.com/BQycC.png
The equation to solve is:
https://i.stack.imgur.com/qrZcK.png
With the following initial and boundary conditions (note that the upper and lower boundary condition is no-flux):
https://i.stack.imgur.com/l7pHo.png
We want to obtain the temperature profile over the length rw < r < 140 m (rw is very small and is equal to 0.005 m here) at different times. One way to model this numerically in Comsol is to draw a rectangle that is 2000 m long in the r-direction and extract results only in the span r in [rw, 140] m:
https://i.stack.imgur.com/BKCOi.png
The results of this case are fine, because they match the analytical solution well.
Another way to model this is to replace the above geometry with a bounded one that is [rw, 140] m in the r-direction and augment it with an Infinite Element Domain meshed with a mapped mesh, as follows:
https://i.stack.imgur.com/m9ksm.png
Here, I have set the thickness of the Infinite Element Domain to 10 m in the r-direction. However, the results in this case do not match the analytical solution (or the case above, where no Infinite Element Domain was used). Is there anything I am missing in Comsol? I have also changed some of the Infinite Element settings, such as the physical width or the distance, but I did not see any change in the results.
BTW, here are the results:
https://i.stack.imgur.com/cdaPH.png

Does scene_graph.get_source_pose_port() mix up the order of the inputs?

Currently, I am trying to visualize multiple robots in Drake. This means that dynamics will not be taken into account and that MBP will not be used as a system; MBP is only used to add URDF models into the world and to assist the scene graph.
I removed dynamics and made it purely a visualizer as follows:
I sent robot joint positions to
builder.AddSystem<MultibodyPositionToGeometryPose<double>>(plant);
and then connected this to
scene_graph.get_source_pose_port(plant.get_source_id().value()));
Part of the code is below:
// Two robots are in the simulation. Their position states will be muxed into
// one vector in order to send to MultibodyPositionToGeometryPose.
// For now, robotA's position states come from a constant zero vector,
// while robotB's come from a receiver.
std::vector<int> mux_sizes = {14, 27};
auto plant_states_mux =
    builder.AddSystem<systems::Multiplexer>(mux_sizes);
auto robots_to_pose =
    builder.AddSystem<MultibodyPositionToGeometryPose<double>>(plant);

VectorX<double> constant_vector = VectorX<double>::Zero(14);
auto constant_zero_source =
    builder.template AddSystem<systems::ConstantVectorSource<double>>(
        constant_vector);

builder.Connect(constant_zero_source->get_output_port(),
                plant_states_mux->get_input_port(robotA_id));
builder.Connect(robotB_state_receiver->get_position_measured_output_port(),
                plant_states_mux->get_input_port(robotB_id));
builder.Connect(plant_states_mux->get_output_port(0),
                robots_to_pose->get_input_port());
builder.Connect(robots_to_pose->get_output_port(),
                scene_graph.get_source_pose_port(plant.get_source_id().value()));

drake::geometry::ConnectDrakeVisualizer(&builder, scene_graph);
Up until the MultibodyPositionToGeometryPose port, I was able to check that the joint positions were in the correct order. I did this by looking at the function "CalcGeometryPose" and printing the plant_context_ values to the terminal.
However, when I run the simulation and send joint positions to robot_B, the wrong joints move. Additionally, robot_A also moves its joints even though I did not send it any states.
I found out that this was due to reordering of the robot ports (either inside
plant_.get_geometry_poses_output_port().Calc(*plant_context_, output);
or in scene graph)
The positions are reordered like this: the 7 base-position DOFs of every robot (4 quaternion, 3 xyz) are listed first, and then the joints follow in a seemingly random order.
EX:
robotA qw, robotA qx, robotA qy, robotA qz, robotA x, robotA y, robotA z,
robotB qw, robotB qx, robotB qy, robotB qz, robotB x, robotB y, robotB z,
robotA joint 1, robotA joint 2, robotB joint 1, robotB joint 2, robotA joint 3 ... etc
So I had assumed that robotA joint 1 would come right after robotA's z position; with the reordering, however, that slot is robotB's qw (in this example).
I have tried to reorder the port wiring to match the joints, but when I do so, the ports switch order again.
My question is: does SceneGraph or MultibodyPositionToGeometryPose reorder the position state locations (perhaps for efficiency or something)?
If so, is there a way to prevent this from happening?
Thank you
I haven't looked at your code in detail, but using a multiplexer essentially assumes that the entire state of robot 2 immediately and contiguously follows that of robot 1. That is a fine assumption, but it is not what MBP does, and it is not true in general. However, MBP does offer APIs that guarantee a specific ordering.
If you do want the state of a single robot in vector form, it is probably best to use model instances. For instance, you can call GetPositionsAndVelocities(context, model_instance), which essentially does the multiplexer work above, but with the advantage that MBP does know how the states are stored.
Another recommended option is to use the documented public APIs. In general, if we document something, we mean it (unless of course there is a bug, but we try). So I recommend you look at things like Joint::position_start() and related methods to find out how things are indexed, and possibly build your own map of how you'd like things to be indexed.
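As a rough illustration of that suggestion, sketched with pydrake for brevity (the C++ API is analogous; the URDF file names and model names are placeholders):
# Inspect how MBP orders the generalized positions of two robots
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import MultibodyPlant

plant = MultibodyPlant(0.0)
parser = Parser(plant)
robotA = parser.AddModelFromFile("robotA.urdf", model_name="robotA")
robotB = parser.AddModelFromFile("robotB.urdf", model_name="robotB")
plant.Finalize()
# position_start() gives each joint's offset into the plant-wide q vector,
# which is exactly the map needed to wire a mux correctly.
for model in (robotA, robotB):
    for joint_index in plant.GetJointIndices(model):
        joint = plant.get_joint(joint_index)
        print(plant.GetModelInstanceName(model), joint.name(),
              joint.position_start(), joint.num_positions())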
Finally, for free bodies, you can call MBP::SetFreeBodyPose() to set a pose (see Body::is_free() and related methods in Body).
However, it looks like you got rid of MBP (which is the one that knows these mappings) and only care about visualization? Maybe you could use two MultibodyPositionToGeometryPose systems, one for each robot?
Hopefully it helps.
