How can I properly model contact between a beam and another part in Abaqus?

I have two problems in my Abaqus model. I'd really appreciate your help, since I have searched a lot but could not find what I need.
I am modeling a rectangle with 4 beam elements. I rendered the beam profile with a scale factor of 1 and saw that my model looks like the picture below:
What are those gaps in the corners, and how can I get rid of them? The problem is that when I sketch the beam, Abaqus treats the lines as the centerline of the beam. Can I change this setting so that it treats them as the upper bound of my beam instead?
I also want to define contact between the exterior side (upper surface) of this section and another part that surrounds this rectangular part. How can I define the contact pairs so that the contact is applied on the upper side and not at the centerline?
Thank you for your help.

It's normal that cross-sections are rendered this way, but you can define an offset (like the one used for shells) if you don't mind some limitations.
It's better to use general contact, which will account for the beam's cross-section.
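For reference, in keyword terms general contact is requested with a fragment like the one below (a minimal sketch from memory, not taken from the question's model; check the Abaqus Keyword Reference for your solver version, as beam cross-section support in general contact differs between Abaqus/Standard and Abaqus/Explicit):

```
*CONTACT
*CONTACT INCLUSIONS, ALL EXTERIOR
```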


Placing Virtual Object Behind the Real World Object

In ARKit for iOS, if you display a virtual item, it always appears in front of any real item. This means that even when a real object stands between me and the virtual item, I still see the virtual item. How can I fix this scenario?
The bottle should be visible, but it is being cut off.
You cannot achieve this with ARKit alone. It offers no off-the-shelf solution for occlusion, which is a hard problem.
Ideally you'd know the depth of each pixel projected onto the camera, and would use that to determine which parts are in front and which are behind. I would not try anything with the feature points ARKit exposes, since 1) their positions are inaccurate and 2) there is no way to know which feature point in frame A corresponds to which feature point in frame B. The data is far too noisy to do anything useful with.
You might be able to achieve something with third-party options that process the captured image and estimate depth, or distinct depth levels in the scene, but I don't know of any good solution. There are SLAM techniques that yield a dense depth map, like DTAM (https://www.kudan.eu/kudan-news/different-types-visual-slam-systems/), but that would mean redoing most of what ARKit is doing. There might be other approaches that I'm not aware of. Apps like Snapchat do this in their own way, so it is possible!
So basically your question is about mapping the coordinates of the virtual item into the real-world coordinate system. In short, you want the virtual item to be blocked by the real item, so that you only see the virtual item once you pass the real item.
If so, you need to know the physical relations of each object in the environment, and then you need to know exactly where you are, to decide whether the virtual item is blocked.
It's not an intuitive way to fix this; however, it's the only way I can think of.
Cheers.
What you are trying to achieve is not easy.
You need to detect the parts of the real world that "should be visible" using some kind of image processing, or maybe the ARKit feature points that carry depth information. Based on this, you have to add an "invisible virtual object" that cuts off the drawing of things behind it. This object represents your real object inside the virtual world, so that the background (camera feed) remains visible wherever the invisible virtual object is present.

Image processing: closing edges in MATLAB

I have the following image
What I need to do is connect the edges in MATLAB that obviously belong to the same object, in order to use regionprops later. By 'obviously' I mean the edges of the inside object and those of the outside one. What I thought is that I must somehow keep the pixels of each edge in a struct, then for each edge find the one closest to it, and then apply some fitting (polynomial, B-spline, etc.). The problem is that I have to do this for thousands of such images, so I need a robust algorithm and cannot do it by hand for all of them. Can somebody help me? The image from which the previous one was obtained is this one. Ideally I have to capture the two interfaces shown there.
Thank you very much in advance
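One robust, hands-off way to bridge small gaps between edge fragments is morphological closing: dilate the edge mask so nearby fragments merge, then erode to restore the stroke thickness. In MATLAB this is `imclose` with a `strel`, followed by `regionprops`. Since the exact pipeline depends on your images, here is only a sketch of the idea in Python with NumPy (the function names and the radius are illustrative, not from the question):

```python
import numpy as np

def dilate(mask, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    h, w = mask.shape
    padded = np.pad(mask, r, mode='constant', constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(mask, r):
    """Binary erosion with the same structuring element."""
    h, w = mask.shape
    padded = np.pad(mask, r, mode='constant', constant_values=False)
    out = np.ones_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def close_edges(mask, r=1):
    """Morphological closing: dilation bridges gaps up to ~2r pixels,
    erosion then shrinks the result back to the original thickness."""
    return erode(dilate(mask, r), r)

# Demo: a horizontal edge with a 2-pixel gap gets reconnected.
edge = np.zeros((9, 9), dtype=bool)
edge[4, :3] = True   # left fragment
edge[4, 5:] = True   # right fragment
closed = close_edges(edge, r=1)
```

In MATLAB the same idea is `closed = imclose(edge, strel('square', 3));`, after which `regionprops(closed)` can measure the connected regions.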

Sparx Enterprise Architect - Hyperlink to a specific area of a large diagram

I am trying to build a simplified EA model from 'top to bottom'. What I mean is that I have a large diagram with multiple objects, mainly ERD entities. I also have progressively more detailed diagrams and can successfully drill down by hyperlinking to the next level down.
I have even setup a hyperlink on each of the lower level diagrams to go back to the previous.
So far, so good. When I publish as HTML, I get a really useful web tree that pretty much does what I want, except for one thing!
Each of the lower diagrams is reasonably small, so when I drill back up, I am happy with being positioned at the top left of the previous diagram (with me so far?).
When I drill back up to the primary diagram, I get returned to the top left.
BUT - as this primary diagram prints out on 12 A3 pages, it would be really good to be able to return to the area of the primary diagram that refers to the diagram I just clicked into/out of.
I am no deep HTML expert, but I know there are methods in HTML to hyperlink to a specific part of a page. Can anyone think of a way to tweak the returning hyperlink so it positions me at a specific point in the primary diagram?
PHEW
Thanks, PGB
To my knowledge there is no way to achieve this using EA's hyperlinks. EA does not use HTML internally, and an EA diagram hyperlink has no room for an offset or zoom level; it simply opens the diagram.
Normally I would say that if you want an element to do something like this you can code it up yourself in an Add-In, but I'm pretty sure you can't specify pan/zoom when opening a diagram in the API either.
So I'm afraid this is one of those rare occasions where the answer is "you're doing it wrong." Adding all information everywhere is a sure-fire way of ending up with a model that's both impossible to navigate and a nightmare to maintain.
To build a better model you should work with abstractions and/or aspects (hard to say which is the better route forward without doing an actual architectural analysis).
What I do is create sub-domain diagrams and then drag those onto an overview diagram. They scale down to a nearly iconic size but still give an idea of the contents. I then use large text to explain those sub-domains. This usually fits on an A3-size sheet (A2 if you like to show off). From this overview you can easily focus on the individual sub-domains by double-clicking the diagram frames.

Find corner of field

I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be OK.
I have following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration, and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by color tags. If needed, I can replace all tags with same-color tags. (Note that the lighting in the room changes, so the colors might be a bit different next time.)
Edge detection might be OK too, but I am afraid that I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a little more info.
I am already able to detect and identify the robots in the field and get their position and rotation. But for that I first have to set the corners of the field manually. So I was looking for an automatic way, but I was worried I would not be able to distinguish the color tags from the background, because the lighting in the room changes quite often.
As for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape instead. Try segmenting the image based on colour; that may allow you to retrieve the corner tape pieces directly without needing to know their exact colour in advance. After that, you can look for pairs of same-coloured blobs that are close to each other to define the corners where the pieces of tape match. Knowing what kinds of camera angles you will have to handle is also very important: if you need this to work when viewing from the side, this approach certainly won't work; if it's almost top-down, it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
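To illustrate the colour-segmentation idea, here is a minimal sketch in Python/NumPy. The function name, reference colours, and tolerance are all invented for the example; a real implementation should threshold in HSV space rather than raw RGB to cope with the changing room light:

```python
import numpy as np

def find_tag_centroids(img, tag_colors, tol=40.0):
    """Return the centroid (row, col) of the pixels close to each
    reference tag colour, or None if no pixels match.
    img: HxWx3 uint8 RGB image; tag_colors: one RGB triple per tag."""
    centroids = []
    for color in tag_colors:
        # Euclidean distance in RGB space -- crude; a hue threshold in
        # HSV is far more robust to lighting changes.
        dist = np.linalg.norm(img.astype(float) - np.array(color, float), axis=2)
        ys, xs = np.nonzero(dist < tol)
        centroids.append((ys.mean(), xs.mean()) if len(ys) else None)
    return centroids

# Demo on a synthetic frame: two 10x10 tags on a black background.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[5:15, 5:15] = (255, 0, 0)      # red tag near the top-left corner
img[85:95, 85:95] = (0, 0, 255)    # blue tag near the bottom-right corner
c = find_tag_centroids(img, [(255, 0, 0), (0, 0, 255)])
```

With four distinctly coloured tags, the four returned centroids are the field corners; with same-coloured tags you would instead label connected blobs and take one centroid per blob.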

Image Processing and Stroke filter

I am new to image processing, and I have started a simple project for recognizing text in images with complex backgrounds. I want to use a stroke filter as a first step, but I cannot find enough material about stroke filters to implement this.
I have only managed to find the bare definition of stroke filters. Does anyone know something about stroke filters that could be helpful for my implementation?
I found one interesting article here, but it doesn't explain stroke filters in depth:
http://www.google.com/url?sa=t&source=web&ct=res&cd=4&ved=0CBoQFjAD&url=http%3A%2F%2Fwww-video.eecs.berkeley.edu%2FProceedings%2FICIP2006%2Fpdfs%2F0001473.pdf&ei=qtxaS5ubEtHB4gbFj8HqBA&usg=AFQjCNEnXQCMAFnqPRHe2kNZ6JEidR1sQg&sig2=wpaIDIQmNn739aF0cYWbsg
"Stroke Filters" are not a standard idea in image-processing. Instead, this is a term created for the paper that you linked to, where they define what they mean by a stroke filter (sect. 2) and how you can implement one computationally (sect. 4).
Because of this, you are unlikely to find an implementation in a standard toolkit; though you might want to look around for someone who's posted an implementation, or contact the authors.
Basically, though, they are saying that you can identify text in an image based on typical properties of text, in particular, that text has many stroke-like structures with a specific spatial distribution.
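As a rough illustration of that idea (my own simplified one-directional variant, not the exact filter defined in the paper): a pixel gets a high stroke response when it differs from the regions one stroke-width away on both sides, while a plain step edge, where the two flanking regions differ from each other, scores low.

```python
import numpy as np

def stroke_response(img, w=3):
    """Crude horizontal stroke response: high where the pixel differs
    from both flanking regions (a thin stroke), low at plain edges.
    Note np.roll wraps at the image borders; ignore a w-wide margin."""
    img = img.astype(float)
    left = np.roll(img, w, axis=1)    # intensity w pixels to the left
    right = np.roll(img, -w, axis=1)  # intensity w pixels to the right
    return np.abs(2 * img - left - right) - np.abs(left - right)

# A 1-pixel-wide bright stroke scores high...
stroke = np.zeros((1, 30))
stroke[0, 14] = 255
r_stroke = stroke_response(stroke, w=3)

# ...while a plain step edge scores zero everywhere.
edge = np.zeros((1, 30))
edge[0, 15:] = 255
r_edge = stroke_response(edge, w=3)
```

The paper then combines such responses over several orientations and stroke widths, thresholds the response map, and analyses the spatial distribution of the resulting stroke candidates to localize text.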
