How to generate different 2D displays in Repast Simphony (gui or style code?)?

I have built a 3D model in repast simphony and it is working (fairly) well. However, due to the nature of the model, agents tend to form dense clumps. I was wondering if there is a way to generate a 2D slice or cross section through the middle of the clump to see what agents are doing inside the clumps, by either generating a continuously updating 2D display or an end-state view.
I have explored the display options in the GUI and experimented with different layering of the agents, but due to the density, none of these have worked. Would there be a way to alter this aspect of the GUI slightly to give a 2D view (for example) of the yz plane at x=25 in a 50x50x50 grid?
Thank you in advance for your help!

You can change the transparency of shapes in the 3D display by changing the transparency attributes in the style class based on a visibility attribute of the agent. For example, your agents could check their current position in 3D space and only return true from isVisible() when the agent is in the plane of space you'd like to visualize. This will only show agents in the 3D display that exist on your defined plane, which can be any x,y,z orientation through the space. In your style class you will need to update the transparency in the getAppearance(...) method as follows:
public TaggedAppearance getAppearance(MyAgent agent, TaggedAppearance taggedAppearance, Object shapeID) {
    if (taggedAppearance == null) {
        taggedAppearance = new TaggedAppearance();
        // Customize your agent style here...
        AppearanceFactory.setMaterialAppearance(taggedAppearance.getAppearance(), Color.white);
    }
    // trans is a TransparencyAttributes field of this style class
    if (trans == null) {
        trans = new TransparencyAttributes();
        trans.setCapability(TransparencyAttributes.ALLOW_VALUE_READ);
        trans.setCapability(TransparencyAttributes.ALLOW_VALUE_WRITE);
        trans.setCapability(TransparencyAttributes.ALLOW_MODE_READ);
        trans.setCapability(TransparencyAttributes.ALLOW_MODE_WRITE);
        trans.setTransparencyMode(TransparencyAttributes.FASTEST);
        taggedAppearance.getAppearance().setTransparencyAttributes(trans);
    }
    if (agent.isVisible())
        trans.setTransparency(0.0f);   // fully opaque
    else
        trans.setTransparency(1.0f);   // fully transparent
    return taggedAppearance;
}
You could also use intermediate transparency values between 0 and 1, so that the agents of interest are fully opaque (0.0f) while agents in the periphery are mostly transparent (e.g. 0.8f).
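For completeness, here is a minimal sketch of the agent-side check, assuming the agent holds a reference to the ContinuousSpace it lives in and that the slicing plane is x=25; the field names and tolerance are illustrative, not part of the original answer:
import repast.simphony.space.continuous.ContinuousSpace;
import repast.simphony.space.continuous.NdPoint;

public class MyAgent {
    // Assumed: a reference to the 3D continuous space the agent lives in
    private final ContinuousSpace<Object> space;
    private static final double SLICE_X = 25.0;   // the plane to visualize
    private static final double TOLERANCE = 0.5;  // how "thick" the slice is

    public MyAgent(ContinuousSpace<Object> space) {
        this.space = space;
    }

    // Only agents whose x coordinate lies (approximately) on the slicing plane
    // report themselves as visible; the style class above then makes every
    // other agent fully transparent.
    public boolean isVisible() {
        NdPoint p = space.getLocation(this);
        return Math.abs(p.getX() - SLICE_X) <= TOLERANCE;
    }
}
If you use a discrete Grid instead of a ContinuousSpace, the same idea should work with the grid's getLocation(this) and the resulting GridPoint coordinates.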

How to always show Sirius Label in foreground

For context, I'm working with Capella, an Eclipse RCP application based on Sirius (hence EMF, GMF and draw2d). This application is used for MBSE, which basically means diagram representations of industrial systems.
I'm developing an add-on (viewpoint) to display custom labels next to diagram elements. These diagram elements are, to put it simply, boxes inside boxes. My problem is that the label text is usually wider than the space between a box and its container, so the label gets hidden. What I need is for these labels to always be in the foreground. As I'm more used to web development, what I'm looking for would be the equivalent of the z-index CSS property.
Currently I have no idea how to achieve this. I'm using a custom .odesign that lets me control some rendering options, like label text, the color of some elements, or adding decorations, but I don't think that's the way to go for my problem. Maybe I should use a custom EditPart or a custom StyleConfiguration (I have already used these components in other projects), but I have no clue where to start for this issue.
Any leads will be greatly appreciated.
We recently did this kind of change to keep some labels in Sirius Sequence diagrams always on top: the combined fragments are placed behind the lifelines (z-order), but we wanted to keep the labels of the CombinedFragments visible even when their bounds intersect Lifelines, Executions or States.
This has been handled in Bug 564239 for Sirius 6.3.2 (used in Capella 1.4.1).
You can find some hints in the bugzilla entry (the Gerrit reviews and commits can be retrieved from its See Also section).
In Sirius Sequence diagrams, we use org.eclipse.sirius.diagram.sequence.ui.tool.internal.layout.SequenceZOrderingRefresher to control the z-order of CombinedFragments: all the figures that compose them come from expressions in the odesign and from synchronization with the Capella model, for example.
But in your case you only want to control the label, so this should not be handled at the edit part level but at the figure level. The "overlay" layer and figure might be a good lead.
Do not forget another thing: in GMF/GEF, the label of an element is only displayed/rendered if it fits within the visible area of its parent container. In the case of a node in a container with scrollbars, the visible area will affect the visibility of the sub-nodes (and by extension their border nodes, edges, labels, ...).
Regards
Maxime

Is it possible to create a Q-Q plot when lacking a coordinate system?

I'm looking to create a Q-Q plot within Rascal using the Vis library. I have been told there is no positional system. Is this true? If true, how would I go about plotting this or any scatterplot? Does anyone have an example of this in use?
That's an excellent question. Certainly Rascal's Vis library is "point free" in the sense that its layout mechanism has no absolute coordinate system. However, there are certain Figure kinds which have a relative coordinate system wrt their own "origin". When you combine several of those using horizontal, vertical or overlay boxes (and align them properly), you can create the effect of bar charts, scatterplots and whatever you desire.
In particular the overlay Figure composition is interesting: http://tutor.rascal-mpl.org/Rascal/Libraries/Vis/Figure/Figure.html#/Rascal/Libraries/Vis/Figure/Figures/overlay/overlay.html
Figure point(num x, num y){ return ellipse(shrink(0.05),fillColor("red"),align(x,y));}
coords = [<0.0,0.0>,<0.5,0.5>,<0.8,0.5>,<1.0,0.0>];
ovl = overlay([point(x,y) | <x,y> <- coords]);
render(ovl);
Produces this (both code and image taken from the documentation linked above):
Each point is an ellipse which is aligned at the (x, y) position relative to the origin of the enclosing overlay box.
The origin by default of this overlay seems to be the upper-left corner, when no other FProperty's are given to the overlay. It's possible other alignment options for the overlay Figure also change the position of its origin.
With the help of Jurgen Vinju I wrote this code, hope it helps someone: https://gist.github.com/rlmhermans/c9e82a6a623b65f0c6957ab3ff2742cf

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here, because I could not yet come up with a good solution to find these lines in my image. I can see them very easily in the Sobel output. But what would be a good way of detecting them and computing the position of the corners?
EDIT: I added some sample Images
Basic Image:
Grayscale Image
Sobel Filter (horizontal only)
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me that you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
1. Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below.)
2. [TBD] Do some preprocessing to reduce noise.
3. Find edge points using Sobel or (if you must) Canny.
4. Run Hough line detection, but with a few caveats and parameterizations as described below (a rough code sketch of steps 1 to 4 follows this list).
5. Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and a slightly larger one at the cloth/wood edge at the top of the rail.)
6. (Optional) Use the side pockets to help determine the orientation of the quadrilateral.
7. Use a perspective transform to map the perspective-distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted quadrilateral to a proper rectangle. (We'll ignore some optical effects for now.)
8. Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
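Here is a rough OpenCV (Java) sketch of steps 1 to 4 only, using Canny for brevity even though the answer prefers Sobel; the file name, blur kernel and the Canny/Hough parameters are placeholders that would need tuning, and this is an illustration of the approach rather than code from the original answer:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class RailLineSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat color = Imgcodecs.imread("table.jpg");            // placeholder path
        Mat gray = new Mat();
        Imgproc.cvtColor(color, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);  // mild noise reduction
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);                  // thresholds need tuning
        Mat lines = new Mat();
        // rho = 1 px, theta = 1 degree; threshold, minLineLength, maxLineGap are guesses
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 100, 10);
        for (int i = 0; i < lines.rows(); i++) {
            double[] l = lines.get(i, 0);                     // x1, y1, x2, y2
            System.out.printf("segment: (%.0f, %.0f) -> (%.0f, %.0f)%n", l[0], l[1], l[2], l[3]);
        }
    }
}
The near-horizontal and near-vertical segments among these are the rail candidates you would then group and fit lines to.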
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, BUT you can do this: find the inside lines of the two long rails, then search vertically between them for the first strong horizontal-ish edge toward the far side of the image. That will be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
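As a small illustration of that last note (purely a sketch, with hypothetical names): the edge image can be masked with a filled quadrilateral before running the Hough transform, so only edge pixels inside the user-defined search region survive.
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class SearchRegion {
    // edges: binary edge image; quad: four user-adjustable corner points
    static Mat restrictToQuad(Mat edges, Point[] quad) {
        Mat mask = Mat.zeros(edges.size(), CvType.CV_8UC1);
        Imgproc.fillConvexPoly(mask, new MatOfPoint(quad), new Scalar(255));
        Mat restricted = new Mat();
        Core.bitwise_and(edges, mask, restricted);  // keep only edge pixels inside the quad
        return restricted;
    }
}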
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon with few sides) using the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge location by local search of transitions across it and perform line fitting. Then intersect the lines to get the corners.
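A minimal OpenCV (Java) sketch of that idea, assuming you already have a binary mask of the cloth from the color segmentation; the 2% epsilon is a common rule of thumb, not a value from the answer:
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

public class TableOutline {
    // Take the largest contour of the cloth mask and simplify it with
    // approxPolyDP (Douglas-Peucker); ideally the result has four vertices.
    static MatOfPoint2f tableOutline(Mat clothMask) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(clothMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        MatOfPoint largest = contours.get(0);
        for (MatOfPoint c : contours)
            if (Imgproc.contourArea(c) > Imgproc.contourArea(largest)) largest = c;
        MatOfPoint2f curve = new MatOfPoint2f(largest.toArray());
        MatOfPoint2f approx = new MatOfPoint2f();
        double epsilon = 0.02 * Imgproc.arcLength(curve, true);  // tolerance is a guess
        Imgproc.approxPolyDP(curve, approx, epsilon, true);
        return approx;
    }
}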
The following answer assumes you have already found the positions of the lines in the image. This, however, can be done "easily" by looking directly at the pixels and checking whether they form a "line". It is usually easier to detect this if the image has been deskewed first, i.e. rotated so the rectangle (pool table) looks more like [] than like /=/. Then it is just a case of scanning the pixels and, where there are pixels of similar colour alongside one another, assuming a line runs through them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other in both the x and y coordinates, that pair is marked as a corner. Once such a pair is found, I take the average of the two positions to find where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if there is a tolerance of 2 or more. The corner found will be at: 11, 11
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it has been done like this is that, in the application where I use it, it is faster to sort each array first into an order where relevant values are found first; see: Why is processing a sorted array faster than an unsorted array?.
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners formed by all the lines and then either select the "most likely" corner or average over them.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;
                }
            }
        }
    }
    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is an int of a configurable amount.
A playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
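If you work in OpenCV rather than MATLAB, a comparable starting point is an HSV threshold; the bounds below are rough guesses for a green cloth and would need per-table tuning, so treat this as a sketch of the idea only:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class ClothMask {
    // Build a binary mask of the cloth by thresholding in HSV space,
    // then clean it up with a morphological open/close.
    static Mat clothMask(Mat bgr) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(35, 60, 60), new Scalar(85, 255, 255), mask);
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
        Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);
        Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_CLOSE, kernel);
        return mask;
    }
}
The resulting mask could then feed the contour and Douglas-Peucker approach sketched earlier.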

OpenCV: Generating points from image after thinning

I've run into an issue concerning generating floating-point coordinates from an image.
The original problem is as follows:
The input image is handwritten text. From this I want to generate a set of points (just x,y coordinates) that make up the individual characters.
At first I used findContours to generate the points. Since this finds the edges of the characters, the image first needs to be run through a thinning algorithm, because I'm not interested in the shape of the characters, only the lines, or as in this case, points.
Input:
thinning:
So, I run my input through the thinning algorithm and all is fine, the output looks good. Running findContours on this, however, does not work out so well; it skips a lot of stuff and I end up with something unusable.
The second idea was to generate bounding boxes (with findContours), use these bounding boxes to grab the characters from the thinning output, and grab all non-white pixel indices as "points", offsetting them by the bounding box position. This generates even worse output and seems like a bad method.
Horrible code for this:
Mat temp = new Mat(edges, bb);  // ROI of the thinned image for one bounding box
byte roi_buff[] = new byte[(int) (temp.total() * temp.channels())];
temp.get(0, 0, roi_buff);
int COLS = temp.cols();
List<Point> preArrayList = new ArrayList<Point>();
for (int i = 0; i < roi_buff.length; i++)
{
    if (roi_buff[i] != 0)
    {
        // offset each non-zero pixel by the bounding box's top-left corner
        Point tempP = bb.tl();
        tempP.x += i % COLS;
        tempP.y += i / COLS;
        preArrayList.add(tempP);
    }
}
Are there any alternatives, or am I overlooking something?
UPDATE:
I overlooked the fact that I need the points (pixels) to be ordered. In the method above I simply take a scanline approach to grabbing all the pixels. If you look at the 'o', for example, it would first grab the point on the left-hand side, then the one on the right-hand side. I need them to be ordered by their neighbouring pixels, since I want to draw paths with the points later on (outside of OpenCV).
Is this possible?
You should look into implementing your own connected components labelling. The concept is very simple: you scan the first line and assign unique labels to each horizontally connected strip of pixels. You basically check for every pixel if it is connected to its left neighbour and assign it either that neighbour's label or a new label. In the second row you do the same, but you also check against the pixels above it. Sometimes you need a label merge: two strips that were not connected in the previous row are joined in the current row. The way to deal with this is either to keep a list of label equivalences or use pointers to labels (so you can easily do a complete label change for an object).
This is basically what findContours does, but if you implement it yourself you have the freedom to go for 8-connectedness and even bridge a single-pixel or two-pixel gap. That way you get "almost-connected components labelling". It looks like you need this for the "w" in your example picture.
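As a generic (non-OpenCV) sketch of the two-pass idea with a simple union-find for the label equivalences: this version is 8-connected only, and the gap bridging mentioned above would still have to be added by also checking pixels one step further away.
public class Labelling {
    // img: binary image as int[rows][cols], 0 = background; returns a label image.
    static int[][] label(int[][] img) {
        int rows = img.length, cols = img[0].length;
        int[][] labels = new int[rows][cols];
        int[] parent = new int[rows * cols + 1];  // label equivalence table
        int next = 1;
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                if (img[y][x] == 0) continue;
                // already-visited neighbours: W, NW, N, NE
                int[][] nbs = {{x - 1, y}, {x - 1, y - 1}, {x, y - 1}, {x + 1, y - 1}};
                int best = 0;
                for (int[] n : nbs) {
                    if (n[0] < 0 || n[0] >= cols || n[1] < 0) continue;
                    int l = labels[n[1]][n[0]];
                    if (l != 0) best = (best == 0) ? l : Math.min(best, l);
                }
                if (best == 0) { best = next; parent[next] = next; next++; }  // new label
                labels[y][x] = best;
                // record equivalences between touching labels
                for (int[] n : nbs) {
                    if (n[0] < 0 || n[0] >= cols || n[1] < 0) continue;
                    int l = labels[n[1]][n[0]];
                    if (l != 0) union(parent, l, best);
                }
            }
        }
        // second pass: replace every label by its equivalence-class representative
        for (int y = 0; y < rows; y++)
            for (int x = 0; x < cols; x++)
                if (labels[y][x] != 0) labels[y][x] = find(parent, labels[y][x]);
        return labels;
    }

    static int find(int[] p, int i) { while (p[i] != i) i = p[i] = p[p[i]]; return i; }
    static void union(int[] p, int a, int b) { p[find(p, a)] = find(p, b); }
}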
Once you have the image labelled this way, you can push all the pixels of a single label to a vector, and order them something like this. Find the top left pixel, push it to a new vector and erase it from the original vector. Now find the pixel in the original vector closest to it, push it to the new vector and erase from the original. Continue until all pixels have been transferred.
It will not be very fast this way, but it should be a start.
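A minimal sketch of that ordering step (greedy nearest-neighbour, O(n²), so indeed not fast; Point here is assumed to be org.opencv.core.Point):
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Point;

public class PixelOrdering {
    // Order the pixels of one labelled component by repeatedly jumping
    // to the nearest remaining pixel, starting from the top-left one.
    static List<Point> order(List<Point> pixels) {
        List<Point> remaining = new ArrayList<>(pixels);
        List<Point> ordered = new ArrayList<>();
        if (remaining.isEmpty()) return ordered;
        Point current = remaining.get(0);
        for (Point p : remaining)                      // find the top-left-most pixel
            if (p.y < current.y || (p.y == current.y && p.x < current.x)) current = p;
        remaining.remove(current);
        ordered.add(current);
        while (!remaining.isEmpty()) {
            Point nearest = remaining.get(0);
            double best = Double.MAX_VALUE;
            for (Point p : remaining) {                // nearest remaining pixel
                double d = (p.x - current.x) * (p.x - current.x)
                         + (p.y - current.y) * (p.y - current.y);
                if (d < best) { best = d; nearest = p; }
            }
            remaining.remove(nearest);
            ordered.add(nearest);
            current = nearest;
        }
        return ordered;
    }
}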

Rotated text with OpenVG

I've noticed that the OpenVG transformation matrix seems to be ignored by the text rendering routine entirely, and that I can only control the text position manually, with the VG_GLYPH_ORIGIN parameter.
I'm implementing a scene graph. I found out that I can use vgGetMatrix, read components 6 and 7 of the current 3x3 transform matrix, and set VG_GLYPH_ORIGIN to those values before drawing a block of text. This places the text origin in the correct place, but the text is still always displayed left-to-right.
However, this by itself doesn't let me apply any other transformations, like rotation. I'm surprised, because the text is composed from VGPaths and those are indeed transformed.
Is there a way to rotate text with OpenVG 1.1? Or should I ignore the text functionality of OpenVG 1.1 and draw the letters as individual paths or images manually?
All the draw functions use a different user->surface matrix:
vgDrawPath uses VG_MATRIX_PATH_USER_TO_SURFACE
vgDrawImage uses VG_MATRIX_IMAGE_USER_TO_SURFACE
vgDrawGlyph/vgDrawGlyphs use VG_MATRIX_GLYPH_USER_TO_SURFACE
By default, all of the matrix functions (vgTranslate, vgRotate, vgLoadMatrix, etc) operate on VG_MATRIX_PATH_USER_TO_SURFACE. To change the active matrix, call vgSeti with VG_MATRIX_MODE as the first argument:
vgSeti(VG_MATRIX_MODE, VG_MATRIX_GLYPH_USER_TO_SURFACE);
/* now vgTranslate, vgRotate, etc will operate on VG_MATRIX_GLYPH_USER_TO_SURFACE */
