How to get vertex xy coordinates from JUNG 2.0.1 after layout?

This seems to be a repeated question, but I did not get the expected results following previous Q/As. I am exploring things I did not have time for in the past; it is for my own enrichment only. My background in Java is almost non-existent, and I am struggling with JUNG too.
This is what I have done (or am trying to do):
(1) Define the network topology (vertices and edges) with the MIT network toolbox for GNU Octave.
(2) Define the JUNG graph vertices and edges in a Java class I put together; Octave invokes the class method.
(3) Lay out the vertices in JUNG given the graph and a dimension; KKLayout was selected.
(4) Visualize the layout in JUNG.
(5) Output the post-layout vertices (with their xy coordinates) and edges back to Octave.
The xy coordinates of the vertices are obtained similarly to "How to use JUNG layout transformations correctly?", except that the code in that post would not compile as-is. Instead, I cast the layout to AbstractLayout and used its transform on the graph vertices to generate the xy values, as other posts suggested (or as I understood them). AbstractLayout is the only layout class I found that provides getX and getY methods, so by design I cannot export xy directly from KKLayout.
However, I am not getting the xy in (5) as shown in (4). The edges appear to be correctly returned, but the xy coordinates of the vertices are completely different. See the attached picture.
I further compared the xy within my Java class vs. what arrives in Octave; they are the same. So I am getting vertex xy values, just not the ones used for the visualization.
My question is how to extract the vertices xy after Jung layout in 2.0.1? Is there an example that I can follow?
[Picture: the two results]

Related

Voronoi graph from set of polygons in Emgu CV (or OpenCV)

Using Emgu CV I have extracted a set of closed polygons from the contours in an image of a road network. The polygons represent road outlines. The result is shown below, plotted over an OpenStreetMaps map (the polygons in 'pixel' form from Emgu CV have been converted to latitude/longitude form to be plotted).
Set of polygons representing road outlines:
I would now like to compute the Voronoi diagram of this set of polygons, which will help me find the centerline of the road. But in Emgu CV I can only find a way to get the Voronoi diagram of a set of points. This is done by finding the Delaunay triangulation of the set of points (using the Subdiv2D class) and then computing the Voronoi facets with GetVoronoiFacets.
I have tried computing the Voronoi diagram of the points defined by all the polygons in the set (each polygon is a list of points), but this gives me an extremely complicated Voronoi diagram, as one might expect:
Voronoi diagram of set of points:
This image shows a smaller portion of the first picture (for clarity, since it is so convoluted). Indeed some of the lines in the diagram seem to represent the road centerline, but there are so many other lines, it will be tough to find a criterion to extract the "good" lines.
Another potential problem that I am facing is that, as you should be able to tell from the first picture, some polygons are in the interior of others, so we are not in the standard situation of a set of disjoint closed polygons. That is, sometimes the road is between the outer boundary of one polygon and the inner boundary of another.
I'm looking for suggestions on how to compute the Voronoi graph of the set of polygons using Emgu CV (or OpenCV), hopefully overcoming the second problem I've outlined as well. I'm also open to other suggestions for how to achieve this without Emgu CV.
If you already have polygons, you can try computing the Straight Skeleton.
I haven't tried it, but CGAL has an implementation. Note that this particular function is licensed under the GPL.
A possible issue may be:
The current version of this CGAL package can only construct the straight skeleton in the interior of a simple polygon with holes; that is, it doesn't handle general polygonal figures in the plane.
There are probably workarounds for that. For example, you can enclose all polygons in a bigger rectangle, so that the original polygons become holes of the new rectangle. This may not work well if the original polygons themselves have holes. To solve that, you could run the algorithm once for each polygon with holes, then put all polygons in a rectangle, remove all holes, and run the algorithm again.
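Alternatively, the point-based Voronoi approach from the question can be salvaged by filtering: keep only the finite Voronoi edges whose endpoints both lie inside the road region, which prunes almost all of the "convoluted" edges. A sketch with SciPy, assuming a caller-supplied point-in-road test (e.g. a point-in-polygon check that handles the inner/outer boundary case):

```python
import numpy as np
from scipy.spatial import Voronoi

def centerline_segments(boundary_points, inside):
    """Voronoi diagram of sampled boundary points; keep only the finite
    ridge segments whose endpoints both pass the `inside` test."""
    vor = Voronoi(boundary_points)
    segments = []
    for i, j in vor.ridge_vertices:
        if i == -1 or j == -1:
            continue  # skip ridges that extend to infinity
        a, b = vor.vertices[i], vor.vertices[j]
        if inside(a) and inside(b):
            segments.append((tuple(a), tuple(b)))
    return segments
```

Because the filter is just a predicate, the "road between the outer boundary of one polygon and the inner boundary of another" situation costs nothing extra: the predicate simply tests membership in that annular region.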

Ray tracer for complicated figures

I have implemented a realtime ray tracer with the Metal framework for iOS, supporting optical prisms such as the dodecahedron, icosahedron, octahedron, cube, etc. All my figures are composed of triangles; for example, a cube has 12 triangles and an octahedron 8. I trace rays and search for an intersection with the figure, then follow how the ray moves inside the prism. When the ray leaves the figure, I search for the intersection with the skybox. The problem is with complicated figures: when I test the cube I get 60 fps, but with the dodecahedron only 6 fps. In my algorithm, intersecting the figure means intersecting every triangle, so for each ray I have to check all triangles. I need some idea for avoiding intersection checks against all triangles. Thanks.
Let's say your world is bounded by some bounding box.
(1) Create a grid dividing this box into cubes (voxels).
(2) For each voxel/cell, store a list of the triangles that intersect it or lie inside it. Before rendering, process all triangles once and record, for each cell, the indices of the triangles inside or crossing it.
(3) Rewrite the ray-tracer to trace through this voxel map. Step the ray through neighboring voxels, just like line rasterization on pixels. This gives you a partial Z-sort for free: take the first voxel hit by the ray and test only the triangles contained in it. If a hit is found in a voxel, you can stop; there is no need to test further voxels, because they are farther away.
Further optimizations: you can add a per-triangle "already tested" flag and test only triangles not yet tested, because otherwise many triangles (those crossing several cells) would be tested multiple times.
Notes: the number of voxels per axis greatly affects performance, so you need to play with it a bit to achieve the best results. If you have dynamic objects, the voxel-map lists must be recomputed once in a while, or even every frame; for a static scene it is sufficient to do this just once.
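The steps above can be sketched as follows (Python for readability; triangle binning is done conservatively by bounding box, and the cell walk is a coarse ray march rather than an exact Amanatides-Woo DDA, which a production tracer would use):

```python
import numpy as np

def ray_tri(o, d, tri, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection; returns t or None
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None
    s = o - tri[0]
    u = (s @ p) / det
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) / det
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) / det
    return t if t > eps else None

def build_grid(tris, bmin, bmax, n):
    # Bin each triangle into every cell its bounding box overlaps
    # (a conservative "inside or crossing" test)
    bmin, bmax = np.asarray(bmin, float), np.asarray(bmax, float)
    cell = (bmax - bmin) / n
    grid = {}
    for idx, tri in enumerate(tris):
        lo = np.clip(np.floor((tri.min(axis=0) - bmin) / cell).astype(int), 0, n - 1)
        hi = np.clip(np.floor((tri.max(axis=0) - bmin) / cell).astype(int), 0, n - 1)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    grid.setdefault((i, j, k), []).append(idx)
    return grid, cell

def trace(o, d, tris, grid, cell, bmin, t_max=100.0):
    o, d = np.asarray(o, float), np.asarray(d, float)
    dt = cell.min() * 0.5        # march step: half the smallest cell side
    tested, seen, best = set(), set(), None
    t = 0.0
    while t < t_max:
        if best is not None and t > best[0]:
            break                # current cell lies beyond the hit already found
        c = tuple(np.floor((o + t * d - bmin) / cell).astype(int))
        if c not in seen:
            seen.add(c)
            for idx in grid.get(c, []):
                if idx in tested:
                    continue     # the "already tested" flag from the answer
                tested.add(idx)
                hit = ray_tri(o, d, tris[idx])
                if hit is not None and (best is None or hit < best[0]):
                    best = (hit, idx)
        t += dt
    return best  # (t, triangle index) or None
```

The early break implements the "stop at the first voxel with a hit" idea, with the small correction that the hit must be confirmed to lie before the cells still to be visited.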
To trace efficiently you'll need to use an acceleration structure, for example a KD-tree or a bounding volume hierarchy(BVH). This is similar to using a binary search tree to find a matching element.
I would suggest using a BVH because it is easier to construct and traverse than a KD-tree. And I would suggest against using a uniform voxel grid structure. A voxel grid can easily have very poor performance when triangles are unevenly distributed through the scene or heavily concentrated in a few voxels.
The BVH is just a tree of bounding volumes, such as axis-aligned bounding boxes (AABBs), each of which encompasses the primitives within it. This way, if a ray misses a bounding volume, you know that it does not hit any primitive contained within it.
To construct a BVH:
(1) Put all the triangles in one bounding volume. This will be the root of the tree.
(2) Divide the triangles into two sets such that the bounding volume of each set is minimized. More properly, you'd want to follow the surface area heuristic (SAH): choose the two sets so as to minimize the sum of (surface area of the bounding volume) x (number of triangles) over both sets.
(3) Repeat step 2 for each node recursively until the number of triangles left in a node hits some threshold (4 is a good number to try).
To traverse:
(1) Check if the ray hits the root bounding box. If it does, proceed to step 2; otherwise there is no hit.
(2) Check if the ray hits the child bounding boxes. For each child box that is hit, repeat this step on its children; boxes that are missed are skipped.
(3) When you get to a bounding box that only contains triangles, test each triangle to see if it is hit, just like normal.
This is the basic idea of a BVH. There is much more detail that I haven't gone into, which you'll have to search for, since there are so many variations in the details.
In short: implement a bounding volume hierarchy to trace.
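The build and traversal steps above can be sketched like this (Python; a median split on the longest axis is used instead of full SAH, and a plain Moller-Trumbore triangle test stands in for whatever intersection routine the tracer already has):

```python
import numpy as np

def ray_tri(o, d, tri, eps=1e-9):
    # Moller-Trumbore ray/triangle test; returns t or None
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None
    s = o - tri[0]
    u = (s @ p) / det
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) / det
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) / det
    return t if t > eps else None

class Node:
    def __init__(self, box, tris=None, left=None, right=None):
        self.box, self.tris, self.left, self.right = box, tris, left, right

def build(tris, leaf_size=4):
    pts = np.concatenate(tris)
    box = (pts.min(axis=0), pts.max(axis=0))   # AABB of all triangles
    if len(tris) <= leaf_size:
        return Node(box, tris=tris)            # step 3: small enough, make a leaf
    # Step 2, simplified: median split on the longest axis of the box
    axis = int(np.argmax(box[1] - box[0]))
    tris = sorted(tris, key=lambda t: t[:, axis].mean())
    mid = len(tris) // 2
    return Node(box, left=build(tris[:mid], leaf_size),
                     right=build(tris[mid:], leaf_size))

def hit_box(o, inv_d, box):
    # Slab test; zero direction components were replaced in inv_d to avoid 0/0
    t0, t1 = (box[0] - o) * inv_d, (box[1] - o) * inv_d
    tmin = np.minimum(t0, t1).max()
    tmax = np.maximum(t0, t1).min()
    return tmax >= max(tmin, 0.0)

def traverse(node, o, d):
    o, d = np.asarray(o, float), np.asarray(d, float)
    inv_d = 1.0 / np.where(d == 0, 1e-12, d)
    stack, best = [node], None
    while stack:
        n = stack.pop()
        if not hit_box(o, inv_d, n.box):
            continue                 # prune the whole subtree
        if n.tris is not None:       # leaf: test the triangles as usual
            for tri in n.tris:
                t = ray_tri(o, d, tri)
                if t is not None and (best is None or t < best):
                    best = t
        else:
            stack += [n.left, n.right]
    return best
```

A real SAH build would evaluate candidate splits by the surface-area cost instead of taking the median, but the tree shape and the traversal are the same.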

functions in opencv programming

I am a newbie to opencv and am trying to implement shape context descriptor outlined in the slide http://www.cs.utexas.edu/~grauman/courses/spring2008/slides/ShapeContexts425.pdf
I found the edge points on the shape using canny edge detector for the first part of step 1. Then I need to calculate the Euclidean distance on each edge point to the other ones. Rather than using for-loops to find the distance between each and every point, is there any opencv function that can do this step more efficiently?
Finding all pairwise distances between a set of points is not a standard operation, and I don't think you will find something like that in OpenCV. It is, however, very easy to compute by hand: given two points a and b, you can calculate the distance between them as cv::norm(a - b), as described here.
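For completeness, if the edge points are held in a NumPy array, the full distance matrix can be computed in one vectorized expression with no explicit for-loops (no OpenCV required):

```python
import numpy as np

# Three sample edge points; in practice this would be the Canny output
pts = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 5.0]])

# D[i, j] = Euclidean distance between point i and point j, for all pairs
# at once, by broadcasting shapes (n, 1, 2) - (1, n, 2) -> (n, n, 2)
diff = pts[:, None, :] - pts[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
```

scipy.spatial.distance.cdist(pts, pts) computes the same matrix; the same broadcasting trick applied to np.arctan2 gives the pairwise angle matrix that the shape context histogram also needs.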
You might also want to look at the matchShapes function. However, it operates on image moments, not on the shape descriptor you mentioned.

Finding simple shapes in 2D point clouds

I am currently looking for a way to fit a simple shape (e.g. a T or an L shape) to a 2D point cloud. What I need as a result is the position and orientation of the shape.
I have been looking at a couple of approaches but most seem very complicated and involve building and learning a sample database first. As I am dealing with very simple shapes I was hoping that there might be a simpler approach.
By saying you don't want to do any training, I am guessing you mean you don't want to do any feature matching. Feature matching is used to make good guesses about the pose (location and orientation) of the object in the image, and together with RANSAC it would be applicable to your problem for generating and verifying pose hypotheses.
The simplest approach is template matching, but this may be too computationally expensive (it depends on your use case). In template matching you simply loop over the possible locations, orientations, and scales of the object and check how well the template (a cloud that looks like an L or a T at that location, orientation, and scale) matches; alternatively, you sample possible poses randomly. The template check can be made fairly fast if your points are organised (for example, by converting them into pixels).
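As an illustration, such a brute-force pose search over a raw point set might look like the following sketch (`match_template` is a hypothetical helper, not a library API; it anchors candidate translations on cloud points to keep the search finite, and scores a pose by how many transformed template points land on an occupied grid cell):

```python
import numpy as np

def match_template(cloud, template, angles, grid=1.0):
    """Brute-force pose search: for each rotation angle and each candidate
    translation (anchored on a cloud point), count how many transformed
    template points coincide with cloud points after snapping to a grid."""
    occupied = {(int(round(x / grid)), int(round(y / grid))) for x, y in cloud}
    template = np.asarray(template, float)
    best_score, best_pose = 0, None
    for theta in angles:
        c, s = np.cos(theta), np.sin(theta)
        rotated = template @ np.array([[c, s], [-s, c]])  # row-vector rotation
        for anchor in cloud:
            # Map the template's first point onto this cloud point
            moved = rotated - rotated[0] + anchor
            score = sum((int(round(x / grid)), int(round(y / grid))) in occupied
                        for x, y in moved)
            if score > best_score:
                best_score, best_pose = score, (theta, anchor)
    return best_score, best_pose
```

Scale search would add one more loop; the grid snap is the "organise your points into pixels" trick that makes each pose check O(template size) instead of O(cloud size).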
If this is too slow, there are many methods for making template matching faster, and I would recommend the Generalised Hough Transform.
Here, before starting the search, you loop over the boundary of the shape you are looking for (the T or L). For each boundary point you record the gradient direction, the angle at that point between the gradient direction and the direction to the origin of the object template, and the distance to the origin. You add these to a table (let us call it Table A) for each boundary point, and you end up with a table that maps a gradient direction to the set of possible locations of the object origin.
Now you set up a 2D voting space, which is really just a 2D array (let us call it Table B), where each cell holds the number of votes for the object being at that location. Then, for each point in the target image (point cloud), you compute the gradient, look up in Table A the set of possible object locations for that gradient, and add one vote to each corresponding location in Table B (the Hough space).
This is a very terse explanation, but knowing to look for "template matching" and "Generalised Hough Transform" you will be able to find better explanations on the web, e.g. the Wikipedia pages for Template Matching and Hough Transform.
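A minimal sketch of the two tables in pure NumPy, assuming the point cloud has been rasterised to a binary image (a real implementation would use proper edge detection and smooth the accumulator; only the gradient-direction part of Table A is kept here, with the offset stored directly instead of angle plus distance):

```python
import numpy as np

def angle_bin(gy, gx, n_bins):
    # Quantise the gradient direction; the +0.5 half-bin shift keeps the
    # cardinal directions away from bin boundaries
    a = np.arctan2(gy, gx) + np.pi          # now in [0, 2*pi]
    return int(a / (2 * np.pi) * n_bins + 0.5) % n_bins

def build_r_table(template, n_bins=8):
    """Table A: gradient-direction bin -> offsets to the reference point
    (here, the centroid of the template's 'on' pixels)."""
    ys, xs = np.nonzero(template)
    ref = np.array([ys.mean(), xs.mean()])
    gy, gx = np.gradient(template.astype(float))
    table = {b: [] for b in range(n_bins)}
    for y, x in zip(ys, xs):
        table[angle_bin(gy[y, x], gx[y, x], n_bins)].append(ref - (y, x))
    return table

def vote(image, table, n_bins=8):
    """Table B: each 'on' pixel votes for every reference-point location
    its gradient bin allows; the accumulator peak is the detected origin."""
    acc = np.zeros(image.shape, dtype=int)
    gy, gx = np.gradient(image.astype(float))
    for y, x in zip(*np.nonzero(image)):
        for dy, dx in table[angle_bin(gy[y, x], gx[y, x], n_bins)]:
            ry, rx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1
    return acc
```

Extending this to unknown orientation and scale is the standard generalisation: add rotation/scale loops around the voting and a correspondingly higher-dimensional accumulator.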
You may need to:
(1) Extract some features from the image in which you are looking for the object.
(2) Extract another set of features from the image of the object.
(3) Match the features (this is possible using methods like SIFT).
(4) When you find a match, apply the RANSAC algorithm. It provides you with a transformation matrix (including translation and rotation information).
To use SIFT, start from here; it is actually one of the best source codes written for SIFT, and it includes the RANSAC algorithm, so you do not need to implement it yourself.
You can read about RANSAC here.
Two common ways of detecting the shapes (L, T, ...) in your 2D point-cloud data would be OpenCV or the Point Cloud Library. I'll explain the steps you may take in OpenCV. You can use one of the following three methods, and the selection of the right method depends on the shape (its size, area, ...):
Hough Line Transformation
Template Matching
Finding Contours
The first step would be converting your points to a grayscale Mat object; by doing that, you basically make an image of your 2D point-cloud data, so you can use other OpenCV functions. Then you may smooth the image to reduce noise; the result would be a somewhat blurry image that still preserves the real edges. If your application does not need real-time processing, you can use bilateralFilter. You can find more information about smoothing here.
The next step is choosing the method. If the shape is just some orthogonal lines (such as an L or a T), you can use the Hough Line Transformation to detect the lines; after detection, you can loop over the lines and calculate the dot products between them (since the lines are orthogonal, the dot product should be 0). You can find more information about the Hough Line Transformation here.
Another way would be detecting your shape using Template Matching. Basically, you make a template of your shape (L or T) and use it in the matchTemplate function. Note that the size of the template should match the scale of the shape in your image; otherwise you may need to resize the image. More information about the algorithm can be found here.
If the shapes enclose areas, you can find the contours of the shape using findContours; it will give you the polygons around the shape you want to detect. For instance, if your shape is an L, its contour polygon would have roughly 6 segments. You can also combine findContours with other filters, such as calculating the area of the shape.

Automatically rotate a graph

I'm drawing graphs with force-directed layout, and the problem is that the created graphs are oriented randomly and unpredictably, which makes looking at them somewhat confusing. For example, suppose node A is a member of the two separate graphs G1 and G2. With force-directed layout, node A may end up on the left side of G1, but on the right side of G2.
Now I'm trying to reduce the confusion by automatically rotating the graph in a deterministic way after the graph layout algorithm has been applied to it. One could compute the minimum bounding rectangle for this, but it would be nicer if the rotation algorithm could include some of the additional information on the vertices and edges.
In this case, each vertex is a document with a timestamp and a word count, and the edges represent undirected and directed relationships between the documents. Perhaps there's a way to rotate the graph so that older documents concentrate on the left and newer ones on the right? Same with the links: the arrows should point more to the right than to the left. This sounds like a reasonable approach, but I have no idea how to calculate something like this (and Google didn't really help either).
Notes:
I think there are graph layout algorithms that take care of the rotation, but I'd prefer a solution that involves force-directed layout.
One could let the user rotate the graph by hand, but this requires saving the graph orientation, which I'd prefer to avoid because there's no room for it in the document database.
You can either use
a dynamic force-directed algorithm that preserves a user's mental map between frames (e.g. Graph Drawing in Motion, in Journal of Graph Algorithms and Applications (JGAA), 6(3), 353–370, 2002), or
Procrustes Analysis to translate, rotate and scale frames so that the relative positions of "landmarks points" are preserved.
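The question's own timestamp idea can also be implemented directly: rotate the layout about its centroid so that the time-weighted direction from older to newer documents points along +x. A sketch (`orient_by_time` is a hypothetical helper, not a library function):

```python
import numpy as np

def orient_by_time(pos, times):
    """Rotate a force-directed layout about its centroid so the
    time-weighted direction (older -> newer) points along +x."""
    pos = np.asarray(pos, float)
    w = np.asarray(times, float)
    w = w - w.mean()                       # older documents get negative weight
    c = pos.mean(axis=0)
    # Weighted sum of offsets: points where old and new documents sit
    direction = (w[:, None] * (pos - c)).sum(axis=0)
    theta = np.arctan2(direction[1], direction[0])
    co, si = np.cos(-theta), np.sin(-theta)
    rot = np.array([[co, -si], [si, co]])  # rotate by -theta
    return (pos - c) @ rot.T + c
```

This is deterministic for a given layout, so no orientation needs to be stored in the database; the same trick works with edge directions by summing the (target - source) vectors instead of the time-weighted offsets.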
You may use a layout that uses a seed to generate its random numbers; try the Yifan Hu multilevel algorithm in Gephi.
