Automatically rotate a graph

I'm drawing graphs with force-directed layout, and the problem is that the resulting graphs are oriented randomly and unpredictably, which makes looking at them somewhat confusing. For example, suppose node A is a member of two separate graphs, G1 and G2. With force-directed layout, node A may end up on the left side of G1, but on the right side of G2.
Now I'm trying to reduce the confusion by automatically rotating the graph in a deterministic way after the layout algorithm has been applied. One could compute the minimum bounding rectangle for this, but it would be nicer if the rotation algorithm could take into account some of the additional information attached to the vertices and edges.
In this case, each vertex is a document with a timestamp and a word count, and the edges represent undirected and directed relationships between the documents. Perhaps there's a way to rotate the graph so that older documents concentrate on the left, and newer ones on the right? Same with links: The arrows should point more to the right than to the left. This sounds like a reasonable approach, but I have no idea how to calculate something like this (and Google didn't really help either).
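One way to make this concrete is to rotate the layout so that the direction in which timestamps increase fastest points along the +x axis. A minimal sketch with numpy, assuming the layout produces an (n, 2) position array (the function and array names are illustrative, not from any particular library):

    import numpy as np

    def rotate_by_timestamp(pos, t):
        """Rotate layout positions so the direction of increasing
        timestamps points along the +x axis (old -> left, new -> right)."""
        pos = pos - pos.mean(axis=0)        # center the layout
        tc = t - t.mean()
        v = tc @ pos                        # covariance of timestamp with x and y
        angle = np.arctan2(v[1], v[0])      # current direction of "newer"
        c, s = np.cos(-angle), np.sin(-angle)
        R = np.array([[c, -s], [s, c]])     # rotation that maps v onto +x
        return pos @ R.T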
Notes:
I think there are graph layout algorithms that take care of the rotation, but I'd prefer a solution that involves force-directed layout.
One could let the user rotate the graph by hand, but that requires saving the graph orientation, which I'd prefer to avoid because there's no room for it in the document database.

You can either use
a dynamic force-directed algorithm that preserves the user's mental map between frames (e.g. "Graph Drawing in Motion", Journal of Graph Algorithms and Applications (JGAA), 6(3), 353–370, 2002), or
Procrustes analysis to translate, rotate and scale frames so that the relative positions of "landmark points" are preserved.
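For the Procrustes option, a minimal sketch of the alignment step (orthogonal Procrustes via SVD, plain numpy; `ref` and `new` are assumed to hold the same landmark nodes in the same order):

    import numpy as np

    def procrustes_align(ref, new):
        """Rotate (and translate) `new` landmark positions onto `ref`.

        ref, new: (n, 2) arrays of corresponding landmark coordinates.
        """
        ref_c = ref - ref.mean(axis=0)
        new_c = new - new.mean(axis=0)
        # SVD of the cross-covariance matrix gives the optimal rotation.
        U, _, Vt = np.linalg.svd(ref_c.T @ new_c)
        R = U @ Vt
        if np.linalg.det(R) < 0:            # flip to exclude reflections
            U[:, -1] *= -1
            R = U @ Vt
        return new_c @ R.T + ref.mean(axis=0)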

You could use a layout algorithm that takes a seed for its random-number generator, so the same input always produces the same orientation. Try the Yifan Hu multilevel algorithm in Gephi.
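Gephi is GUI-driven, but the same trick works anywhere the layout accepts a seed; for example, networkx's force-directed spring layout is deterministic once seeded:

    import networkx as nx

    G = nx.karate_club_graph()
    # With a fixed seed, the force-directed layout is reproducible:
    # the same graph yields the same node positions on every run.
    pos = nx.spring_layout(G, seed=42)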

Related

How to find location and dimensions of walls, columns, beams, doors, windows and room from top layout of a building floor using OpenCV?

Our core aim is:
A. to use Image Processing to read/scan an architectural Floor Plan Image (exported from a CAD software)
B. to extract the various lines and curves and group them into Structural Entities like walls, columns, beams etc. – ‘Wall_01’, ‘Beam_03’ and so on
C. to extract the dimensions of each of these Entities based on the scale and the length of the lines in the Floor Plan Image (since AutoCAD lines are dimensionally accurate as per the specified Scale)
D. to associate each of these Structural Entities (and their dimensions) with a ‘Room’.
We have flexibility in that we can define the exact shapes of the different Structural Entities in the Floor Plan Image (rectangles for doors, rectangles with hatch lines for windows etc.) and export them into a set of images for each Structural Entity (e.g. one image for walls, one for columns, one for doors etc.).
For point ‘B’ above, our current approach based on OpenCV is as follows:
Export each Structural Entity into its own image
Use Canny and HoughLine Transform to identify lines within the image
Group these lines into individual Structural Elements (like ‘Wall_01’)
We have managed to detect/identify the line segments using Canny+HoughLine Transform with a reasonable amount of accuracy.
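For reference, that detection step looks roughly like this with OpenCV's Python bindings (the file name and thresholds are placeholders to tune per drawing):

    import cv2
    import numpy as np

    img = cv2.imread("walls.png", cv2.IMREAD_GRAYSCALE)   # per-entity image
    edges = cv2.Canny(img, 50, 150)

    # The probabilistic Hough transform returns segments as (x1, y1, x2, y2);
    # the thresholds depend on the drawing's scale and line weights.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=30, maxLineGap=5)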
Original Floor Plan Image
Individual ‘Walls’ Image:
Line Segments identified using Canny+HoughLine:
(I don't have enough reputation to post images yet)
So the current question is - what is the best way to group these lines together into a logical Structural Entity like ‘Wall_01’?
Moreover, are there any specific OpenCV based techniques that can help us group the line segments into logical Entities? Are we approaching the problem correctly? Is there a better way to solve the problem?
Update:
Adding another image of a valid wall input.
You mention "exported from a CAD software". If the export format is PDF, it contains vector data for all graphic elements. You might be better off trying to extract and interpret that. Seems a bit cumbersome to go from a vector format to a pixel format which you then try to bring back to a numerical model.
If you have clearly defined constraints as to what your walls, doors, etc. will look like in your image, you should use exactly those constraints. If you are generating the CAD exports yourself, modify the settings there to facilitate this.
For instance, the doors are all brown and are closed figures.
The same goes for grouping the walls. In the figures, it looks like you can group based on proximity (i.e., anything within X pixels of something else belongs to the same group). Note, though, that the walls to the right of and below the text 'C7' may then get grouped into one.
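A minimal sketch of that proximity grouping, assuming segments as (x1, y1, x2, y2) tuples and a tunable threshold X (simple union-find; everything here is illustrative):

    import math

    def group_segments(segments, X=10.0):
        """Group segments whose endpoints come within X pixels of each other."""
        parent = list(range(len(segments)))

        def find(i):                        # union-find with path compression
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def close(a, b):                    # any endpoint pair within X pixels?
            ends = lambda s: [(s[0], s[1]), (s[2], s[3])]
            return any(math.dist(p, q) <= X
                       for p in ends(a) for q in ends(b))

        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                if close(segments[i], segments[j]):
                    parent[find(i)] = find(j)

        groups = {}
        for i, seg in enumerate(segments):
            groups.setdefault(find(i), []).append(seg)
        return list(groups.values())        # each group ~ one 'Wall_xx' entity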
If you do not have clear definitions, you may be looking at generic image-recognition problems, which means AI or machine learning. That would require a large variety of inputs to learn from, and it may get very complex.

Graph Layout Algorithm of Neo4j

What is the graph layout algorithm that is used in Neo4j?
I would like to find the paper that explains the graph layout algorithm shown in Neo4j.
I want to know why the nodes are organized the way Neo4j presents them.
The layout algorithm used for visualizing graphs in the Neo4j browser is a force-directed algorithm. From Wikipedia:
Their purpose is to position the nodes of a graph in two-dimensional or three-dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible, by assigning forces among the set of edges and the set of nodes, based on their relative positions, and then using these forces either to simulate the motion of the edges and nodes or to minimize their energy.
For academic references, there is a chapter from the Handbook of Graph Drawing and Visualization that covers much of the literature here.
In addition to the Neo4j Browser code linked in stdod--'s answer, there is a D3 example of force-directed layout here.
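To make the quoted description concrete, here is a deliberately naive force-directed loop in Python; d3's force layout adds velocity decay, cooling, and Barnes-Hut approximation, but the core idea is this:

    import numpy as np

    def force_layout(edges, n, iters=200, k=0.01):
        """Toy spring embedder: all node pairs repel, edges attract."""
        pos = np.random.default_rng(0).random((n, 2))
        for _ in range(iters):
            disp = np.zeros_like(pos)
            for i in range(n):              # pairwise repulsion
                d = pos[i] - pos
                dist2 = (d ** 2).sum(axis=1) + 1e-9
                disp[i] += (d / dist2[:, None]).sum(axis=0) * k
            for a, b in edges:              # spring attraction along edges
                d = pos[a] - pos[b]
                disp[a] -= 0.5 * d
                disp[b] += 0.5 * d
            pos += 0.01 * disp              # small step toward equilibrium
        return pos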
The neo4j-browser uses the d3 library to visualize graphs.
See the implementation of d3.layout.force here: layout.ts.

Shape/Pattern Matching Approach in Computer Vision

I am currently facing what is, in my opinion, a rather common problem that should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (probably better, because the photo this one originated from was quite poorly lit):
(Note how the template was scaled to roughly fit the size of the image.)
The ultimate goal is a tool which determines whether the user shows a thumbs-up/thumbs-down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, the angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern that looks identical I also have the angle.
I am working with OpenCV (with Python bindings) and have already tried cvMatchTemplate and MatchShapes, but so far neither works reliably.
I can only guess why MatchTemplate failed, but I think a smaller pattern with a smaller white area fits fully into the white area of a picture, thus creating the best matching factor, although it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original image (the one with the thumb) from this filled hull. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to approximately its original size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction.
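A sketch of that erode/dilate technique in OpenCV (kernel size and N are assumptions to tune against your binarized hand mask):

    import cv2
    import numpy as np

    mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # binary blob
    kernel = np.ones((5, 5), np.uint8)
    N = 4                                   # increase until the thumb vanishes

    fist = cv2.erode(mask, kernel, iterations=N)     # thumb disappears
    fist = cv2.dilate(fist, kernel, iterations=N)    # regrow fist to ~original size
    thumb = cv2.subtract(mask, fist)                 # thumb-only blob

    # Direction from the fist's center of mass to the thumb's center of mass
    # (image y axis points down, hence the sign flip on dy).
    mf, mt = cv2.moments(fist), cv2.moments(thumb)
    cfx, cfy = mf["m10"] / mf["m00"], mf["m01"] / mf["m00"]
    ctx, cty = mt["m10"] / mt["m00"], mt["m01"] / mt["m00"]
    angle = np.degrees(np.arctan2(cfy - cty, ctx - cfx))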
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofeng Mi.
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains values that specify how well the template fits the image at each location. If you use CV_TM_SQDIFF, then the lowest value in the result array is the location of best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF, then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit values you find across the different template images, and the template image with the best fit overall is the one you want to select.
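A sketch of that search over pre-rotated templates (in the Python bindings CV_TM_SQDIFF is cv2.TM_SQDIFF; the 15° step and file names are assumptions):

    import cv2

    img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("thumb_template.png", cv2.IMREAD_GRAYSCALE)

    best = None
    for angle in range(0, 360, 15):
        h, w = template.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(template, M, (w, h))
        result = cv2.matchTemplate(img, rotated, cv2.TM_SQDIFF)
        # With TM_SQDIFF the lowest value marks the best fit.
        min_val, _, min_loc, _ = cv2.minMaxLoc(result)
        if best is None or min_val < best[0]:
            best = (min_val, angle, min_loc)

    score, best_angle, location = best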
There are tons of rotation- and scaling-independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar-feature-based classifiers, or one of the others available in OpenCV.
I think you can get excellent results if you just compute the two points with the longest shortest path going through white (i.e. the blob's graph diameter). The direction in which the thumb is pointing is just the direction of the line that joins those two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
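A sketch with scipy's Floyd-Warshall, sampling the white area on a coarse grid (the stride and connectivity threshold are assumptions):

    import numpy as np
    from scipy.sparse.csgraph import floyd_warshall
    from scipy.spatial.distance import cdist

    def thumb_axis(mask, stride=8):
        """Two sampled white points with the longest shortest path
        that stays inside the white blob (its graph diameter)."""
        ys, xs = np.nonzero(mask[::stride, ::stride])
        pts = np.column_stack([xs, ys]) * stride   # grid-sampled white pixels
        d = cdist(pts, pts)
        # Link samples close enough to be neighbors; 0 means "no edge".
        adj = np.where(d <= stride * 1.5, d, 0)
        sp = floyd_warshall(adj, directed=False)
        sp[np.isinf(sp)] = -1                      # ignore disconnected pairs
        i, j = np.unravel_index(np.argmax(sp), sp.shape)
        return pts[i], pts[j]                      # thumb axis endpoints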

How to divide a runtime procedural generated world into chunks

I've been thinking of making a top-down 2D game with a pseudo-infinite, procedurally generated world built at runtime. I've read several articles about procedural generation and, maybe I've misread or misunderstood them, but I have yet to come across one explaining how to divide the world into chunks (like Minecraft apparently does).
Obviously, I need to generate only the part of the world that the player can currently see. If my game is tile-based, for example, I could divide the world into n*n chunks. If the player were at the border of such a chunk, I would also generate the adjacent chunk(s).
What I can't figure out is how exactly to take a procedural world-generation algorithm and only apply it to one chunk at a time. For example, if I have an algorithm that generates a big structure (e.g. castle, forest, river) that would spread across many chunks, how can I adjust it to generate only one chunk, and afterwards the adjacent chunks?
I apologize if I completely missed something obvious. Thank you in advance!
Study the Midpoint displacement algorithm. Note that the points all along one side are based on the starting values of the corners. You can calculate them without knowing the rest of the grid.
I used this approach to generate terrain. I needed the edges of each 'chunk' of terrain to line up with the adjacent chunks. Using a variation of the Midpoint displacement algorithm I made it so that the height of each point along the edge of a chunk was calculated based only on values at the two corners. If I needed to add randomness, I seeded a random number generator with data from the two corners. This way, any two adjacent chunks could be generated independently and the edges were sure to match.
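A sketch of that seeding trick for one chunk edge: the RNG is derived only from the world seed and the two shared corners, so both neighboring chunks compute bit-identical edge heights (the hashing scheme is an assumption; any stable hash works):

    import numpy as np

    def edge_heights(pa, ha, pb, hb, world_seed, n=17):
        """Midpoint-displacement heights along a chunk edge, n = 2**k + 1.

        pa, pb: (x, y) world coordinates of the edge's shared corners.
        ha, hb: heights at those corners. Both chunks sharing the edge
        call this with the same corners and get identical results.
        """
        seed = hash((world_seed, min(pa, pb), max(pa, pb))) & 0xFFFFFFFF
        rng = np.random.default_rng(seed)

        h = np.empty(n)
        h[0], h[-1] = ha, hb
        step, scale = (n - 1) // 2, 1.0
        while step >= 1:
            for i in range(step, n - 1, 2 * step):
                h[i] = (h[i - step] + h[i + step]) / 2 + rng.uniform(-scale, scale)
            step //= 2
            scale /= 2                      # halve the displacement each level
        return h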
You can use height-map approaches for other things, too. Instead of height, the data could determine vegetation type, population density, etc. Instead of chunks of a height map where the hills and valleys match up, you can have a vegetation map where the forests match up.
It certainly takes some creative programming for any kind of complex world.

How to use the A* path finding algorithm on a grid less 2D plane?

How can I implement the A* algorithm on a gridless 2D plane with no nodes or cells? I need the object to maneuver around a relatively high number of static and moving obstacles in the way of the goal.
My current implementation is to create eight points around the object and treat them as the centers of imaginary adjacent squares that might be a potential position for the object. Then I calculate the heuristic function for each and select the best one. I calculate the distances between the starting point and the movement point, and between the movement point and the goal, the normal way, with the Pythagorean theorem. The problem is that this way the object often ignores all obstacles, and even more often gets stuck moving back and forth between two positions.
I realize how silly my question might seem, but any help is appreciated.
Create an imaginary grid at whatever resolution is suitable for your problem: as coarse-grained as possible for good performance, but fine-grained enough to find (desirable) gaps between obstacles. Your grid might also be organized as a quadtree over your obstacle objects.
Execute A* over the grid. The grid may even be pre-populated with useful information like proximity to static obstacles. Once you have a path along the grid squares, post-process that path into a sequence of waypoints wherever there's an inflection in the path. Then travel along the lines between the waypoints.
By the way, you do not need the actual distance (cf. your mention of the Pythagorean theorem): A* works fine with an estimate of the distance. Manhattan distance is a popular choice: |dx| + |dy|. If your game allows diagonal movement (or the grid is "fake"), then the Chebyshev distance max(|dx|, |dy|) is probably sufficient.
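A compact A* over such an imaginary grid (heapq-based; `blocked` is a hypothetical callback that rasterizes your obstacles at the grid resolution):

    import heapq

    def astar(start, goal, blocked):
        """A* on an implicit 8-connected grid of (x, y) integer cells."""
        def h(c):                           # Chebyshev heuristic, admissible here
            return max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))

        frontier = [(h(start), 0.0, start)]
        came_from, g = {}, {start: 0.0}
        while frontier:
            _, cost, cur = heapq.heappop(frontier)
            if cost > g.get(cur, float("inf")):
                continue                    # stale queue entry
            if cur == goal:                 # rebuild path start -> goal
                path = [cur]
                while cur in came_from:
                    cur = came_from[cur]
                    path.append(cur)
                return path[::-1]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == dy == 0:
                        continue
                    nxt = (cur[0] + dx, cur[1] + dy)
                    if blocked(nxt):
                        continue
                    ng = g[cur] + (1.4 if dx and dy else 1.0)
                    if ng < g.get(nxt, float("inf")):
                        g[nxt] = ng
                        came_from[nxt] = cur
                        heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
        return None                         # no path found

Post-processing the returned cells into waypoints wherever the direction changes then gives the line segments to travel along.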
Uh. The first thing that comes to my mind is that at each point you need to calculate the gradient, or a direction vector, to find the direction to go in the next step. Then you move by a small epsilon and repeat.
This basically creates a grid for you; you can vary the cell size by choosing a smaller epsilon. By doing this instead of using a fixed grid, you should be able to step at finer angles than the 45° of your 8-point example.
Theoretically you might even be able to solve the formulas symbolically (epsilon approaching 0), which could lead to an optimal solution... just a thought.
How are the obstacles represented? Are they polygons? You could then use the polygon vertices as nodes. If the obstacles are not represented as polygons, you could generate some sort of convex hull around them and use its vertices for navigation. EDIT: I just realized you mentioned that you have to navigate around a relatively high number of obstacles, so using the obstacle vertices might be infeasible with too many obstacles.
I don't know about moving obstacles; I believe A* doesn't find an optimal path with moving obstacles.
You mention that your object moves back and forth; A* should not do this. A* visits each movement point only once. This could be an artifact of generating movement points on the fly, or of the moving obstacles.
I remember encountering this problem in college, but we didn't use an A* search. I can't remember the exact details of the math but I can give you the basic idea. Maybe someone else can be more detailed.
We're going to create a potential field out of your playing area that an object can follow.
Take your playing field and tilt or warp it so that the start point is at the highest point, and the goal is at the lowest point.
Poke a potential well down into the goal, to reinforce that it's a destination.
For every obstacle, create a potential hill. For non-point obstacles, which yours are, the potential field can increase asymptotically at the edges of the obstacle.
Now imagine your object as a marble. If you placed it at the starting point, it should roll down the playing field, around obstacles, and fall into the goal.
The hard part, the math I don't remember, is the equations that represent each of these bumps and wells. If you figure that out, add them together to get your final field, then do some vector calculus to find the gradient (just like towi said) and that's the direction you want to go at any step. Hopefully this method is fast enough that you can recalculate it at every step, since your obstacles move.
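A sketch of such a field with one possible analytic form (the constants and the inverse-square hills are assumptions; the marble then just follows this vector in small steps, recomputed as the obstacles move):

    import numpy as np

    def descent_direction(p, goal, obstacles, k_goal=1.0, k_obs=5.0):
        """Downhill direction of a potential field at point p.

        goal: (2,) array. obstacles: list of ((2,) center, radius) pairs.
        The field is a constant slope toward the goal plus a hill around
        each obstacle that grows sharply near the obstacle's edge.
        """
        p = np.asarray(p, dtype=float)
        to_goal = goal - p
        force = k_goal * to_goal / (np.linalg.norm(to_goal) + 1e-9)
        for center, radius in obstacles:
            away = p - center
            gap = max(np.linalg.norm(away) - radius, 1e-3)  # distance to edge
            force += k_obs * away / (np.linalg.norm(away) * gap ** 2)
        return force

    # Each step: pos = pos + epsilon * descent_direction(pos, goal, obstacles)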
Sounds like you're implementing the Wumpus game based on Russell and Norvig's discussion of A* in Artificial Intelligence: A Modern Approach, or something very similar.
If so, you'll probably need to incorporate obstacle detection as part of your heuristic function (hence you'll need to have sensors that alert your agent to the signs of obstacles, as seen here).
To solve the back-and-forth issue, you may need to store the traveled path so you can tell if you've already been to a location, and have the heuristic function examine the past N moves (say, 4) and use that as a tie-breaker (i.e., if I can go north or east from here, and my last 4 moves have been east, west, east, west, go north this time).
