I have a road map represented as a directed graph of junctions and links leading from one junction to another; each link is weighted with its own traversal time (the time it takes to cross the link). I'm asked to find an algorithm that gets from junction A to junction B and back from B to A so that the total path cost (in time) is no more than 10% above the optimal path cost (that is, the path cost returned by the A* algorithm), while keeping the time overlap between the path to B and the path back from B to a minimum. That is, if t(x,y) represents the time to cross link (x,y), I need to minimize the sum of t(x,y) + t(y,x) over the links that overlap.
The algorithm should be complete and optimal for the problem at hand (it should also be efficient), and it should probably use some variant of A*, like A*epsilon and the like.
Does anyone have a clue how to go about this problem?
I was thinking of representing the states of this problem as (junction, flag), where the flag indicates whether the current node is part of a path that has already passed junction B, and the goal state is (A, True), and then using A*epsilon on this. But I don't know how to take the time overlap issue into account; I guess what I'm suggesting is not the intended way to solve this.
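To make that concrete, here is a minimal sketch of the state space I have in mind (Python; the graph encoding and names are just illustrative assumptions, and this still ignores the overlap cost entirely):

    # graph[u] is assumed to be a list of (v, t) pairs,
    # where t is the traversal time of link (u, v).
    def successors(graph, B, state):
        junction, passed_B = state
        for neighbor, t in graph[junction]:
            # The flag flips to True once the path reaches B, and stays True.
            yield (neighbor, passed_B or neighbor == B), t

    def is_goal(A, state):
        return state == (A, True)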
Any help would be greatly appreciated :)
In games like StarCraft you can have up to 200 units (per player) on a map.
There are small maps, but also big ones.
When you, for example, grab 50 units and tell them to go to the other side of the map, some algorithm kicks in and they find a path through the obstacles (rivers, hills, rocks and so on).
My question is: how does the game not slow down even though you have 50 paths to calculate? In the meantime other things are happening: drones are collecting minerals, buildings are being made, and so on. And if the map is big, it should be even harder and slower.
So even if the algorithm is good, it will take some time for 100 units.
Do you know how this works? Maybe the algorithm is similar in other games.
As I said, when you tell units to move you don't see any delay for calculating the path; they start running to the destination immediately.
The question is how they make the units go through the shortest path, but fast.
There is no delay in most of these games (StarCraft, WarCraft and so on).
Thank you.
I guess it just needs to subdivide the problem and memoize the results. Example: two units. Unit1 goes from A to C, but the shortest path goes through B. Unit2 goes from B to C.
B to C only needs to be calculated once and can be reused by both.
See https://en.m.wikipedia.org/wiki/Dynamic_programming
That Wikipedia page specifically mentions Dijkstra's algorithm for pathfinding, which works by subdividing the problem and storing results to be reused.
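As a rough illustration (Python sketch; the graph encoding is an assumption): when many units share one destination, a single Dijkstra run outward from that destination gives every unit its distance in one table, so nothing is computed twice:

    import heapq

    def dijkstra_to_goal(graph, goal):
        # graph[u] = list of (v, cost); assumes a symmetric map, so
        # distances from the goal equal distances to the goal.
        dist = {goal: 0}
        heap = [(0, goal)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale heap entry
            for v, cost in graph[u]:
                if d + cost < dist.get(v, float('inf')):
                    dist[v] = d + cost
                    heapq.heappush(heap, (d + cost, v))
        return dist  # every unit heading for `goal` reads from this one table

Each unit can then just step to the neighbor with the smallest dist value; Unit1 and Unit2 from the example share all of the B-to-C work.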
There is also a pretty good-looking alternative here: http://www.gamasutra.com/blogs/TylerGlaiel/20121007/178966/Some_experiments_in_pathfinding__AI.php - it takes dynamic things like obstacles into account and still performs very well (video demo: https://www.youtube.com/watch?v=z4W1zSOLr_g).
Another interesting technique takes a completely different approach:
calculate the shortest path from the goal position to every point on the map. See the full explanation here: https://www.youtube.com/watch?v=Bspb9g9nTto - although this one is inefficient for large maps.
First of all, 100 units is not such a large number; pathfinding is fast enough on modern computers that it is not a big resource sink. Even in older games, optimizations were made to make it faster still, and you can see that units will sometimes get lost or stuck, which shouldn't really happen with a general algorithm like A*.
If the map does not change, you can preprocess it to build a set of nodes representing regions of the map. For example, if the map is two islands connected by a narrow bridge, there would be three "regions": island 1, island 2, and the bridge. In reality you would probably do this with some graph algorithm, not manually. For instance (a code sketch follows these steps):
Score every tile with distance to nearest impassable tile.
Put all adjacent tiles with score above the threshold in the same region.
When done, gradually expand outwards from all regions to encompass low-score tiles as well.
Make a new graph where each region-region intersection is a node, and calculate shortest paths between them.
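A minimal sketch of steps 1 and 2 above (Python; the boolean tile grid and the threshold value are assumptions):

    from collections import deque

    def build_regions(grid, threshold=2):
        # grid[y][x] is True if the tile is passable.
        h, w = len(grid), len(grid[0])
        INF = float('inf')

        # Step 1: score every tile with its distance to the nearest
        # impassable tile (multi-source BFS from all impassable tiles).
        score = [[INF] * w for _ in range(h)]
        q = deque()
        for y in range(h):
            for x in range(w):
                if not grid[y][x]:
                    score[y][x] = 0
                    q.append((x, y))
        while q:
            x, y = q.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and score[ny][nx] == INF:
                    score[ny][nx] = score[y][x] + 1
                    q.append((nx, ny))

        # Step 2: flood-fill adjacent tiles above the threshold into regions.
        region = [[None] * w for _ in range(h)]
        n_regions = 0
        for y in range(h):
            for x in range(w):
                if score[y][x] >= threshold and region[y][x] is None:
                    q = deque([(x, y)])
                    region[y][x] = n_regions
                    while q:
                        cx, cy = q.popleft()
                        for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                       (cx, cy + 1), (cx, cy - 1)):
                            if (0 <= nx < w and 0 <= ny < h
                                    and score[ny][nx] >= threshold
                                    and region[ny][nx] is None):
                                region[ny][nx] = n_regions
                                q.append((nx, ny))
                    n_regions += 1
        return score, region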
Then your pathfinding algorithm becomes two-stage (sketched after the steps below):
Find which region the unit is in.
Find which region the target is in.
If they are in different regions, calculate the shortest path to the target region first, using the region graph from above.
Once in the same region, calculate path normally on the tile grid.
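A sketch of the two stages, where region_of, region_astar and tile_astar are hypothetical helpers standing in for your region lookup and the two ordinary A* searches:

    def hierarchical_path(unit_pos, target_pos, region_of, region_graph,
                          region_astar, tile_astar):
        src, dst = region_of(unit_pos), region_of(target_pos)
        if src == dst:
            # Stage 2 only: plain A* on the tile grid within one region.
            return tile_astar(unit_pos, target_pos)
        # Stage 1: coarse plan on the region graph; region_astar is assumed
        # to return a list of regions starting at src, so [1] is the next hop.
        next_region = region_astar(region_graph, src, dst)[1]
        # Refine just the first leg; later legs are planned on arrival.
        entry_tile = region_graph[src][next_region]  # border tile between regions
        return tile_astar(unit_pos, entry_tile)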
When moving between distant locations, this should be much faster, because you are now searching through a handful of nodes (on the region graph) plus a relatively small number of tiles, instead of the hundreds of tiles that comprise those regions. For example, if we have three islands A, B, and C, with bridges 1 and 2 connecting A-B and B-C respectively, then units moving from A to C don't really need to search all of B every time; they only care about the shortest way from bridge 1 to bridge 2. If you have a lot of islands, this can really speed things up.
Of course, the problem is that regions may change due to, for instance, buildings blocking a path or units temporarily obstructing a passageway. The solution to this is up to your imagination. You could try to carefully update the region graph every time the map is altered, if the map is rarely altered in your game. Or you could just let units naively trust the region graph until they bump into an obstacle. In some games you can see particularly bad cases of the latter: a unit will continue running towards a valley even after it has been walled off, and only after hitting the wall will it turn back and go around. I think the original StarCraft had this issue when units blocked a narrow path: they would try to take a really long detour instead of waiting for the crowd to free up the bridge.
There are also algorithms that accomplish analogous optimizations without explicitly building the region graph; for instance, JPS (jump point search) works roughly this way.
We are trying to find a way to create a full distance matrix in a Neo4j database, where distance is defined as the length of the shortest path between any two nodes. Of course there is the shortestPath method, but looping over all pairs of nodes and calculating their shortestPaths gets very slow. We are explicitly not talking about allShortestPaths, because that returns all shortest paths between two specific nodes.
Is there a specific method or approach that is fast for a large number of nodes (>30k)?
Thank you!
j.
There is no easier method; the full distance matrix will take a long time to build.
As you've described it, the full distance matrix must contain the shortest path between every pair of nodes, which means you will have to compute that information at some point. Running a single-source shortest-path algorithm from each node is the only way to do this, and the complexity will be O(n) multiplied by the complexity of that algorithm.
But you can cut down on the runtime with a dynamic programming solution. For instance, if you are trying to find the shortest path between (A) and (C), and have already calculated the shortest path from (B) to (C), then if you happen to encounter (B) while pathfinding from (A), you do not need to recalculate the rest of the cost of that path; it is already known.
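As a sketch of the in-memory version (Python; assumes you can export the graph as an adjacency dict, which at >30k nodes should still fit comfortably): one Dijkstra run per node fills a full row of the matrix, so no node pair is ever solved twice:

    import heapq

    def distance_matrix(graph):
        # graph[u] = list of (v, weight) pairs.
        def dijkstra(src):
            dist = {src: 0}
            heap = [(0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist[u]:
                    continue  # stale heap entry
                for v, w in graph[u]:
                    if d + w < dist.get(v, float('inf')):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist
        # One single-source run per node amortizes all pairwise queries.
        return {u: dijkstra(u) for u in graph}

Pulling the graph out of Neo4j once and doing this in memory is usually far faster than issuing a shortestPath call for each of the ~30k^2 pairs.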
However, a dynamic programming solution of any reasonable complexity will almost certainly be best done in a separate module for Neo4j, packaged as a plugin. If this is a one-time operation, or one that won't be run frequently, it might be easier to just do the naive thing and call shortestPath between each pair; but if you plan to run it fairly frequently on dynamic data, it might be worth authoring a custom plugin. It totally depends on your needs.
No matter what, though, it will take some time to calculate. The dynamic programming solution will cut down on the time greatly (especially in a densely-connected graph), but it will still not be very fast.
What is the end game? Is this a one-time query that sets some property or creates new edges, or a recurring, frequent effort? If it's one-time, you might create edges between the two nodes at each step, building a transitive-closure environment. Each edge would point between the two nodes and have the distance as a property.
Thus, if the path is a>b>c>d, you would create the edges
a>b 1
a>c 2
a>d 3
b>c 1
b>d 2
c>d 1
The edges could be named distinctively to distinguish them from the original path edges. This could create circular paths, which may either negate this strategy or require a constraint; if you are dealing with directed acyclic graphs, it would work well.
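Generating those edges from a given path is straightforward; a Python sketch (writing them back as Neo4j relationships with a distance property is left to whatever driver you use):

    def closure_edges(path):
        # For a path like a>b>c>d, yield (from, to, distance) for every
        # ordered pair of nodes along it.
        for i, u in enumerate(path):
            for j in range(i + 1, len(path)):
                yield u, path[j], j - i

    print(list(closure_edges(['a', 'b', 'c', 'd'])))
    # [('a','b',1), ('a','c',2), ('a','d',3), ('b','c',1), ('b','d',2), ('c','d',1)]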
I am struggling to find one efficient algorithm that will give me all possible paths between two nodes in a directed graph.
I found the RGL gem, the fastest so far in terms of calculations. I am able to find the shortest path using Dijkstra's shortest-path algorithm from the gem.
I googled, and in spite of finding many solutions (Ruby and non-Ruby), I either couldn't convert the code or the code took forever to calculate (it was inefficient).
I am here primarily to ask whether someone can suggest how to find all paths by using or tweaking the various algorithms from the RGL gem itself (if possible), or in some other efficient way.
The input for the directed graph can be an array of arrays:
[[1,2], [2,3], ..]
P.S.: Just to avoid negative votes/comments: unfortunately I don't have the inefficient code snippet to show, as I discarded it days ago and didn't save it anywhere for the record or to reproduce here.
The main problem is that the number of paths between two nodes grows exponentially with the overall number of nodes. Thus any algorithm that finds all paths between two nodes will be very slow on larger graphs.
Example:
As an example, imagine a grid of n x n nodes, each connected to its 4 neighbors. Now you want to find all paths from the bottom-left node to the top-right node. Even when you only allow moves to the right (r) and moves up (u), each resulting path can be described by a string of length 2n with an equal number of r's and u's. This gives you "2n choose n" possible paths (ignoring other moves and cycles).
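To make the blow-up concrete, here is a sketch (Python) of a DFS that enumerates every simple path from the question's [[1,2], [2,3], ...] input format; the enumeration itself is easy to write, it's the output size that kills you:

    from math import comb

    def all_simple_paths(edges, src, dst):
        # Build an adjacency dict from the array-of-arrays input.
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)

        def dfs(u, path):
            if u == dst:
                yield list(path)
                return
            for v in adj.get(u, []):
                if v not in path:  # skip anything that would form a cycle
                    path.append(v)
                    yield from dfs(v, path)
                    path.pop()

        yield from dfs(src, [src])

    # Even restricted to right/up moves, an n-by-n grid already has
    # comb(2n, n) paths from corner to corner:
    print([comb(2 * n, n) for n in (5, 10, 20)])  # 252, 184756, 137846528820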
I am learning about the SARSA algorithm's implementation and have a question. I understand that the general "learning" step takes the form of:
Robot (r) is in state s. There are four actions available:
North (n), East (e), West (w) and South (s)
such that the list of Actions,
a = {n,w,e,s}
The robot randomly picks an action, and updates as follows:
Q(s,a) = Q(s,a) + L[r + D·Q(s',a') - Q(s,a)]
where L is the learning rate, r is the reward associated with (s,a), Q(s',a') is the expected reward from taking action a' in the new state s', and D is the discount factor.
Firstly, I don't understand the role of the term -Q(s,a): why are we re-subtracting the current Q-value?
Secondly, when picking actions a and a', why do these have to be random? I know that in some implementations of SARSA all possible Q(s',a') are taken into account and the highest value is picked (I believe this is epsilon-greedy?). Why not do this also to pick which Q(s,a) value to update? Or why not update all Q(s,a) for the current s?
Finally, why is SARSA limited to a one-step lookahead? Why not also look at a hypothetical Q(s'',a'')?
I guess overall my questions boil down to what makes SARSA better than a breadth-first or depth-first search algorithm.
Why do we subtract Q(s,a)? r + D·Q(s',a') is the reward that we got on this run through for taking action a in state s. In theory, this is the value that Q(s,a) should be set to. However, we won't always take the same action after getting to state s', and the rewards associated with future states will change over time. So we can't just set Q(s,a) equal to r + D·Q(s',a'). Instead, we just want to push it in the right direction so that it eventually converges on the right value. So we look at the error in prediction, which requires subtracting Q(s,a) from r + D·Q(s',a'). This is the amount by which we would need to change Q(s,a) to make it perfectly match the reward we just observed. Since we don't want to do that all at once (we don't know whether this will always be the best option), we multiply this error term by the learning rate, L, and add the result to Q(s,a) for a more gradual convergence on the correct value.
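In code, that entire paragraph is a two-liner (Python sketch; the table layout as a dict keyed by (state, action) is an assumption):

    from collections import defaultdict

    def sarsa_update(Q, s, a, r, s_next, a_next, L=0.1, D=0.9):
        # Prediction error: observed return minus the current estimate.
        td_error = r + D * Q[(s_next, a_next)] - Q[(s, a)]
        Q[(s, a)] += L * td_error  # nudge the estimate by a fraction L

    Q = defaultdict(float)  # Q[(state, action)], all zeros to start
    sarsa_update(Q, 'cell_1', 'n', r=1.0, s_next='cell_2', a_next='e')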
Why do we pick actions randomly? The reason not to always pick the next state or action deterministically is basically that our guess about which state is best might be wrong. When we first start running SARSA, we have a table full of 0s. We put non-zero values into the table by exploring those areas of the state space and finding that there are rewards associated with them. As a result, something not terrible that we have explored will look like a better option than something that we haven't explored. Maybe it is. But maybe the thing that we haven't explored yet is actually way better than anything we've seen. This is called the exploration vs. exploitation problem: if we just keep doing things that we know work, we may never find the best solution. Choosing next steps randomly ensures that we see more of our options.
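The usual compromise is epsilon-greedy selection, something like this sketch (reusing a Q table like the one above; the 0.1 is an arbitrary choice):

    import random

    def choose_action(Q, s, actions, epsilon=0.1):
        # With probability epsilon, explore a random action;
        # otherwise exploit the best-known one in state s.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])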
Why can't we just take all possible actions from a given state? That would force us to look at essentially the entire learning table on every iteration. If we're using something like SARSA to solve the problem, the table is probably too big to do this in a reasonable amount of time.
Why can SARSA only do one-step look-ahead? Good question. The idea behind SARSA is that it's propagating expected rewards backwards through the table. The discount factor, D, ensures that in the final solution you'll have a trail of gradually increasing expected rewards leading to the best reward. If you filled in the table at random, this wouldn't always be true. This doesn't necessarily break the algorithm, but I suspect it leads to inefficiencies.
Why is SARSA better than search? Again, this comes down to efficiency. The fundamental reason anyone uses learning algorithms rather than search algorithms is that search algorithms become too slow once you have too many possible states and actions. To know the best action to take from every state-action pair (which is what SARSA calculates), you would need to do a search of the entire graph from every node. This would take O(s*(s+a)) time. If you're trying to solve real-world problems, that's generally too long.
First of all, for those of you who don't know: an anytime algorithm is an algorithm that gets as input the amount of time it can run, and should give the best solution it can find in that time.
Weighted A* is the same as A*, with one difference in the f function
(where g is the path cost up to a node, and h is the heuristic estimate from that node to a goal):
Original: f(node) = g(node) + h(node)
Weighted: f(node) = (1-w)*g(node) + h(node)
My anytime algorithm runs weighted A* with the weight w decaying from 1 to 0.5 until it reaches the time limit.
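Here is roughly what my loop looks like (a sketch; weighted_astar and the solution objects are simplified stand-ins for my actual code):

    import time

    def anytime_weighted_astar(start, goal, weighted_astar, time_limit=10.0):
        # Keep re-running weighted A* with a decaying weight,
        # remembering the best solution found before the deadline.
        deadline = time.monotonic() + time_limit
        best, w = None, 1.0
        while w >= 0.5 and time.monotonic() < deadline:
            solution = weighted_astar(start, goal, w)  # uses f = (1-w)*g + h
            if solution is not None and (best is None or solution.cost < best.cost):
                best = solution
            w -= 0.05  # the decay schedule is arbitrary
        return best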
My problem is that most of the time it takes a lot of time to reach a solution, and when given something like 10 seconds it usually doesn't find one at all, while other algorithms like anytime beam search find one in 0.0001 seconds.
Any ideas what to do?
If I were you I'd throw the unbounded heuristic away. Admissible heuristics are much better in that given a weight value for a solution you've found, you can say that it is at most 1/weight times the length of an optimal solution.
A big problem when implementing A* derivatives is the data structures. When I implemented a bidirectional search, just changing from array lists to a combination of hash-augmented priority queues, with array lists built on demand, cut the runtime by three orders of magnitude - literally.
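Concretely, something like this for the open list instead of scanning an array list (a Python sketch; the lazy-deletion scheme is just one of several ways to do it):

    import heapq
    from itertools import count

    class OpenList:
        # A heap for ordering by f, plus a dict for O(1) membership and
        # best-f lookups; stale heap entries are skipped lazily on pop.
        def __init__(self):
            self._heap = []
            self._best = {}      # state -> best f pushed so far
            self._tie = count()  # tie-breaker so states are never compared

        def push(self, state, f):
            if f < self._best.get(state, float('inf')):
                self._best[state] = f
                heapq.heappush(self._heap, (f, next(self._tie), state))

        def pop(self):
            while self._heap:
                f, _, state = heapq.heappop(self._heap)
                if self._best.get(state) == f:  # otherwise stale; skip it
                    del self._best[state]
                    return state, f
            return None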
The main problem is that most of the papers only give pseudo-code for the algorithm using set logic; it's up to you to actually figure out how to represent the sets in your code. Don't be afraid of using multiple ADTs for a single list, i.e. your open list. I'm not 100% sure about Anytime Weighted A*; I've done other derivatives such as Anytime Dynamic A* and Anytime Repairing A*, but not AWA*.
Another issue is that when you set the g-value too low, it can sometimes take far longer to find any solution than it would with a higher g-value. A common pitfall is forgetting to check your closed list for duplicate states, thus ending up in a loop (an infinite one if your g-value gets reduced to 0). I'd try starting with something reasonably higher than 0 if you're getting quick results with a beam search.
Some pseudo-code would likely help here! Anyhow, these are just my thoughts on the matter; you may have solved it already - if so, good on you :)
Beam search is not complete, since it prunes unfavorable states, whereas A* search is complete. Depending on the problem you are solving, if incompleteness does not prevent you from finding a solution (usually many correct paths exist from origin to destination), then go for beam search; otherwise, stay with AWA*. However, you can always run both in parallel if there are sufficient hardware resources.