Modifying Dijkstra to find the shortest path with at most K coloured nodes - graph-algorithm

I just saw a solution to a question about modifying Dijkstra to get the shortest path with at most K coloured edges. I am wondering: if we want the shortest path with at most K coloured nodes instead of edges, how would we modify Dijkstra to do the trick?
What I came up with is that, on top of Dijkstra, I add an integer variable, say i. Then I make a map to record how many coloured nodes it takes to get to each vertex, and if there is a way that passes through fewer coloured nodes, I update it. We then take the path with the fewest coloured nodes. But something about this seems wrong; any suggestions?
Algorithm Dijkstra(G, s in V(G), c(v) in {black, white}, K)
    for each vertex u in V(G) do dist[u] <- +infinity
    dist[s] <- 0; p[s] <- null
    r <- (c(s) = black ? 1 : 0)
    Q <- ConstructMinHeap(V(G), dist)
    M <- Map(s, r)
    while Q != null do
        u <- DeleteMin(Q)
        for each v in Adj[u] do
            if M.value(u) is null then
                M <- Map(u, M.value(v) + (c(u) = black ? 1 : 0))
            else if M.value(u) < M.value(v) + (c(u) = black ? 1 : 0) then
                update M.value(u)
            end-if
            if dist[v] > dist[u] + w(u,v) and M.value < K then
                dist[v] <- dist[u] + w(u,v)
                p[v] <- u
                UpHeap(v, Q)
            end-if
        end-for
    end-while
end

If you use a priority queue to rank your options, consider using both the distance so far and the number of coloured nodes passed through to determine the order of priority. In this manner, you can keep the traditional Dijkstra structure and let your priority ranking determine the minimum path.
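A minimal Python sketch of this idea, assuming the graph is an adjacency dict graph[u] = [(v, weight), ...], colour[v] is 'black' or 'white', and the goal is the shortest s-t path that visits at most K black nodes. The coloured-node count is made part of the search state, so the heap ranks entries by distance first and black-count second (the names here are illustrative, not from the question):

import heapq

def dijkstra_max_k_black(graph, colour, s, t, K):
    """Shortest s-t path that visits at most K black nodes.
    The search state is (node, number of black nodes used so far)."""
    start_blacks = 1 if colour[s] == 'black' else 0
    if start_blacks > K:
        return None                      # the source alone already exceeds the budget
    best = {(s, start_blacks): 0}        # best distance per (node, blacks-used) state
    pq = [(0, start_blacks, s)]
    while pq:
        d, blacks, u = heapq.heappop(pq)
        if u == t:
            return d                     # first time t is popped, d is optimal
        if d > best.get((u, blacks), float('inf')):
            continue                     # stale heap entry
        for v, w in graph[u]:
            nb = blacks + (1 if colour[v] == 'black' else 0)
            if nb > K:
                continue                 # would exceed the coloured-node budget
            nd = d + w
            if nd < best.get((v, nb), float('inf')):
                best[(v, nb)] = nd
                heapq.heappush(pq, (nd, nb, v))
    return None                          # no path within the budget

Because each (node, blacks-used) pair is its own state, the usual argument that a popped label is final still applies, so the first time the target is popped its distance is optimal within the budget.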

Related

How do I extract the trees from a graph using Answer Set Programming?

There is an undirected graph (V,E), weights on the edges w : E → N, a target k ∈ N, and a threshold O ∈ N. Find a k-vertex tree of the graph with weight less than the threshold. In other words, select k vertices and k - 1 edges from V and E respectively such that they constitute a tree, and the sum of the weights of the selected edges is less than O.
Write an ASP program that takes V , E, w, k, and O as input, and finds
a selection of edges satisfying the constraints, or outputs
‘unsatisfiable’ if the constraints cannot be satisfied. Selecting the
edges implicitly induces a selection of the vertices, so there is no
need for the selected vertices to be explicitly displayed.
An instance to this problem is provided through predicates vertex/1,
weight/3, target/1, and threshold/1. All edges have weights, so
statements of the form weight(a, b, 10). can be used to declare the
existence of an edge between vertices a and b at the same time as
declaring their weight, and there is no need for any redundant edge/2
predicate.
I tried the following:
% instance
vertex ( v1 ). vertex ( v2 ). vertex ( v3 ).
vertex ( v4 ). vertex ( v5 ). vertex ( v6 ).
vertex ( v7 ). vertex ( v8 ). vertex ( v9 ).
weight ( v1 , v2 ,3). weight ( v1 , v3 ,3).
weight ( v2 , v4 ,1). weight ( v2 , v5 ,5).
weight ( v3 , v4 ,3). weight ( v3 , v6 ,4).
weight ( v4 , v5 ,4). weight ( v4 , v7 ,1).
weight ( v5 , v7 ,7).
weight ( v6 , v7 ,2). weight ( v6 , v8 ,2).
weight ( v7 , v9 ,3).
weight ( v8 , v9 ,2).
target (4).
threshold (4).
% encoding
(P-1) {select(X, Y) : weight(X, Y, Z)} (Q-1) :- target(P), target(Q).
sum(S) :- S = #sum {W,X,Y : select(X,Y), weight(X,Y,W); W,X,Z : select(X,Z), weight(X,Z,W) }.
:- sum(S),threshold(M), S > M.
:- select(A,B), select(C,D), A == C ; A == D ; B == C ; B == D.
#show select/2.
And I get the following output:
clingo version 5.5.0
Reading from stdin
Solving...
Answer: 1
select(v2,v4) select(v4,v7) select(v6,v7)
Answer: 2
select(v2,v4) select(v4,v7) select(v6,v8)
Answer: 3
select(v2,v4) select(v4,v7) select(v8,v9)
SATISFIABLE
Models : 3
Calls : 1
Time : 0.013s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.000s
I was expecting just
select(v2,v4) select(v4,v7) select(v6,v7)
because the others clearly are not trees.
I think this is because of the problematic line:
:- select(A,B), select(C,D), A == C ; A == D ; B == C ; B == D.
How do I correct this?
Ok, that was rather complicated. I'm pretty sure my solution is not perfect; I'm a beginner too.
Before we start with the code, let's look at the question once more: the requirement is to select k nodes and k-1 edges. If you think about it a bit, such a selection can form exactly two patterns: a single connected tree, or several disconnected components of which at least one contains a cycle. So if you make sure there is no cycle, you get one connected tree.
I added some nodes to the facts to check whether a tree was formed or whether the cheap disconnected cycle was found, and for that I had to raise target and threshold.
#const n = 5.
vertex ( v1; v2; v3; v4; v5; v6; v7; v8; v9 ).
vertex ( m1; m2; m3 ).
weight ( v1 , v2 ,3). weight ( v1 , v3 ,3).
weight ( v2 , v4 ,1). weight ( v2 , v5 ,5).
weight ( v3 , v4 ,3). weight ( v3 , v6 ,4).
weight ( v4 , v5 ,4). weight ( v4 , v7 ,1).
weight ( v5 , v7 ,7).
weight ( v6 , v7 ,2). weight ( v6 , v8 ,2).
weight ( v7 , v9 ,3).
weight ( v8 , v9 ,2).
weight ( m1 , m2 ,0).
weight ( m2 , m3 ,0).
weight ( m3 , m1 ,0).
target (n).
threshold (6).
And now comes the code, followed by an explanation.
% select a subset of nodes and edges
(P) {select(X) : vertex(X)} (P) :- target(P).
(P-1) {select(X, Y) : weight(X, Y, Z)} (Q-1) :- target(P), target(Q).
% position does not matter in an undirected graph.
directed(A,B):-select(A,B).
directed(B,A):-select(A,B).
% every endpoint of a selected edge must be a selected node
:- directed(A,_), vertex(A), not select(A).
% for every selected node there exists at least one edge
:- select(A), {directed(A,B):vertex(B)}0.
% select a direction for each selected edge
{dir(A,B);dir(B,A)}==1 :- select(A,B).
% force them in an order
{ found(X,1..n) } == 1 :- select(X).
{ found(X,N):select(X) } == 1 :- N = 1..n.
% reject if one edge does not follow the order
:- found(X,NX), found(Y,NY), dir(X,Y), NY<NX.
% reject if 2 different edges end in the same vertex
:- dir(X,Z), dir(Y,Z), X!=Y.
:- threshold(M), M < #sum {W,X,Y : select(X,Y), weight(X,Y,W); W,X,Z : select(X,Z), weight(X,Z,W) }.
#show select/2.
Explanation:
To make things easier for me, I track the selected vertices in the select/1 predicate.
Since dealing with undirected graphs always requires checking both orderings of an edge, I added the directed/2 predicate, which is a directed-graph version of the selected edges.
Next, I made sure every selected vertex has a selected edge and vice versa.
Now comes the complicated part: detecting cycles. For this I forced every selected edge into one of its two directions using the predicate dir/2; testing for a tree is easier in a directed graph.
Next, I put an order found/2 on the vertices. The directed edges dir/2 are only allowed to follow this order, which forces cycles into a recognisable pattern.
Now comes the cycle destroyer: if the selected graph has a cycle, then two edges from dir/2 will end in the same vertex. REJECT. If this was just an unlucky guess from clingo, it will find a luckier guess which fulfills this criterion.
Computation of the sum was copied from your code.
The output is 16 copies of
select(v2,v4) select(v4,v7) select(v6,v7) select(v6,v8)
The duplicates come from the fact that the order of the vertices in found/2 can differ while still producing the same selection.
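If you want to double-check a selection outside the solver, a small hypothetical union-find helper in Python can test whether a selected edge set is a single tree; it mirrors the "no cycle, and the edges touch exactly k vertices" argument above (names are illustrative):

def is_k_tree(selected_edges):
    """True iff the selected edges form one connected, acyclic tree,
    i.e. no cycle and they touch exactly |edges| + 1 vertices."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in selected_edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                    # joining an already-connected pair closes a cycle
        parent[ra] = rb

    return len(parent) == len(selected_edges) + 1   # one component touching k vertices

# answer 1 from the question is a tree, answer 2 is not
print(is_k_tree([("v2", "v4"), ("v4", "v7"), ("v6", "v7")]))   # True
print(is_k_tree([("v2", "v4"), ("v4", "v7"), ("v6", "v8")]))   # False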

How can I modify Dijkstra's algorithm to get the longest path most of the time?

I know that finding the longest path is an NP Hard problem.
What was asked of us was to change Dijkstra's algorithm to find the longest path by adding another parameter to the algorithm. A parameter like the distance from the source to the given vertex, the vertex's predecessors, successors, number of edges... For example, we could extract the vertex from the queue depending on a parameter other than the max distance, or we could add another queue...
What we did first was to change the initialization so that all vertex distances = 0 except the source node, which = infinity. Then we extracted from the queue the vertex with the biggest distance. Finally, we inverted the relaxation sign, so that a vertex saves the new distance if it is bigger than its current distance.
What parameter could I add that would improve Dijkstra's performance in finding a longest path? It doesn't have to work 100% of the time.
This is Dijkstra's algorithm:
ShortestPath(G, v)
    init D array entries to infinity
    D[v] = 0
    add all vertices to priority queue Q
    while Q not empty do
        u = Q.removeMin()
        for each neighbor, z, of u in Q do
            if D[u] + w(u,z) < D[z] then
                D[z] = D[u] + w(u,z)
                Change key of z in Q to D[z]
    return D as shortest path lengths
This is our modified version:
ShortestPath(G, v)
    init D array entries to 0
    D[v] = infinity    // source vertex
    add all vertices to priority queue Q
    while Q not empty do
        u = Q.removeMax()
        for each neighbor, z, of u in Q do
            if D[u] + w(u,z) > D[z] then
                D[z] = D[u] + w(u,z)
                Change key of z in Q to D[z]
    return D as longest path lengths
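For concreteness, here is a small runnable Python sketch of this greedy "max-Dijkstra" idea, assuming an adjacency dict graph[u] = [(v, w), ...]. Note one deliberate change from the pseudocode above: the source starts at 0 and every other label at minus infinity, so the labels stay finite after relaxation. It remains a heuristic and does not guarantee the true longest simple path.

def longest_path_heuristic(graph, source):
    """Greedy 'max-Dijkstra': repeatedly extract the remaining vertex with the
    largest tentative label and relax its edges with > instead of <."""
    D = {u: float('-inf') for u in graph}
    D[source] = 0                       # source at 0 keeps labels finite
    remaining = set(graph)              # plays the role of the priority queue Q
    while remaining:
        u = max(remaining, key=lambda x: D[x])   # removeMax
        remaining.remove(u)
        if D[u] == float('-inf'):
            break                       # everything left is unreachable
        for v, w in graph[u]:
            if v in remaining and D[u] + w > D[v]:
                D[v] = D[u] + w         # inverted relaxation
    return D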

minimum weight shortest path for all pairs of vertices with exactly k blue edges

Basically, a subset of the graph's edges are blue.
I know how to find all-pairs shortest paths with DP in O(n^3), but how do I account for the colour and for the exact number of blue edges required by the problem?
This can be done in O(k^2 · n^3) using a variant of Floyd-Warshall.
Instead of keeping track of the minimum path weight d(i,j) between two nodes i and j, you keep track of the minimum path weight d(i,j,r) for paths between i and j with exactly r blue edges, for 0 ≤ r ≤ k.
The update step when examining paths through a node m, where normally d(i,j) would be updated with the sum of d(i,m) and d(m,j) if it is smaller, becomes:
for u: 0 .. k
    for v: 0 .. (k-u)
        s = d(i,m,u) + d(m,j,v)
        d(i,j,u+v) = min( s, d(i,j,u+v) )
At the end, you then read off d(i,j,k).
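A Python sketch of this variant, assuming the graph comes as an n×n weight matrix w (None where there is no edge) and a parallel boolean matrix blue marking the blue edges; the names are illustrative:

INF = float('inf')

def all_pairs_exact_k_blue(w, blue, k):
    """d[i][j][r] = minimum weight of an i-j path using exactly r blue edges."""
    n = len(w)
    d = [[[INF] * (k + 1) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        d[i][i][0] = 0                          # empty path, zero blue edges
    for i in range(n):
        for j in range(n):
            if w[i][j] is not None:
                r = 1 if blue[i][j] else 0      # a single edge uses 0 or 1 blue edges
                d[i][j][r] = min(d[i][j][r], w[i][j])
    for m in range(n):                          # intermediate node
        for i in range(n):
            for j in range(n):
                for u in range(k + 1):
                    if d[i][m][u] == INF:
                        continue
                    for v in range(k + 1 - u):
                        s = d[i][m][u] + d[m][j][v]
                        if s < d[i][j][u + v]:
                            d[i][j][u + v] = s
    return d                                    # read off d[i][j][k] at the end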

Heuristic for finding the shortest path on a map

I have used the Dijkstra algorithm for finding the shortest path between two stations on a map. The cost of going from one station to another is the same at every link.
However, the problem is that Dijkstra tries to find the least cost path from the source to all the stations. I want the search to stop once the least cost path to the destination has been found.
So, I decided to use an A* algorithm for this. But I am not able to think of a good heuristic in this case. What can I possibly use as a heuristic?
If you only "want the search to stop once the least cost path to the destination has been found", Dijkstra's algorithm can already do that efficiently. You can have the algorithm return once the target node's status changes from "grey" to "final". Quoting Wikipedia:
If we are only interested in a shortest path between vertices source
and target, we can terminate the search at line 13 if u = target.
 1  function Dijkstra(Graph, source):
 2      dist[source] := 0                    // Distance from source to source
 3      for each vertex v in Graph:          // Initializations
 4          if v ≠ source
 5              dist[v] := infinity          // Unknown distance function from source to v
 6              previous[v] := undefined     // Previous node in optimal path from source
 7          end if
 8          add v to Q                       // All nodes initially in Q
 9      end for
10
11      while Q is not empty:                // The main loop
12          u := vertex in Q with min dist[u]    // Source node in first case
13          remove u from Q
14
15          for each neighbor v of u:        // where v has not yet been removed from Q.
16              alt := dist[u] + length(u, v)
17              if alt < dist[v]:            // A shorter path to v has been found
18                  dist[v] := alt
19                  previous[v] := u
20              end if
21          end for
22      end while
23      return dist[], previous[]
24  end function
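For example, the early-exit version of the pseudocode above could look like the following Python sketch (assuming an adjacency dict graph[u] = [(v, weight), ...]; names are illustrative):

import heapq

def dijkstra_early_exit(graph, source, target):
    """Dijkstra that stops as soon as `target` is removed from the queue,
    at which point its distance label is final."""
    dist = {source: 0}
    prev = {source: None}
    pq = [(0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue                    # stale queue entry
        done.add(u)
        if u == target:
            return d, prev              # early exit: dist[target] cannot improve
        for v, w in graph[u]:
            alt = d + w
            if alt < dist.get(v, float('inf')):
                dist[v] = alt
                prev[v] = u
                heapq.heappush(pq, (alt, v))
    return float('inf'), prev           # target unreachable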
A* solves a different aspect, and is only useful when you have a meaningful heuristic estimate of how close you are to the target node.
Also, if "the cost of going from one station to another is the same at every link", i.e. if the path length is just the number of links from origin to destination, then the shortest-path problem reduces to a breadth-first search, and using BFS may be more efficient.
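For the unit-cost case, a BFS sketch in Python (assuming an adjacency dict adj[u] = [v1, v2, ...]; names are illustrative):

from collections import deque

def bfs_shortest_path(adj, source, target):
    """Unit-cost shortest path: BFS explores stations level by level, so the
    first time `target` is dequeued its distance (number of links) is minimal."""
    dist = {source: 0}
    prev = {source: None}
    q = deque([source])
    while q:
        u = q.popleft()
        if u == target:
            path = []
            while u is not None:        # reconstruct the path via predecessors
                path.append(u)
                u = prev[u]
            return dist[target], path[::-1]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                prev[v] = u
                q.append(v)
    return None                         # target unreachable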
Additional Note:
In Dijkstra's algorithm, when a node u is extracted from the top of the priority queue in line 12, its distance label is final: it is impossible to find a smaller distance label than the one u currently has. That is why u can be removed in line 13. You can prove this with an argument similar to mathematical induction. In other words, after u is removed from Q, Dijkstra can never find a shorter path to it.

constraint programming mesh network

I have a mesh network as shown in the figure.
Now, I am allocating values to all edges in this network. I want to enforce in my program that there are no closed loops in my allocation. For example, the constraint for the top-left square can be written as
E0 = 0 or E3 = 0 or E4 = 0 or E7 = 0, i.e. at least one of the links has to be inactive in order not to form a loop. However, in this kind of network there are many possible loops.
For example, the loop formed by edges E0, E3, E7, E11, E15, E12, E5, E1.
Now my problem is that I would have to describe every possible loop that can occur in this network. I tried to write the constraints as a single formula, but I was not able to succeed.
Can anyone give me any pointers on how to encode this situation?
Just for information, I am using the Z3 SAT solver.
The following encoding can be used with any graph with N nodes and M edges. It uses (N+1)*M variables and 2*M*M 3-SAT clauses. This IPython notebook demonstrates the encoding by comparing the SAT solver results (UNSAT when there is a loop, SAT otherwise) with the results of a straightforward loop-finding algorithm.
Disclaimer: This encoding is my ad-hoc solution to the problem. I'm pretty sure that it is correct, but I don't know how it compares performance-wise to other encodings for this problem. Since my solution works with any graph, it is to be expected that a better solution exists that exploits some of the properties of the class of graphs the OP is interested in.
Variables:
I have one variable for each edge. The edge is "active" or "used" if its corresponding variable is set. In my reference implementation the edges have indices 0..(M-1) and these variables have indices 1..M:
def edge_state_var(edge_idx):
    assert 0 <= edge_idx < M
    return 1 + edge_idx
Then I have an M-bit-wide state variable for each node, for a total of N*M state bits (nodes and bits also use zero-based indexing):
def node_state_var(node_idx, bit_idx):
    assert 0 <= node_idx < N
    assert 0 <= bit_idx < M
    return 1 + M + node_idx*M + bit_idx
Clauses:
When an edge is active, it links the state variables of the two nodes it connects: the state bit with the same index as the edge must differ between the two nodes, and all other state bits must be equal to their corresponding partner on the other node. In Python code:
# which edge connects which nodes
connectivity = [
    ( 0, 1), # edge E0
    ( 1, 2), # edge E1
    ( 2, 3), # edge E2
    ( 0, 4), # edge E3
    ...
]

cnf = list()

for i in range(M):
    eb = edge_state_var(i)
    p, q = connectivity[i]
    for k in range(M):
        pb = node_state_var(p, k)
        qb = node_state_var(q, k)
        if k == i:
            # eb -> (pb != qb)
            cnf.append([-eb, -pb, -qb])
            cnf.append([-eb, +pb, +qb])
        else:
            # eb -> (pb == qb)
            cnf.append([-eb, -pb, +qb])
            cnf.append([-eb, +pb, -qb])
So basically, each active edge tries to split the graph it belongs to into two halves: the nodes on one side of the edge have the state bit corresponding to that edge set to 1, and the nodes on the other side have it set to 0. This is impossible for a loop, because every node in the loop can be reached from both sides of each edge in the loop.
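To see the encoding work end to end, here is a hypothetical mini-example on a 3-node triangle: forcing all three edges active must come out UNSAT, because a loop cannot be split into two such halves. The clause list uses plain DIMACS-style signed integers, so any SAT solver should accept it; the commented-out pycosat call at the end assumes that package is available.

# Hypothetical check on a 3-node triangle: with all three edges forced
# active, the clauses are unsatisfiable.
N, M = 3, 3
connectivity = [(0, 1), (1, 2), (2, 0)]   # E0, E1, E2

def edge_state_var(edge_idx):
    assert 0 <= edge_idx < M
    return 1 + edge_idx

def node_state_var(node_idx, bit_idx):
    assert 0 <= node_idx < N
    assert 0 <= bit_idx < M
    return 1 + M + node_idx * M + bit_idx

cnf = []
for i in range(M):
    eb = edge_state_var(i)
    p, q = connectivity[i]
    for k in range(M):
        pb, qb = node_state_var(p, k), node_state_var(q, k)
        if k == i:
            cnf += [[-eb, -pb, -qb], [-eb, +pb, +qb]]   # eb -> (pb != qb)
        else:
            cnf += [[-eb, -pb, +qb], [-eb, +pb, -qb]]   # eb -> (pb == qb)

cnf += [[edge_state_var(i)] for i in range(M)]          # force every edge active

# Hand the clauses to a SAT solver of your choice, e.g. (assuming pycosat is installed):
# import pycosat
# print(pycosat.solve(cnf))   # expected: "UNSAT"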
