I was working on finding the shortest path between two nodes in an undirected acyclic graph using Dijkstra's algorithm. I wanted to find the longest possible path using the same algorithm. I also want to avoid a few routes with 0 edge values. How do I do that using Dijkstra's algorithm?
After searching through Stack Overflow, I came across one solution which simply states that we need to modify the relaxation step to find the longest path.
Like:
if (distanceValueOfNodeA < EdgeValueofNodeBtoA)
{
    distanceValueOfNodeA = EdgeValueofNodeBtoA;
}
But here we are not adding distanceValueOfNodeB at all.
For shortest paths, however, we calculate:
distanceValueOfNodeA = distanceValueOfNodeB + EdgeValueofNodeBtoA
Should we ignore distanceValueOfNodeB when calculating distanceValueOfNodeA?
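For reference, here is a rough Python sketch of the ordinary shortest-path relaxation I am referring to; the graph representation and names are just my own example, not anyone's actual code.

import heapq

def dijkstra_shortest(graph, source):
    # graph: dict mapping node -> list of (neighbour, edge_weight) pairs
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d_b, b = heapq.heappop(queue)
        if d_b > dist[b]:
            continue  # stale queue entry
        for a, weight_b_to_a in graph[b]:
            # relaxation: distanceValueOfNodeA = distanceValueOfNodeB + EdgeValueofNodeBtoA
            if dist[b] + weight_b_to_a < dist[a]:
                dist[a] = dist[b] + weight_b_to_a
                heapq.heappush(queue, (dist[a], a))
    return dist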
I am sorry to disappoint you, but that problem is known as the longest path problem, and there is no efficient algorithm to solve it, so neither can Dijkstra's algorithm with any modification.
It belongs to a class of problems known as NP-hard: problems for which there is (at the moment) no known algorithm that solves them faster than exponential time.
Related
I am having a discussion with a friend about whether the following will work:
We recently learned in a lecture about Breadth-First-Search. I know that it is a special case of Dijkstra's algorithm where each edge weight is set to one. Assume now we are given a graph whose edges have integer weights greater than one. Then I would modify this graph by introducing additional vertices and connecting them by edges with weight one; e.g., given an edge of weight 3 connecting the vertices u and v, I would introduce dummy vertices d1, d2, remove the edge connecting u and v, and instead add edges {u, d1}, {d1, d2}, {d2, v} of weight one.
If I modify my whole graph this way and then apply breadth-first search starting from one of the original vertices, wouldn't this work as well?
Thank you very much in advance!
Since BFS is guaranteed to return an optimal path on unweighted graphs, and you've created the unweighted equivalent of your original graph, you'll be guaranteed to get the shortest path.
What you lose by doing this instead of using Dijkstra's algorithm is runtime optimality. Now the runtime of your algorithm depends on the edge weights, whereas Dijkstra's depends only on the number of vertices and edges.
This sort of thought experiment is a great way to understand how Dijkstra's algorithm works (e.g. how would you modify your algorithm so it does not require creating a new graph, or does not take 100 steps for an edge of weight 100?). In fact, this is probably how Dijkstra discovered the algorithm to begin with.
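As an illustration only, here is a rough Python sketch of that transformation under my own made-up representation (an edge list of (u, v, w) triples): each edge of integer weight w is replaced by a chain of w unit edges through dummy vertices, and a plain BFS then recovers the shortest distance.

from collections import deque

def split_edges(weighted_edges):
    # weighted_edges: list of (u, v, w) with integer weight w >= 1
    # Each weight-w edge becomes a chain of w unit edges through dummy vertices.
    adj = {}
    def link(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    for u, v, w in weighted_edges:
        prev = u
        for i in range(w - 1):
            dummy = (u, v, i)      # dummy vertex d1, d2, ...
            link(prev, dummy)
            prev = dummy
        link(prev, v)
    return adj

def bfs_distance(adj, source, target):
    # Standard BFS on the unweighted graph; returns the number of unit edges.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return dist[node]
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return None

# The weight-3 edge {u, v} from the question plus one more edge:
print(bfs_distance(split_edges([("u", "v", 3), ("v", "x", 1)]), "u", "x"))  # -> 4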
I am looking to solve a problem where I have a weighted directed graph and I must start at the origin, visit all vertices at least once and return to the origin in the shortest path possible. Essentially this would be a classic example of TSP, except I DO NOT have the constraint that each vertex can only be visited once. In my case any vertex excluding the origin can be visited any number of times along the path, if this makes the path shorter. So for example in a graph containing the vertices V1, V2, V3 a path like this would be valid, given that it is the shortest path:
ORIGIN -> V1 -> V2 -> V1 -> V3 -> V1 -> ORIGIN
As a result, I am a bit stuck on what approach to take to solve this, since the classic dynamic programming algorithm that is usually used to solve TSP in exponential time is not directly applicable.
The typical approach is to create a distance matrix that gives the shortest-path distance between any two nodes. So d(i,j) = shortest path (following the edges of the network) from i to j. This can be done using Dijkstra's algorithm.
Now just solve a classical TSP with distances d(i,j). Your TSP doesn't "know" that the actual route followed might involve visiting a node multiple times. At the same time, it will ensure that the vehicle stops at every node.
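As an illustrative sketch (not production code; the graph representation, node names and the brute-force TSP step are my own assumptions for a tiny example), the two stages could look like this in Python: Dijkstra's algorithm builds the matrix d(i,j), and then a classical TSP is solved on that matrix.

import heapq
from itertools import permutations

def dijkstra(graph, source):
    # graph: dict node -> list of (neighbour, weight); returns shortest distances from source
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue  # stale entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

def shortest_closed_tour(graph, origin):
    # d[i][j] = shortest-path distance between every pair of nodes
    d = {node: dijkstra(graph, node) for node in graph}
    others = [node for node in graph if node != origin]
    # Classical TSP on the distance matrix; brute force is only viable for tiny
    # instances, a real TSP solver or heuristic would replace this loop.
    best = float('inf')
    for order in permutations(others):
        cost = d[origin][order[0]]
        cost += sum(d[a][b] for a, b in zip(order, order[1:]))
        cost += d[order[-1]][origin]
        best = min(best, cost)
    return best

# Example: ORIGIN, V1, V2, V3 as in the question (directed edges)
g = {"O": [("V1", 1)], "V1": [("V2", 2), ("V3", 2), ("O", 1)],
     "V2": [("V1", 2)], "V3": [("V1", 2)]}
print(shortest_closed_tour(g, "O"))  # -> 10, realised by O -> V1 -> V2 -> V1 -> V3 -> V1 -> O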
Now, as for efficiency: as @Codor points out, TSP is NP-hard and so is your variant of it, so you are not going to find a provably optimal, polynomial-time algorithm. However, there are still many, many good algorithms (both heuristic and exact) for TSP, and most of them should be suitable for your problem. (In general, DP is not the way to go for TSP.)
To answer the question in part, the problem described in the question does not admit a polynomial-time algorithm unless P=NP, by the following argument. Clearly, the proposed problem includes instances which are Euclidean. However, no optimal solution to a Euclidean instance has repeated nodes, as such a solution could be improved by removing the repeated visits, using the triangle inequality. However, according to the Wikipedia article on TSP, Euclidean TSP is still NP-hard. This means that any polynomial-time algorithm for the problem in the question would be able to solve the Euclidean TSP to optimality in polynomial time, which is impossible unless P=NP.
I am looking for an Algorithm that is able to solve this problem.
The problem:
I have the following set of points:
I want to group the points that represent a line (within some epsilon) into one group.
So, the optimal output will be something like:
Some notes:
Each point belongs to one and only one line.
If a point could belong to two lines, it should belong to the stronger one.
A line is considered stronger than another when it has more points belonging to it.
The algorithm does not have to cover all points, because some of them may be outliers.
The space contains many outliers; they may amount to 50% of the total points.
Performance is critical; real-time is a must.
The solutions I have found so far:
1) Dealing with it as a clustering problem:
The main drawback of this method is that there is no direct distance metric between points. The distance metric is defined on the cluster itself (how linear it is). So I cannot use traditional clustering methods, and would have to (as far as I can see) use some kind of clustering with, for example, a genetic algorithm, where the evaluation happens on the whole cluster rather than between two points. I also do not want to use something like a genetic algorithm, since I am aiming for a real-time solution.
2) Accumulating pairs and then clustering them:
Since it is hard to cluster the points directly, I thought of extracting pairs of points and then trying to cluster the pairs with each other. That way I have a distance between two pairs that can represent linearity (two pairs are really 4 points).
The drawback of this method is how to choose these pairs. If I rely on the Euclidean distance between them, it may not be accurate, because two points may be very near to each other yet far from forming a line with the others.
I appreciate any solution, suggestion, clue or note. Please feel free to ask for any clarification.
P.S. You may use any ready-made OpenCV function when thinking of a solution.
As Micka advised, I used Sequential-RANSAC to solve my problem. Results were fantastic and exactly as I want.
The idea is simple:
1) Apply RANSAC with a fit-line model on the points.
2) Delete all points that are inliers of the line RANSAC returned.
3) While there are 2 or more points left, go to step 1.
I have implemented my own fit-line RANSAC, but unfortunately I cannot share the code because it belongs to the company I work for. However, there is an excellent fit-line RANSAC here on SO, implemented by Srinath Sridhar. The link to the post is: RANSAC-like implementation for arbitrary 2D sets.
It is easy to build a Sequential-RANSAC from the 3 simple steps I mentioned above.
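The following is only a rough Python/NumPy sketch of those three steps, not the code I actually used; the threshold epsilon, the iteration count and the minimum-inlier cutoff (added so the loop stops once only outliers remain) are arbitrary example values.

import numpy as np

def fit_line_ransac(points, n_iters=200, epsilon=2.0):
    # points: (N, 2) array; returns a boolean inlier mask of the best line found
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = np.random.choice(len(points), 2, replace=False)
        p, q = points[i], points[j]
        direction = q - p
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue  # degenerate sample
        normal = np.array([-direction[1], direction[0]]) / norm
        distances = np.abs((points - p) @ normal)   # perpendicular point-to-line distance
        inliers = distances < epsilon
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def sequential_ransac(points, min_inliers=10):
    lines, remaining = [], points.copy()
    while len(remaining) >= 2:                       # 3) while there are 2 or more points
        inliers = fit_line_ransac(remaining)         # 1) RANSAC with a fit-line model
        if inliers.sum() < min_inliers:
            break                                    # nothing line-like left, only outliers
        lines.append(remaining[inliers])
        remaining = remaining[~inliers]              # 2) delete the inliers, then repeat
    return lines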
Here are some results:
Suppose we have a 15-puzzle and we will solve it using A* search. The heuristic function is the Manhattan distance.
Now someone provides a solution with cost T, and we are not sure whether this solution is optimal. Given this information,
Is it possible to find a better solution with cost < T?
Is it possible to optimize the performance of the searching algorithm?
For this question, I have considered several approaches.
Set h(x) = MAX_INT if g(x) >= T. That is, the f(x) value becomes maximal once the partial path already costs T or more.
Mark the search node as CLOSED if g(x) >= T.
Is it possible to find a better solution?
You need to know if T is the optimal solution. If you do not know the optimal solution, use the average cost; a good path is better than the average. If T is already better than average, you don't need to find a new path.
Is it possible to optimize the performance of the searching algorithm?
Yes. Heuristics are assumptions that help algorithms to make good decisions. The A* algorithm makes the following assumptions:
The best path costs the least (Dijkstra's Algorithm - stay near the origin of the search)
The best path is the most direct path (Greedy Search - minimize distance to goal)
Good heuristics vastly improve performance (A* is useful for this reason). Bad heuristics lead the search away from good solutions and obliterate performance. My advice is to know the game you are searching; in chess, it's generally best to avoid losing a queen, so that may be a good heuristic to use.
Heuristics will have the largest impact on performance, especially in the case of a 15x15 search space. In larger search spaces (2000x2000), good use of high efficiency data structures like arrays and integers may improve performance.
Potential solutions
Both of the solutions you propose are effectively the same: if a path isn't as good as the other paths you have, ignore it. Search algorithms like A* do this for you, as j_random_hacker has said in a roundabout manner.
The OPEN list is a set of possible moves; select the best and ignore the rest. The CLOSED list is the set of moves that have already been selected, not the ones you wish to ignore.
(1) d(x) = Dijkstra's Algorithm
(2) g(x) = Greedy Search
(3) a*(x) = A* Algorithm = d(x) + g(x)
To make your A* more greedy (prefer suboptimal but fast solutions), weight g(x) more heavily to favour the greedy component:
(4) a*(x) = d(x) + 1.1 * g(x)
I actually tested this in a search space of 1500x2000. (3), a standard A*, took about 5 seconds to find the goal on the opposite side, while (4) took only milliseconds, demonstrating the value of using heuristics well.
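As a hedged illustration of (4) (the grid, the neighbour function and the Manhattan heuristic below are invented for this example and are not the test setup above), a weighted A* in Python could look like the following sketch; the same expansion loop is also the natural place to drop any node whose cost so far already reaches a known solution bound T, as discussed above.

import heapq

def weighted_a_star(neighbours, start, goal, h, weight=1.1, bound=None):
    # neighbours(node) -> iterable of (next_node, step_cost)
    # h(node, goal)    -> estimated remaining cost (e.g. Manhattan distance)
    cost_so_far = {start: 0}
    queue = [(weight * h(start, goal), start)]
    while queue:
        _, node = heapq.heappop(queue)
        if node == goal:
            return cost_so_far[node]
        for nxt, step in neighbours(node):
            new_cost = cost_so_far[node] + step
            if bound is not None and new_cost >= bound:
                continue  # prune paths that cannot beat a known solution of cost T
            if new_cost < cost_so_far.get(nxt, float('inf')):
                cost_so_far[nxt] = new_cost
                # priority = d(x) + 1.1 * g(x) in the notation above (weight > 1 => greedier)
                heapq.heappush(queue, (new_cost + weight * h(nxt, goal), nxt))
    return None  # no path cheaper than the bound

# Example on a small open grid with a Manhattan-distance heuristic
def grid_neighbours(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 20 and 0 <= ny < 20:
            yield (nx, ny), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(weighted_a_star(grid_neighbours, (0, 0), (19, 19), manhattan))  # -> 38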
You may also add other heuristics to A*, such as:
Depth-first search (prefer a greater number of moves)
Breadth-first search (prefer a smaller number of moves)
Stick to Roads (if terrain determines movement speed, increase the cost of choosing bad terrain)
Stay out of enemy territory (if you want to avoid losing units, don't put them in harm's way)
I was just wondering whether all algorithms for the TSP will give the same optimal routes. I thought that this would be the case, but I've implemented branch and bound and A*, and they give very different results for the same input. Is this normal?
The routes may differ, but the cost of every optimal solution should be the same.
If your A* solution is more expensive, then your heuristic is wrong.
Have a look at the Wikipedia article on the A* algorithm for a proof that it always finds an optimal solution.
No. Provided more than one optimal route exists, different algorithms will not necessarily find the same path. It will depend on the implementation, and I assume it will also depend on how you label the graph, so that different labelings can make the same algorithm find different routes.