I need an example of finding the shortest cycle through one node in a directed graph (it should reach the nodes of the graph reachable from a node given as input). If there is an example in C++, or at least an algorithm, please share it. Thanks very much.
You need to find the minimum spanning tree for it.
For a directed graph, according to Wikipedia, you can use this algorithm.
In Pseudocode:
// INPUT: graph G = (V,E)
// OUTPUT: shortest cycle length
min_cycle(G)
    min = ∞
    for u in V
        len = dij_cyc(G,u)
        if min > len
            min = len
    return min

// INPUT: graph G and vertex s
// OUTPUT: minimum distance back to s
dij_cyc(G,s)
    for u in V
        dist(u) = ∞
    // makequeue returns a priority queue of all V
    H = makequeue(V)    // using dist-values as keys, with s first in
    while !H.empty?
        u = deletemin(H)
        for all edges (u,v) in E
            if dist(v) > dist(u) + l(u,v) then
                dist(v) = dist(u) + l(u,v)
                decreasekey(H,v)
    return dist(s)
This runs a slightly modified Dijkstra's from each vertex. The modified Dijkstra's has a few key differences. First, all initial distances are set to ∞, even for the start vertex. Second, the start vertex must be put on the queue first, to make sure it comes off first since all vertices have the same priority. Finally, the modified Dijkstra's returns the distance back to the start node. If there is no path back to the start vertex, that distance remains ∞. The minimum over all these returns from the modified Dijkstra's is the shortest cycle. Since Dijkstra's runs in at worst O(|V|^2) and min_cycle runs this form of Dijkstra's |V| times, the final running time to find the shortest cycle is O(|V|^3). If min_cycle returns ∞, the graph is acyclic.
To return the actual path of the shortest cycle only slight modifications need to be made.
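Since the question asks for C++, here is a minimal sketch of the above (the container choices and helper names are my own, not a canonical implementation; the start vertex is expanded with distance 0 so its outgoing edges can be relaxed, while dist[s] itself stays at infinity until an edge returns to s):
#include <vector>
#include <queue>
#include <functional>
#include <limits>
#include <algorithm>
using namespace std;

const long long INF = numeric_limits<long long>::max();
// adj[u] holds pairs (v, w) for each directed edge u -> v with length w
using Graph = vector<vector<pair<int, long long>>>;

// Dijkstra variant: length of the shortest cycle through s (INF if none)
long long dij_cyc(const Graph& adj, int s) {
    vector<long long> dist(adj.size(), INF);
    // min-heap of (distance, vertex); the start vertex goes in first with key 0
    priority_queue<pair<long long, int>, vector<pair<long long, int>>, greater<>> pq;
    pq.push({0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (u != s && d > dist[u]) continue;          // stale queue entry
        for (auto [v, w] : adj[u]) {
            if (dist[v] > d + w) {
                dist[v] = d + w;
                if (v != s) pq.push({dist[v], v});    // never re-expand the start vertex
            }
        }
    }
    return dist[s];   // only updated by edges leading back to s
}

// Shortest cycle in the whole graph: run the variant from every vertex
long long min_cycle(const Graph& adj) {
    long long best = INF;
    for (int u = 0; u < (int)adj.size(); ++u)
        best = min(best, dij_cyc(adj, u));
    return best;      // INF means the graph is acyclic
}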
I have a problem where I need to do a linear interpolation on some data as it is acquired from a sensor (it's technically position data, but the nature of the data doesn't really matter). I'm doing this now in matlab, but since I will eventually migrate this code to other languages, I want to keep the code as simple as possible and not use any complicated matlab-specific/built-in functions.
My implementation initially seems OK, but when checking my work against matlab's built-in interp1 function, it seems my implementation isn't perfect, and I have no idea why. Below is the code I'm using on a dataset already fully collected, but as I loop through the data, I act as if I only have the current sample and the previous sample, which mirrors the problem I will eventually face.
%make some dummy data
np = 109; %number of data points for x and y
x_data = linspace(3,98,np) + (normrnd(0.4,0.2,[1,np]));
y_data = normrnd(2.5, 1.5, [1,np]);
%define the query points the data will be interpolated over
qp = [1:100];
kk=2; %indexes through the data
cc = 1; %indexes through the query points
qpi = qp(cc); %qpi is the current query point in the loop
y_interp = qp*nan; %this will hold our solution
while kk<=length(x_data)
    kk = kk+1; %update the data counter
    %perform online interpolation
    if cc<length(qp)-1
        if qpi>=y_data(kk-1) %the query point, of course, has to be in-between the current value and the next value of x_data
            y_interp(cc) = myInterp(x_data(kk-1), x_data(kk), y_data(kk-1), y_data(kk), qpi);
        end
        if qpi>x_data(kk), %if the current query point is already larger than the current sample, update the sample
            kk = kk+1;
        else %otherwise, update the query point to ensure its in between the samples for the next iteration
            cc = cc + 1;
            qpi = qp(cc);
            %It is possible that if the change in x_data is greater than the resolution of the query
            %points, an update like the above wont work. In this case, we must lag the data
            if qpi<x_data(kk),
                kk=kk-1;
            end
        end
    end
end
%get the correct interpolation
y_interp_correct = interp1(x_data, y_data, qp);
%plot both solutions to show the difference
figure;
plot(y_interp,'displayname','manual-solution'); hold on;
plot(y_interp_correct,'k--','displayname','matlab solution');
leg1 = legend('show');
set(leg1,'Location','Best');
ylabel('interpolated points');
xlabel('query points');
Note that the "myInterp" function is as follows:
function yi = myInterp(x1, x2, y1, y2, qp)
%linearly interpolate the function value y(x) over the query point qp
yi = y1 + (qp-x1) * ( (y2-y1)/(x2-x1) );
end
And here is the plot showing that my implementation isn't correct :-(
Can anyone help me find where the mistake is? And why? I suspect it has something to do with ensuring that the query point is in-between the previous and current x-samples, but I'm not sure.
The problem in your code is that you at times call myInterp with a value of qpi that is outside of the bounds x_data(kk-1) and x_data(kk). This leads to invalid extrapolation results.
Your logic of looping over kk rather than cc is very confusing to me. I would write a simple for loop over cc, which indexes the points at which you want to interpolate. For each of these points, advance kk, if necessary, such that qp(cc) is in between x_data(kk) and x_data(kk+1) (you can use kk-1 and kk instead if you prefer, just initialize kk=2 to ensure that kk-1 exists; I just find starting at kk=1 more intuitive).
To simplify the logic here, I'm limiting the values in qp to be inside the limits of x_data, so that we don't need to test that x_data(kk+1) exists, nor that x_data(1) < qp(cc). You can add those tests in if you wish.
Here's my code:
qp = [ceil(x_data(1)+0.1):floor(x_data(end)-0.1)];
y_interp = qp*nan; % this will hold our solution
kk=1; % indexes through the data
for cc=1:numel(qp)
    % advance kk to where we can interpolate
    % (this loop is guaranteed to not index out of bounds because x_data(end)>qp(end),
    % but needs to be adjusted if this is not ensured prior to the loop)
    while x_data(kk+1) < qp(cc)
        kk = kk + 1;
    end
    % perform online interpolation
    y_interp(cc) = myInterp(x_data(kk), x_data(kk+1), y_data(kk), y_data(kk+1), qp(cc));
end
As you can see, the logic is a lot simpler this way. The result is identical to y_interp_correct. The inner while x_data... loop serves the same purpose as your outer while loop, and would be the place where you read your data from wherever it's coming from.
Given an undirected graph with vertices from 0 to n-1, write a function that will find the longest path (by number of edges) whose vertices form an increasing sequence.
What kind of approach would you recommend for solving this puzzle?
You can transform the original graph into a Directed Acyclic Graph by replacing each of the (undirected) edges with a directed edge going towards the node with the bigger number.
Then you end up with this: https://www.geeksforgeeks.org/find-longest-path-directed-acyclic-graph/
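For illustration, a small C++ sketch of that transformation (the function name to_dag and the edge-list input format are my own assumptions); after directing every edge toward the bigger label, 0, 1, ..., n-1 is already a topological order, so the linked longest-path-in-a-DAG algorithm can be applied directly:
#include <vector>
#include <utility>
using namespace std;

// Replace each undirected edge {u, v} by a directed edge toward the bigger-numbered node.
vector<vector<int>> to_dag(int n, const vector<pair<int,int>>& edges) {
    vector<vector<int>> dag(n);
    for (auto [u, v] : edges) {
        if (u > v) swap(u, v);    // orient from the smaller label to the bigger one
        dag[u].push_back(v);
    }
    return dag;
}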
I would use a dynamic programming algorithm. Define L(u) to be the longest valid path starting at node u. Your base case is L(n-1) = [n-1] (i.e., the path containing only node n-1). Then, for all nodes s from n-2 down to 0, perform a BFS starting at s in which you only allow traversing edges (u,v) such that v > u. Once you hit a node u that you've already started from (i.e., a node u for which you've already computed L(u)), take L(s) = the longest path from s to u plus L(u), over all possible u > s.
The answer to your problem is L(u) for the node u that has the maximum value of L(u), and this algorithm is O(E), where E is the number of edges in your graph. I don't think you can do faster than this asymptotically.
EDIT: Actually, the "BFS" isn't even a BFS: it's simply traversing the edges (s,v) such that v > s (because you have already visited all nodes v > s, so there's no traversal: you'll immediately hit a node you've already started at)
So actually, the simplified algorithm would be this:
longest_path_increasing_nodes():
    L = Hash Map whose keys are nodes and values are paths (list of nodes)
    L[n-1] = [n-1]  # base case
    longest_path = L[n-1]
    for s from n-2 to 0:  # recursive case
        L[s] = [s]
        for each edge (s,v):
            if v > s and length([s] + L[v]) > length(L[s]):
                L[s] = [s] + L[v]
        if length(L[s]) > length(longest_path):
            longest_path = L[s]
    return longest_path
EDIT 2022-03-01: Fixed typo in the last if-statement; thanks user650654!
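For concreteness, here is a minimal C++ sketch of this DP (adjacency-list input and the function name are my own assumptions; it stores path lengths and successors instead of whole paths, which keeps the work proportional to the number of vertices and edges):
#include <vector>
#include <algorithm>
using namespace std;

// adj[u] = neighbours of u in the undirected graph, vertices numbered 0..n-1.
// Returns the longest path whose vertex labels strictly increase.
vector<int> longest_increasing_path(const vector<vector<int>>& adj) {
    int n = adj.size();
    vector<int> len(n, 1);    // len[u] = number of vertices on the best path starting at u
    vector<int> succ(n, -1);  // succ[u] = next vertex on that path
    for (int s = n - 1; s >= 0; --s) {            // all v > s are already solved
        for (int v : adj[s]) {
            if (v > s && len[v] + 1 > len[s]) {   // only follow edges toward bigger labels
                len[s] = len[v] + 1;
                succ[s] = v;
            }
        }
    }
    int start = max_element(len.begin(), len.end()) - len.begin();
    vector<int> path;
    for (int u = start; u != -1; u = succ[u])
        path.push_back(u);
    return path;
}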
There are algorithms like Dijkstra's algorithm which can be modified to find the longest instead of the shortest path.
Here is a simple approach:
Use a recursive algorithm to find all paths between 2 nodes.
Select the longest path.
If you need help with the recursive algorithm just ask.
Could someone provide step-by-step pseudocode using BFS to search for a cycle in a directed/undirected graph?
Can it achieve O(|V|+|E|) complexity?
I have only seen DFS implementations so far.
You can take a non-recursive DFS algorithm for detecting cycles in undirected graphs and replace the stack of nodes with a queue to get a BFS algorithm. It's straightforward:
q <- new queue            // for DFS you would use a stack instead
append the start node n to q
while q is not empty do
    n <- remove first element of q
    if n is visited
        output 'Hurray, I found a cycle'
    mark n as visited
    for each edge (n, m) in E do
        append m to q
Since BFS visits each node and each edge only once, you have a complexity of O(|V|+|E|).
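A minimal C++ sketch of this idea for an undirected adjacency list (one detail I add that the pseudocode above leaves implicit: the edge leading straight back to the node we came from must be skipped, otherwise every tree edge would be reported as a cycle):
#include <vector>
#include <queue>
using namespace std;

// BFS cycle check for an undirected graph (adjacency list), starting from 'start'.
// For a disconnected graph, run it once from every still-unvisited vertex.
bool has_cycle_bfs(const vector<vector<int>>& adj, int start) {
    vector<bool> visited(adj.size(), false);
    vector<int> parent(adj.size(), -1);
    queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int n = q.front(); q.pop();
        for (int m : adj[n]) {
            if (m == parent[n]) continue;   // don't walk straight back along the tree edge
            if (visited[m]) return true;    // 'Hurray, I found a cycle'
            visited[m] = true;
            parent[m] = n;
            q.push(m);
        }
    }
    return false;
}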
I find the BFS algorithm perfect for that.
The time complexity is the same.
You want something like this (edited from Wikipedia):
Cycle-With-Breadth-First-Search(Graph g, Vertex root):
    create empty set S
    create empty queue Q
    root.parent = null
    Q.enqueue(root)
    while Q is not empty:
        current = Q.dequeue()
        for each node n that is adjacent to current:
            if n is not in S:
                add n to S
                n.parent = current
                Q.enqueue(n)
            else  // We found a cycle
                n.parent = current
                return n and current
I added only the else (we found a cycle) block for the cycle detection and removed the original if reached target block used for target detection. In total it's the same algorithm.
To find the exact cycle you will have to find a common ancestor of n and current. The lowest one can be found in O(n) time. Then the cycle is known: it runs from that ancestor to n and to current, and current and n are connected by an edge.
For more info about cycles and BFS read this link https://stackoverflow.com/a/4464388/6782134
I've been working on a generalized version of the sliding tile puzzle where the tiles do not have numbers. Instead, each location either has a tile or a hole and is represented with a boolean as true or false (tile or hole).
The point of the search is to take an initial state with n tiles and a goal state with n target locations and use A* to find the solution of how to move the tiles so that every target location is populated. Here is an example below for a 4x3 grid:
Initial State:
T F T F
F F T F
F F T T
Goal State
T T T T
T F F F
F F F F
I had been working on different heuristics to do this and the most successful had a logic that went something like this:
int heuristicVal = 0
for every tile (i)...
    int closest = infinity
    for every goal location (j)...
        if (manhattan distance of ij < closest) closest = manhattan distance of ij
    end for
    heuristicVal += closest
end for
return heuristicVal
Unfortunately, this was still too slow in situations where two or more tiles were being guided by the heuristic to the same target location. I tried multiplying heuristicVal by the number of tiles and suddenly there was an exponential speed-up. Problems that were taking 28 seconds before were taking less than 1 second.
Edit: It turns out it is not always producing optimal solutions after all with this change. However, I don't understand why it sped up so much or why it is still finding the correct (although suboptimal) answer despite no longer being admissible.
If you break admissibility, A* no longer works correctly. Note that "no longer works correctly" doesn't mean you're never going to get an optimal result - you're just no longer guaranteed to get one. You can also end up converging faster on a solution, but what's the point if that solution is not the right one?
In the Fibonacci series, let's assume the nth Fibonacci term is T, i.e. F(n) = T. I want to write a program that takes T as input and returns n, that is, which term it is in the series (assume T will always be a Fibonacci number). I want to know whether there is an efficient way to find it.
The easy way would be to simply start generating Fibonacci numbers until F(i) == T, which takes only O(log T) iterations if implemented correctly (read: iteratively, not recursively), since Fibonacci numbers grow exponentially. This method also allows you to make sure T is a valid Fibonacci number.
If T is guaranteed to be a valid Fibonacci number, you can use approximation rules:
F(n) = (φ^n − (1−φ)^n) / √5,  with φ = (1+√5)/2 ≈ 1.618034 (Binet's formula)
It looks complicated, but it's not. The point is: from a certain point on, the ratio of F(i+1)/F(i) becomes a constant value. Since we're not generating Fibonacci Numbers but are merely finding the "index", we can drop most of it and just realize the following:
breakpoint := f(T)
Any f(i) where i > T = f(i-1)*Ratio = f(T) * Ratio^(i-T)
We can get the reverse by simply taking Log(N, R), R being Ratio. By adjusting for the inaccuracy for early numbers, we don't even have to select a breakpoint (if you do: it's ~ correct for i > 17).
The ratio is, approximately, 1.618034. Taking the base-1.618034 logarithm of 6765 (= F(20)), we get a value of 18.3277. The accuracy remains the same for any higher Fibonacci numbers, so simply rounding down and adding 2 gives us the exact Fibonacci "rank" (provided that F(1) = F(2) = 1).
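As a quick C++ sketch of that rule (assuming T really is a Fibonacci number and F(1) = F(2) = 1; the function name is my own):
#include <cmath>
#include <cstdio>

// Index n such that F(n) == T, using the log base phi plus the fixed offset of 2.
int fib_index(double T) {
    const double phi = 1.6180339887498949;   // golden ratio
    return (int)std::floor(std::log(T) / std::log(phi)) + 2;
}

int main() {
    std::printf("%d\n", fib_index(6765));    // prints 20, since F(20) = 6765
}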
The first step is to implement Fibonacci numbers in a non-recursive way, such as:
long long fib1 = 0, fib2 = 1;
for (int i = startIndex; i < stopIndex; i++)
{
    if (fib1 < fib2)
    {
        fib1 += fib2;
        if (fib1 == T) return i;
        if (fib1 > T) return -1;
    }
    else
    {
        fib2 += fib1;
        if (fib2 == T) return i;
        if (fib2 > T) return -1;
    }
}
Here startIndex would be set to 3 and stopIndex to 10000 or so. To cut down on the iterations, you can also select two seed numbers that are sequential Fibonacci numbers further down the sequence. startIndex is then set to the next index, and the computation is done with an appropriate adjustment to stopIndex. I would suggest breaking the sequence up into several sections, depending on machine performance and the maximum expected input, to minimize the run time.