Given an undirected connected simple graph G, let Td be a DFS tree and Tb be a BFS tree of G starting at some fixed vertex.
If Tb and Td are identical, then G is acyclic. Is this always true?
I worked on the following example:
Let both Td and Tb be:
(A,B)
(A,C)
(B,D)
(B,E)
(C,F)
(C,G)
and then the graph could be:
(A,B)
(A,C)
(B,D)
(B,E)
(C,F)
(C,G)
(C,A)
Am I thinking in the right direction? The answer given for this problem is "true", which according to me should be "false".
Please help.
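One way to sanity-check the candidate counterexample is to build it and compare the two trees directly, for instance with networkx (used here purely as a convenience; note that which DFS/BFS tree you get depends on the order in which neighbors are visited):

import networkx as nx

# The proposed graph: the tree edges plus the extra edge (C, A).
# Note: in a simple undirected graph, (C, A) is the same edge as (A, C),
# so this graph ends up being just the tree itself.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "D"), ("B", "E"),
                  ("C", "F"), ("C", "G"), ("C", "A")])

bfs_edges = set(nx.bfs_tree(G, "A").edges())
dfs_edges = set(nx.dfs_tree(G, "A").edges())
print("BFS tree edges:", sorted(bfs_edges))
print("DFS tree edges:", sorted(dfs_edges))
print("identical:", bfs_edges == dfs_edges)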
I'm trying to find a way to encode a sort of basic subgraph isomorphism in Z3 (preferably z3py). While I know there are papers on this in the abstract, finding any mechanism to do it has eluded me even for very trivial cases, because I'm very new to Z3 in general!
Suppose you have just about the most basic subgraph with nodes (0,1,2) and edges (0,1) with node 2 off on its own, and the supergraph has nodes (0,1,2) and edges (1,2) with node 0 off on its own. You could map the nodes of the subgraph into the supergraph with
0->1,
1->2,
2->0
...as one possible mapping that would satisfy "if these two nodes are connected in the subgraph, their mapped nodes are connected in the supergraph"
So okay :) I tried
from networkx import Graph
from z3 import Solver, Int, And, Distinct

subgraph = Graph()
subgraph.add_nodes_from([0, 1, 2])
subgraph.add_edges_from([(0, 1)])

supergraph = Graph()
supergraph.add_nodes_from([0, 1, 2])
supergraph.add_edges_from([(1, 2)])

s = Solver()
assignments = [Int(f'n{node}') for node in subgraph.nodes]
# each node in the subgraph is assigned to some node in the supergraph
assignment_constraint = [And(assignments[i] >= 0, assignments[i] <= max(supergraph.nodes)) for i in subgraph.nodes]
# subgraph nodes can't be assigned to the same supergraph node
assignment_distinct = [Distinct([assignments[i] for i in subgraph.nodes])]
which just gets me as far as "each assignment from subgraph to supergraph should map a node in the subgraph to some node in the supergraph and no two subgraph nodes can be assigned to the same supergraph node"
...but then I get stuck because I keep thinking along the lines of
for edge in subgraph.edges:
    s.add( (assignments[edge[0]], assignments[edge[1]]) in supergraph.edges )
...but of course that doesn't work, because Pythonically those aren't the right sort of keys, so the test is always false or simply broken.
So how does one approach that? I can add constraints like "this_var == 1", but I get very confused on things like checking membership, i.e.
>>> assignments[0] == 1.0
n0 == 1 # so that's OK then
>>> assignments[0] in [1.0, 2.0, 3.0]
False # whoops, that fails horribly
and I feel like I'm missing a very basic "frame of mind" thing here.
It is relatively straightforward to encode subgraph isomorphism in z3, pretty much along the lines of how you described. However, this encoding is unlikely to scale to large graphs. As you no doubt know, subgraph isomorphism is NP-complete in general, and this encoding will cause z3 to simply enumerate all possibilities and thus will blow up exponentially.
Having said that, here's a straightforward encoding:
from z3 import *

# Subgraph, number of nodes and edges.
# Nodes will be named implicitly from 0 to noOfNodesA - 1
noOfNodesA = 3
edgesA = [(0, 1)]

# Supergraph:
noOfNodesB = 3
edgesB = [(1, 2)]

# Mapping of subgraph nodes to supergraph nodes:
mapping = Array('Map', IntSort(), IntSort())

s = Solver()

# Check that elt is between low and high, inclusive
def InRange(elt, low, high):
    return And(low <= elt, elt <= high)

# Check that (x, y) is in the list
def Contains(x, y, lst):
    return Or([And(x == x1, y == y1) for x1, y1 in lst])

# Make sure mapping is into the supergraph
s.add(And([InRange(Select(mapping, n1), 0, noOfNodesB-1) for n1 in range(noOfNodesA)]))

# Make sure we map nodes to distinct nodes
s.add(Distinct([Select(mapping, n1) for n1 in range(noOfNodesA)]))

# Make sure edges are preserved:
for x, y in edgesA:
    s.add(Contains(Select(mapping, x), Select(mapping, y), edgesB))

# Solve:
r = s.check()
if r == sat:
    m = s.model()
    for x in range(noOfNodesA):
        print("%s -> %s" % (x, m.evaluate(Select(mapping, x))))
else:
    print("Solver said: %s" % r)
I've added comments along the way, so hopefully you should be able to read the code through; feel free to ask specific questions.
When I run this, I get:
$ python a.py
0 -> 1
1 -> 2
2 -> 0
which finds exactly the mapping you alluded to in your question.
Best of luck!
How does BFS on a directed graph differ in implementation from BFS on an undirected graph?
I found the following pseudocode on the web. I am OK with the undirected case, but I can't figure out how to implement it for a directed graph.
frontier = new Queue()
mark root visited (set root.distance = 0)
frontier.push(root)
while frontier not empty {
    Vertex v = frontier.pop()
    for each successor v' of v {
        if v' unvisited {
            frontier.push(v')
            mark v' visited (v'.distance = v.distance + 1)
        }
    }
}
The implementation in pseudocode is the same; the only thing that changes is what "successor" means: in an undirected graph the successors of v are its neighbors, while in a directed graph they are its out-neighbors (the vertices v has an edge pointing to).
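To make that concrete, here is a minimal Python sketch (the dict-of-adjacency-lists representation is just an assumption for illustration). The BFS loop itself is identical; only how the adjacency lists are built differs between the two kinds of graph:

from collections import deque

def bfs_distances(adj, root):
    # adj maps each vertex to the list of its successors:
    # neighbors if the graph is undirected, out-neighbors if directed.
    distance = {root: 0}
    frontier = deque([root])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in distance:  # unvisited
                distance[w] = distance[v] + 1
                frontier.append(w)
    return distance

# Undirected graph: record each edge in both directions.
undirected = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
# Directed graph: record each edge once, from source to target.
directed = {"A": ["B", "C"], "B": [], "C": []}

print(bfs_distances(undirected, "A"))  # {'A': 0, 'B': 1, 'C': 1}
print(bfs_distances(directed, "A"))    # {'A': 0, 'B': 1, 'C': 1}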
I know the recurrence T(n) = sqrt(2) * T(n/2) + log n can be solved with the master method, but how? Please help.
I am not sure if this is correct:
a = sqrt(2)
b = 2
f(n) = log n
log_b(a) = log_2(sqrt(2)) = 1/2
log n is in O(n^(1/2))
so the runtime given by the recurrence is in O(n^(1/2)) (however, the Master Theorem cannot be applied here)
The solution is in the following threads: Solving master theorem with log n: T(n) = 2T(n/4) + log n
Overall, we see that your recurrence is O(n^(1/2)) and Ω(n^(1/2)) by upper- and lower-bounding your recurrence by larger and smaller recurrences. Therefore, even though the Master Theorem doesn't apply here, you can still use the Master Theorem to claim that the runtime will be Θ(n^(1/2)).
Master's theorem with f(n)=log n
Usually, f(n) must be polynomial for the master theorem to apply - it doesn't apply for all functions. However, there is a limited "fourth case" for the master theorem, which allows it to apply to polylogarithmic functions.
https://en.wikipedia.org/wiki/Master_theorem
https://en.wikipedia.org/wiki/Big_O_notation
Ralf is not correct in telling you that you can't apply the master theorem.
The only constraints here are that a >= 1 and b > 1; a and b can be irrational, and f(n) does not have to be a polynomial.
log_2(sqrt(2)) = 1/2, and since log n = O(n^(1/2 - ε)) for some ε > 0, you are in the first case, so the solution is Θ(n^(1/2)).
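If you want to convince yourself numerically, here is a small Python sketch of the recurrence (the base case T(1) = 1 is an assumption; any constant gives the same asymptotics). The ratio T(n)/sqrt(n) settles toward a constant, which is consistent with Θ(n^(1/2)):

import math

def T(n):
    # T(n) = sqrt(2) * T(n/2) + log2(n), with an assumed base case T(1) = 1
    if n <= 1:
        return 1.0
    return math.sqrt(2) * T(n / 2) + math.log2(n)

for k in (10, 20, 30, 40):
    n = 2 ** k
    print(f"n = 2^{k}: T(n)/sqrt(n) = {T(n) / math.sqrt(n):.4f}")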
I am having trouble understanding the likelihood function for GDA given in Andrew Ng's CS229 notes.
l(φ, µ0, µ1, Σ) = log <product from i = 1 to m> p(x^(i) | y^(i); µ0, µ1, Σ) p(y^(i); φ)
The link is http://cs229.stanford.edu/notes/cs229-notes2.pdf Page 5.
For linear regression the likelihood was <product from i = 1 to m> p(y^(i) | x^(i); θ),
which made sense to me.
Why is there a change here, so that it is given by p(x^(i) | y^(i)) multiplied by p(y^(i); φ)?
Thanks in advance
The starting formula on page 5 is
l(φ, µ0, µ1, Σ) = log <product from i = 1 to m> p(x_i, y_i; µ0, µ1, Σ, φ)
Leaving out the parameters φ, µ0, µ1, Σ for now, that can be simplified to
l = log <product> p(x_i, y_i)
Using the chain rule, you can convert that to either
l = log <product> p(x_i | y_i) p(y_i)
or
l = log <product> p(y_i | x_i) p(x_i).
In the page 5 formula, the φ is moved to p(y_i), because only p(y) depends on it.
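To see that the two factorizations agree, here is a tiny Python sketch with a made-up discrete joint distribution (the numbers are arbitrary assumptions, chosen only to sum to 1):

import numpy as np

# Hypothetical joint distribution p(x, y): rows index x in {0, 1, 2},
# columns index y in {0, 1}. The entries are arbitrary and sum to 1.
joint = np.array([[0.10, 0.05],
                  [0.20, 0.15],
                  [0.30, 0.20]])

p_y = joint.sum(axis=0)              # marginal p(y)
p_x = joint.sum(axis=1)              # marginal p(x)
p_x_given_y = joint / p_y            # conditional p(x | y)
p_y_given_x = joint / p_x[:, None]   # conditional p(y | x)

# Both chain-rule factorizations recover the same joint:
assert np.allclose(p_x_given_y * p_y, joint)
assert np.allclose(p_y_given_x * p_x[:, None], joint)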
The likelihood starts from the joint probability distribution p(x, y) instead of the conditional probability distribution p(y|x). That is why GDA is called a generative model (it models the joint distribution, so you can go both from x to y and from y to x), while logistic regression is called a discriminative model (it models only the direction from x to y). Both have their advantages and disadvantages. There is a section about that further on in the notes.
I want to understand how to compute big-O for a dense versus sparse graph.
"Algorithms in a nutshell" says that for sparse graph, O(E) is O(V) and for dense graph O(E) is closer to O(V^2). Does anyone know how is that derived?
Assuming the graph is simple, in the worst case every node can be connected to all |V| - 1 other nodes. In an undirected graph this gives |E| = (|V|-1) + (|V|-2) + ... + 1 = |V| * (|V|-1) / 2 = O(|V|^2), and in a directed graph |E| = |V| * (|V|-1) = O(|V|^2).
A good example for a dense graph is a clique - which have all the edges.
For a sparse graph, we assume the number of edges connected to each vertex is bounded by a constant. Let this constant be k. Then |E| <= k * |V|, and we get |E| = O(|V|).
A good example of a sparse graph is the web, where every URL is a node and every link is an edge.
Note that if the graph is not simple, you cannot bound |E| with any function of |V|.
It's not derived; it's a definition. In a fully connected directed graph with self-loops, the number of edges is |E| = |V|^2, so the definition of a dense graph is reasonable. The definition of a sparse graph is one where |E| = O(|V|), i.e. a constant average number of edges per vertex.
Note that if the number of edges is much smaller, e.g. O(lg |V|), then it's still O(|V|) as well. One could imagine a "semi-sparse" class of graphs with |E| = O(|V| lg |V|) or something like that, but I personally have never encountered such a class in practice.
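As a quick illustration of the two regimes, here is a small sketch using networkx (chosen only for convenience): a complete graph sits in the |V|^2 regime, while a bounded-degree graph such as a cycle sits in the |V| regime.

import networkx as nx

for n in (10, 100, 1000):
    dense = nx.complete_graph(n)  # every pair of nodes is connected
    sparse = nx.cycle_graph(n)    # every node has degree exactly 2
    print(f"|V| = {n}: complete graph |E| = {dense.number_of_edges()}"
          f" (= |V|(|V|-1)/2), cycle |E| = {sparse.number_of_edges()} (= |V|)")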