I have a question about the traversal of a tree.
When we print the values of a binary search tree using in-order traversal, are the values printed in ascending order?
Yes. With the usual implementation of a binary search tree (keys in the left subtree are smaller than the node's key, keys in the right subtree are larger), an in-order traversal visits the values in ascending order.
Since "left" and "right" are conventions we choose, while "lower" and "higher" depend on what the keys actually represent, it is of course possible to implement the tree as a descending tree (or simply to traverse it in reverse). In that case you might want to add "reverse" or "descending" to the name of the tree to signal the uncommon implementation.
Related
I am new to Neo4j and currently playing with this tree structure:
The numbers in the yellow boxes are a property named order on the relationship CHILD_OF.
My goal was
a) to manage the sorting order of nodes at the same level through this property rather than through directed relationships (like e.g. LEFT, RIGHT or IS_NEXT_SIBLING, etc.).
b) being able to use plain integers for the order property instead of complete paths (i.e. not maintaining something like 0001.0001.0002).
However, I can't find the right hint on how, or whether, it is possible to recursively query the graph so that it returns the nodes depth-first while sorting the nodes at each level by the order property on the relationship.
I expect that if it is possible, it might involve matching the complete path and iterating over it with Cypher's collection utilities, but I am not close enough to a solution to post a good starting point.
Question
What I'd expect from answers to this question is not necessarily a solution, but a hint on whether this is a bad approach that would perform poorly anyway. In terms of Cypher, I am interested in whether there is a practical solution.
I have a general idea on how I would tackle it as a Neo4j server plugin with the Java traversal or core api (which doesn't mean that it would perform well, but that's another topic), so this question really targets the design and Cypher aspect.
This might work:
match path = (n:Root {id:9})-[:CHILD_OF*]->(m)
WITH path, extract(r in rels(path) | r.order) as orders
ORDER BY orders
If it complains about sorting arrays, then compute a number in which each digit (or each pair of digits) is one of your order values, and sort by that number:
match path = (n:Root {id:9})-[:CHILD_OF*]->(m)
WITH path, reduce(a=1, r in rels(path) | a*10+r.order) as orders
ORDER BY orders
I have a DAG (a tree) in which the directed edges are of only three kinds:
Left to right (siblings)
Child to Parent
Parent to a child
Specifically, the problem is to evaluate an attributed parse tree, but the specific problem doesn't matter.
So:
What traversal is guaranteed to give a topological sort of the nodes?
I think in-order will fail, but in some places it is suggested that in-order is the way to go. I know reverse post-order works on general DAGs, but I think there must be a simpler traversal for my case.
Since your graph is a DAG and therefore has no back-edges, you can use depth-first search to traverse it, adding nodes to your sorted list in the order in which they come off the DFS stack, i.e. in reverse post-order.
I have a matrix in C with size m x n; the size isn't known in advance. I must support operations on the matrix such as: delete the first element and find the i-th element (the size wouldn't be too big, from 10 to 50 columns). Which is more efficient to use, a linked list or a hash table? And how can I map a column of the matrix to one element of the linked list or hash table, depending on which I choose?
Thanks
Linked lists don't provide good random access, so from that perspective you might not want to use them to represent a matrix: your lookup time will take a hit for each element you attempt to find.
Hashtables are very good for looking up elements, as they can provide near constant-time lookup for any given key, assuming the hash function is decent (using a well-established hashtable implementation would be wise).
Given the constraints you have stated, though, a hashtable of linked lists might be a suitable solution, but it still presents the problem of finding the i-th element: you'd still need to iterate through a linked list to find the element you want. That gives you O(1) lookup for the row but O(n) for the column, where n is the column count.
Furthermore, this is difficult because you'd have to make sure EVERY list in your hashtable is updated with the appropriate number of nodes as the number of columns grows/shrinks, so you're not buying yourself much in terms of space complexity.
A 2D array is probably best suited for representing a matrix, provided you manage memory allocation and copying efficiently so the matrix can grow.
An alternative would be something like std::vector (in C++) in lieu of the linked list: it acts like an array in that it's contiguous in memory, but gives you the flexibility of growing dynamically in size.
If it's mostly lookups, a hash table would give O(1) average runtime.
For delete/get/set at given indices in O(1), a 2D array would be optimal.
I have a graph that contains many 'subtrees' of items where an original item can be cloned which results in
(clone:Item)-[:clones]->(original:Item)
and a cloned item can also be cloned:
(newclone:Item)-[:clones]->(clone:Item)
the first item is created by a user:
(:User)-[:created]->(:Item)
and the clones are collected by a user:
(:User)-[:collected]->(:Item)
Given any item in the tree, I want to be able to match all the items in the tree. I'm using:
(1) match (known:Item)-[:clones*]-(others:Item)
My understanding is that this implements a 'greedy' match, traversing the tree in all directions, matching all items.
In general this works, however in some circumstances it doesn't seem to match all the items in the tree. For example, in the following query, this doesn't seem to be matching the whole subtree.
match p = (known:Item)-[r:clones*]-(others:Item) where not any(x in nodes(p) where (x)<-[:created]-(:User)) return p
Here I'm trying to find subtrees which are missing a 'created' Item (these were deleted in the source SQL database).
What I'm finding is that it's giving me false positives because it matches only part of a particular tree. For example, if there is a tree with 5 items structured properly as described above, it seems (in some cases) to match a subset of the tree (maybe 2 out of 5 items); that subset doesn't contain the created item, and so it is returned by the query when I didn't expect it to be.
Question
Is my logic correct or am I misunderstanding something? I'm suspecting that I'm misunderstanding paths, but I'm confused by the fact that the basic 'greedy' match works in most cases.
I think my problem is that I've been confused because the query finds multiple paths in the tree, some of which satisfy the test in the query and some of which don't. In the Neo4j visualisation the multiple paths are consolidated into what looks like the whole tree, whereas the tabular results show that match (1) above actually yields multiple paths.
I'm now thinking that I should be using collections rather than paths for this.
You are quite right that the query matches more paths than are apparent in the browser visualization. The query is greedy in the sense that it has no upper bound on depth, but it also has no lower bound (well, strictly the lower bound is 1), which means it will emit a short path as well as any longer path that includes it. For data like
CREATE
(u)-[:CREATED]->(i)<-[:CLONES]-(c1)<-[:CLONES]-(c2)
the query will match paths
i<--c1
i<--c1<--c2
c1<--c2
c2-->c1
c2-->c1-->i
c1-->i
Of these paths, only the ones containing i will be filtered by the condition NOT x<-[:CREATED]-(), leaving paths
c1<--c2
c2-->c1
You need a further condition in your pattern before that filter, such that each path that passes it is guaranteed to contain some node x where x<-[:CREATED]-(); that way the filter condition is unequivocal. Judging from the example model and data in your question, you could try matching all directed variable-depth (clone)-[:CLONES]->(cloned) paths where the last cloned does not itself clone anything. That last cloned should be a created item, so each path found can now be expected to contain a b<-[:CREATED]-(). That is, if created items don't clone anything, something like this should work:
MATCH (a)-[:CLONES*]->(b)
WHERE NOT b-[:CLONES]->()
AND NOT b<-[:CREATED]-()
This relies on only matching paths where a particular node in each path can be expected to be created. An alternative is to work on each whole tree by itself by getting a single pointer into the tree, and test the entire tree for any created item node. Then the problem with your query could be said to be that it treats c1<--c2 as if it were a full tree, and the solution is a pattern that matches only once per tree. You can then collect the nodes of the tree with the variable-depth match from there. You can get such a pointer in different ways; easiest is perhaps to provide a discriminating property to find a specific node and collect all the items in that node's tree. Perhaps something like
MATCH (i {prop:"val"})-[:CLONES*]-(c)
WITH i, collect(distinct c) as cc
WHERE NOT (
i<-[:CREATED]-() OR
ANY (c IN cc WHERE c<-[:CREATED]-())
) //etc
This is not a generic query, however, since it only works on the one tree of the one node. If you have a property pattern that is unique per tree, you can use that. You can also model your data so that each tree has exactly one relationship to a containing 'forest'.
MATCH (forest)-[:TREE]->(tree)-->(item)-[:CLONES*]-(c) // etc
If your [:COLLECTED] or some other relationship, or a combination of relationships and properties make a unique pattern per tree, these can also be used.
Suppose you have two circular linked lists, one of size M and the other of size N, where M < N. If you don't know which list is of size M, what is the worst-case complexity of concatenating the two lists into a single list?
I was thinking O(M) but that is not correct. And no, I guess there is no specific place to concatenate at.
If there are no further restrictions, and your lists are mutable (like normal linked lists in languages such as C, C#, Java, ...), just split the two lists open at whatever nodes you have and join them together (this involves up to four nodes). Since it's homework, I leave working out the complexity to you, but it should be easy; there's a strong hint in the preceding sentence.
If the lists are immutable, as would normally be the case in a pure functional language, you'd have to copy a number of nodes and get a different complexity. What complexity exactly would depend on restrictions on the sort of result (e.g. does it have to be a circular linked list?).