I am confused about in-order, pre-order and post-order traversals, specifically this one: Pre-Order: ABAB, Post-Order: BABA, In-Order: AABB.
I understand that the root is the first element of the pre-order and the last element of the post-order, but I fail to understand how to finish constructing the binary tree.
Your post is vague and doesn't make much sense, but I'll explain in-order, pre-order and post-order traversal, and constructing a binary tree, for you.
One of the reasons your question doesn't make sense is that you haven't established an order for the elements you describe: ABAB, BABA and AABB mean absolutely nothing without a tree to show where each element goes. (And is each element a letter? Why do they duplicate?)
Another reason your question doesn't make sense is that you appear to think that pre-order, post-order and in-order have something to do with creating a binary tree; they don't.
Pre-order, in-order and post-order traversal are all types of depth-first search algorithms for tree traversal. That is to say, they are ways of navigating a tree, not creating one. You may use these algorithms to find elements or to simply print out all the contents of a tree; this is especially useful for a tree whose nodes are only linked via pointers (as opposed to, say, an array-based binary heap).
Imagine the following binary tree (the same for all examples)
        A
      /   \
     B     C
    / \   / \
   D   E F   G
Pre-order traversal is a type of tree traversal algorithm where you visit each node before its subtrees, always taking the left-most path first. When you can't go any farther, you back up and take the next-most-left path, doing the same thing recursively on each node. In the example tree above, pre-order traversal starts at the root (A), goes left (B), goes left again (D), can't go left any more so goes right (E), and in the end you get the following traversal sequence: A B D E C F G
In-order traversal is similar to pre-order traversal, but instead of visiting each node on the way down, it goes as deep left as it can, visits that node, then goes back up and visits the parent (hence 'in' order), and then tries the right side in the same way, recursively, until it's done. In the example tree, we'd actually print D first, go back up to B and print B, then E, then back up to A, and so on, so the final output would be: D B E A F C G. Note that Wikipedia's example may make more sense, as it uses a more complicated tree.
In post-order, we essentially print from the bottom up: we find the deepest node in the left subtree and print the deepest nodes there recursively until we're done, do the same for the right subtree, and finally print the root, e.g.: D E B F G C A. Again, this makes more sense with Wikipedia's more complicated tree.
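To make the three orders concrete, here is a minimal Java sketch (the Node class and the tree-building code are made-up helpers for this example, not from any particular library):

class Node {
    final char value;
    Node left, right;
    Node(char value) { this.value = value; }
}

class Traversals {
    // Pre-order: visit the node, then its left subtree, then its right subtree.
    static void preOrder(Node n) {
        if (n == null) return;
        System.out.print(n.value + " ");
        preOrder(n.left);
        preOrder(n.right);
    }

    // In-order: visit the left subtree, then the node, then the right subtree.
    static void inOrder(Node n) {
        if (n == null) return;
        inOrder(n.left);
        System.out.print(n.value + " ");
        inOrder(n.right);
    }

    // Post-order: visit both subtrees, then the node itself.
    static void postOrder(Node n) {
        if (n == null) return;
        postOrder(n.left);
        postOrder(n.right);
        System.out.print(n.value + " ");
    }

    public static void main(String[] args) {
        // Build the example tree from above.
        Node root = new Node('A');
        root.left = new Node('B');
        root.right = new Node('C');
        root.left.left = new Node('D');
        root.left.right = new Node('E');
        root.right.left = new Node('F');
        root.right.right = new Node('G');
        preOrder(root);  System.out.println(); // A B D E C F G
        inOrder(root);   System.out.println(); // D B E A F C G
        postOrder(root); System.out.println(); // D E B F G C A
    }
}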
If you want to construct a tree, there are many ways to do so, but it depends entirely on what kind of ordering structure you want. Do you want a binary structure or an n-ary structure? Do you care which element is on top, or do you only want the min/max (like a pairing-heap or binary-heap priority queue)? Do you have a search condition, such that the root of each subtree must be larger/smaller/some other condition relative to its children or parents (like a binary search tree)?
This post also explains the traversals well if the above isn't sufficient, and it explains why you need different types of ordering in order to construct a tree with the proper connections from a sequence of nodes (if your original intent was to copy a binary tree structure); a sketch of that follows below.
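If the goal really is to rebuild a binary tree from traversal sequences, one pre-order sequence plus one in-order sequence determines the tree, provided all values are distinct (which is exactly why sequences with duplicates like ABAB are ambiguous). A rough Java sketch, reusing the hypothetical Node class from above:

class TreeBuilder {
    static Node build(char[] pre, char[] in) {
        return build(pre, 0, pre.length - 1, in, 0, in.length - 1);
    }

    // pre[ps..pe] and in[is..ie] describe the same subtree.
    static Node build(char[] pre, int ps, int pe, char[] in, int is, int ie) {
        if (ps > pe) return null;
        Node root = new Node(pre[ps]);     // the first pre-order element is the root
        int r = is;
        while (in[r] != pre[ps]) r++;      // locate the root in the in-order sequence
        int leftSize = r - is;             // everything before it belongs to the left subtree
        root.left  = build(pre, ps + 1, ps + leftSize, in, is, r - 1);
        root.right = build(pre, ps + leftSize + 1, pe, in, r + 1, ie);
        return root;
    }
}

For the example tree, build("ABDECFG".toCharArray(), "DBEAFCG".toCharArray()) reconstructs the original structure.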
I was going through the text Compilers: Principles, Techniques and Tools by Ullman et al., where I came across an excerpt in which the authors try to justify why a stack is the best data structure for shift-reduce parsing. They say it is so because of the fact that
"The handle will always eventually appear on top of the stack, never inside."
The Excerpt
This fact becomes obvious when we consider the possible forms of two successive steps in any rightmost derivation. These two steps can be of the form

(1)  S ⇒* αAz ⇒ αβByz ⇒ αβγyz
(2)  S ⇒* αBxAz ⇒ αBxyz ⇒ αγxyz

In case (1), A is replaced by βBy, and then the rightmost nonterminal B in that right side is replaced by γ. In case (2), A is again replaced first, but this time the right side is a string y of terminals only. The next rightmost nonterminal B will be somewhere to the left of y.

Let us consider case (1) in reverse, where a shift-reduce parser has just reached the configuration

STACK: $αβγ        INPUT: yz$

The parser now reduces the handle γ to B to reach the configuration

STACK: $αβB        INPUT: yz$

It can then shift the string y onto the stack, reaching the configuration

STACK: $αβBy       INPUT: z$

in which βBy is the handle, and it gets reduced to A.

In case (2), in configuration

STACK: $αγ         INPUT: xyz$

the handle γ is on top of the stack. After reducing the handle γ to B, the parser can shift the string xy to get the next handle y on top of the stack,

STACK: $αBxy       INPUT: z$

Now the parser reduces y to A.
In both cases, after making a reduction the parser had to shift zero or more symbols to get the next handle onto the stack. It never had to go into the stack to find the handle. It is this aspect of handle pruning that makes a stack a particularly convenient data structure for implementing a shift-reduce parser.
My reasoning and doubts
Intuitively, this is how I feel the statement in the excerpt can be justified:
If there is a handle on the top of the stack, then the algorithm will first reduce it before pushing the next input symbol onto the stack. Since any possible handle is reduced before the push, there is no chance of a handle sitting on top of the stack when a new input symbol is pushed, which is what would cause the handle to end up inside the stack.
Moreover, I could not understand the logic the authors give in the highlighted portion of the excerpt, justifying that the handle cannot occur inside the stack, based on what they say about B and the other facts related to it.
Can anyone please help me understand the concept?
The key to the logic expressed by the authors is in the statement at the beginning (emphasis added):
This fact becomes obvious when we consider the possible forms of two successive steps in any rightmost derivation.
It's also important to remember that a bottom-up parser traces out a right-most derivation backwards. Each reduction performed by the parser is a step in the derivation; since the derivation is rightmost the non-terminal being replaced in the derivation step must be the last non-terminal in the sentential form. So if we write down the sequence of reduction actions used by the parser and then read the list backwards, we get the derivation. Alternatively, if we write down the list of productions used in the rightmost derivation and then read it backwards, we get the sequence of parser reductions.
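For instance, take the toy grammar S -> a B, B -> b. The rightmost derivation S => a B => a b applies S -> a B and then B -> b; reading that production list backwards gives the parser's reduction sequence: first reduce b to B, then reduce a B to S.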
Either way, the point is to prove that the successive handles in the derivation steps correspond to monotonically non-retreating prefixes in the original input. The authors' proof takes two derivation steps (any two derivation steps) and shows that the end of the handle of the second derivation step is not before the end of the handle of the first step (although the ends of the two handles may be at the same point in the input).
Is it OK for a custom language plugin using IntelliJ's plugin SDK to produce a PSIElement tree where
some PSIElements have no associated ASTNode, so where myPsiElement.getNode() == null
some PSIElements have children out of order, e.g. myPsiElement.children()[0].getStartOffsetInParent() > myPsiElement.children()[1].getStartOffsetInParent()
some PSIElements correspond to zero characters in the source: myPsiElement.getTextLength() == 0
Would any of these properties make it harder to take advantage of language plugin SDK features?
For background:
I'm creating a custom language plugin for IntelliJ per "Implementing a Parser and PSI."
The bottom of the diagram from the docs shows the relationship between ASTNodes and PsiElements.
IIUC, first a lexer segments the text into tokens. Then a parser drops node start and end marks between tokens to specify the parse tree structure. IntelliJ internals lift that token stream with markers into a (not-very-abstract) ASTNode tree. Finally, language-specific plugin code builds a PSI tree from the AST.
It looks, from that diagram, as though every node in the PSI tree is associated with an ASTNode.
This relationship seems bidirectional per PsiElement.getNode(). The diagram doesn't show an arrow between MyPsiFile and the MyElementType.FILE ASTNode but
PsiFile.getNode() suggests there has to be one.
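For reference, the "drops marks" step from the pipeline above looks roughly like this in a plugin's parser. PsiBuilder, its Marker, advanceLexer() and done() are the real SDK API; the method name and MyElementTypes.PROPERTY are made-up names for illustration:

// Hedged sketch: wraps two consecutive tokens in a PROPERTY node.
void parseProperty(com.intellij.lang.PsiBuilder builder) {
    com.intellij.lang.PsiBuilder.Marker marker = builder.mark(); // start mark before the tokens
    builder.advanceLexer();                 // consume, e.g., the key token
    builder.advanceLexer();                 // consume, e.g., the value token
    marker.done(MyElementTypes.PROPERTY);   // end mark; IntelliJ builds the ASTNode
                                            // (and from it the PSI element) between the marks
}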
For language-specific reasons, my existing parser produces a tree that doesn't clearly fit this model.
Nodes can appear out of order.
The expression a + b parses to a node like (call + a b). Note that infix and postfix operators are shifted left to before their first operand.
The parser synthesizes some nodes that correspond to no tokens, so the relationship between them and an AST node is unclear. For example, I need to produce distinct trees for
for (x;;) body
for (;x;) body
for (;;x) body
so the parser inserts symbols like the init= in (for init= x body) which the definition of for can use to decide what to do with x.
There is a concept of a "non-physical" PSIElement:
/**
 * Checks if an actual source or class file corresponds to the element. Non-physical elements include,
 * for example, PSI elements created for the watch expressions in the debugger.
 * Non-physical elements do not generate tree change events.
 * Also, {@link PsiDocumentManager#getDocument(PsiFile)} returns null for non-physical elements.
 */
but it's unclear to me whether those are associated with ASTNodes, or whether that's the right concept for nodes that are implied by the program text.
I have a problem trying to "decypher" a logical tree with Neo4j's Cypher.
I have a logical tree of Operations over Leaves, and I want to collect the valid sets of Leaves.
Currently I am trying to collect the valid sets of Leaves on a ValidConfiguration node, so that I can later quickly path through that configuration node.
Example
(1 AND 2) AND (3 AND 4)
is easy to match: MATCH (rule)-[:AND*]->(leaf) RETURN collect(leaf)
However
(1 XOR 2) AND (3 XOR 4)
is a problem, because once I collect 1,2,3,4 into a single variable I can no longer properly form the cartesian product implied by the AND operation; the valid sets would be (13, 14, 23, 24).
In general I have a tree of variable depth (up to a maximum of about 3-4).
Operations are XOR, AND, Not AND, Not XOR
Is there a simple way in Cypher I am missing for navigating such trees?
Is trying to merge the valid sets into a ValidConfiguration node a good idea for fast queries?
Later it should support a query of the form
(:Model)-->(:ValidConf)-->(:Leaf:Option)-->(:Feature)
then return all models that have a certain Feature in a valid configuration.
Or multiple Features at a certain configuration price.
Do I need UDFs or an object-graph mapper to get this problem solved?
Are there any UDFs that work with such decision trees which I can use?
Any help would be highly appreciated.
Create Example
CREATE (r:Rule{id:123})-[:COMPOSITION]->
       (startOp:AndOperation:Operation:Operand)
CREATE (startOp)-[:AND]->(intermediateOp1:OrOperation:Operation:Operand)
CREATE (startOp)-[:AND]->(intermediateOp2:OrOperation:Operation:Operand)
CREATE (intermediateOp1)-[:XOR]->(o1:Option:Operand{id:321})
CREATE (intermediateOp1)-[:XOR]->(o2:Option:Operand{id:564})
CREATE (intermediateOp2)-[:XOR]->(o3:Option:Operand{id:876})
CREATE (intermediateOp2)-[:XOR]->(o4:Option:Operand{id:227})
CREATE (o1)-[:CONSISTS_OF]->(f1:Feature{text:"magicwand"})
....
This tree is symmetric, but they usually aren't. I need o1 + o4 to be valid and o1 + o2 not to be valid. The OrOperation nodes are to be understood as XOR.
I don't think Cypher is going to work for evaluating a boolean binary expression tree. To quote cybersam's answer to a related question:
This is because Cypher has no looping statements powerful enough to
iteratively calculate subresults (in the correct order) for trees of
arbitrary depth.
You're going to have to look for some additional system to do the evaluation.
If you can code Java, you should be able to do this by implementing your own custom procedure to evaluate a boolean expression tree in the correct order.
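For instance, here is a rough, untested sketch of the evaluation core in Java, using only the embedded org.neo4j.graphdb API. It takes the labels from your CREATE example and assumes AndOperation means "all children hold" and OrOperation means XOR, i.e. "exactly one child holds"; extending it to NOT AND / NOT XOR and wrapping it in a @Procedure-annotated method is left out:

import java.util.Set;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;

public class RuleEvaluator {
    // Does the given set of chosen Option ids satisfy the subtree rooted at n?
    static boolean eval(Node n, Set<Long> chosenOptionIds) {
        if (n.hasLabel(Label.label("Option"))) {
            return chosenOptionIds.contains(((Number) n.getProperty("id")).longValue());
        }
        boolean isAnd = n.hasLabel(Label.label("AndOperation"));
        int children = 0, satisfied = 0;
        for (Relationship r : n.getRelationships(Direction.OUTGOING)) {
            // Skip the Option -> Feature edges; they are not part of the logic tree.
            if (r.isType(RelationshipType.withName("CONSISTS_OF"))) continue;
            children++;
            if (eval(r.getEndNode(), chosenOptionIds)) satisfied++;
        }
        return isAnd ? satisfied == children  // AND: every child must hold
                     : satisfied == 1;        // XOR: exactly one child must hold
    }
}

To enumerate the valid sets (rather than test one candidate), the same recursion can return a collection of leaf sets instead of a boolean: the union of the children's sets at an XOR node, and the cartesian product of them at an AND node.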
I'm implementing the automatic construction of an LALR parse table for no reason at all. There are two flavors of this parser, LALR(0) and LALR(1), where the number signifies the amount of look-ahead.
I have gotten myself confused on what look-ahead means.
If my input stream is 'abc' and I have the following production, would I need 0 look-ahead, or 1?
P :== a E
Same question, but I can't choose the correct P production in advance by only looking at the 'a' in the input.
P :== a b E
| a b F
I have additional confusion in that I don't think the latter P-productions really happen when building an LALR parser generator. The reason is that the grammar is effectively left-factored automatically as we compute the closures.
I was working through this page and was OK until I got to the FIRST/FOLLOW section. My issue here is that I don't know why we are calculating these things, so I am having trouble abstracting this in my head.
I almost get the idea that the look-ahead is not related to shifting input, but instead to deciding when to reduce.
I've been reading the Dragon book, but it is about as linear as a Tarantino script. It seems like a great reference for people who already know how to do this.
The first thing you need to do when learning about bottom-up parsing (such as LALR) is to remember that it is completely different from top-down parsing. Top-down parsing starts with a nonterminal, the left-hand-side (LHS) of a production, and guesses which right-hand-side (RHS) to use. Bottom-up parsing, on the other hand, starts by identifying the RHS and then figures out which LHS to select.
To be more specific, a bottom-up parser accumulates incoming tokens into a queue until a right-hand side is at the right-hand end of the queue. Then it reduces that RHS by replacing it with the corresponding LHS, and checks to see whether an appropriate RHS is at the right-hand edge of the modified accumulated input. It keeps on doing that until it decides that no more reductions will take place at that point in the input, and then reads a new token (or, in other words, takes the next input token and shifts it onto the end of the queue.)
This continues until the last token is read and all possible reductions are performed, at which point if what remains is the single non-terminal which is the "start symbol", it accepts the parse.
It is not obligatory for the parser to reduce a RHS just because it appears at the end of the current queue, but it cannot reduce a RHS which is not at the end of the queue. That means that it has to decide whether to reduce or not before it shifts any other token. Since the decision is not always obvious, it may examine one or more tokens which it has not yet read ("lookahead tokens", because it is looking ahead into the input) in order to decide. But it can only look at the next k tokens for some value of k, typically 1.
Here's a very simple example; a comma separated list:
1. Start -> List
2. List -> ELEMENT
3. List -> List ',' ELEMENT
Let's suppose the input is:
ELEMENT , ELEMENT , ELEMENT
At the beginning, the input queue is empty, and since no RHS is empty the only alternative is to shift:
queue              remaining input              action
-----------------  ---------------------------  --------
                   ELEMENT , ELEMENT , ELEMENT  SHIFT
At the next step, the parser decides to reduce using production 2:
ELEMENT            , ELEMENT , ELEMENT          REDUCE 2
Now there is a List at the end of the queue, so the parser could reduce using production 1, but it decides not to, based on the fact that it sees a ',' in the incoming input. This goes on for a while:
List               , ELEMENT , ELEMENT          SHIFT
List ,             ELEMENT , ELEMENT            SHIFT
List , ELEMENT     , ELEMENT                    REDUCE 3
List               , ELEMENT                    SHIFT
List ,             ELEMENT                      SHIFT
List , ELEMENT     --                           REDUCE 3
Now the lookahead token is the "end of input" pseudo-token. This time, it does decide to reduce:
List               --                           REDUCE 1
Start              --                           ACCEPT
and the parse is successful.
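If it helps to see that decision procedure in code, here is a hedged Java sketch that reproduces exactly this trace. The grammar is hard-coded, and the only lookahead-dependent decision is that List is reduced to Start only when the lookahead is the end-of-input pseudo-token $ (i.e. when the lookahead is in FOLLOW(Start)):

import java.util.ArrayList;
import java.util.List;

public class ListParser {
    public static void main(String[] args) {
        List<String> input = List.of("ELEMENT", ",", "ELEMENT", ",", "ELEMENT");
        List<String> queue = new ArrayList<>();   // accumulated symbols; the right end is the top
        int pos = 0;
        while (true) {
            String lookahead = pos < input.size() ? input.get(pos) : "$";
            int n = queue.size();
            if (n >= 3 && queue.get(n - 3).equals("List")
                       && queue.get(n - 2).equals(",")
                       && queue.get(n - 1).equals("ELEMENT")) {
                queue.subList(n - 3, n).clear();
                queue.add("List");                 // REDUCE 3: List -> List ',' ELEMENT
            } else if (n >= 1 && queue.get(n - 1).equals("ELEMENT")) {
                queue.set(n - 1, "List");          // REDUCE 2: List -> ELEMENT
            } else if (n == 1 && queue.get(0).equals("List") && lookahead.equals("$")) {
                queue.set(0, "Start");             // REDUCE 1: only when lookahead is $
            } else if (!lookahead.equals("$")) {
                queue.add(input.get(pos++));       // SHIFT the next token onto the queue
            } else {
                break;                             // no shift or reduce is possible
            }
            System.out.println(queue + "   lookahead: " + lookahead);
        }
        System.out.println(queue.equals(List.of("Start")) ? "ACCEPT" : "ERROR");
    }
}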
That still leaves a few questions. To start with, how do we use the FIRST and FOLLOW sets?
As a simple answer, the FOLLOW set of a non-terminal cannot be computed without knowing the FIRST sets for the non-terminals which might follow it. And one way we can decide whether or not a reduction should be performed is to see whether the lookahead is in the FOLLOW set for the target non-terminal of the reduction; if not, the reduction certainly should not be performed. That algorithm is sufficient for the simple grammar above, for example: the reduction Start -> List is not possible with a lookahead of ',', because ',' is not in FOLLOW(Start). Grammars whose only conflicts can be resolved in this way are SLR grammars (where S stands for "Simple", which it certainly is).
For most grammars, that is not sufficient, and more analysis has to be performed. It is possible that a symbol might be in the FOLLOW set of a non-terminal, but not in the context which led to the current stack configuration. In order to determine that, we need to know more about how we got to the current configuration; the various possible analyses lead to LALR, IELR and canonical LR parsing, amongst other possibilities.
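To connect this back to the FIRST/FOLLOW computation itself, here is a rough Java sketch of the textbook fixed-point construction for the small grammar above. It assumes there are no nullable non-terminals (the full algorithm also has to propagate through productions that can derive the empty string):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class FollowSets {
    // Productions for the example grammar; index 0 is the LHS.
    static final String[][] PRODS = {
        {"Start", "List"},                  // 1. Start -> List
        {"List", "ELEMENT"},                // 2. List -> ELEMENT
        {"List", "List", ",", "ELEMENT"},   // 3. List -> List ',' ELEMENT
    };

    static boolean isNonterminal(String s) {
        return s.equals("Start") || s.equals("List");
    }

    // FIRST of a single symbol: a terminal is its own FIRST set.
    static Set<String> firstOf(String sym, Map<String, Set<String>> first) {
        return isNonterminal(sym) ? first.get(sym) : Set.of(sym);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> first = new HashMap<>();
        Map<String, Set<String>> follow = new HashMap<>();
        for (String[] p : PRODS) {
            first.computeIfAbsent(p[0], k -> new HashSet<>());
            follow.computeIfAbsent(p[0], k -> new HashSet<>());
        }
        follow.get("Start").add("$");  // end-of-input follows the start symbol
        boolean changed = true;
        while (changed) {              // iterate until nothing new is added
            changed = false;
            for (String[] p : PRODS) {
                // FIRST(LHS) grows by FIRST of the first RHS symbol (no nullables here).
                changed |= first.get(p[0]).addAll(firstOf(p[1], first));
                for (int i = 1; i < p.length; i++) {
                    if (!isNonterminal(p[i])) continue;
                    Set<String> add = (i + 1 < p.length)
                            ? firstOf(p[i + 1], first)  // FOLLOW(B) grows by FIRST of what follows B
                            : follow.get(p[0]);         // B at the end inherits FOLLOW(LHS)
                    changed |= follow.get(p[i]).addAll(add);
                }
            }
        }
        System.out.println("FIRST:  " + first);   // FIRST(Start)={ELEMENT}, FIRST(List)={ELEMENT}
        System.out.println("FOLLOW: " + follow);  // FOLLOW(Start)={$}, FOLLOW(List)={$, ,}
    }
}

Running it confirms the claim above: ',' is in FOLLOW(List) but not in FOLLOW(Start), which is what lets an SLR parser refuse the Start -> List reduction mid-input.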
I'm trying to learn how to think in a functional programming way; for this, I'm trying to learn Erlang and solve easy problems from codingbat. I came across the common problem of comparing elements inside a list, for example comparing the value of the i-th element with the value of the (i+1)-th element of the list. So I have been thinking and searching for how to do this in a functional way in Erlang (or any functional language).
Please be gentle with me, I'm very much a newbie in this functional world, but I want to learn.
Thanks in advance
Define a list:
L = [1,2,3,4,4,5,6]
Define a function f which takes a list:
If it matches a list of one element or an empty list, return the empty list.
If the first element matches the second element, take the first element and cons it onto the result of calling f recursively on the rest of the list.
Otherwise, skip the first element of the list.
In Erlang code
f([]) -> [];                      % empty list: nothing to compare
f([_]) -> [];                     % single element: no neighbour to compare with
f([X, X|Rest]) -> [X | f(Rest)];  % first two elements are equal: keep one
f([_|Rest]) -> f(Rest).           % otherwise drop the head and continue
Apply the function:
f(L)
This should work... I haven't compiled and run it, but it should get you started, and it should be easy to modify in case you need it to behave differently.
Welcome to Erlang ;)
I'll try to be gentle ;-) The main thing in the functional approach is thinking in these terms: what is the input? what should the output be? There is no such thing as comparing the i-th element with the (i+1)-th element in isolation; there always has to be a purpose, which leads to a data transformation. Even Mazen Harake's example does this: it is a function that returns only the elements which are followed by the same value, i.e. it filters the given list. Typically there are very different ways to do similar things, depending on the purpose. The list is the basic functional structure, and you can do amazing things with it, as Lisp shows us, but you have to remember that it is not an array.
Each time you need repeated access to the i-th element, it indicates you are using the wrong data structure. You can build different data structures from lists and tuples in Erlang which can serve your purposes better. So when you face the problem of comparing the i-th element with the (i+1)-th, you should stop and think: what is its purpose? Do you need to perform some stream data transformation, as Mazen Harake does, or do you need random access? If the latter, you should use a different data structure (an array, for example). Even then you should think about your task's characteristics. If you will mostly read and almost never write, then you can use list_to_tuple(L) and read using element/2. When you need to write occasionally, you will start thinking about partitioning it into several tuples, and as your write ratio grows you will end up with an array implementation.
So you can use lists:nth/2 if you will do it only once, or a few times on a short list, and you are not a performance freak as I am. You can improve on it using [X1,X2|_] = lists:nthtail(I-1, L) (L = lists:nthtail(0,L) works as expected). If you are facing bigger lists and you want to call it many times, you have to rethink your approach.
P.S.: There are many other fascinating data structures except lists and trees. Zippers for example.