Why is a node with only one child impossible in a Huffman tree?

To construct a coding tree, no codeword may be a prefix of another; otherwise decoding would be ambiguous.
So the encoded characters should all be leaves.
But why is it impossible for a node to have only one child? Such a tree still meets the prefix requirement and causes no ambiguity.
I know that the Huffman algorithm, which constructs the coding tree from bottom to top, can't produce such a node.
Are there other encoding algorithms that can produce a coding tree in which a node has only one child?

I can only make sense out of the last line of your question. If a node has only one child, then you end up with an unused code or codes, since only the 0 or 1 branch occurs. Codes with a 1 or 0 respectively in that position have no meaning or use. (Note that there is no ambiguity problem here. It is just that some sequences of bits cannot be decoded.)
There is no reason not to collapse that node and its child to just the child, eliminating the unused code. If there were an encoding algorithm that could result in such a situation, it would be producing suboptimal codes. That could easily be remedied by applying another trivial algorithm to go through and collapse those cases, eliminating the problem, using all the codes, and improving the compression.
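For illustration, here is a minimal sketch of that trivial collapse pass (my own, not from the answer), assuming a node shape where leaves carry a symbol and internal nodes have left/right children:

// Collapse every node that has exactly one child, replacing it by that
// child; this shortens each affected code by one bit.
function collapse(node) {
  if (!node || (!node.left && !node.right)) return node; // leaf or empty
  node.left = collapse(node.left);
  node.right = collapse(node.right);
  if (node.left && !node.right) return node.left;   // only the 0-branch used
  if (!node.left && node.right) return node.right;  // only the 1-branch used
  return node;
}

// A degenerate tree whose only leaf sat two levels deep:
const t = { left: { left: { symbol: 'a' } } };
console.log(collapse(t)); // { symbol: 'a' } -- the unused codes are gone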

Related

Data model for transforming source into AST and back?

I am working on a custom programming language. On compiling it, the parser first converts the text into a simple stream of tokens. The tokens are then converted into a simple tree. The tree is then converted into an object graph (with holes in it, as the types haven't yet been necessarily fully figured out). The holey tree is then transformed into a compact object graph.
Then we can go further and compile it to, say, JavaScript. The compact object graph is then transformed into a JavaScript AST. The JS AST is then transformed into a "concrete" syntax tree (with whitespace and such), and then that is converted into the JS text.
So in going from text to compact object graph, there are four transformation steps across five representations (text -> token_list -> tree -> holey_graph -> graph). In other situations (other languages), you might have more or fewer.
The way I am doing this transformation now is very ad hoc and doesn't keep track of line numbers, so it's impossible to really tell where an error is coming from. I would like to fix that.
In my case, I am wondering how you could create a data model to keep track of the line of text where something was defined. This way, you could report any compilation errors nicely to the developer. The way I have modeled that so far is with a sort of "folding" model as I'm calling it. The initial "fold" is on the text -> token_list transformation. For each token, it keeps track of 3 things: the line, the column, and the text length, for the token. At first you may model it like this:
{
  token: 'function',
  line: 10,
  column: 2,
  size: 8
}
But that is tying two concepts into one object: the token itself, and the "fold" as I am calling it. Really it would be better like this:
fold = {
  line: 10,
  column: 2,
  size: 8
}
token = {
  value: 'function'
}
// bind the two together.
fold.data = token
token.fold = fold
Then, you transform from token to AST node in the simple tree. That might be like:
treeNode = {
  type: 'function'
}
fold = {
  previous: tokenFold,
  data: treeNode
}
And so we connect the dots like this. In the end, you would have a fold list, which could theoretically be traversed from the compact object graph back to the text, so if there was a compile error, for example during typechecking, you could report the exact line number and everything to the developer. The navigation would look something like this:
data = compactObjectGraph
  .fold
  .previous.previous.previous.previous
  .data
data.line
data.column
data.size
In theory. But the problem is, the "compact object graph" might have been created not from a simple linear chain of inputs, but from a suite of inputs. While I have modeled this on paper so far, I am starting to think there isn't actually a clear way to map, object to object, how each was transformed using this sort of "fold" system.
The question is, how can I define the data model to allow for getting back to the source text line/column number, given there is a complex sequence of transformations from one data structure to the next? That is, at a high level, what is a way to model this that will allow you to isolate the transformation data structures, yet be able to map from the last generated one to the first, to find how some compact object graph node was actually represented in the original source text?
I would create a data structure containing the filename, line, and column. In C++ it may work well to store a reference to this structure rather than copy it to many places.
There aren't really that many ways to solve this, but having a single structure that is reusable across your other data structures is almost certainly the right solution.
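A sketch of that idea in JavaScript terms (the names here are mine, for illustration): one location record per source position, shared by reference among every artifact derived from it.

// One shared source-location record, referenced (not copied) by every
// artifact derived from that piece of source.
const loc = { file: 'main.lang', line: 10, column: 2, size: 8 };

const token    = { value: 'function', loc };  // lexer output
const treeNode = { type: 'function',  loc };  // simple tree, same loc object
const astNode  = { type: 'FuncDecl',  loc };  // compact graph, still same loc

// Any later stage can report errors directly:
console.log(`error at ${astNode.loc.file}:${astNode.loc.line}:${astNode.loc.column}`);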
I answered your question on Quora in July, so maybe you missed it: https://qr.ae/pvkrwJ
Basically you have to stamp all the compiler artifacts with the source information from which they are derived, best represented as some kind of structure (Mats' response). Yes, that takes effort, because you have to do it everywhere in the compiler.
To do a perfect job, you'd need to stamp it with the complete set of source items that caused its generation; you're essentially producing a dependency graph. (You could represent such sets as trees of subsets to maximize sharing). Then any complaint the compiler issued could clearly identify the set of causes.
To do a less perfect job, you can pick any one of the contributing items and use that as the source location dependency. That means a compiler complaint will only identify one source location that might be the cause, and the reader will have to guess at others if that isn't the principal source of the problem. Judicious choice of which-cause source information can arrange it so the answer is right much of the time, and that's probably good enough.
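A rough sketch of that less perfect scheme (all names are mine): each derived artifact carries its full origin set, but complaints report one judiciously chosen representative.

// Derive a new artifact from existing ones, accumulating the full
// dependency set while designating a single "primary" origin to report.
function derive(kind, ...inputs) {
  return {
    kind,
    origins: inputs.flatMap(i => i.origins),  // full dependency set
    primary: inputs[0].origins[0]             // the one we report
  };
}

const a = { kind: 'token', origins: [{ line: 10, column: 2 }] };
const b = { kind: 'token', origins: [{ line: 10, column: 11 }] };
const node = derive('binaryExpr', a, b);
console.log(node.primary); // { line: 10, column: 2 }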

In shift-reduce parsing, why does the handle always eventually appear on top of the stack and never inside?

I was going through the text Compilers: Principles, Techniques, and Tools by Aho, Sethi, and Ullman, where I came across an excerpt in which the authors justify why a stack is the best data structure for shift-reduce parsing. They say it is so because of the fact that
"The handle will always eventually appear on top of the stack, never inside."
The Excerpt
This fact becomes obvious when we consider the possible forms of two successive steps in any rightmost derivation. These two steps can be of the form

(1) S ⇒* αAz ⇒ αβByz ⇒ αβγyz
(2) S ⇒* αBxAz ⇒ αBxyz ⇒ αγxyz

In case (1), A is replaced by βBy, and then the rightmost nonterminal B in that right side is replaced by γ. In case (2), A is again replaced first, but this time the right side is a string y of terminals only. The next rightmost nonterminal B will be somewhere to the left of y.
Let us consider case (1) in reverse, where a shift-reduce parser has just reached the configuration

$αβγ (stack)    yz$ (remaining input)

The parser now reduces the handle γ to B to reach the configuration

$αβB    yz$

It can then shift y onto the stack, reaching

$αβBy    z$

in which βBy is the handle, and it gets reduced to A.
In case (2), in configuration

$αγ    xyz$

the handle γ is on top of the stack. After reducing the handle γ to B, the parser can shift the string xy to get the next handle y on top of the stack:

$αBxy    z$

Now the parser reduces y to A.
In both cases, after making a reduction the parser had to shift zero or more symbols to get the next handle onto the stack. It never had to go into the stack to find the handle. It is this aspect of handle pruning that makes a stack a particularly convenient data structure for implementing a shift-reduce parser.
My reasoning and doubts
Intuitively, this is how I feel the statement can be justified:
If there is a handle on the top of the stack, then the algorithm will reduce it before pushing the next input symbol onto the stack. Since any possible handle is reduced before the push, there is no chance of a handle sitting on top of the stack, a new input symbol being pushed, and the handle thereby ending up inside the stack.
However, I could not understand the logic the authors give in the highlighted portion of the excerpt, justifying that the handle cannot occur inside the stack, based on what they say about B and the other facts related to it.
Can anyone please help me understand the concept?
The key to the logic expressed by the authors is in the statement at the beginning (emphasis added):
This fact becomes obvious when we consider the possible forms of two successive steps in any rightmost derivation.
It's also important to remember that a bottom-up parser traces out a right-most derivation backwards. Each reduction performed by the parser is a step in the derivation; since the derivation is rightmost the non-terminal being replaced in the derivation step must be the last non-terminal in the sentential form. So if we write down the sequence of reduction actions used by the parser and then read the list backwards, we get the derivation. Alternatively, if we write down the list of productions used in the rightmost derivation and then read it backwards, we get the sequence of parser reductions.
Either way, the point is to prove that the successive handles in the derivation steps correspond to monotonically non-retreating prefixes in the original input. The authors' proof takes two derivation steps (any two derivation steps) and shows that the end of the handle of the second derivation step is not before the end of the handle of the first step (although the ends of the two handles may be at the same point in the input).
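To make the non-retreating behavior concrete, here is a toy trace of my own (not from the book): an ad-hoc shift-reduce loop for the grammar E -> E + n | n. It only ever inspects the top of the stack; after each reduction it shifts zero or more symbols until the next handle has appeared there.

// Hard-coded handle recognition for the toy grammar E -> E + n | n.
function parse(tokens) {
  const input = tokens.concat('$');
  const stack = [];
  let pos = 0;
  for (;;) {
    if (stack.slice(-3).join(' ') === 'E + n') {      // handle E + n on top
      stack.splice(-3, 3, 'E');
      console.log('reduce E+n -> E ', stack.join(' '));
    } else if (stack[stack.length - 1] === 'n') {     // handle n on top
      stack.splice(-1, 1, 'E');
      console.log('reduce n -> E   ', stack.join(' '));
    } else if (input[pos] !== '$') {
      stack.push(input[pos++]);                       // shift
      console.log('shift           ', stack.join(' '));
    } else {
      return stack.join(' ') === 'E';                 // accept iff stack is just E
    }
  }
}

console.log(parse(['n', '+', 'n'])); // true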

LL top-down parser, from CST to AST

I am currently learning about syntax analysis, and more especially, top-down parsing.
I know the terminology and the difference from bottom-up LR parsers, and since top-down LL parsers are easier to implement by hand, I am looking forward to making my own.
I have seen two kinds of approach:
The recursive-descent one using a collection of recursive functions.
The stack-based and table-driven automaton as shown here on Wikipedia.
I am more interested in the latter, for its power and its elimination of call-stack recursion. However, I don't understand how to build the AST from the implicit parse tree.
This code example of a stack-based automaton shows the parser analyzing the input buffer, only giving a yes/no answer as to whether the syntax is accepted.
I have heard of stack annotations used to build the AST, but I can't figure out how to implement them. Can someone provide a practical implementation of such a technique?
"Top-down" and "bottom-up" are excellent descriptions of the two parsing strategies, because they describe precisely how the syntax tree would be constructed if it were constructed. (You can also think of it as the traversal order over the implicit parse tree but here we're actually interested in real parse trees.)
It seems clear that there is an advantage to bottom-up tree construction. When it is time to add a node to the tree, you already know what its children are. You can construct the node fully-formed in one (functional) action. All the child information is right there waiting for you, so you can add semantic information to the node based on the semantic information of its children, even using the children in an order other than left-to-right.
By contrast, the top-down parser constructs the node without any children, and then needs to add each child in turn to the already constructed node. That's certainly possible, but it's a bit ugly. Also, the incremental nature of the node constructor means that semantic information attached to the node also needs to be computed incrementally, or deferred until the node is fully constructed.
In many ways, this is similar to the difference between evaluating expressions written in Reverse Polish Notation (RPN) from expressions written in (Forward) Polish Notation [Note 1]. RPN was invented precisely to ease evaluation, which is possible with a simple value stack. Forward Polish expressions can be evaluated, obviously: the easiest way is to use a recursive evaluator but in environments where the call stack can not be relied upon it is possible to do it using an operator stack, which effectively turns the expression into RPN on the fly.
So that's probably the mechanism of choice for building syntax trees from top-down parsers as well. We add a "reduction" marker to the end of every right-hand side. Since right-hand sides are pushed onto the parser stack right-to-left, the marker is pushed first.
We also need a value stack, to record the AST nodes (or semantic values) being constructed.
In the basic algorithm, we now have one more case. We start by popping the top of the parser stack, and then examine this object:
The top of the parser stack was a terminal. If the current input symbol is the same terminal, we remove the input symbol from the input, and push it (or its semantic value) onto the value stack.
The top of the parser stack was a marker. The associated reduction action is triggered, which will create the new AST node by popping an appropriate number of values from the value stack and combining them into a new AST node which is then pushed onto the value stack. (As a special case, the marker action for the augmented start symbol's unique production S' -> S $ causes the parse to be accepted, returning the (only) value in the value stack as the AST.)
The top of the parser stack was a non-terminal. We then identify the appropriate right-hand side using the current input symbol, and push that right-hand side (right-to-left) onto the parser stack.
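For concreteness, here is a minimal sketch of this scheme. The grammar, the table, and all helper names are my own assumptions for illustration (a toy LL(1) grammar E -> ( E + E ) | int), not any particular library's API:

// Parser stack holds grammar symbols (strings) and reduction markers
// (functions). Each right-hand side is stored already reversed, with
// its marker first, so pushing the array leaves the leftmost symbol
// on top and the marker deepest.
const lex = s => s.match(/\(|\)|\+|\d+/g);  // trivial tokenizer

function parse(tokens) {
  const input = tokens.concat('$');
  const values = [];                        // value stack: tokens / AST nodes
  const numMarker = () => {
    values.push({ type: 'num', value: Number(values.pop()) });
  };
  const addMarker = () => {                 // pop RHS values in reverse order
    values.pop();                           // ')'
    const right = values.pop();
    values.pop();                           // '+'
    const left = values.pop();
    values.pop();                           // '('
    values.push({ type: 'add', left, right });
  };
  const kind = t => /^\d+$/.test(t) ? 'int' : t;
  const table = {                           // prediction table, RHS reversed
    E: { '(': [addMarker, ')', 'E', '+', 'E', '('],
         int: [numMarker, 'int'] }
  };
  const stack = ['$', 'E'];                 // start symbol on top
  let pos = 0;
  while (stack.length) {
    const top = stack.pop();
    if (typeof top === 'function') {        // marker: build an AST node
      top();
    } else if (top in table) {              // non-terminal: predict
      const rhs = table[top][kind(input[pos])];
      if (!rhs) throw new Error('syntax error at ' + input[pos]);
      stack.push(...rhs);
    } else {                                // terminal: match and save
      if (kind(input[pos]) !== top) throw new Error('expected ' + top);
      if (top !== '$') values.push(input[pos]);
      pos++;
    }
  }
  return values.pop();                      // the finished AST
}

console.log(JSON.stringify(parse(lex('(1+(2+3))'))));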
You need to understand the concept behind it: the pushdown automaton. Once you understand how to carry out the computation on paper, with a pencil, you will be able to understand the many ways to implement the idea, whether via recursive descent or with an explicit stack. The ideas are the same; when you use recursive descent, you implicitly use the call stack of the running program, where the execution data is combined with the parsing automaton's data.
I suggest you start with the course taught by Ullman (automata) or with the book by Dick Grune, which is the one best focused on parsing; look for the 2nd edition.
For LR parsing, the essential thing is to understand the ideas of Earley; from those ideas Don Knuth created the LR method.
For LL parsing, Grune's book is excellent, and Ullman presents the computation on paper, the mathematical background of parsing that is essential to know if you want to implement your own parsers.
Concerning the AST: this is the output of parsing. A parser generates a parse tree that is transformed into an AST, or it can generate and output the AST directly.

Simplify equations/expressions using JavaCC/JJTree

I have created a grammar to read a file of equations and then created AST nodes for each rule. My question is: how can I do simplification or substitute values in the equations that the parser reads correctly? At which stage: before creating the AST nodes, or after?
Please provide me with ideas or tutorials to follow.
Thank you.
I'm assuming your equations are something like simple polynomials over real-valued variables, like X^2+3*Y^2.
You ask for two different solutions to two different problems that start with having an AST for at least one equation:
How to "substitute values" into the equation and compute the resulting value, e.g, for X==3 and Y=2, substitute into the AST for the formula above and compute 3^2+3*2^2 --> 21
How to do simplification: I assume you mean algebraic simplification.
The first problem, substituting values, is fairly easy if you already have the AST. (If not, parse the equation to produce the AST first!) Then all you have to do is walk the AST, replacing every leaf node containing a variable name with the corresponding value, and then doing arithmetic on any parent nodes whose children now happen to be numbers; you repeat this until no more nodes can be arithmetically evaluated. Basically you wire simple arithmetic into a tree-evaluation scheme.
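As a concrete illustration, here is a small sketch of that walk, using a hand-rolled node shape of my own (not JJTree's generated classes). Because the walk is post-order, a single pass folds everything that can be folded:

// Leaves are {type:'num'} or {type:'var'}; interior nodes are binary
// operators {type:'op', op, left, right}.
function substitute(node, env) {
  switch (node.type) {
    case 'num': return node;
    case 'var': // replace the leaf if we know the variable's value
      return node.name in env ? { type: 'num', value: env[node.name] } : node;
    case 'op': {
      const left = substitute(node.left, env);
      const right = substitute(node.right, env);
      if (left.type === 'num' && right.type === 'num') {
        // both children are numbers now: fold the arithmetic
        const f = { '+': (a, b) => a + b, '-': (a, b) => a - b,
                    '*': (a, b) => a * b, '^': (a, b) => a ** b }[node.op];
        return { type: 'num', value: f(left.value, right.value) };
      }
      return { type: 'op', op: node.op, left, right };
    }
  }
}

// X^2 + 3*Y^2 with X==3, Y==2  -->  { type: 'num', value: 21 }
const ast = { type: 'op', op: '+',
  left:  { type: 'op', op: '^', left: { type: 'var', name: 'X' },
           right: { type: 'num', value: 2 } },
  right: { type: 'op', op: '*', left: { type: 'num', value: 3 },
           right: { type: 'op', op: '^', left: { type: 'var', name: 'Y' },
                    right: { type: 'num', value: 2 } } } };
console.log(substitute(ast, { X: 3, Y: 2 }).value); // 21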
Sometimes your evaluation will reduce the tree to a single value, as in the example, and you can print the numeric result. My SO answer shows how to do that in detail. You can easily implement this yourself in a small project, even using JavaCC/JJTree appropriately adapted.
Sometimes the formula will end up in a state where no further arithmetic is possible, e.g., 1+x+y with x==0 and nothing known about y; the result of such a substitution/arithmetic-evaluation process will then be 1+y. Unfortunately, you will only have this as an AST... now you need to print out the resulting AST for the user to see the result. This is harder; see my SO answer on how to prettyprint a tree. It is considerably more work; if you restrict your trees to just polynomial expressions, you can still do this in a small project. JavaCC will help you with parsing, but provides zero help with prettyprinting.
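For the simple node shape used in the sketch above, a fully parenthesizing printer is only a few lines; the hard part the answer alludes to (minimal parentheses, precedence, layout) is what takes real work:

// Naive prettyprinter: parenthesizes every operator node, so it never
// has to reason about precedence.
function print(node) {
  switch (node.type) {
    case 'num': return String(node.value);
    case 'var': return node.name;
    case 'op':  return `(${print(node.left)} ${node.op} ${print(node.right)})`;
  }
}
// On the AST for 1+X+Y with X==0, print(substitute(...)) gives "(1 + Y)".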
The second problem is much harder, because you must not only accomplish variable substitution and arithmetic evaluation as above, but you have to somehow encode knowledge of algebraic laws, and how to match those laws to complex trees. You might hardwire one or two algebraic laws (e.g., x+0 -> x; y-y -> 0) but hardwiring many laws this way will produce an impossible mess because of how they interact.
JavaCC might form part of such an answer, but only a small part; the rest of the solution is hard enough so you are better off looking for an alternative rather than trying to build it all on top of JavaCC.
You need a more organized approach for this: a Program Transformation System (PTS). A typical PTS will allow you to specify
a grammar for an arbitrary language (in your case, simply polynomials),
and will automatically parse instances to ASTs and regenerate valid text from the AST. A good PTS will let you write source-to-source transformation rules that the PTS applies automatically to the instance AST; in your case you'd write down the algebraic laws as source-to-source rules and then the PTS does all the work.
A full example is too long to provide here. But here I describe how to define formulas suitable for early calculus classes, and how to define algebraic rules that simplify such formulas, including applying some classic calculus derivative laws.
With sufficient/significant effort, you can build your own PTS on top of JavaCC/JJTree. This is likely to take a few man-years. It is easier to get a PTS than to repeat all that work.

How to read through an LL parse?

I read that an LL parser is a top-down parser, so logically I suppose that we read through the tree from top to bottom.
However, there are many ways to read from top to bottom.
I found a page on Wikipedia about depth-first traversal, which describes walking a tree data structure (a binary tree).
There are three kinds of depth-first traversal: pre-order, in-order, and post-order.
In my mind, I suppose that I need to use the post-order one, but how can I be sure?
How do I know which kind of depth-first traversal to use for LL parsing?
Depth-first traversal: https://en.wikipedia.org/wiki/Tree_traversal
Thanks!
There's usually an infinite number of ways to traverse a grammar, just like there's an infinite number of possible inputs that adhere to the grammar.
When you walk the grammar you typically don't do it like you would a traditional tree or graph structure. Rather, your walk is dictated by an input stream of tokens coming from the lexer.
E.g., if you find yourself at a place in the grammar that has a production where either an identifier or an integer literal may occur, the branch taken is dictated by whether the current token is one or the other (or something else, which would then be a syntax error for that input).
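A minimal recursive-descent fragment (the rule and names are my own example) showing how the current token picks the branch:

// Grammar rule being walked:  primary -> IDENTIFIER | INTEGER
function parsePrimary(tokens) {
  const tok = tokens[0];
  if (tok.kind === 'IDENTIFIER') {      // lookahead picks this branch
    tokens.shift();
    return { type: 'name', value: tok.text };
  }
  if (tok.kind === 'INTEGER') {         // ...or this one
    tokens.shift();
    return { type: 'literal', value: Number(tok.text) };
  }
  // anything else is a syntax error for this input
  throw new SyntaxError('expected identifier or integer, got ' + tok.kind);
}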
