In the process of learning about compilers I wrote a simple tokenizer and a recursive-descent parser. The parser constructs an abstract syntax tree. Now I am moving on to semantic analysis, but I have a few questions about how to construct the semantic analyser. Should I analyse the code semantically by making recursive calls through the generated abstract syntax tree, or should I construct another tree (using a visitor pattern, for example) for the purpose of semantic analysis? I found a document online which says that I should analyse the code semantically during parsing, but that does not comply with the single responsibility principle and makes the whole parser more error-prone. Or should I make semantic analysis part of the intermediate representation generator? Maybe I am missing something; I would be grateful if someone could clarify this for me.
You are learning. Keep it simple; build a tree and run the semantic analyzer over the tree when parsing is completed.
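For a concrete picture, here is a minimal sketch of such a separate pass in Python (the node classes and the "declared before use" rule are hypothetical, not taken from your parser); the analyzer just walks the finished tree and records errors:

```python
# Minimal sketch of a post-parse semantic pass over an AST (hypothetical node classes).
class Number:
    def __init__(self, value):
        self.value = value

class Var:
    def __init__(self, name):
        self.name = name

class Assign:
    def __init__(self, name, expr):
        self.name, self.expr = name, expr

class SemanticAnalyzer:
    def __init__(self):
        self.symbols = set()   # names that have been assigned so far
        self.errors = []

    def visit(self, node):
        # Dispatch on the node's class name: visit_Assign, visit_Var, ...
        method = getattr(self, 'visit_' + type(node).__name__, self.generic_visit)
        method(node)

    def generic_visit(self, node):
        pass                   # nodes with no semantic rule attached

    def visit_Assign(self, node):
        self.visit(node.expr)          # check the right-hand side first
        self.symbols.add(node.name)    # then record the name as defined

    def visit_Var(self, node):
        if node.name not in self.symbols:
            self.errors.append(f"use of undeclared variable '{node.name}'")

# Usage: run the analyzer over the tree the parser produced.
program = [Assign('x', Number(1)), Assign('y', Var('z'))]
analyzer = SemanticAnalyzer()
for stmt in program:
    analyzer.visit(stmt)
print(analyzer.errors)   # ["use of undeclared variable 'z'"]
```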
If you decide (someday) to build a fast compiler, you might consider implementing some of that semantic analysis as you parse. This makes building both the parser and the semantic analyzer harder because they are now interacting (tangled is a better word; read about why most C++ parsers are implemented with a so-called "lexer hack" if you want to get ill). You'll also find that sometimes the information you need isn't available yet ("is the target of that goto defined so far?"), so you usually can't do a complete job as the parse runs, or you may have to delay some semantic processing until later in the parse, and that's tricky to arrange. I don't recommend adding this kind of complexity early in your compiler education.
Start simple and focus on learning what semantic analysis is about.
You can optimize later when it is clear what you have to optimize and why you should do it.
Purely out of curiosity, I'd like to write a hand-written parser; I know there are many compiler-compilers, though.
Ironically, since I'm new to this field, I have no confidence that I can absorb the underlying theory first and then apply that theoretical knowledge to build a parser and, further on, a full compiler.
So a learning-by-doing approach might be helpful. Are there good books/articles/tutorials that fit this approach?
In my application I am using Stanford CoreNLP for parsing english text into a graph data structure (Universal Dependencies).
After some modifications of the graph I need to generate a natural language output for which I am using SimpleNLG: https://github.com/simplenlg/simplenlg
However, SimpleNLG uses a phrase structure grammar.
Therefore, in order to use SimpleNLG successfully for natural language generation, I need to convert from Universal Dependencies into phrase structure.
What is the easiest way of achieving this?
So far I have only come across this article on this topic:
http://delivery.acm.org/10.1145/1080000/1072147/p14-xia.pdf?ip=86.52.161.138&id=1072147&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&CFID=642131329&CFTOKEN=21335001&acm=1468166339_844b802736ce07dab89064efb7f8ede9
I am hoping that someone might have some more practical code examples to share on this issue?
Phrase-structure trees contain more information than dependency trees and therefore you cannot deterministically convert dependency trees to phrase-structure trees.
But if you are using CoreNLP to parse the sentences, take a look at the parse annotator. Unlike the dependency parser, this parser also outputs phrase-structure trees, so you can use this annotator to directly parse your sentences to phrase-structure trees.
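If you happen to drive CoreNLP from Python, a minimal sketch using the stanza wrapper looks like the following (this assumes a local CoreNLP installation with CORENLP_HOME set and the stanza package installed; the Java API is analogous). Requesting the parse annotator gives you the constituency tree for each sentence directly:

```python
# Sketch: get a phrase-structure (constituency) tree from CoreNLP's "parse" annotator.
# Assumes the stanza Python wrapper and a local CoreNLP installation (CORENLP_HOME set).
from stanza.server import CoreNLPClient

text = "The quick brown fox jumps over the lazy dog."

with CoreNLPClient(annotators=['tokenize', 'ssplit', 'pos', 'parse'],
                   timeout=30000, memory='4G') as client:
    ann = client.annotate(text)
    # Each sentence carries a constituency parse produced by the "parse" annotator.
    tree = ann.sentence[0].parseTree
    print(tree)
```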
It seems that recursive-descent parsers are not only the simplest to explain, but also the simplest to design and maintain. They aren't limited to LALR(1) grammars, and the code itself can be understood by mere mortals. In contrast, bottom up parsers have limits on the grammars they are able to recognize, and need to be generated by special tools (because the tables that drive them are next-to-impossible to generate by hand).
Why then, is bottom-up (i.e. shift-reduce) parsing more common than top-down (i.e. recursive descent) parsing?
If you choose a powerful parser generator, you can code your grammar without worrying about peculiar properties. (LA)LR means you don't have to worry about left recursion, one less headache. GLR means you don't have to worry about local ambiguity or lookahead.
And the bottom-up parsers tend to be pretty efficient. So, once you've paid the price of a bit of complicated machinery, it is easier to write grammars and the parsers perform well.
You should expect to see this kind of choice wherever there is some programming construct that commonly occurs: if it is easier to specify, and it performs pretty well, even if the machinery is complicated, complex machinery will win. As another example, the database world has gone to relational tools, in spite of the fact that you can hand-build an indexed file yourself. It's easier to write the data schemas, it's easier to specify the indexes, and with complicated enough machinery behind them (you don't have to look at the gears, you just use them), they can be pretty fast with almost no effort. Same reasons.
It stems from a couple different things.
BNF (and the theory of grammars and such) comes from computational linguistics: folks researching natural language parsing. BNF is a very attractive way of describing a grammar, and so it's natural to want to consume this notation to produce a parser.
Unfortunately, top-down parsing techniques tend to fall over when applied to such notations, because they cannot handle many common cases (e.g., left recursion). This leaves you with the LR family, which performs well and can handle the grammars, and since they're being produced by a machine, who cares what the code looks like?
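To make the left-recursion point concrete, here is a small sketch (hypothetical grammar and helper names, purely illustrative). A rule like `expr -> expr '+' term | term` translated naively into recursive descent calls itself before consuming any input and recurses forever, so a hand-written top-down parser has to rewrite the rule as a loop:

```python
# Naive translation of the left-recursive rule   expr -> expr '+' term | term
#     def expr():
#         left = expr()   # calls itself before consuming a token: infinite recursion
#
# The usual hand-written fix: rewrite the rule as iteration, expr -> term ('+' term)*
def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == '+':
        right, pos = parse_term(tokens, pos + 1)
        value = ('+', value, right)        # keeps the tree left-associative
    return value, pos

def parse_term(tokens, pos):
    return tokens[pos], pos + 1            # a "term" is just a number here

print(parse_expr([1, '+', 2, '+', 3])[0])  # ('+', ('+', 1, 2), 3)
```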
You're right, though: top-down parsers work more "intuitively," so they're easier to debug and maintain, and once you have a little practice they're just as easy to write as those generated by tools. (Especially when you get into shift/reduce conflict hell.) Many of the answers talk about parsing performance, but in practice top-down parsers can often be optimized to be as fast as machine-generated parsers.
That's why many production compilers use hand-written lexers and parsers.
Recursive-descent parsers try to hypothesize the general structure of the input string, which means a lot of trial-and-error occurs before the end of the string is reached. This makes them less efficient than bottom-up parsers, which need no such inference engines.
The performance difference becomes magnified as the complexity of the grammar increases.
To add to the other answers, it is important to realize that besides efficiency, bottom-up parsers can accept significantly more grammars than recursive-descent parsers can. Top-down parsers, whether predictive or not, typically have only one token of lookahead and fail if the current token and whatever immediately follows it can be derived via two different rules. Of course, you could implement the parser with more lookahead (e.g. LL(3)), but how far are you willing to push it before it becomes as complex as a bottom-up parser? Bottom-up parsers (LALR specifically), on the other hand, maintain FIRST and FOLLOW sets and can handle cases where top-down parsers can't.
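As a small illustration of the lookahead problem (the grammar and token stream here are hypothetical), the two statement forms `id = expr` and `id ( arg )` both start with the same token, so a parser with one token of lookahead cannot pick a rule; the hand-written workaround is to consume the shared prefix first and decide on the next token:

```python
# Sketch: two rules share the prefix ID, so one token of lookahead is not enough.
#   stmt -> ID '=' expr      (assignment)
#   stmt -> ID '(' arg ')'   (call)
# Left-factored for a top-down parser: consume the ID, then look at what follows.
def parse_stmt(tokens, pos=0):
    name = tokens[pos]                     # both alternatives start with an identifier
    nxt = tokens[pos + 1]                  # the *second* token disambiguates
    if nxt == '=':
        return ('assign', name, tokens[pos + 2]), pos + 3
    elif nxt == '(':
        return ('call', name, tokens[pos + 2]), pos + 4   # ID '(' arg ')'
    raise SyntaxError(f"expected '=' or '(' after {name!r}")

print(parse_stmt(['x', '=', '42'])[0])        # ('assign', 'x', '42')
print(parse_stmt(['f', '(', 'y', ')'])[0])    # ('call', 'f', 'y')
```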
Of course, computer science is all about trade-offs. If your grammar is simple enough, it makes sense to write a top-down parser. If it's complex (such as the grammars of most programming languages), then you might have to use a bottom-up parser to accept the input successfully.
I have two guesses, though I doubt that either fully explains it:
Top-down parsing can be slow. Recursive-descent parsers with backtracking may require exponential time to complete their job. That would put severe limitations on the scalability of a compiler which uses a top-down parser.
Better tools. If you can express the language in some variant of EBNF, then chances are you can Lex/Yacc your way past a whole lot of tedious code. There don't seem to be as many tools to help automate the task of putting together a top-down parser. And let's face it, grinding out parser code just isn't the fun part of toying with languages.
I have never seen a real comparison between top-down and shift-reduce parsers:
just two small programs run side by side, one using the top-down approach and the other using the bottom-up approach, each about ~200 lines of code,
able to parse any kind of custom binary operator and mathematical expression, the two sharing the same grammar declaration format, and then possibly adding variable declarations and assignments to show how 'hacks' (non-context-free constructs) can be implemented.
So how can we speak honestly about something we have never done: rigorously comparing the two approaches?
A lot of websites state that packrat parsers can parse input in linear time.
So at first glance they may be faster than an LALR parser constructed by tools such as yacc or bison.
I wanted to know whether the performance of packrat parsers is better or worse than the performance of LALR parsers when tested with common input (like programming language source files) rather than with contrived theoretical inputs.
Can anyone explain the main differences between the two approaches?
Thanks!
I'm not an expert at packrat parsing, but you can learn more at Parsing expression grammar on Wikipedia.
I haven't dug into it so I'll assume the linear-time characterization of packrat parsing is correct.
L(AL)R parsers are linear time parsers too. So in theory, neither packrat nor L(AL)R parsers are "faster".
What matters, in practice, of course, is implementation. L(AL)R state transitions can be executed in very few machine instructions ("look token code up in vector, get next state and action") so they can be extremely fast in practice. By "compiling" L(AL)R parsing to machine code, you can end up with lightning fast parsers, as shown by this 1986 Tom Pennello paper on Very Fast LR parsing. (Machines are now 20 years faster than when he wrote the paper!).
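The driver loop behind those state transitions really is tiny. Here is a toy sketch (tables written by hand for the single rule `S -> a b`, purely to show the "look up, shift or reduce, goto" shape; a generator would of course emit much larger tables):

```python
# Toy LR driver for the grammar  S -> 'a' 'b'.  One table lookup per step, no recursion.
ACTION = {
    (0, 'a'): ('shift', 1),
    (1, 'b'): ('shift', 2),
    (2, '$'): ('reduce', 'S', 2),   # S -> a b : pop 2 states, then goto on S
    (3, '$'): ('accept',),
}
GOTO = {(0, 'S'): 3}

def parse(tokens):
    tokens = tokens + ['$']
    stack, i = [0], 0                # stack of states only, for simplicity
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            raise SyntaxError(f"unexpected {tokens[i]!r}")
        if act[0] == 'shift':
            stack.append(act[1])
            i += 1
        elif act[0] == 'reduce':
            _, lhs, length = act
            del stack[-length:]                   # pop one state per RHS symbol
            stack.append(GOTO[(stack[-1], lhs)])  # goto on the nonterminal
        else:                                     # accept
            return True

print(parse(['a', 'b']))   # True
```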
If packrat parsers are storing/caching results as they go, they may be linear time, but I'd guess the constant overhead would be pretty high, so in practice L(AL)R parsers would be much faster. The YACC and Bison implementations, from what I hear, are pretty good.
If you care about the answer, read the basic technical papers closely; if you really care, then implement one of each and check out the overhead constants. My money is strongly on L(AL)R.
An observation: most language front-ends don't spend most of their time "parsing"; rather, they spend a lot of time in lexical analysis. Optimize that (your bio says you are), and the parser speed won't matter much.
(I used to build LALR parser generators and corresponding parsers. I don't do that anymore; instead I use GLR parsers which are linear time in practice but handle arbitrary context-free grammars. I give up some performance, but I can [and do, see bio] build dozens of parsers for many languages without a lot of trouble.)
I am the author of LRSTAR, an open-source LR(k) parser generator. Because people are showing interest in it, I have put the product back online here LRSTAR.
I have studied the speed of LALR parsers and DFA lexers for many years. Tom Pennello's paper is very interesting, but is more of an academic exercise than a real-world solution for compilers. However, if all you want is a pattern recognizer, then it may be the perfect solution for you.
The problem is that real-world compilers usually need to do more than pattern recognition, such as symbol-table look-up for incoming symbols, error recovery, provide an expecting list (statement completion information), and build an abstract-syntax tree while parsing.
In 1989, I compared the parsing speed of LRSTAR parsers to "yacc" and found that they are 2 times the speed of "yacc" parsers. LRSTAR parsers use the ideas published in the paper: "Optimization of Parser Tables for Portable Compilers".
For lexer (lexical analysis) speed I discovered in 2009 that "re2c" was generating the fastest lexers, about twice the speed of those generated by "flex". I was rewriting the LRSTAR lexer generator section at that time and found a way to make lexers that are almost as fast as "re2c" and much smaller. However, I prefer the table-driven lexers that LRSTAR generates, because they are almost as fast and the code compiles much quicker.
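A table-driven lexer in the sense meant here is just a DFA transition table plus a tight loop. A toy sketch (hand-built table for two token kinds only, nothing like LRSTAR's generated tables) to show the structure:

```python
# Toy table-driven lexer: one table lookup per character.
# Recognizes identifiers ([a-z]+) and integers ([0-9]+); whitespace is skipped.
def char_class(c):
    if c.isalpha():
        return 'alpha'
    if c.isdigit():
        return 'digit'
    return 'other'

TRANSITIONS = {
    ('start',  'alpha'): 'ident',
    ('ident',  'alpha'): 'ident',
    ('start',  'digit'): 'number',
    ('number', 'digit'): 'number',
}
ACCEPTING = {'ident': 'IDENT', 'number': 'NUMBER'}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():          # whitespace handled outside the DFA
            i += 1
            continue
        state, j = 'start', i
        while j < len(text):
            nxt = TRANSITIONS.get((state, char_class(text[j])))
            if nxt is None:
                break                  # maximal munch: stop at the first dead move
            state, j = nxt, j + 1
        if state not in ACCEPTING:
            raise SyntaxError(f"bad character {text[i]!r}")
        tokens.append((ACCEPTING[state], text[i:j]))
        i = j
    return tokens

print(tokenize("foo 42 bar"))   # [('IDENT', 'foo'), ('NUMBER', '42'), ('IDENT', 'bar')]
```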
BTW, compiler front-ends generated by LRSTAR can process source code at a speed of 2,400,000 lines per second or faster. The lexers generated by LRSTAR can process 30,000,000 tokens per second. The testing computer was a 3.5 GHz machine (from 2010).
[2015/02/15] Here is the 1986 Tom Pennello paper on Very Fast LR parsing:
http://www.genesishistory.org/content/ProfPapers/VF-LRParsing.pdf
I know this is an old post, but a month or so ago, I stumbled on this paper: https://www.mercurylang.org/documentation/papers/packrat.pdf and accidentally saw this post today.
The watered-down version of what the paper says: packrat memoisation is a mixed blessing. The best results can be achieved if you have some heuristics about how often a given rule is going to match. Essentially, it only makes sense to memoise rules that have the following two properties: (1) they have few elements, and (2) they are very common.
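In code, "memoise only some rules" is just a decorator you apply selectively. A sketch against a hypothetical PEG-style parser where every rule is a function of the input position (the rule names and the single-string cache are illustrative only):

```python
# Sketch of selective packrat memoisation: each rule maps (text, pos) to
# (result, new_pos) or None; only the rules we choose get the cache.
def memoise(rule):
    cache = {}
    def wrapped(text, pos):
        if pos not in cache:             # one input string per parse, so pos alone suffices
            cache[pos] = rule(text, pos)
        return cache[pos]
    return wrapped

@memoise                                  # tiny rule, tried very often: worth caching
def digit(text, pos):
    if pos < len(text) and text[pos].isdigit():
        return text[pos], pos + 1
    return None

def number(text, pos):                    # bigger rule, called less often: left unmemoised
    out = ''
    while (m := digit(text, pos)) is not None:
        out, pos = out + m[0], m[1]
    return (out, pos) if out else None

print(number("1234x", 0))                 # ('1234', 4)
```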
Performance is mostly a matter of language design. For each language, there will be an approach, technology, or parser generator that fits best.
I can't prove it without more thought, but I think that, performance-wise, nothing can beat a top-down recursive-descent parser in which the semantics drive the parser and the parser drives the lexer. It would also be among the most versatile and easiest to maintain of the implementations.
I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering whether such an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own and then progresses by learning from former iterations. I am talking about working to make us, programmers, obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a web site after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage, but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought: it would be nice to give a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code completion module for me to then code in the details. But I suppose that could be envisioned as a non-intelligent, static, hierarchical code completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules about what is a valid, working program and what is not.
They have been totally useless at building mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", i.e. an evaluation of the quality of the current solution it has come up with.
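A toy sketch of what "explicit fitness function" means in practice (evolving a single number toward a target; real genetic programming mutates and recombines program trees, but the loop has the same shape and the fitness function is still the only thing steering it):

```python
# Toy evolutionary loop: the fitness function is the whole steering mechanism.
import random

TARGET = 42
def fitness(candidate):
    return -abs(candidate - TARGET)        # higher is better

population = [random.uniform(-100, 100) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]             # selection
    population = survivors + [
        s + random.gauss(0, 1)             # mutation
        for s in random.choices(survivors, k=15)
    ]
print(round(max(population, key=fitness), 2))   # close to 42
```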
Another domain that deals with "program learning" is Inductive Logic Programming, although it is more used to provide automatic demonstration (theorem proving) or language/taxonomy learning.
Disclaimer: I am not a native English speaker nor an expert in the field, I am an amateur - expect imprecisions and/or errors in what follows. So, in the spirit of stackoverflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note: interesting is his work on artificial curiosity.) Also relevant for this discussion are Autonomic Systems.
w.r.t. program synthesis, I think it is possible to identify 3 main branches: stochastic (probabilistic, like the above-mentioned GP), inductive, and deductive.
GP is essentially stochastic because it explores the space of likely programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (then it tests programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic program synthesis or to synthesizers written in a logic programming language (nor is either limited to "automatic demonstration or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it either to constrain the search space of likely programs satisfying that specification and then test them (the generate-and-test approach), or to directly synthesize a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference, i.e. deciding what to include in the incomplete specification is akin to random sampling.
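A toy generate-and-test loop makes the idea concrete: enumerate small compositions of primitives and keep the first one that satisfies all the input/output examples (the primitives and examples below are purely illustrative, not from any real IP system):

```python
# Toy generate-and-test synthesis: find a composition of primitives matching
# the example input/output pairs (the incomplete specification).
from itertools import product

PRIMITIVES = {
    'inc':    lambda x: x + 1,
    'double': lambda x: x * 2,
    'square': lambda x: x * x,
}

def run(program, x):
    for name in program:          # a "program" is just a sequence of primitive names
        x = PRIMITIVES[name](x)
    return x

def synthesise(examples, max_len=3):
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program    # first program consistent with the specification
    return None

print(synthesise([(2, 6), (3, 8)]))   # ('inc', 'double'), i.e. f(x) = (x + 1) * 2
```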
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated in public so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been shown to be orders of magnitude more effective than non-incremental approaches.
§ These links are directly to PDF files: sorry, I am unable to find an abstract.
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis starts with a (presumed) complete (formal) specification (logic conditions) instead. One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and deductive composition of subroutine libraries.
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow; the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the amount of automation provided by declarative programming (whether such a setting is still to be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning and so on (possibly with less accidental complexity than that introduced by current manual, non-self-improving/adapting techniques). This will also promote a level of agility yet to be demonstrated by current techniques.