In programming language specifications, why is it that the lexical analysis is not directly translatable? - parsing

In the standard specifications for programming languages, why is it that you cannot directly translate the lexical analysis/grammar into something that can be plugged into a tool and work straight away?
I can understand that it would be impossible to adapt it for every tool (Flex/Bison, Lex/Yacc, ANTLR and so on) while also keeping it readable for humans.
But surely, if it is a standard specification, it should be possible to simply copy/paste the grammar; instead you end up with loads of shift/reduce errors, which backfires and produces an inaccurate grammar.
In other words, why did they not make it readable for use by a grammar/parser tool straight-away?
Maybe it is a debatable thing I don't know...
Thanks,
Best regards,
Tom.

In other words, why did they not make it readable for use by a grammar/parser tool straight-away?
Standards documents are intended to be readable by humans, not parser generators.

It is easy for humans to look at a grammar and know what the author intended; a computer, however, needs a lot more hand-holding along the way.
Specifically, these specifications are generally not LL(1) or LR(1). As such, lookaheads are needed and conflicts need to be resolved. True, this could be done in the language specification, but then it would be source code for a particular lexer/parser generator, not a language specification.
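As a concrete example of the kind of conflict a standards grammar leaves to the reader, take the classic "dangling else": a reference grammar allows an if with or without an else, which an LR(1) generator reports as a shift/reduce conflict, while the standard's prose says the else binds to the nearest if. A hand-written parser simply bakes that rule in. The following is a minimal, self-contained toy sketch (invented syntax, whitespace-separated tokens), not any real language's parser:

```java
import java.util.*;

/** Toy sketch: a hand-written parser resolving the dangling-else conflict
 *  that a reference grammar leaves to prose: the else binds to the nearest if. */
public class DanglingElse {
    private final List<String> toks;
    private int pos = 0;

    DanglingElse(String src) { toks = List.of(src.trim().split("\\s+")); }

    public static void main(String[] args) {
        // Which `if` owns the `else`? The parser below decides: the inner one.
        System.out.println(new DanglingElse("if ( a ) if ( b ) x ; else y ;").statement());
        // prints: if(a, if(b, x, y), -)
    }

    String statement() {
        if ("if".equals(peek())) return ifStatement();
        String id = next();                 // a bare identifier statement
        expect(";");
        return id;
    }

    String ifStatement() {
        expect("if"); expect("(");
        String cond = next();
        expect(")");
        String thenPart = statement();
        String elsePart = "-";
        // The whole "conflict resolution" is this one decision: if an `else`
        // follows, attach it to *this* (the innermost) `if`.
        if ("else".equals(peek())) { next(); elsePart = statement(); }
        return "if(" + cond + ", " + thenPart + ", " + elsePart + ")";
    }

    String peek()  { return pos < toks.size() ? toks.get(pos) : ""; }
    String next()  { return toks.get(pos++); }
    void expect(String t) { if (!t.equals(next())) throw new IllegalStateException("expected " + t); }
}
```

A generated parser can encode the same decision (e.g. via a "prefer shift" directive), but that choice lives in the tool input, not in the standards document.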

I agree with your sentiment, but the guys writing standards can't win on this.
To make the lexer/grammar work for a parser generator directly-out-of-standard, the standard writers would have to choose a specific one. (What choice would the COBOL standard folks have made in 1958?)
The popular ones (LEX, YACC, etc.) are often not capable of handling reference grammars, which are written for succinctness and clarity, and so would be a poor (e.g. non-)choice. More exotic ones (Earley, GLR) might be more effective because they allow infinite lookahead and ambiguity, but are harder to find. So if a specific tool like this were chosen, you would not get what you wanted, which is a grammar that works with the parser generator you have.
Having said that, the DMS Software Reengineering Toolkit uses a GLR parser generator. We don't have to massage reference grammars a lot to get them to work, and DMS now handles a lot of languages, including ones that are famously hard such as C++. IMHO, this is as close to your ideal as you are likely to get.

Related

Writing a Parser for mixed Languages

I am trying to write a parser that can analyze mixed languages and generate an AST from them. I first tried to build it from scratch on my own in Java and failed, because this is quite a hard topic for a parser beginner. Then I googled and found http://www2.cs.tum.edu/projects/cup/examples.php and JFlex.
The question now is: What is the best way to do it?
For example, I have a code file that contains several tags, JS code, and some $CMS_SET(x,y)$ code. Is the best way to solve this to define a grammar for all those things in CUP and let CUP generate a parser based on my grammar that can analyze those mixed-language files and generate an AST of them?
Thanks for all helpful answers. :)
EDIT: I need to do it in Java...
This topic is quite hard even for an expert in this area, which I consider myself to be; check my bio.
The first issue is to build individual parsers for each sublanguage. The first thing you will discover is that defining parsers for specific languages is actually hard; you can read the endless list of SO requests for "can I get a parser for X" or "how do I fix my parser for X". Mostly I think these requests end up not going anywhere; the parsing engines aren't very good, you have to twist the grammars and the parsers to make them work on real languages, there isn't any such thing as "pure HTML", the standards documents disagree, and your customer always has some twist in his code that you're not prepared for. Finally, there are the glitches related to character set encodings, variations in newline endings, and preprocessors that complicate the parsing problem. The C++ preprocessor is a lot more complex than you might think, and you have to get this right. The easiest way to defeat this problem is to find some parser generator with the languages already predefined. ANTLR has a bunch for the now deprecated ANTLR3; there's no assurance these parsers are robust, let alone compatible for your purposes.
CUP isn't a particularly helpful parser generator; none of the LL(x) or LALR(x) parser generators are really helpful, because no real language matches the categories of things they can parse. The consequence: an endless stream of requests (on SO!) for help "resolving my shift-reduce conflict", or "eliminating right recursion". The only parser generator IMHO that has stood the test of time is a GLR parser generator (I hear good things about GLL but that's pretty recent). We've done 40+ languages with one GLR parser generator, including production IBM COBOL, full C++14 and Java 8.
Your second problem will be building ASTs. You can hand code the AST building process, but that gets old fast when you have to change the grammars often and/or you have many grammars, as you are effectively contemplating. This you can beat your way through with sweat. (We chose to push the problem of building ASTs into the parser so we didn't have to put any energy into this when building a grammar; to do this, your parser engine has to offer you this help, and none of the mainstream ones do.)
Now you need to compose parsers. You need to have one invoke the other as the need comes up; of course your chosen parser isn't designed to do this, so you'll have to twist it. The first hard part is providing a parser with clues that a sublanguage is coming up in the input stream, getting it to hand off that parsing to the sublanguage parser, and getting it to pass a tree back to be incorporated in the parent parser's tree, presumably with some kind of marker so you can tell where the transitions between different sublanguages are in the tree. You can often do this by hacking one language's lexer, when it sees the clue, to invoke the other; but then what do you do with the tree it returns? There's no way to give that tree to the current parser and say "integrate this". You get around this by modifying the parsing machinery in arcane ways.
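To make the "clue and hand-off" idea concrete, here is one simplified way to structure it: a top-level driver that scans for island markers and delegates each island to its own sub-parser, wrapping the returned subtree in a marked node. This is a rough illustration only (using the question's $CMS_SET(...)$ and <script> markers), not the lexer-level integration described above; the sub-parsers are stubs standing in for whatever real parsers (e.g. CUP/JFlex-generated ones) you would plug in:

```java
import java.util.*;

/** Rough sketch of composing parsers: a top-level scanner detects sublanguage
 *  "islands" and delegates each one to a dedicated sub-parser, wrapping the
 *  result in a marker node so the transition points are visible in the tree. */
interface SubParser { Object parse(String fragment); }

public class IslandParser {
    private final Map<String, SubParser> islands = new LinkedHashMap<>();

    public IslandParser() {
        islands.put("$CMS_SET(", src -> "CmsSetNode(" + src + ")");           // stub sub-parser
        islands.put("<script>",  src -> "JsAst(" + src.length() + " chars)"); // stub sub-parser
    }

    public static void main(String[] args) {
        String page = "<h1>Hi</h1> $CMS_SET(x,y)$ <script>var a = 1;</script> bye";
        System.out.println(new IslandParser().parse(page));
    }

    /** Returns a flat list of text nodes and marked sublanguage subtrees. */
    public List<Object> parse(String input) {
        List<Object> ast = new ArrayList<>();
        int pos = 0;
        while (pos < input.length()) {
            int best = input.length(); String hit = null;
            for (String clue : islands.keySet()) {                 // find the next island clue
                int i = input.indexOf(clue, pos);
                if (i >= 0 && i < best) { best = i; hit = clue; }
            }
            if (hit == null) { ast.add(text(input.substring(pos))); break; }
            if (best > pos) ast.add(text(input.substring(pos, best)));
            int end = endOfIsland(input, best, hit);               // naive delimiter logic
            ast.add(Map.of("lang", hit, "tree",
                    islands.get(hit).parse(input.substring(best, end))));
            pos = end;
        }
        return ast;
    }

    private Object text(String s) { return "Text(" + s + ")"; }

    // Extremely naive end-of-island detection, just for the sketch.
    private int endOfIsland(String input, int start, String clue) {
        String closer = clue.equals("<script>") ? "</script>" : ")$";
        int end = input.indexOf(closer, start);
        return end < 0 ? input.length() : end + closer.length();
    }
}
```

The hard problems named above (nested islands, handing control back mid-parse, error recovery across the boundary) are exactly what this toy glosses over.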
But all of the above isn't where the problem is.
Parsing is inconvenient, but only a small part of what you need to analyze your programs in any interesting way; you need symbol tables, control and data flow analysis, maybe points-to analyses, and the engineering on these will swamp the work listed above. See my essay on "Life After Parsing" (google or via my bio) for a long discussion of what else you need.
In short, I think you are biting off an enormous task in just "parsing", and you haven't even told us what you intend to do with the result. You are welcome to start down this path, but very few people have succeeded; my team spent over 50 man-years of PhD-level engineering to get where we are, and we are hardly done.
Java won't make the solution any easier or harder; the language in which you solve all of the above is irrelevant.

Why do some compilers prefer hand-crafted parsers over parser generators?

According to Vala documentation: "Before 0.3.1, Vala's parser was the classic flex scanner and Bison LALR parser combination. But as of commit eba85a, the parser is a hand-crafted recursive descent parser."
My question is: Why?
The question could be addressed to any compiler which isn't using a parser generator. What are the pros and cons of such a move from a parser generator to a hand-crafted parser? What are the disadvantages of using parser generators (Bison, ANTLR) for compilers?
As a side comment: I'm interested in Vala specifically because I like the idea of having a language with modern features and clean syntax but compilable into a "native" and "unmanaged" high-level language (C in the case of Vala). I have found only Vala so far. I'm thinking of having fun by making Vala (or a similar language) compilable to C++ (backed by Qt libs). But since I don't want to invent a completely new language, I'm thinking of taking some existing grammar. Obviously hand-crafted parsers don't have a written formal grammar I might reuse. Your comments on this idea are welcome (is the whole idea silly?).
I have written half a dozen hand-crafted parsers (in most cases recursive descent parsers, AKA top-down parsers) in my career and have seen parsers generated by parser generators, and I must admit I am biased against parser generators.
Here are some pros and cons for each approach.
Parser generators
Pros:
Quickly get a working parser (at least if you do not know how to hand code it).
Cons:
Generated code is hard to understand and debug.
Difficult to implement proper error handling. The generator will create a correct parser for syntactically correct code but will choke on incorrect code and in most cases will not be able to provide proper error messages.
A bug in parser generator may halt your project. You need to fix the bug in somebody else's code (if source code is available), wait for the author to fix it or workaround the bug (if possible at all).
Hand crafted recursive descent parser
Pros:
The hand-written code is easy to understand. Recursive parsers usually have one function corresponding to each language construct, e.g. parseWhile to parse a 'while' statement, parseDeclaration to parse a declaration and so on. Understanding and debugging the parser is easy.
It is easy to provide meaningful error messages, to recover from errors and continue parsing in the way that makes most sense in a particular situation.
Cons:
It will take some time to hand code the parser, especially if you do not have experience with this stuff.
The parser may be somewhat slow. This applies to all recursive parsers, not just hand-written ones. With one function corresponding to each language construct, parsing a simple numeric literal may take a dozen or more nested calls, starting from e.g. parseExpression through parseAddition, parseMultiplication, etc. down to parseLiteral. Function calls are relatively inexpensive in a language like C but can still add up to a significant amount of time.
One solution to speed up a recursive parser is to replace parts of it with a bottom-up sub-parser, which is often much faster. The natural candidates for such a sub-parser are expressions, which have an almost uniform syntax (i.e. binary and unary expressions) with several precedence levels. The bottom-up parser for an expression is usually also simple to hand code; it is often just one loop getting input tokens from the lexer, a stack of values, and a lookup table of operator precedences for operator tokens.
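For readers who have not seen one, here is roughly what such a bottom-up expression sub-parser looks like: one loop over tokens, a value stack, an operator stack, and a precedence table. This is a toy, shunting-yard-style sketch (whitespace-separated tokens, integer operands only), not anyone's production code:

```java
import java.util.*;

/** Toy bottom-up expression evaluator: one loop, two stacks, one precedence
 *  table. A real sub-parser would build AST nodes instead of computing values. */
public class ExprParser {
    private static final Map<String, Integer> PREC = Map.of("+", 1, "-", 1, "*", 2, "/", 2);

    public static void main(String[] args) {
        System.out.println(eval("2 + 3 * 4 - 5"));   // prints 9
    }

    static long eval(String src) {
        Deque<Long> values = new ArrayDeque<>();
        Deque<String> ops = new ArrayDeque<>();
        for (String tok : src.trim().split("\\s+")) {
            if (PREC.containsKey(tok)) {
                // Reduce while the operator on the stack binds at least as tightly.
                while (!ops.isEmpty() && PREC.get(ops.peek()) >= PREC.get(tok))
                    reduce(values, ops);
                ops.push(tok);
            } else {
                values.push(Long.parseLong(tok));
            }
        }
        while (!ops.isEmpty()) reduce(values, ops);
        return values.pop();
    }

    static void reduce(Deque<Long> values, Deque<String> ops) {
        long right = values.pop(), left = values.pop();
        values.push(switch (ops.pop()) {
            case "+" -> left + right;
            case "-" -> left - right;
            case "*" -> left * right;
            default  -> left / right;
        });
    }
}
```

Adding a new precedence level is a one-line change to the table, which is exactly why this piece is so pleasant to keep out of the recursive part of the parser.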
LR(1) and LALR(1) parsers are really, really annoying for two reasons:
The parser generator isn't very good at producing useful error messages.
Certain kinds of ambiguity, like C-style if-else blocks, make writing the grammar very painful.
On the other hand, LL(1) grammars are much better at both of these things. The structure of LL(1) grammars makes them very easy to encode as recursive descent parsers, and so dealing with a parser generator is not really a win.
Also, in the case of Vala, the parser and the compiler itself are presented as a library, so you can build a custom backend for the Vala compiler using the Vala compiler library and get all the parsing and type checking and such for free.
I know this isn't going to be definitive, and if your questions weren't specifically Vala-related I wouldn't bother, but since they are...
I wasn't too heavily involved with the project back then so I'm not that clear on some of the details, but the big reason I remember from when Vala switched was dogfooding. I'm not certain it was the primary motivation, but I do remember that it was a factor.
Maintainability was also an issue. That patch replaced a larger parser written in C/Bison/YACC (relatively few people have significant experience with the latter two) with a smaller parser in Vala (which pretty much anyone interested in working on valac probably knows and is comfortable with).
Better error reporting was also a goal, IIRC.
I don't know if it was a factor at all, but the hand-written parser is a recursive descent parser. I know ANTLR generates those, but ANTLR is written in Java, which is a pretty heavy dependency (yes, I know it's not a run-time dependency, but still).
As a side comment: I'm interested in Vala specifically because I like the idea of having a language with modern features and clean syntax but compilable into a "native" and "unmanaged" high-level language (C in the case of Vala). I have found only Vala so far. I'm thinking of having fun by making Vala (or a similar language) compilable to C++ (backed by Qt libs). But since I don't want to invent a completely new language, I'm thinking of taking some existing grammar. Obviously hand-crafted parsers don't have a written formal grammar I might reuse. Your comments on this idea are welcome (is the whole idea silly?).
A lot of Vala is really a reflection of decisions made by GObject, and things may or may not work the same way in C++/Qt. If your goal is to replace GObject/C in valac with Qt/C++, you're probably in for more work than you expect. If, however, you just want to make C++ and Qt libraries accessible from Vala, that should certainly be possible. In fact, Luca Bruno started working on that about a year ago (see the wip/cpp branch). It hasn't seen activity for a while due to lack of time, not technical issues.
According to Vala documentation: "Before 0.3.1, Vala's parser was the classic flex scanner and Bison LALR parser combination. But as of commit eba85a, the parser is a hand-crafted recursive descent parser."
My question is: Why?
Here I'm specifically asking about Vala, although the question could be addressed to any compiler which isn't using a parser generator. What are the pros and cons of such a move from a parser generator to a hand-crafted parser? What are the disadvantages of using parser generators (Bison, ANTLR) for compilers?
Perhaps the programmers spotted some avenues for optimisation that the parser generator didn't spot, and these avenues for optimisation required an entirely different parsing algorithm. Alternatively, perhaps the parser generator generated code in C89, and the programmers decided refactoring for C99 or C11 would improve legibility.
As a side comment: I'm interested in Vala specifically because I like the idea of having a language with modern features and clean syntax but compilable into a "native" and "unmanaged" high-level language (C in the case of Vala).
Just a quick note: C isn't native. It has its roots in portability, as it was designed to abstract away those hardware/OS-specific details that caused programmers so much grief when porting. For example, think of the pain of using an entirely different fopen for each OS and/or filesystem; I don't just mean different in functionality, but also different in input and output expectations, e.g. different arguments, different return values. Similarly, C11 introduces portable threads; code that uses threads will be able to use the same C11-compliant code to target all OSes that implement threads.
I have found only Vala so far. I'm thinking of having fun by making Vala (or a similar language) compilable to C++ (backed by Qt libs). But since I don't want to invent a completely new language, I'm thinking of taking some existing grammar. Obviously hand-crafted parsers don't have a written formal grammar I might reuse. Your comments on this idea are welcome (is the whole idea silly?).
It may be feasible to use the hand-crafted parser to produce C++ code with little effort, so I wouldn't toss this option aside so quickly; the old flex/bison parser generator may or may not be more useful. However, this isn't your only option. In any case, I'd be tempted to study the specification extensively.
It is curious that these authors went from bison to RD. Most people would go in the opposite direction.
The only real reason I can see to do what the Vala authors did would be better error recovery, or maybe their grammar isn't very clean.
I think you will find that most new languages start out with hand-written parsers as the authors get a feel for their own new language and figure out exactly what it is they want to do. Also in some cases as the authors learn how to write compilers. C is a classic example, as is C++. Later on in the evolution a generated parser may be substituted. Compilers for existing standard languages on the other hand can more rapidly be developed via a parser generator, possibly even via an existing grammar: time to market is a critical business parameter of these projects.

Can Xtext be used for parsing general purpose programming languages?

I'm currently developing a general-purpose agent-based programming language (its syntax will be somewhat inspired by Java, and we also use objects in this language).
Since the beginning of the project we were unsure whether to use ANTLR or Xtext. At the time we found that Xtext implemented only a subset of ANTLR's features, so we decided to use ANTLR for our language, losing the possibility of getting a full-fledged Eclipse editor for free (such a nice feature provided by Xtext).
However, to the best of my knowledge, this summer the Xtext project has taken a big step forward. Quoting from the link:
What are the limitations of Xtext?
Sven: You can implement almost any kind of programming language or DSL with Xtext. There is one exception, that is if you need to use so called 'Semantic Predicates' which is a rather complicated thing I don't think is worth being explained here. Very few languages really need this concept. However the prominent example is C/C++. We want to look into that topic for the next release.
And that is also reinforced in the Xtext documentation:
What is Xtext? No matter if you want to create a small textual domain-specific language (DSL) or you want to implement a full-blown general purpose programming language. With Xtext you can create your very own languages in a snap. Also if you already have an existing language but it lacks decent tool support, you can use Xtext to create a sophisticated Eclipse-based development environment providing editing experience known from modern Java IDEs in a surprisingly short amount of time. We call Xtext a language development framework.
If Xtext has got rid of its past limitations, why is it still not possible to find a complex Xtext grammar for the best-known programming languages (Java, C#, etc.)?
On the ANTLR website you can find tons of such grammar examples, whereas for Xtext the only sample I was able to find is the one reported in the documentation. So maybe Xtext is still not mature enough to be used for implementing a general purpose programming language? I'm a bit worried about this... I would not want to start rewriting the grammar in Xtext only to find out afterwards that it was not suited for that.
I think nobody implemented Java or C++ because it is a lot of work (even with Xtext) and the existing tools and compilers are excellent.
However, you could have a look at Xbase and Xtend; Xbase is the expression language we ship with Xtext, and Xtend is a language built on it. They are built with Xtext and are quite a good proof of what you can build with it. We have done that in about 4 person-months.
I did a couple of screencasts on Xtend:
http://blog.efftinge.de/2011/03/xtend-screencast-part-1-basics.html
http://blog.efftinge.de/2011/03/xtend-screencast-part-2-switch.html
http://blog.efftinge.de/2011/03/xtend-screencast-part-3-rich-strings-ie.html
Note, that you can simply embed Xbase expressions into your language.
I can't speak for what Xtext is or does well.
I can speak to the problem of developing robust tools for processing real languages, based on our experience with the DMS Software Reengineering Toolkit, which we think of as a language manipulation framework.
First, parsing of real languages usually involves something messy in lexing and/or parsing, due to the historical ways these languages have evolved. Java is pretty clean. C# has context-dependent keywords and a rudimentary preprocessor sort of like C's. C has a full blown preprocessor. C++ is famously "hard to parse" due to ambiguities in the grammar and shenanigans with template syntax. COBOL is fairly ugly, doesn't have any reference grammars, and comes in a variety of dialects. PHP will turn you to stone if you look at it because it is so poorly defined. (DMS has parsers for all of these, used in anger on real applications).
Yet you can parse all of these with most of the available parsing technologies if you try hard enough, usually by abusing the lexer or the parser to achieve your goals (how the GNU guys abused Bison to parse C++ by tangling lexical analysis with symbol table lookup is a nice ugly case in point). But it takes a lot of effort to get the language details right, and the reference manuals are only close approximations of the truth with respect to what the compilers really accept.
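For the curious, the trick alluded to above is often called "the lexer hack": the lexer consults the symbol table the parser is building, so that an identifier can be classified as a type name or an ordinary identifier before the parser sees it. A hedged, schematic sketch follows (the class and method names are invented for illustration, not taken from GCC or any other compiler):

```java
import java.util.*;

/** Schematic "lexer hack": token classification depends on typedef names the
 *  parser has registered so far, entangling lexing with symbol table lookup. */
public class LexerHack {
    enum Kind { IDENTIFIER, TYPE_NAME }
    record Token(Kind kind, String text) {}

    // Filled in by the parser as it processes typedef declarations.
    private final Set<String> typedefNames = new HashSet<>();

    // Called by the lexer for every identifier-shaped word.
    Token classify(String word) {
        return new Token(typedefNames.contains(word) ? Kind.TYPE_NAME : Kind.IDENTIFIER, word);
    }

    // Called by the parser when it reduces a typedef declaration.
    void registerTypedef(String name) { typedefNames.add(name); }

    public static void main(String[] args) {
        LexerHack lexer = new LexerHack();
        System.out.println(lexer.classify("T"));   // IDENTIFIER: `T * x;` reads as a multiplication
        lexer.registerTypedef("T");                // after seeing `typedef int T;`
        System.out.println(lexer.classify("T"));   // TYPE_NAME: `T * x;` now reads as a declaration
    }
}
```

The sketch shows why this is a hack: the lexer's output now depends on parsing state, which is precisely the kind of feedback loop that pure lexer/parser generator pipelines are not designed for.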
If Xtext has a decent parsing engine, one can likely handle such languages with Xtext. A brief perusal of the Xtext site suggests the lexers and parsers are fairly decent. I didn't see anything about "Semantic Predicates"; we have them in DMS and they are lifesavers in some of the really dark corners of parsing. Even using really good parsing technology (we use GLR parsers), it would be very hard to parse COBOL data declarations (extracting their nesting structure during the parse) without them.
You have an interesting problem in that your language isn't well defined yet. That will make your initial parsers somewhat messy, and you'll revise them a lot. Here's where strong parsing technology helps you: if you can revise your grammar easily you can focus on what you want your language to look like, rather than focusing on fighting the lexer and parser. The fact that you can change your language definition means in fact that if Xtext has some limitations, you can probably bend your language syntax to match without huge amounts of pain. ANTLR does have the proven ability to parse a language pretty much as you imagine it, modulo the usual amount of parser hacking.
What is never discussed is what else is needed to process a language for real. The first thing you need to be able to do is to construct ASTs, which ANTLR and YACC will help you do; I presume Xtext does also. You also need symbol tables, control and data flow analysis (both local and global), and machinery to transform your language into something else (presumably more executable). Doing just symbol tables you will find surprisingly hard; C++ has several hundred pages of "how to look up an identifier"; Java generics are a lot tougher to get right than you might expect. You might also want to prettyprint the AST back to source code, if you want to offer refactorings. (EDIT: Here both ANTLR and Xtext offer what amounts to text-template driven code generation).
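To give a feel for the symbol-table point above: the core data structure is almost trivial, as in the sketch below (invented names, nested scopes with lookup walking outward). What turns it into "several hundred pages" is the language-specific lookup rules layered on top: namespaces, overloading, inheritance, generics, using/import directives, and so on.

```java
import java.util.*;

/** Minimal sketch of a symbol table: nested scopes, innermost declaration wins.
 *  Real languages make the *lookup rules* hard, not this container.          */
class Scope {
    private final Scope parent;
    private final Map<String, String> symbols = new HashMap<>();  // name -> declaration info

    Scope(Scope parent) { this.parent = parent; }

    void declare(String name, String info) { symbols.put(name, info); }

    String lookup(String name) {
        for (Scope s = this; s != null; s = s.parent)             // walk outward through scopes
            if (s.symbols.containsKey(name)) return s.symbols.get(name);
        return null;                                              // undeclared identifier
    }

    public static void main(String[] args) {
        Scope global = new Scope(null);
        global.declare("x", "global int x");
        Scope block = new Scope(global);
        block.declare("x", "local double x");                     // shadows the global x
        System.out.println(block.lookup("x"));                    // local double x
        System.out.println(global.lookup("x"));                   // global int x
        System.out.println(block.lookup("y"));                    // null
    }
}
```

The real effort goes into the language-specific resolution rules, not this container.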
Yet these are complex mechanisms that take as much time as, if not more than, building the parser. The reason DMS exists isn't because it can parse (we view this just as the ante in a poker game), but because all of this other stuff is very hard and we wanted to amortize the cost of doing it all (DMS has, we think, excellent support for all of these mechanisms, but YMMV).
On reading the Xtext overview, it sounds like they have some support for symbol tables but it is unclear what kind of assumption is behind it (e.g., for C++ you have to support multiple inheritance and namespaces).
If you are already started down the ANTLR road and have something running, I'd be tempted to stay the course; I doubt if Xtext will offer you a lot of additional help. If you really really want Xtext's editor, then you can probably switch at the price of restructuring what grammar you have (this is a pretty typical price to pay when changing parsing paradigms). Expect most of your work to appear after you get the parser right, in an ad hoc way. I doubt you will find Xtext or ANTLR much different here.
I guess the simplest answer to your question is: many general purpose languages can be implemented using Xtext. But since there is no general answer to which parser capabilities a general purpose language needs, there is no general answer to your question.
However, I've got a few pointers:
With Xtext 2.0 (released this summer), Xtext supports syntactic predicates. This is one of the most requested features for handling ambiguous syntax without enabling ANTLR's backtracking.
You might want to look at the brand-new languages Xbase and Xtend, which are (judging based on their capabilities) general-purpose and which are developed using Xtext. Sven has some nice screen casts in his blog: http://blog.efftinge.de/
Regarding your question why we don't see Xtext-grammars for Java, C++, etc.:
With Xtext, a language is more than just a grammar, so just having a grammar that describes a language's syntax is a good starting point but usually not an artifact valuable enough for shipping. The reason is that with an Xtext grammar you also define the AST's structure (Abstract Syntax Tree, and in fact an Ecore model), including true cross references. Since this model is the main internal API of your language, people usually put a lot of thought into designing it. Furthermore, to resolve cross references (aka linking) you need to implement scoping (as it is called in Xtext). Without a proper implementation of scoping, you can either not have true cross references in your model or you'll get many linking errors.
I guess my point is that creating a grammar + designing the AST model + implementing scoping is a little more effort than taking a grammar from some language zoo and translating it to Xtext's syntax.

What is the easiest way of telling whether a BNF grammar is ambiguous or not?

Namely, is there a tool out there that will automatically show the full language for a given grammar, including highlighting ambiguities (if any)?
There might be some peculiarity about BNF-style grammars, but in general, deciding whether a given context-free grammar (such as a BNF grammar) is ambiguous is undecidable.
In short, there does not exist a tool because in general, that tool is mathematically impossible. There might be some special cases that could work for you, though.
In general, no.
But as a practical approach, what you can do, given a grammar, is enumerate, for each rule, the possible strings of valid terminals/nonterminals and check whether any string has two or more distinct derivations (which would be an ambiguity).
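A brute-force version of that idea is easy to sketch (this is an illustration, not any particular tool): enumerate derivation trees up to a bounded depth, group them by the sentence they yield, and flag any sentence produced by two structurally different trees.

```java
import java.util.*;

/** Brute-force ambiguity probe: enumerate all derivation trees up to a fixed
 *  depth and report any sentence that is the yield of two different trees.   */
public class AmbiguityProbe {
    // Grammar: nonterminal -> alternatives, each alternative a list of symbols.
    // Any symbol that is not a key of the map is treated as a terminal.
    static final Map<String, List<List<String>>> GRAMMAR = Map.of(
            "E", List.of(List.of("E", "+", "E"), List.of("id")));  // classic ambiguous grammar

    public static void main(String[] args) {
        Map<String, Set<String>> treesByYield = new HashMap<>();
        for (String[] t : derive("E", 4))        // t[0] = tree shape, t[1] = yielded sentence
            treesByYield.computeIfAbsent(t[1], k -> new HashSet<>()).add(t[0]);
        treesByYield.forEach((sentence, trees) -> {
            if (trees.size() > 1)
                System.out.println("AMBIGUOUS: \"" + sentence + "\" has "
                        + trees.size() + " distinct parse trees");
        });
    }

    // Enumerate all derivation trees of `symbol` whose depth is at most `depth`.
    static List<String[]> derive(String symbol, int depth) {
        if (!GRAMMAR.containsKey(symbol))                          // terminal symbol
            return Collections.singletonList(new String[]{symbol, symbol});
        if (depth == 0) return List.of();                          // cut off deep derivations
        List<String[]> results = new ArrayList<>();
        for (List<String> alt : GRAMMAR.get(symbol)) {
            List<String[]> partial = new ArrayList<>();
            partial.add(new String[]{"", ""});
            for (String s : alt) {                                 // cross product over the alternative
                List<String[]> next = new ArrayList<>();
                for (String[] prefix : partial)
                    for (String[] sub : derive(s, depth - 1))
                        next.add(new String[]{prefix[0] + " " + sub[0],
                                              (prefix[1] + " " + sub[1]).trim()});
                partial = next;
            }
            for (String[] body : partial)
                results.add(new String[]{"(" + symbol + body[0] + ")", body[1]});
        }
        return results;
    }
}
```

On the toy grammar E -> E + E | id it reports, for example, that "id + id + id" has two distinct parse trees. As the answer below notes, the trick in practice is guiding this enumeration efficiently and choosing a useful depth bound.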
Our DMS Software Reengineering Toolkit is a program transformation system for arbitrary computer languages, driven by explicit grammar descriptions. DMS uses a parser generator to drive its GLR parsing engine.
DMS's parser generator will optionally run the ambiguity check sketched above, using an iterative deepening search across all grammar rules. This is practical because it has the parse tables to efficiently guide the enumeration of choices. You can tell it to run this check up to some chosen depth. It can take a long time if you choose a depth of any interesting size, but in fact a depth of 3 or 4 is sufficient to find many stupid ambiguities introduced in a large grammar. We generally do this during our initial grammar debugging, and at the point where we think we have it pretty much right.

Most effective way to parse C-like definition strings?

I've got a set of function definitions written in a C-like language with some additional keywords that can be put before some arguments (the same way as "unsigned" or "register", for example), and I need to analyze these lines as well as some function stubs and generate actual C code from them.
Is it correct that Flex/Yacc are the most appropriate way to do it?
Will it be slower than writing a shell or Python script using regexps (which, as I suppose, may become a big pain if the number of additional keywords becomes bigger and their effects become rather different), given that I have zero experience with analysers/parsers (though I know how LALR does its job)?
Are there any good materials on Lex/Yacc that cover similar problems? All papers I could find use the same primitive example of a "toy" calculator.
Any help will be appreciated.
ANTLR is commonly used (as are Lex/Yacc).
ANTLR, ANother Tool for Language Recognition, is a language tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions containing actions in a variety of target languages.
There is also the Lemon parser, which features a less restrictive grammar. The downside is you're married to Lemon; rewriting a parser's grammar for something else when you discover some limitation sucks. The upside is it's really easy to use... and self-contained. You can drop it in your tree and not worry about checking for the presence of others.
SQLite3 uses it, as do several other popular projects. I'm not saying use it because SQLite does, but perhaps give it a try if time permits.
That entirely depends on your definition of "effective". If you have all the time in the world, the fastest parser would be a hand-written pull parser. They take a long time to debug and develop, but today no parser generator beats hand-written code in terms of runtime performance.
If you want something that can parse valid C within a week or so, use a parser generator. The code will be fast enough and most parser generators come with a grammar for C already which you can use as a starting point (avoiding 90% of the common mistakes).
Note that regexps are not suitable for parsing recursive structures. This approach would both be slower than using a generator and more error prone than a hand-written pull parser.
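To see why, consider nesting: a single regular expression cannot match constructs nested to arbitrary depth, while a few lines of recursion handle them naturally. A toy, self-contained sketch (invented call/argument syntax, not your actual definition language):

```java
import java.util.*;

/** Toy illustration of why nesting defeats regexps: a tiny recursive descent
 *  parser turns arbitrarily nested call syntax into a nested list.           */
public class NestedArgs {
    private final String src;
    private int pos = 0;

    NestedArgs(String src) { this.src = src; }

    public static void main(String[] args) {
        System.out.println(new NestedArgs("f(a, g(b, h(c)), d)").parseCall());
        // prints: [f, a, [g, b, [h, c]], d]
    }

    // call := IDENT '(' item (',' item)* ')'     item := call | IDENT
    List<Object> parseCall() {
        List<Object> node = new ArrayList<>();
        node.add(ident());
        expect('(');
        node.add(item());
        while (peek() == ',') { pos++; skipSpaces(); node.add(item()); }
        expect(')');
        return node;
    }

    Object item() {
        int save = pos;
        String name = ident();
        if (peek() == '(') { pos = save; return parseCall(); }   // recurse into a nested call
        return name;
    }

    String ident() {
        int start = pos;
        while (pos < src.length() && Character.isLetterOrDigit(src.charAt(pos))) pos++;
        return src.substring(start, pos);
    }

    char peek() { return pos < src.length() ? src.charAt(pos) : '\0'; }
    void expect(char c) { if (peek() != c) throw new IllegalStateException("expected " + c); pos++; }
    void skipSpaces() { while (peek() == ' ') pos++; }
}
```

A regex can pick out flat pieces of this input, but it cannot recover the nesting structure, which is usually the whole point of parsing such definitions.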
Actually, it depends on how complex your language is and whether it's really close to C or not...
Still, you could use lex as a first step even for the regular-expression approach...
I would go for lex + Menhir and OCaml...
but any flex/yacc combination would be fine.
The main problem with regular Bison (the GNU implementation of yacc) stems from the C typing: you have to describe your whole tree (and all the manipulation functions)... Using OCaml would be much easier...
For what you want to do, our DMS Software Reengineering Toolkit is likely a very effective solution.
DMS is designed specifically to support custom analyzers/code generators of the type you are discussing. It provides very strong facilities for defining arbitrary language parsers/analyzers (tested on 30+ real languages, including several complete dialects of C, C++, Java, C#, and COBOL).
DMS automates the construction of ASTs (so you don't have to do anything but get the grammar right to have a usable AST), enables the construction of custom analyses of exactly the pattern-directed inspection you indicated, can construct new C-specific ASTs representing the code you want to generate, and spit them out as compilable C source text. The pre-existing definitions of C for DMS can likely be bent to cover your C-like language.
