What are the reasons for using parser combinators?

I'm looking at the following approach to using parser combinators in Haskell. The author gives this example of parser combinators:
-- (the article also imports Text.ParserCombinators.ReadP and Control.Applicative,
--  and defines the numbers and parseMaybe helpers used below)
windSpeed :: String -> Maybe Int
windSpeed windInfo =
  parseMaybe windSpeedParser windInfo

windSpeedParser :: ReadP Int
windSpeedParser = do
  direction <- numbers 3
  speed     <- numbers 2 <|> numbers 3
  unit      <- string "KT" <|> string "MPS"
  return speed
The author gives the following reasons for this approach:
easy to read (I agree with this)
similar format to the specification, i.e. the parser itself is basically a description of what it parses (I agree with this)
I can't help feeling I'm missing some of the reasons for choosing parser combinators: some benefit of using Haskell, such as compile-time guarantees or the elimination of runtime errors, or some later benefit once you start parsing DSLs and using free monads.
My question is: What are the reasons for using parser combinators?

I see several benefits of using parser combinators:
Parser combinators are a generalization of hand-written top-down parsers. If you would hand-write a parser anyway, parser combinators let you abstract away the common patterns.
Unlike parser generators, parser combinators are potentially dynamic, allowing decisions to be made at runtime. This can be useful if the language's grammar may be redefined based on the input.
Parsers are first-class objects, so they can be stored, passed to functions, and assembled from other parsers like any other value, as the sketch below shows.
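As a minimal sketch of those last two points, here is a small parser written with ReadP (the same library used in the example above); sepBy1 takes parsers as arguments and returns a new parser, so grammars are ordinary values that can be composed, stored, or chosen at runtime:

import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

-- a parser for one unsigned integer
number :: ReadP Int
number = read <$> munch1 isDigit

-- a parser assembled from other parsers: comma-separated integers
csv :: ReadP [Int]
csv = number `sepBy1` char ','

main :: IO ()
main = print [ xs | (xs, "") <- readP_to_S csv "1,2,3" ]
-- prints [[1,2,3]]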

Related

Providing a type system for the parser/lexer specification?

It is often believed that functional programming languages such as OCaml or F# have type systems that let us spend more time writing code and less time debugging, compared with dynamic languages like Python or JavaScript.
Now I am writing a parser specification to be used with FsLexYacc, the F# Lexer and Parser package. The parser specification has something like this, for parsing integers and identifier names:
%token <int> CSTINT
%token <string> NAME
...
Expr:
    NAME    { VAR $1 }
  | CSTINT  { CSTI $1 }
This kind of code is no longer written in a functional programming language, and is therefore no longer "protected" by the type system. Strange bugs can easily slip in, I guess (though I'm not quite sure).
Question: Is there any work (theoretical or practical) that tries to address this issue by providing a kind of type system also for the parser/lexer specification?
Yes, there has been a certain amount of research into this area.
It should be noted that although the semantic value stack used in LR parsing algorithms has heterogeneous types, the algorithm itself is type-safe. But of course there could be mistakes in the implementation of the algorithm, so it would be ideal if the compiler itself could verify that the code produced by the parser generator is correctly typed. That turns out to be possible, and a number of implementations have been produced (a sketch of the underlying idea follows the references below).
I don't have a complete bibliography handy, but I do have at hand two papers, both published in 2006, which seem relevant:
Towards Efficient, Typed LR Parsers by François Pottier and Yann Régis-Gianas:
The LR parser generators that are bundled with many functional programming language implementations produce code that is untyped, needlessly inefficient, or both. We show that, using generalized algebraic data types, it is possible to produce parsers that are well-typed (so they cannot unexpectedly crash or fail) and nevertheless efficient.
Derivation of a Typed Functional LR Parser by Ralf Hinze and Ross Paterson:
This paper describes a purely functional implementation of LR parsing.
We formally derive our parsers in a series of steps starting from the inverse of printing. In contrast to traditional implementations of LR parsing, the resulting parsers are fully typed, stackless and table-free. The parsing functions pursue alternatives in parallel with each alternative represented by a continuation argument. The direct implementation presents many opportunities for optimization and initial measurements show excellent performance in comparison with conventional table-driven parsers.
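To give a flavour of what "well-typed" means here, below is a minimal sketch (my own Haskell illustration, not the construction used in either paper) of a semantic-value stack whose element types are tracked by a GADT, so that a reduction which pops values of the wrong types is rejected at compile time:

{-# LANGUAGE GADTs, DataKinds, TypeOperators, KindSignatures #-}
import Data.Kind (Type)

-- a stack whose type lists the types of the values it currently holds
data Stack (ts :: [Type]) where
  Nil  :: Stack '[]
  (:>) :: t -> Stack ts -> Stack (t ': ts)
infixr 5 :>

-- a reduction for a rule such as  Expr -> Expr '+' Expr  only type-checks
-- when the top of the stack really holds Int, Char, Int (in that order)
reduceAdd :: Stack (Int ': Char ': Int ': ts) -> Stack (Int ': ts)
reduceAdd (rhs :> _plus :> lhs :> rest) = (lhs + rhs) :> rest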
You could define your parser using FParsec; that way you get all the benefits of the F# type system. The grammar above could be expressed as follows in F#:
open System
open FParsec

type Expression = StringExpr of string | NumberExpr of int

let alphanumericString =
    letter .>>. manyChars (letter <|> digit)
    >>= (fun (c, s) -> preturn (StringExpr (c.ToString() + s)))

// many1Chars, so Int32.Parse never sees an empty string
let number = many1Chars digit >>= (fun n -> preturn (NumberExpr (Int32.Parse n)))
let expression = alphanumericString <|> number

Add operators during parsing

I'm trying out parser generators with Haskell, using Happy here. I used to use parser combinators such as Parsec, and one thing I can't achieve now is the dynamic addition (during execution) of new, externally defined operators. For example, Haskell has some basic operators, but we can add more, giving them precedence and fixity. So I would like to know how to reproduce this with Happy, following the Haskell design (see the example code below to be parsed), whether it is not trivially feasible, or whether it should perhaps be done with parser combinators instead.
-- Adding the new operator
infixl 5 ++
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : xs ++ ys
-- Using the new operator taking into consideration fixity and precedence during parsing
example = "Hello, " ++ "world!"
Haskell only allows a small, fixed number of precedence levels (0 through 9), so you don't strictly need a dynamic grammar; you could just write out the grammar using precedence-level token classes instead of individual operators, leaving the lexer with the problem of associating a given symbol with a given precedence level.
In effect, that moves the dynamic addition of operators into the lexer. That's a slightly uncomfortable design, although in some cases it may not be too difficult to implement. It's uncomfortable because it requires semantic feedback to the lexer; at a minimum, the lexer needs to consult the symbol table to figure out what kind of token it is looking at. In the case of Haskell, this is made more uncomfortable still by the fact that fixity declarations are scoped, so in order to track fixity information the lexer would also need to understand the scoping rules.
In practice, most languages which allow program text to define operators and operator precedence work in precisely the same way the Haskell compiler does: expressions are parsed by the grammar into a simple list of items (where parenthesized subexpressions count as a single item), and in a later semantic analysis the list is rearranged into an actual tree taking into account precedence and associativity rules, using a simple version of the shunting yard algorithm. (It's a simple version because it doesn't need to deal with parenthesized subconstructs.)
There are several reasons for this design decision:
As mentioned above, for the lexer to figure out what the precedence of a symbol is (or even if the symbol is an operator with precedence) requires a close collaboration between the lexer and the parser, which many would say violates separation of concerns. Worse, it makes it difficult or impossible to use parsing technologies without a small fixed lookahead, such as GLR parsers.
Many languages have more precedence levels than Haskell. In some cases, even the number of precedence levels is not defined by the grammar. In Swift, for example, you can declare your own precedence levels, and you define a level not with a number but with a comparison to another previously defined level, leading to a partial order between precedence levels.
IMHO, that's actually a better design decision than Haskell's, partly because it avoids the ambiguity of a precedence level containing both left- and right-associative operators, but more importantly because the relative precedence declarations both avoid magic numbers and allow the parser to flag ambiguous uses of operators from different modules. In other words, it does not force a precedence declaration to mechanically apply to any pair of totally unrelated operators; in this sense it makes operator declarations easier to compose.
The grammar is much simpler, and arguably easier to understand, since most people rely on precedence tables anyway rather than analysing grammar productions to figure out how operators interact with each other. In that sense, having precedence set by the grammar is more of a distraction than documentation. See the C++ grammar as a good example of why precedence tables are easier to read than grammars.
On the other hand, as the C++ grammar also illustrates, a grammar is a lot more general than simple precedence declarations because it can express asymmetric precedences. (The grammar doesn't always express these gracefully, but they can be expressed.) A classic example of an asymmetric precedence is a lambda construct (λ ID expr) which binds very loosely to the right and very tightly to the left: the expected parse of a ∘ λ b b ∘ a does not ever consult the associativity of ∘ because the λ comes between them.
In practice, there is very little cost to building the tree later. The algorithm to build the tree is well-known, simple and cheap; a sketch follows below.
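As a rough illustration of that pass (a minimal sketch with made-up names, not what GHC or any particular compiler actually does), the flat operand/operator list produced by the grammar is rebuilt into a tree from a runtime fixity table using precedence climbing:

import qualified Data.Map as M

data Expr  = Var String | App String Expr Expr deriving Show
data Assoc = LeftA | RightA deriving Eq

-- operator name -> (precedence, associativity), collected from fixity declarations
type Fixity = M.Map String (Int, Assoc)

-- the flat "operand (operator operand)*" shape produced by the grammar
type Flat = (Expr, [(String, Expr)])

reassoc :: Fixity -> Flat -> Expr
reassoc fix (e0, rest0) = fst (go e0 rest0 0)
  where
    go lhs [] _ = (lhs, [])
    go lhs rest@((op, rhs) : more) minPrec
      | prec < minPrec = (lhs, rest)
      | otherwise =
          let nextMin = if assoc == LeftA then prec + 1 else prec
              (rhs', rest') = go rhs more nextMin
          in go (App op lhs rhs') rest' minPrec
      where (prec, assoc) = M.findWithDefault (9, LeftA) op fix   -- infixl 9 is Haskell's default

-- reassoc (M.fromList [("+", (6, LeftA)), ("*", (7, LeftA))])
--         (Var "a", [("+", Var "b"), ("*", Var "c")])
--   ==> App "+" (Var "a") (App "*" (Var "b") (Var "c"))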

Parser library that can handle ambiguity

I'm looking for a mature parser library, either for Scala or Haskell.
The most important point is, that the library can handle ambiguity.
If an expression is ambiguous, I want every possible abstract syntax tree, that matches the expression.
Simple example: The expression a ⊗ b ⊗ c can be seen as (a ⊗ b) ⊗ c or a ⊗ (b ⊗ c), and I need both variants.
Thanks!
I feel like the old guy for remembering when Wadler's papers like Comprehending Monads (the precursor to do notation) were exciting and new. The idea is that you (to quote) "replace a failure by a list of successes", meaning you maintain a list of all the possible parses. Normally you just take the first match at the end, but with this setup you can take all of them.
These aren't all that efficient for a deterministic parser, which is why they're less in fashion, but they are what you need.
Have a look at polyparse, and in particular Text.ParserCombinators.HuttonMeijer and Text.ParserCombinators.HuttonMeijerWallace.
(Hutton & Meijer translated the parser library to Haskell (from Gofer) and Wallace added extra features.)
Make sure you check it out on simple cases like parsing "aaaa" with
testP = do
  a <- many $ char 'a'
  b <- many $ char 'a'
  return (a,b)
to see if it has the semantics you seek.
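If you just want to see what those semantics look like, ReadP from base is also a list-of-successes parser; it is not polyparse, but it behaves the same way on this example:

import Text.ParserCombinators.ReadP

testP :: ReadP (String, String)
testP = do
  a <- many (char 'a')
  b <- many (char 'a')
  return (a, b)

main :: IO ()
main = mapM_ print [ r | (r, "") <- readP_to_S testP "aaaa" ]
-- prints all five complete parses, from ("","aaaa") to ("aaaa","")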
You asked for mature. These libraries are part of pure functional programming's heritage! Having said that, I'd call parsec more mature, even though it's younger.
(Speculation: I don't think parsec can do what you want. Its standard choice combinator is deterministic. I haven't looked into tweaking or replacing that behaviour, and I wouldn't want to I'm afraid.)
This question immediately reminded me of the Yacc is dead / No, it's not debate from the end of 2010. The authors of the Yacc is dead paper provide a library in Scala (unmaintained), Haskell and Racket. In the Yacc is alive response, Russ Cox points out that the code runs in exponential time for ambiguous grammars.
It's well known that it is possible to parse ambiguous grammars in O(n^3), although obviously it can take exponential time to enumerate all the parse trees when there are exponentially many of them -- and there will be, in the case of x1 + x2 + x3 ... + xn. bison implements the GLR algorithm, which does so; unfortunately, while bison is certainly mature (if not actually moribund), it is written in neither Haskell nor Scala.
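To put a number on "exponentially many": the parse trees of x1 + x2 + ... + xn under an ambiguous + correspond to the ways of fully parenthesizing n operands, which is the Catalan number C(n-1). A quick check in Haskell (my own illustration, not part of any of the libraries mentioned):

catalan :: Integer -> Integer
catalan n = product [n + 2 .. 2 * n] `div` product [2 .. n]

-- map catalan [1..8] ==> [1,2,5,14,42,132,429,1430], growing roughly like 4^n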
Daniel Spiewak implemented a GLL parser in Scala IIRC, but last time I looked at it, it suffered from some performance issues. So I'm not sure that it could be described as mature, either.
I can't speak to how mature it is or give you any usage examples, but I've had the Scala gll-combinators library open in a tab for a few days. It handles ambiguous grammars and looks pretty nifty.
In the end, the choice fell on the Syntax Definition Formalism (SDF2),
with an SDF table generator here
and JSGLR as the parser generator.

Does the recognition of numbers belong in the scanner or in the parser?

When you look at the EBNF description of a language, you often see a definition for integers and real numbers:
integer ::= digit digit* // Accepts numbers with a 0 prefix
real ::= integer "." integer (('e'|'E') integer)?
(I made these definitions up on the fly; there may well be mistakes in them.)
Although they appear in the context-free grammar, numbers are often recognized in the lexical analysis phase. Are they included in the language definition to make it more complete, with it being left to the implementer to realize that they actually belong in the scanner?
Many common parser generator tools -- such as ANTLR, Lex/YACC -- separate parsing into two phases: first, the input string is tokenized. Second, the tokens are combined into productions to create a concrete syntax tree.
However, there are alternative techniques that do not require tokenization: check out backtracking recursive-descent parsers. For such a parser, tokens are defined in a similar way to non-tokens. pyparsing is a Python library for building such parsers.
The advantage of the two-step technique is that it usually produces more efficient parsers -- with tokens, there's a lot less string manipulation, string searching, and backtracking.
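As a minimal sketch of that split (the token names are hypothetical, not tied to any particular tool), the lexer below does all of the character-level work of recognizing an integer, so the parser would only ever see a TNumber token:

import Data.Char (isDigit, isSpace)

data Token = TNumber Int | TPlus | TLParen | TRParen deriving Show

tokenize :: String -> [Token]
tokenize [] = []
tokenize (c:cs)
  | isSpace c = tokenize cs
  | isDigit c = let (ds, rest) = span isDigit (c:cs)
                in TNumber (read ds) : tokenize rest
  | c == '+'  = TPlus   : tokenize cs
  | c == '('  = TLParen : tokenize cs
  | c == ')'  = TRParen : tokenize cs
  | otherwise = error ("unexpected character: " ++ [c])

-- tokenize "12 + (34)" ==> [TNumber 12,TPlus,TLParen,TNumber 34,TRParen]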
According to "The Definitive ANTLR Reference" (Terence Parr),
The only difference between [lexers and parsers] is that the parser recognizes grammatical structure in a stream of tokens while the lexer recognizes structure in a stream of characters.
The grammar syntax needs to be complete to be precise, so of course it includes details as to the precise format of identifiers and the spelling of operators.
Yes, the compiler engineer decides, but generally it is pretty obvious: you want the lexer to handle all of the character-level detail efficiently.
There's a longer answer at Is it a Lexer's Job to Parse Numbers and Strings?

Python3 parser generator

I'm looking for a parser generator for a reasonably complex language (similar in complexity to Python itself) which works with Python3. If it can generate an AST automatically, this would be a bonus, but I'm fine if it just calls rules while parsing. I have no special requirements, nor does it have to be very efficient/fast.
LEPL isn't exactly a parser generator - it's better! The parsers are defined in Python code and constructed at runtime (hence some inefficiency, but they are much easier to use). It uses operator overloading to construct a quite readable DSL; for example, c = a & b | b & c for the BNF c ::= a b | b c.
You can pass the results of a (sub-)parser to an arbitrary callable, which is very useful for AST generation (and also for converting, say, number literals to Python-level number objects). It's a recursive descent parser, so you had better avoid left recursion in the grammar (there are memoization objects that can make left recursion work, but "Lepl's support for them has historically been unreliable (buggy)").
ANTLR can generate a lexer and/or parser in Python. You can also use it to create ASTs and iterator-like structures for walking them (called tree grammars).
See ANTLR get and split lexer content for an ANTLR demo that produces an AST with the Python target.

Resources