How to report multiple errors using megaparsec?

Per the megaparsec docs, "Since version 8, reporting multiple parse errors at once has become much easier." Yet I haven't been able to find a single worked example. The only one I've found is this; however, it only shows how to parse a newline-delimited toy language, and it does not show how to combine multiple errors into a ParseErrorBundle. This SO discussion is not conclusive either.

You want to use withRecovery to recover from megaparsec-generated errors, in conjunction with registerParseError (or registerFailure or registerFancyFailure) to "register" those errors (or errors you generate yourself) for delayed processing.
At the end of the parse, if no parse errors have been registered, parsing succeeds; if one or more parse errors have been registered, they are all reported at once. If you register parse errors and then also trigger an unrecovered error, parsing terminates immediately, and the registered errors plus the final unrecovered error are all reported.
Here's a very simple example that parses a comma-separated list of numbers:
import Data.Void
import Text.Megaparsec
import Text.Megaparsec.Char

type Parser = Parsec Void String

numbers :: Parser [Int]
numbers = sepBy number comma <* eof
  where
    number = read <$> some digitChar
    comma = recover $ char ','
    -- recover to next comma
    recover = withRecovery $ \e -> do
      registerParseError e
      _ <- some (anySingleBut ',')
      char ','
On good input:
> parseTest numbers "1,2,3,4,5"
[1,2,3,4,5]
and on input with multiple errors:
> parseTest numbers "1.2,3e5,4,5x"
1:2:
  |
1 | 1.2,3e5,4,5x
  |  ^
unexpected '.'
expecting ','
1:6:
  |
1 | 1.2,3e5,4,5x
  |      ^
unexpected 'e'
expecting ','
1:12:
  |
1 | 1.2,3e5,4,5x
  |            ^
unexpected 'x'
expecting ',', digit, or end of input
There are some subtleties here. For the following, only the first parse error is handled:
> parseTest numbers "1,2,e,4,5x"
1:5:
  |
1 | 1,2,e,4,5x
  |     ^
unexpected 'e'
expecting digit
and you have to study the parser carefully to see why. sepBy successfully applies the number and comma parsers in alternating sequence to parse "1,2,". When it gets to e, it applies the number parser, which fails (because some digitChar requires at least one digit character). This is an unrecovered error, so parsing ends immediately; no other errors were registered, so only the one error is printed.
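If you wanted that failure reported alongside the others, you could wrap number in its own recovery that registers the error, skips ahead toward the next comma, and yields a placeholder value so sepBy can continue. A minimal, untested sketch (number' and the placeholder 0 are my additions; the end-of-input subtleties discussed below still apply):
number' :: Parser Int
number' = withRecovery recov (read <$> some digitChar)
  where
    recov e = do
      registerParseError e
      _ <- many (anySingleBut ',')  -- skip to just before the next comma
      pure 0                        -- placeholder; the parse still fails
                                    -- overall, since an error was registered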
Also, if you dropped the <* eof from the definition of numbers (e.g., to make it part of a larger parser), you'd discover that:
> parseTest numbers "1,2,3.4,5"
gives a parse error on the period, but:
> parseTest numbers "1,2,3.4"
parses fine. On the other hand:
> parseTest numbers "1,2,3.4\n hundreds of lines without commas\nfinal line, with comma"
gives parse errors on the period and the comma at the end of the file.
The issue is that the comma parser is used by sepBy to determine when the comma-separated list of numbers has ended. If the parser succeeds (which it can do via recovery, gobbling up hundreds of lines to the next comma), sepBy will try to keep running; if the parser fails (both initially, and because the recovery code can't find a comma after scanning the entire file), sepBy will complete.
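One mitigation (my suggestion, not part of the original answer) is to forbid recovery from scanning past the end of the current line, so a missing comma can swallow at most the remainder of one line:
-- stop recovery at a newline as well as at a comma
recover = withRecovery $ \e -> do
  registerParseError e
  _ <- some (noneOf ",\n")
  char ','
With this variant the recovery parser fails at a line boundary, so sepBy completes there instead of gobbling up the rest of the file.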
Ultimately, writing recoverable parsers is kind of tricky.

Related

Haskell: How to integrate semantic whitespacing into a parser?

I'm currently writing a language in Haskell, https://github.com/EdONeill1/HENRYGCL, and I'm having trouble figuring out how to allow a program to be written on multiple lines. Take the following loop that adds 1 onto x until it reaches 10.
Henry > x := 1
Henry > if <x<10> [] x := <x+1> [] x := <x+10>
I would like the program to be somewhat in the form of:
Henry > x := 1
Henry > if <x<10>
[] x := <x+1>
[] x := <x+10>
I thought about using the function space or newline from the Text.ParserCombinators.Parsec.Char module. This would (I think) allow me to recognise the newline token \n. Using it, I wrote the following parser function:
ifStmt :: Parser HenryVal
ifStmt = do
  reserved "if"
  cond <- bExpression <|> do
    _ <- char '<'
    x <- try parseBinary
    _ <- char '>'
    return x
  some (space <|> newline)
  reserved "[]"
  stmt1 <- statement
  some (space <|> newline)
  reserved " []"
  stmt2 <- statement
  return $ If cond stmt1 stmt2
I receive the following error when I try following:
Henry > x:=1
1
Henry > if <x<10>
Parse error at "Henry" (line 1, column 10):
unexpected end of input
expecting space or lf new-line
Henry > if <x<10>
Parse error at "Henry" (line 1, column 12):
unexpected end of input
expecting space, lf new-line or "[]"
Henry >
The first error arises from pressing Enter when I finish typing >, and the second error arises from pressing Space once and then pressing Enter. In both instances a newline wasn't created. I'm also not sure what lf in lf new-line actually means, because to my understanding, shouldn't hitting Enter give you a newline?
In another section of my code I have the following, whiteSpace = Token.whiteSpace lexer. When I replace the some (space <|> newline) with this and press enter after if <x<10>, a newline is actually created. However despite being able to write a full if-statement, there's no termination and it just allows me to keep writing as much as I want indefinitely.
I'm quite confused in how to proceed from here. I think my logic using some (space <|> newline) is correct insofar as if the program encounters at least one space or newline, a space or newline is made but I know my implementation is incorrect. I thought that maybe whiteSpace would lead to somewhere but it seems as if that's another dead end.
The parser looks fine. The problem is that main currently runs the parser on a single line at a time; you need to accumulate the whole input before running the parser.
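For example, a minimal read loop that collects lines until the user enters a blank line and only then hands the accumulated text to the parser (a sketch; the blank-line terminator and the readProgram name are my assumptions, not part of the original code):
import System.IO (hFlush, stdout)

readProgram :: IO String
readProgram = go []
  where
    go acc = do
      putStr "Henry > "
      hFlush stdout                    -- make sure the prompt is shown
      line <- getLine
      if null line                     -- blank line: stop accumulating
        then pure (unlines (reverse acc))
        else go (line : acc)
The result of readProgram can then be fed to the existing parser in a single call.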

parsing maxscript - problem with newlines

I am trying to create a parser for the MAXScript language using the official grammar description of the language. I use flex and bison to create the lexer and parser.
However, I have run into the following problem. In traditional languages (e.g. C), statements are separated by a special token (; in C). But in MAXScript, expressions inside a compound expression can be separated either by ; or by a newline. There are other languages that use whitespace characters in their parsers, like Python. But Python is much more strict about the placement of the newline, and the following program in Python is invalid:
# compile error
def
foo(x):
    print(x)

# compile error
def bar
(x):
    foo(x)
However, the following program is valid in MAXScript:
fn
foo x =
( // parenthesis start the compound expression
a = 3 + 2; // the semicolon is optional
print x
)
fn bar
x =
foo x
And you can even write things like this:
for
x
in
#(1,2,3,4)
do
format "%," x
Which will evaluate fine and print 1,2,3,4, to the output. So newlines can be inserted into many places with no special meaning.
However if you insert one more newline in the program like this:
for
x
in
#(1,2,3,4)
do
format "%,"
x
You will get a runtime error, as the format function expects more than one parameter to be passed.
Here is part of the bison input file that I have:
expr:
    simple_expr
  | if_expr
  | while_loop
  | do_loop
  | for_loop
  | expr_seq

expr_seq:
    "(" expr_semicolon_list ")"

expr_semicolon_list:
    expr
  | expr TK_SEMICOLON expr_semicolon_list
  | expr TK_EOL expr_semicolon_list

if_expr:
    "if" expr "then" expr "else" expr
  | "if" expr "then" expr
  | "if" expr "do" expr

// etc.
This will parse only programs which use the newline solely as an expression separator, and it will not accept newlines scattered anywhere else in the program.
My question is: Is there some way to tell bison to treat a token as an optional token? For bison it would mean this:
If you find newline token and you can shift with it or reduce, then do so.
Otherwise just discard the newline token and continue parsing.
Because if there is no way to do this, the only other solution I can think of is modifying the bison grammar file so that it expects those newlines everywhere, bumping the precedence of the rule where the newline acts as an expression separator. Like this:
%precedence EXPR_SEPARATOR // high precedence
%%
// w = sequence of whitespace tokens
w: %empty     // either nothing
 | TK_EOL w   // or a newline followed by other whitespace tokens

expr:
    w simple_expr w
  | w if_expr w
  | w while_loop w
  | w do_loop w
  | w for_loop w
  | w expr_seq w

expr_seq:
    w "(" w expr_semicolon_list w ")" w

expr_semicolon_list:
    expr
  | expr w TK_SEMICOLON w expr_semicolon_list
  | expr TK_EOL w expr_semicolon_list %prec EXPR_SEPARATOR

if_expr:
    w "if" w expr w "then" w expr w "else" w expr w
  | w "if" w expr w "then" w expr w
  | w "if" w expr w "do" w expr w

// etc.
However this looks very ugly and error-prone, and I would like to avoid such solution if possible.
My question is: Is there some way to tell bison to treat a token as an optional token?
No, there isn't. (See below for a longer explanation with diagrams.)
Still, the workaround is not quite as ugly as you think, although it's not without its problems.
In order to simplify things, I'm going to assume that the lexer can be convinced to produce only a single '\n' token regardless of how many consecutive newlines appear in the program text, including the case where there are comments scattered among the blank lines. That could be achieved with a complex regular expression, but a simpler way to do it is to use a start condition to suppress \n tokens until a regular token is encountered. The lexer's initial start condition should be the one which suppresses newline tokens, so that blank lines at the beginning of the program text won't confuse anything.
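For concreteness, here's a rough flex sketch of that start-condition trick (my sketch, untested; real code would also need the comment rules active in both start conditions):
%{
#define YY_USER_INIT BEGIN(SUPPRESS_EOL)   /* no '\n' token before the first real token */
%}
%x SUPPRESS_EOL
%%
<SUPPRESS_EOL>[ \t\n]+   ;                               /* swallow whitespace and blank lines */
<SUPPRESS_EOL>.          { yyless(0); BEGIN(INITIAL); }  /* re-scan first real character */
\n                       { BEGIN(SUPPRESS_EOL); return TK_EOL; }  /* one token per newline run */
After emitting TK_EOL the lexer re-enters the suppressing state, so however many blank lines follow, the parser sees a single '\n' token.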
Now, the key insight is that we don't have to insert "maybe a newline" markers all over the grammar, since every newline must appear right after some real token. And that means that we can just add one non-terminal for every terminal:
tok_id: ID | ID '\n'
tok_if: "if" | "if" '\n'
tok_then: "then" | "then" '\n'
tok_else: "else" | "else" '\n'
tok_do: "do" | "do" '\n'
tok_semi: ';' | ';' '\n'
tok_dot: '.' | '.' '\n'
tok_plus: '+' | '+' '\n'
tok_dash: '-' | '-' '\n'
tok_star: '*' | '*' '\n'
tok_slash: '/' | '/' '\n'
tok_caret: '^' | '^' '\n'
tok_open: '(' | '(' '\n'
tok_close: ')' | ')' '\n'
tok_openb: '[' | '[' '\n'
tok_closeb: ']' | ']' '\n'
/* Etc. */
Now, it's just a question of replacing the use of a terminal with the corresponding non-terminal defined above. (No w non-terminal is required.) Once we do that, bison will report a number of shift-reduce conflicts in the non-terminal definitions just added; any terminal which can appear at the end of an expression will instigate a conflict, since the newline could be absorbed either by the terminal's non-terminal wrapper or by the expr_semicolon_list production. We want the newline to be part of expr_semicolon_list, so we need to add precedence declarations starting with newline, so that it is lower precedence than any other token.
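Concretely, the precedence declarations could look something like this (a sketch; the second list must name every terminal that can end an expression in your grammar):
%precedence '\n'              /* declared first: lowest precedence */
%precedence ';' ')' ']' ID    /* ... every terminal that can end an expression ... */
In a conflict between shifting '\n' into a wrapper such as tok_close: ')' '\n' and reducing tok_close: ')', bison compares the precedence of the lookahead '\n' with that of the rule (taken from its last terminal, ')'); because '\n' is lower, it chooses the reduction, leaving the newline for expr_semicolon_list.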
That will most likely work for your grammar, but it is not 100% certain. The problem with precedence-based solutions is that they can have the effect of hiding real shift-reduce conflict issues. So I'd recommend running bison on the grammar and verifying that all the shift-reduce conflicts appear where expected (in the wrapper productions) before adding the precedence declarations.
Why token fallback is not as simple as it looks
In theory, it would be possible to implement a feature similar to the one you suggest. [Note 1]
But it's non-trivial, because of the way the LALR parser construction algorithm combines states. The result is that the parser might not "know" that the lookahead token cannot be shifted until it has done one or more reductions. So by the time it figures out that the lookahead token is not valid, it has already performed reductions which would have to be undone in order to continue the parse without the lookahead token.
Most parser generators compound the problem by removing error actions corresponding to a lookahead token if the default action in the state for that token is a reduction. The effect is again to delay detection of the error until after one or more futile reductions, but it has the benefit of significantly reducing the size of the transition table (since default entries don't need to be stored explicitly). Since the delayed error will be detected before any more input is consumed, the delay is generally considered acceptable. (Bison has an option to prevent this optimisation, however.)
As a practical illustration, here's a very simple expression grammar with only two operators:
prog: expr '\n' | prog expr '\n'
expr: prod | expr '+' prod
prod: term | prod '*' term
term: ID | '(' expr ')'
That leads to the state diagram discussed below [Note 2]. (The generated diagram itself is not reproduced here.)
Let's suppose that we wanted to ignore newlines pythonically, allowing the input
(
a + b
)
That means that the parser must ignore the newline after the b, since the input might be
(
a + b
* c
)
(Which is fine in Python but not, if I understand correctly, in MAXScript.)
Of course, the newline would be recognised as a statement separator if the input were not parenthesized:
a + b
Looking at the state diagram, we can see that the parser will end up in State 15 after the b is read, whether or not the expression is parenthesized. In that state, a newline is marked as a valid lookahead for the reduction action, so the reduction action will be performed, presumably creating an AST node for the sum. Only after this reduction will the parser notice that there is no action for the newline. If it now discards the newline character, it's too late; there is now no way to reduce b * c in order to make it an operand of the sum.
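Schematically (my illustration, not bison output), the parser's moves on ( a + b \n * c ) are:
stack: ... '(' expr '+' prod        lookahead: '\n'
  reduce by expr: expr '+' prod     -- '\n' is a valid lookahead in state 15
stack: ... '(' expr                 lookahead: '\n'
  error                             -- no action for '\n' here; discarding it now
                                    -- cannot undo the reduction, so the b consumed
                                    -- by the sum can no longer start b * c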
Bison does allow you to request a Canonical LR parser, which does not combine states. As a result, the state machine is much, much bigger; so much so that Canonical-LR is still considered impractical for non-toy grammars. In the simple two-operator expression grammar above, asking for a Canonical LR parser only increases the state count from 16 to 26. (Again, the generated diagram is not reproduced here.)
In the Canonical LR parser, there are two different states for the reduction expr: expr '+' prod. State 16 applies at the top level, and thus the lookahead includes newline but not ')'. Inside parentheses the parser will instead reach state 26, where ')' is a valid lookahead but newline is not. So, at least in some grammars, using a Canonical LR parser could make the prediction more precise. But features which depend on the use of a mammoth parsing automaton are not particularly practical.
One alternative would be for the parser to react to the newline by first simulating the reduction actions to see if a shift would eventually succeed. If you request Lookahead Correction (%define parse.lac full), bison will insert code to do precisely this. This code can create significant overhead, but many people request it anyway because it makes verbose error messages more accurate. So it would certainly be possible to repurpose this code to do token fallback handling, but no-one has actually done so, as far as I know.
Notes:
A similar question which comes up from time to time is whether you can tell bison to cause a token to be reclassified to a fallback token if there is no possibility to shift the token. (That would be useful for parsing languages like SQL which have a lot of non-reserved keywords.)
I generated the state graphs using Bison's -g option:
bison -o ex.tab.c --report=all -g ex.y
dot -Tpng -oex.png ex.dot
To produce the Canonical LR, I defined lr.type to be canonical-lr:
bison -o ex_canon.c --report=all -g -Dlr.type=canonical-lr ex.y
dot -Tpng -oex_canon.png ex_canon.dot

Parse a String using Parsec?

I am trying to parse a String using Parsec in Haskell; however, every attempt throws another type of error.
import Text.ParserCombinators.Parsec
csvFile = endBy line eol
line = sepBy cell (char ',')
cell = many (noneOf ",\n")
eol = char '\n'
parseCSV :: String -> Either ParseError [[String]]
parseCSV input = parse csvFile "(unknown)" input
This code, when run through stack ghci, produces an error saying "non type-variable argument in the constraint: Text.Parsec.Prim.Stream".
Basically, I am wondering what the most straight forward way to parse a String into tokens based on commas is in Haskell. It seems like a very straightforward concept and I assumed that it would be a great learning experience, but so far it has produced nothing but errors.
The error I see when entering char '\n' in ghci is:
<interactive>:4:1: error:
    • Non type-variable argument
        in the constraint: Text.Parsec.Prim.Stream s m Char
      (Use FlexibleContexts to permit this)
    • When checking the inferred type
        it :: forall s (m :: * -> *) u.
              Text.Parsec.Prim.Stream s m Char =>
              Text.Parsec.Prim.ParsecT s u m Char
The advice about FlexibleContexts is accurate. You can turn on FlexibleContexts like so:
*Main> :set -XFlexibleContexts
Unfortunately, the next error is • No instance for (Show (Text.Parsec.Prim.ParsecT s0 u0 m0 Char)) (basically, we can't print a function), so you'll still need to apply the parser to some input to actually run it.
Like the commenters, I find that parseCSV can be used without any language extensions.
There are a few things going on here:
In the context of the whole program, the type of eol is constrained by the type signature on parseCSV. That doesn't happen when typing eol = char '\n' into GHCi.
GHCi's :t is permissive - it's willing to print some types that use language features that aren't turned on.
GHC has grown by adding a large number of language extensions, which can be turned on by the programmer on a per-module basis. Some are widely used by production-ready libraries, others are new & experimental.
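For completeness, here is the same parser with monomorphic type signatures, using the Parser type synonym that Parsec exports. The polymorphic Stream constraint is never inferred, so no extension is needed even in GHCi (a sketch of one standard fix, not the only one):
import Text.ParserCombinators.Parsec

csvFile :: Parser [[String]]
csvFile = endBy line eol

line :: Parser [String]
line = sepBy cell (char ',')

cell :: Parser String
cell = many (noneOf ",\n")

eol :: Parser Char
eol = char '\n'

parseCSV :: String -> Either ParseError [[String]]
parseCSV = parse csvFile "(unknown)"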

try function in parsing lambda expressions

I'm totally new to Haskell and trying to implement a "lambda calculus" parser that will be used to read the input to a lambda reducer. It's required to parse bindings first ("identifier = expression;") from a text file, and then at the end there's an expression alone.
So far it can parse bindings only, and displays errors when encountering an expression alone. When I try to use the try or option functions, I get a type mismatch error:
Couldn't match type `[Expr]'
               with `Text.Parsec.Prim.ParsecT s0 u0 m0 [[Expr]]'
Expected type: Text.Parsec.Prim.ParsecT
                 s0 u0 m0 (Text.Parsec.Prim.ParsecT s0 u0 m0 [[Expr]])
  Actual type: Text.Parsec.Prim.ParsecT s0 u0 m0 [Expr]
In the second argument of `option', namely `bindings'
In the second argument of `option', namely `bindings'
bindings weren't supposed to return anything, but I tried to add a return statement and it also returned a type mismatch error:
Couldn't match type `[Expr]' with `Expr'
Expected type: Text.Parsec.Prim.ParsecT
                 [Char] u0 Data.Functor.Identity.Identity [Expr]
  Actual type: Text.Parsec.Prim.ParsecT
                 [Char] u0 Data.Functor.Identity.Identity [[Expr]]
In the second argument of `(<|>)', namely `expressions'
Don't use <|> if you want to allow both
Your program parser does its main work with
program = do
  spaces
  try bindings <|> expressions
  spaces >> eof
This <|> is choice - it does bindings if it can, and if that fails, expressions, which isn't what you want. You want zero or more bindings, followed by expressions, so let's make it do that.
First, let's allow zero bindings, since they're optional; then let's get both the bindings and the expressions. (Sadly, even when this works, the last line of your parser is eof, and a failure there is reported at the end of the input, far from its cause; the <?> hints mentioned below make such errors easier to trace.)
bindings = many binding

program = do
  spaces
  bs <- bindings
  es <- expressions
  spaces >> eof
  return (bs,es)
This error would be easier to find with plenty more <?> "binding" type hints so you can see more clearly what was expected.
endBy doesn't need many
The error message you have stems from the line
expressions = many (endBy expression eol)
which should be
expressions :: Parser [Expr]
expressions = endBy expression eol
endBy works like sepBy - you don't need to use many on it because it already parses many.
This error would also have been easier to find with a stronger data type tree - see the advice on types in the last section below.
Use try to deal with common prefixes
One of the hard-to-debug problems you've had is when you get the error expecting space or "=" whilst parsing an expression. If we think about that, the only place we expect = is in a binding, so it must be part way through parsing a binding when we've given it an expression. This only happens if our expression starts with an identifier, just like a binding does.
binding sees the first identifier and says "It's OK guys, I've got this" but then finds no = and gives you an error, where we wanted it to backtrack and let expression have a go. The key point is we've already used the identifier input, and we want to unuse it. try is right for that.
Encase your binding parser with try so if it fails, we'll go back to the start of the line and hand over to expression.
binding = try (do
    (Var id) <- identifier
    _ <- char '='
    spaces
    exp <- expression
    spaces
    eol <?> "end of line"
    return $ Eq id exp)
  <?> "binding"
It's important that as far as possible each parser starts with matching something unique to avoid this problem. (try is backtracking, hence inefficient, so should be avoided if possible.)
In particular, avoid starting parsers with spaces, but instead make sure you finish them all with spaces. Your main program can start with spaces if you like, since it's the only alternative.
Use types for most productions - better structure & readability
My first piece of general advice is that you could do with a more fine-grained data type, and should annotate your parsers with their type. At the moment, everything's wrapped up in Expr, which means you can only get error messages about whether you have an Expr or a [Expr]. The fact that you had to add Eq to Expr is a sign you're pushing the type too far.
Usually it's worth making a data type for quite a lot of the productions, and if you import Control.Applicative hiding ((<|>), many) you can use <$> and <*> so that the production, the datatype and the parser all have the same structure:
-- <program> ::= <spaces> [<bindings>] <expressions>
data Program = Prog [Binding] [Expr]
program = spaces >> Prog <$> bindings <*> expressions

-- <expression> ::= <abstraction> | <factors>
data Expression = Ab Abstraction | Fa [Factor]
expression = Ab <$> abstraction <|> Fa <$> factors <?> "expression"
Don't do this with letters for example, but for important things. What counts as important things is a matter of judgement, but I'd start with Identifiers. (You can use <* or *> to not include syntax like = in the results.)
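As a tiny self-contained illustration of that last point (all names here are mine, not from the question):
import Control.Applicative hiding ((<|>), many)
import Text.ParserCombinators.Parsec

-- <assign> ::= <name> "=" <number>
data Assign = Assign String Int deriving Show

assign :: Parser Assign
assign = Assign <$> many1 letter <* spaces <* char '=' <* spaces
                <*> (read <$> many1 digit)

-- ghci> parse assign "" "x = 42"
-- Right (Assign "x" 42)
The <* operators run the =-related parsers but keep only the name and the number, so the Assign constructor lines up exactly with the production.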
Amended code:
Before refactoring types and using Applicative here
And afterwards here

Parsec and user defined state

I'm trying to implement a JS parser in Haskell, but I'm stuck on automatic semicolon insertion. I have created a test project to play around with the problem, but I cannot figure out how to solve it.
In my test project, a program is a list of expressions (unary or binary):
data Program = Program [Expression]

data Expression
  = UnaryExpression Number
  | PlusExpression Number Number
The input stream is a list of tokens:
data Token
  = SemicolonToken
  | NumberToken Number
  | PlusToken
I want to parse inputs like these:
1; - Unary expression
1 + 2; - Binary expression
1; 2 + 3; - Two expressions (unary and binary)
1 2 + 3; - Same as the previous input, but the first semicolon is missing. The parser consumes token 1, but token 2 is not allowed by any production of the grammar (the next expected token is a semicolon or a plus). The rule of automatic semicolon insertion says that in this case a semicolon is automatically inserted before token 2.
So, what is the most elegant way to implement such parser behavior?
You have
expression = try unaryExpression <|> plusExpression
but that doesn't work, since a UnaryExpression is a prefix of a PlusExpression. So for
input2 = [NumberToken Number1, PlusToken, NumberToken Number1, SemicolonToken]
the parser happily parses the first NumberToken and automatically adds a semicolon, since the next token is a PlusToken and not a SemicolonToken. Then it tries to parse the next Expression, but the next token is a PlusToken, and no Expression can start with that.
Change the order in which the parsers are tried,
expression = try plusExpression <|> unaryExpression
and it will first try to parse a PlusExpression, and only when that fails resort to the shorter parse of a UnaryExpression.
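The same common-prefix issue in a self-contained String-based sketch (simplified; the question uses a custom token stream, and all names here are mine):
import Text.Parsec
import Text.Parsec.String (Parser)

data Expr = Unary Int | Plus Int Int deriving Show

number :: Parser Int
number = read <$> many1 digit <* spaces

expr :: Parser Expr
expr = try plus <|> unary      -- longer alternative first, under try
  where
    plus  = Plus  <$> number <*> (char '+' *> spaces *> number) <* char ';'
    unary = Unary <$> number <* char ';'

-- ghci> parse expr "" "1 + 2;"   ==>  Right (Plus 1 2)
-- ghci> parse expr "" "1;"       ==>  Right (Unary 1)
If the alternatives were swapped, unary would succeed on "1 " and leave "+ 2;" unconsumed, reproducing the problem described above.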
