I noticed that the MySQLParser.g4 file only handles one statement at a time:
query:
    EOF
    | (simpleStatement | beginWork) (SEMICOLON_SYMBOL EOF? | EOF)
;

simpleStatement:
    // DDL
    alterStatement
    | createStatement
    ...
Why is this choice made instead of parsing the entire file or script, which may include multiple SQL statements, such as:
CREATE TABLE...;
INSERT INTO ...;
INSERT INTO ...;
# could be thousands of statements here
Is this done for efficiency, so that the parser only handles one statement at a time and doesn't have to consume as much memory? Basically, why was the choice made to handle only one statement at a time in the parser, and if that's the case, how would it parse multiple statements at once, for example in MySQL Workbench if I have these two statements:
Finally, for testing purposes, is the following a good way to add a convenience rule for debugging in IntelliJ, or how would this normally be done when the grammar only expects one statement at a time and you want to, for example, test that all ten statements are correct?
root
    : EOF
    // this line is for testing only
    | selectStatement (SEMICOLON selectStatement)* (SEMICOLON EOF? | EOF)
    // this line is for the actual parser
    | selectStatement (SEMICOLON EOF? | EOF)
    ;
There are several arguments in favor of single-statement processing:
Killer Reason: Each statement can use a different delimiter (which you cannot handle in the parser grammar). Delimiters are not part of the SQL syntax.
In editors you will want to know where a statement starts and ends, without first parsing the full text (think of megabytes sized dumps), e.g. for executing a single statement.
You don't want to lose the details of all the following statements just because one statement contains a syntax error.
Parsing a single statement at a time gives you much better response times (e.g. when editing SQL code while it's still being parsed).
The server can process single statements only, anyway.
I implemented the statement handling in MySQL Workbench and did it the same way in MySQL Shell for VS Code. The statement splitter is usually very fast (100ms in C++ for a million statements, depending on the box it runs on). This makes it possible to do a quick first run for the statement ranges, show the statement indicator, and make the editor ready for statement requests (for execution). After that, a background thread parses the individual statements for errors; it can be stopped at any time when a statement is edited.
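To make that concrete, here is a minimal sketch of such a delimiter-aware splitter in C++ (not the Workbench implementation; handling of the client-side DELIMITER command and some MySQL comment quirks is left out):

#include <string>
#include <utility>
#include <vector>

// Sketch: record the [start, end) character ranges of statements while
// skipping quoted strings and comments. A real splitter would also watch
// for the DELIMITER command and update `delimiter` accordingly.
std::vector<std::pair<size_t, size_t>> splitStatements(const std::string &sql,
                                                       const std::string &delimiter = ";") {
  std::vector<std::pair<size_t, size_t>> ranges;
  size_t start = 0, i = 0;
  while (i < sql.size()) {
    char c = sql[i];
    if (c == '\'' || c == '"' || c == '`') {                 // quoted content
      char quote = c;
      for (++i; i < sql.size() && sql[i] != quote; ++i)
        if (sql[i] == '\\') ++i;                             // skip escaped char
      ++i;                                                   // closing quote
    } else if (c == '#' || sql.compare(i, 3, "-- ") == 0) {  // line comment
      while (i < sql.size() && sql[i] != '\n') ++i;
    } else if (sql.compare(i, 2, "/*") == 0) {               // block comment
      size_t end = sql.find("*/", i + 2);
      i = (end == std::string::npos) ? sql.size() : end + 2;
    } else if (sql.compare(i, delimiter.size(), delimiter) == 0) {
      ranges.emplace_back(start, i);                         // statement ends here
      i += delimiter.size();
      start = i;
    } else {
      ++i;
    }
  }
  if (start < sql.size()) ranges.emplace_back(start, sql.size());
  return ranges;
}

Each range can then be handed to the parser individually, either on demand or from a background thread.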
Whether to allow for multiple statements is pretty much just a grammar design choice. Depending upon the context, it might have been more straightforward to know you'll only see a single statement at a time, or that you can easily separate multiple statements and send each to a parser.
It does look like it would be useful for you.
A simpler version would be:
root : selectStatement? (SEMICOLON selectStatement)* SEMICOLON? EOF ;
You should always have an EOF.
Another thing that doesn't always dawn on designers is that your grammar can have multiple start rules. So you could also have a
selectStart: selectStatement SEMICOLON? EOF;
rule that only allows for a single statement, and depending upon your situation you can choose which start rule to use. I had a graphical tool for a language I wrote, so sometimes I parsed exprs, sometimes stmts, and sometimes scripts. Each had its own start rule. But don't forget to end a start rule with an EOF. This forces the parser to look at ALL of your input. Without it, it will parse as much as it can but will ignore trailing input that doesn't fit a parse rule.
(well, it's possible not to have EOF, IF you have a custom stream that remains open so there is no end of input. However, this would not be the case in your situation.)
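As a sketch of how the two start rules would then be used (ANTLR C++ target; MyLexer and MyParser are stand-ins for whatever classes get generated from the grammar above, and a grammar with semantic predicates may need extra setup):

#include <string>
#include "antlr4-runtime.h"
#include "MyLexer.h"    // hypothetical generated classes for the grammar above
#include "MyParser.h"

void parseWithChosenStartRule(const std::string &text, bool wholeScript) {
  antlr4::ANTLRInputStream input(text);
  MyLexer lexer(&input);
  antlr4::CommonTokenStream tokens(&lexer);
  MyParser parser(&tokens);

  // Every grammar rule becomes a method, so the caller picks the start rule.
  antlr4::tree::ParseTree *tree = nullptr;
  if (wholeScript)
    tree = parser.root();         // script with many statements (testing)
  else
    tree = parser.selectStart();  // exactly one statement, then EOF
  // ... walk or inspect `tree` here.
}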
Related
I'm trying to figure out how I can best parse just a subset of a given language with ANTLR. For example, say I'm looking to parse U-SQL. Really, I'm only interested in parsing certain parts of the language, such as query statements. I couldn't be bothered with parsing the many other features of the language. My current approach has been to design my lexer / parser grammar as follows:
// ...
statement
: queryStatement
| undefinedStatement
;
// ...
undefinedStatement
: (.)+?
;
// ...
UndefinedToken
: (.)+?
;
The gist is, I add a fall-back parser rule and lexer rule for undefined structures and tokens. I imagine later, when I go to walk the parse tree, I can simply ignore the undefined statements in the tree, and focus on the statements I'm interested in.
This seems like it would work, but is this an optimal strategy? Are there more elegant options available? Thanks in advance!
Parsing a subpart of a grammar is super easy. Usually you have a top level rule which you call to parse the full input with the entire grammar.
For the subpart, use the function that parses only that subrule, like:
const expression = parser.statement();
I use this approach frequently when I want to parse stored procedures or data types only.
Keep in mind, however, that subrules usually are not terminated with the EOF token (as the top-level rule should be). This causes no syntax error if there is more than the subelement in the token stream (the parser just stops when the subrule has matched completely). If that's a problem for you, then add a copy of the subrule you want to parse, give it a dedicated name and end it with EOF, like this:
dataTypeDefinition: // For external use only. Don't reference this in the normal grammar.
dataType EOF
;
dataType: // type in sql_yacc.yy
type = (
...
Check the MySQL grammar for more details.
This general idea -- to parse the interesting bits of an input and ignore the sea of surrounding tokens -- is usually called "island parsing". There's an example of an island parser in the ANTLR reference book, although I don't know if it is directly applicable.
The tricky part of island parsing is getting the island boundaries right. If you miss a boundary, or recognise as a boundary something which isn't, then your parse will fail disastrously. So you need to understand the input at least well enough to be able to detect where the islands are. In your example, that might mean recognising a SELECT statement, for example. However, you cannot blindly recognise the string of letters SELECT because that string might appear inside a string constant or a comment or some other context in which it was never intended to be recognised as a token at all.
I suspect that if you are going to parse queries, you'll basically need to be able to recognise any token. So it's not going to be a sea of uninspected input characters; you can view it as a sea of recognised but unparsed tokens. In that case, it should be reasonably safe to parse a non-query statement as a keyword followed by arbitrary tokens other than ; and ending with a ;. (But you might need to recognise nested blocks; I don't really know what the possibilities are.)
I'm working on a reStructuredText transpiler in Rust, and am in need of some advice concerning how lexing should be structured in languages that have recursive structures. For example lists within lists are possible in rST:
* This is a list item
  * This is a sub list item
* And here we are at the preceding indentation level again.
The default docutils.parsers.rst took the approach of scanning the input one line at a time:
The reStructuredText parser is implemented as a state machine, examining its
input one line at a time.
The state machine mentioned basically operates on a set of states of the form (regex, match_method, next_state). It tries to match the current line to the regex based on the current state and runs match_method while transitioning to the next_state if a match succeeds, doing this until it runs out of lines to scan.
My question then is, is this the best approach to scanning a language such as rST? My approach thus far has been to create a Chars iterator of the source and eat away at the source while trying to match against structures at the current Unicode scalar. This works to some extent when all I'm doing is scanning inline content, but I've now run into the realization that handling recursive body level structures like nested lists is going to be a pain in the butt. It feels like I'm going to need a whole bunch of states with duplicate regexes and related methods in many states for matching against indentations before new lines and such.
Would it be better to simply have an iterator over the lines of the source and match on a per-line basis, and if a line such as
* this is an indented list item
is encountered in State::Body, simply transition to a state such as State::BulletList and start lexing lines based on the rules specified there? The above line could be lexed for example as a sequence
TokenType::Indent, TokenType::Bullet, TokenType::BodyText
Any thoughts on this?
I don't know much about rST. But you say it has "recursive" structures. If that's the case, you can't fully lex it as a recursive structure using just state machines or regexes or even lexer generators.
But this is the wrong way to think about it. The lexer's job is to identify the atoms of the language. A parser's job is to recognize structure, especially if it is recursive (yes, parsers often build trees recording the recursive structures they found).
So build the lexer ignoring context if you can, and use a parser to pick up the recursive structures if you need them. You can read more about the distinction in my SO answer about Parsers Vs. Lexers https://stackoverflow.com/a/2852716/120163
If you insist on doing all of this in the lexer, you'll need to augment it with a pushdown stack to track the recursive structures. Then what you are building is a sloppy parser disguised as a lexer. (You will probably still want a real parser to process the output of this "lexer".)
Having a pushdown stack is actually useful if the language has different atoms in different contexts, especially if the contexts nest; in this case what you want is a mode stack that you change as the lexer encounters tokens that indicate a switch from one mode to another. A really useful extension of this idea is to have mode changes select what amounts to different lexers, each of which produces lexemes unique to that mode.
As an example, you might do this to lex a language that contains embedded SQL. We build parsers for JavaScript; our lexer uses a pushdown stack to process the content of regexp literals and track nesting of { ... }, [ ... ] and ( ... ). (This arguably has a downside: it rejects versions of JQuery.js that contain malformed regexes [yes, they exist]. JavaScript doesn't care if you define a bad regex literal and never use it, but doing so seems pretty pointless.)
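A minimal sketch of the mode-stack idea, with made-up "SQL{{" / "}}" markers for the embedded region (in a real lexer each mode would dispatch to its own sub-lexer):

#include <stack>
#include <string>
#include <vector>

enum class Mode { Host, EmbeddedSql };
struct Lexeme { Mode mode; std::string text; };

std::vector<Lexeme> lexWithModes(const std::string &src) {
  std::stack<Mode> modes;
  modes.push(Mode::Host);
  std::vector<Lexeme> out;
  size_t i = 0;
  while (i < src.size()) {
    if (src.compare(i, 5, "SQL{{") == 0) {          // enter the embedded SQL mode
      modes.push(Mode::EmbeddedSql);
      i += 5;
    } else if (modes.top() == Mode::EmbeddedSql && src.compare(i, 2, "}}") == 0) {
      modes.pop();                                  // back to the host language
      i += 2;
    } else {
      // Real code would call the lexer for modes.top(); here: one char at a time.
      out.push_back({modes.top(), std::string(1, src[i])});
      ++i;
    }
  }
  return out;
}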
A special case of the stack occurs if you only have to track single "(" ... ")" pairs or the equivalent. In this case you can use a counter to record how many "pushes" or "pops" you would have done on a real stack. If you have two or more pairs of tokens like this, counters don't work.
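For the counter special case, a sketch:

#include <string>

// One bracket pair only, so a counter stands in for the pushdown stack.
bool parensBalanced(const std::string &text) {
  int depth = 0;
  for (char c : text) {
    if (c == '(') ++depth;               // would be a push
    else if (c == ')' && --depth < 0)    // would be a pop
      return false;                      // a ')' with no matching '('
  }
  return depth == 0;
}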
I'm trying to figure out the best approach to improving the errors displayed to a user of a Grako-generated parser. It seems like the default parse errors displayed by the Grako-generated parser when it hits some parsing issue in the input file are not helpful. The errors often seem to imply the issue is in one part of the input file when the true error is somewhere different.
I've been looking into the Grako Semantics class to put in some checks which would display better error messages if the checks fail, but it also seems like there could be tons of edge cases that must be specified to be able to catch all of the possible ways the parsing of a rule can fail.
Does anyone have any recommendations or examples I can view?
A PEG parser will exhaust all options, sometimes leaving you at a failure corresponding to the last, and least likely option.
With Grako, you can add cut elements (~) to the grammar to have the parser commit to certain options when it can be sure they are the ones to match.
term = '(' ~ expression ')' | int ;
Cut elements also prune the memoization cache, which improves parser performance.
Suppose I already have a complete YACC grammar. Let that be C grammar for example. Now I want to create a separate parser for domain-specific language, with simple grammar, except that it still needs to parse complete C type declarations. I wouldn't like to duplicate long rules from the original grammar with associated handling code, but instead would like to call out to the original parser to handle exactly one rule (let's call it "declarator").
If it was a recursive descent parser, there would be a function for each rule, easy to call in. But what about YACC with its implicit stack automaton?
Basically, no. Composing LR grammars is not easy, and bison doesn't offer much help.
But all is not lost. Nothing stops you from including the entire grammar (except the %start declaration), and just using part of it, except for one little detail: bison will complain about useless productions.
If that's a show-stopper for you, then you can use a trick to make it possible to create a grammar with multiple start rules. In fact, you can create a grammar which lets you specify the start symbol every time you call the parser; it doesn't even have to be baked in. Then you can tuck that into a library and use whichever parser you want.
Of course, this also comes at a cost: the cost is that the parser is bigger than it would otherwise need to be. However, it shouldn't be any slower, or at least not much -- there might be some cache effects -- and the extra size is probably insignificant compared to the rest of your compiler.
The hack is described in the bison FAQ in quite a lot of detail, so I'll just do an outline here: for each start production you want to support, you create one extra production which starts with a pseudo-token (that is, a lexical code which will never be generated by the lexer). For example, you might do the following:
%start meta_start
%token START_C START_DSL
meta_start: START_C c_start | START_DSL dsl_start;
Now you just have to arrange for the lexer to produce the appropriate START token when it first starts up. There are various ways to do that; the FAQ suggests using a global variable, but if you use a re-entrant flex scanner, you can just put the desired start token in the scanner state (along with a flag which is set when the start token has been sent).
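Here is a minimal sketch of that lexer side, using the global-variable approach from the FAQ (real_yylex is a made-up name standing for the ordinary generated scanner; START_C, START_DSL and yyparse come from the bison-generated code):

// Pseudo-tokens START_C / START_DSL come from the %token declaration above.
static int start_token = 0;   // set before calling yyparse()

int real_yylex(void);         // the ordinary scanner (hypothetical name)

int yylex(void) {
  if (start_token != 0) {
    int t = start_token;
    start_token = 0;          // emit the pseudo-token exactly once
    return t;
  }
  return real_yylex();
}

// Usage: start_token = START_DSL; yyparse();   // parse with the DSL start rule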
I am wondering about the effect that an ending delimiter has on performance (if it's a scripting language) and on how easy the language is to parse.
Is it easier to parse a language that has one?
If it is and the language is a scripting language, does it make the language run commands faster?
Any difference is going to be trivial.
This is because languages that don't use such end delimiters do in fact have them, in the form of newline characters! These languages then use special characters to mark that the statement continues on the next line.
A few do neither, but there the end of the statement is implicit in the definition, e.g. the closing ')' at the end of a parameter list. If there's a comma (or nothing), then more parameters or a closing ')' will be expected on the next line.
In the big scheme of things, these small differences have a negligible effect on processing time. For script languages, you're going to have much bigger differences according to whether (or how much) a particular statement construct can be internally optimized.
Performance-wise, it doesn't matter.
If you think about it, there's always a delimiter: if it's not the ; then it's the end of line (\n).
So you're still parsing the script the same way. The parser won't mind whether the delimiter is a visible character or an invisible one.
The only thing a visible delimiter is good for is enabling the programmer to write multi-line statements, which can be useful. Multi-line statements let the programmer write more coherent code in some cases. (Just parse the end of line as whitespace.)
All languages I'm aware of that don't require an explicit typed-out statement delimiter or terminator use line breaks instead, in a few cases with the exception that line breaks inside parens, braces, brackets, etc. are ignored (implicit line continuation). A small difference, and its effect on parsing speed will heavily depend on the parser (or parser generator). It may be slightly harder to write a parser for if you allow implicit line continuations (because you have to distinguish between newlines and statement separators - but this can be sorted out relatively early in the parsing process unless you invent incredibly complex rules for it).
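As a rough sketch of that early sorting-out step, assuming the common rule that newlines inside brackets don't terminate a statement (strings and comments are ignored here):

#include <string>
#include <vector>

// Sketch: a newline ends a statement only at bracket depth 0, which gives
// implicit line continuation inside (), [] and {}.
std::vector<std::string> splitIntoStatements(const std::string &src) {
  std::vector<std::string> statements;
  std::string current;
  int depth = 0;
  for (char c : src) {
    if (c == '(' || c == '[' || c == '{') ++depth;
    else if (c == ')' || c == ']' || c == '}') --depth;
    if (c == '\n' && depth <= 0) {
      if (!current.empty()) statements.push_back(current);
      current.clear();
    } else {
      current += c;
    }
  }
  if (!current.empty()) statements.push_back(current);
  return statements;
}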
But: even if there was a huge difference, it wouldn't affect runtime performance. Except for programs where it doesn't matter anyway, since they're rather short and/or I/O-bound (or for especially stupid implementations that refuse to build any IR and instead interpret from source as they go, using a poorly-written parser), parsing takes place at most once at startup (today's "scripting" languages mostly compile to bytecode and mostly cache that bytecode between runs) and is then overshadowed by the time the program spends doing its actual work. Don't make tradeoffs in language design for speed; or if you do, do it properly, as e.g. C does.
A more interesting issue is explicit block delimiters vs. the offside rule. The tokenizer can already solve this, but most parser generators/libraries don't (easily) give you such a parser. A few do, and in that case the difference is negligible, but it's not as widely supported as "free-form" languages.