According to the ECMAScript spec:
There are several situations where the identification of lexical input
elements is sensitive to the syntactic grammar context that is
consuming the input elements. This requires multiple goal symbols for
the lexical grammar.
Two such symbols are InputElementDiv and InputElementRegExp.
In ECMAScript, the meaning of / depends on the context in which it appears: it can be a division operator, the start of a regex literal, or the start of a comment. The lexer cannot distinguish between a division operator and a regex literal on its own, so it must rely on context information from the parser.
I'd like to understand why this requires the use of multiple goal symbols in the lexical grammar. I don't know much about language design so I don't know if this is due to some formal requirement of a grammar or if it's just convention.
Questions
Why not just use a single goal symbol like so:
InputElement ::
[...]
DivPunctuator
RegularExpressionLiteral
[...]
and let the parser tell the lexer which production to use (DivPunctuator vs RegExLiteral), rather than which goal symbol to use (InputElementDiv vs InputElementRegExp)?
What are some other languages that use multiple goal symbols in their lexical grammar?
How would we classify the ECMAScript lexical grammar? It's not context-sensitive in the sense of the formal definition of a CSG (i.e. the LHS of its productions are not surrounded by a context of terminal and nonterminal symbols).
Saying that the lexical production is "sensitive to the syntactic grammar context that is consuming the input elements" does not make the grammar context-sensitive, in the formal-languages definition of that term. Indeed, there are productions which are "sensitive to the syntactic grammar context" in just about every non-trivial grammar. It's the essence of parsing: the syntactic context effectively provides the set of potentially expandable non-terminals, and those will differ in different syntactic contexts, meaning that, for example, in most languages a statement cannot be entered where an expression is expected (although it's often the case that an expression is one of the manifestations of a statement).
However, the difference does not involve different expansions for the same non-terminal. What's required in a "context-free" language is that the set of possible derivations of a non-terminal is the same set regardless of where that non-terminal appears. So the context can provide a different selection of non-terminals, but every non-terminal can be expanded without regard to its context. That is the sense in which the grammar is free of context.
As you note, context-sensitivity is usually modelled in a grammar by allowing a pattern, rather than a single non-terminal, on the left-hand side of a production. In the original definition, the context -- everything other than the non-terminal to be expanded -- needed to be passed through the production untouched; only a single non-terminal could be expanded, but the possible expansions depend on the context, as indicated by the productions. Implicit in the above is that there are grammars which can be written in BNF which don't even conform to that rule for context-sensitivity (or some other equivalent rule). So it's not a binary division, either context-free or context-sensitive; it's possible for a grammar to be neither (and, since the empty context is still a context, every context-free grammar is also context-sensitive). The bottom line is that when mathematicians talk, the way they use words is sometimes unexpected, but it always has a clear underlying definition.
In formal language theory, there are not lexical and syntactic productions; just productions. If both the lexical productions and the syntactic productions are free of context, then the total grammar is free of context. From a practical viewpoint, though, combined grammars are harder to parse, for a variety of reasons which I'm not going to go into here. It turns out that it is somewhat easier to write the grammars for a language, and to parse them, with a division between lexical and syntactic parsers.
In the classic model, the lexical analysis is done first, so that the parser doesn't see individual characters. Rather, the syntactic analysis is done with an "alphabet" (in a very expanded sense) of "lexical tokens". This is very convenient -- it means, for example, that the lexical analysis can simply drop whitespace and comments, which greatly simplifies writing a syntactic grammar. But it also reduces generality, precisely because the syntactic parser cannot "direct" the lexical analyser to do anything. The lexical analyser has already done what it is going to do before the syntactic parser is aware of its needs.
If the parser were able to direct the lexical analyser, it would do so in the same way as it directs itself. In some productions, the acceptable token non-terminal would be InputElementDiv, while in other productions InputElementRegExp would be the acceptable non-terminal. As I noted, that's not context-sensitivity -- it's just the normal functioning of a context-free grammar -- but it does require a modification to the organization of the program so that the parser's goals can be taken into account by the lexical analyser. This is often referred to (by practitioners, not theorists) as "lexical feedback", and sometimes by terms which are rather less value-neutral; it's sometimes considered a weakness in the design of the language, because the neatly segregated lexer/parser architecture is violated. C++ is a pretty intense example, and indeed there are C++ programs which are hard for humans to parse as well, which is some kind of indication. But ECMAScript does not really suffer from that problem; human beings usually distinguish between the division operator and the regexp delimiter without exerting any noticeable intellectual effort. And, while the lexical feedback required to implement an ECMAScript parser does make the architecture a little less tidy, it's really not a difficult task either.
Anyway, a "goal symbol" in the lexical grammar is just a phrase which the authors of the ECMAScript reference decided to use. Those "goal symbols" are just ordinary lexical non-terminals, like any others, so there's no difference between saying that there are "multiple goal symbols" and saying that the "parser directs the lexer to use a different production", which I hope addresses the question you asked.
Notes
The lexical difference in the two contexts is not just that / has a different meaning. If that were all that it was, there would be no need for lexical feedback at all. The problem is that the tokenization itself changes. If an operator is possible, then the /= in
a /=4/gi;
is a single token (a compound assignment operator), and gi is a single identifier token. But if a regexp literal were possible at that point (and it's not, because regexp literals cannot follow identifiers), then /=4/gi would be one single token: a regular expression literal with body =4 and flags gi.
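To make the feedback concrete, here is a minimal, hypothetical Python sketch (all names and patterns are invented for illustration; this is nothing like a real engine's tokenizer). The "parser" is reduced to one bit of context -- does a complete operand precede this point? -- and that bit selects the lexical goal whenever a / is seen:

import re

WS     = re.compile(r'[ \t]+')
ATOM   = re.compile(r'\d+|[A-Za-z_$][\w$]*')
PUNCT  = re.compile(r'[();=,+]')
DIV    = re.compile(r'/=|/')                          # InputElementDiv
REGEXP = re.compile(r'/(?:[^/\\\n]|\\.)+/[A-Za-z]*')  # InputElementRegExp

def tokenize(src):
    pos, after_operand = 0, False
    while pos < len(src):
        if m := WS.match(src, pos):
            pos = m.end()
            continue
        if src[pos] == '/':
            # Lexical feedback: the goal symbol depends on parse context.
            m = (DIV if after_operand else REGEXP).match(src, pos)
            after_operand = not after_operand  # a regex literal is an operand
        elif m := ATOM.match(src, pos):
            after_operand = True
        else:
            m = PUNCT.match(src, pos)          # (error handling omitted)
            after_operand = False
        yield m.group()
        pos = m.end()

print(list(tokenize('a /=4/gi;')))   # ['a', '/=', '4', '/', 'gi', ';']
print(list(tokenize('x = /=4/gi;'))) # ['x', '=', '/=4/gi', ';']

The same six characters /=4/gi come out as four tokens in the first line and as one token in the second, which is exactly why the lexer cannot decide on its own.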
Parsers which are built from a single set of productions are preferred by some programmers (but not the one who is writing this :-) ); they are usually called "scannerless parsers". In a scannerless parser for ECMAScript there would be no lexical feedback because there is no separate lexical analysis.
There really is a breach between the theoretical purity of formal language theory and the practical details of writing a working parser for a real-life programming language. The theoretical models are really useful, and it would be hard to write a parser without knowing something about them. But very few parsers rigidly conform to the model, and that's OK. Similarly, the things which are popularly called "regular expressions" aren't regular at all, in the formal-language sense; some "regular expression" operators aren't even context-free (back-references). So it would be a huge mistake to assume that some theoretical result ("regular expressions can be identified in linear time and constant space") is actually true of a "regular expression" library. I don't think parsing theory is the only branch of computer science which exhibits this dichotomy.
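As a quick illustration of that last point (a sketch using Python's re module): a back-reference lets a "regular expression" recognize the copy language { ww }, which is not even context-free over an alphabet of two or more symbols:

import re

# The pattern below accepts exactly the strings of the form ww.
copy = re.compile(r'^(\w+)\1$')
print(bool(copy.match('abcabc')))  # True:  'abc' + 'abc'
print(bool(copy.match('abcabd')))  # False: no way to split it as ww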
Why not just use a single goal symbol like so:
InputElement ::
...
DivPunctuator
RegularExpressionLiteral
...
and let the parser tell the lexer which production to use (DivPunctuator vs RegExLiteral), rather than which goal symbol to use (InputElementDiv vs InputElementRegExp)?
Note that DivPunctuator and RegExLiteral aren't productions per se, rather they're nonterminals. And in this context, they're right-hand-sides (alternatives) in your proposed production for InputElement. So I'd rephrase your question as: Why not have the syntactic parser tell the lexical parser which of those two alternatives to use? (Or equivalently, which of those two to suppress.)
In the ECMAScript spec, there's a mechanism to accomplish this: grammatical parameters (explained in section 5.1.5).
E.g., you could define the parameter Div, where:
+Div means "a slash should be recognized as a DivPunctuator", and
~Div means "a slash should be recognized as the start of a RegExLiteral".
So then your production would become
InputElement[Div] ::
...
[+Div] DivPunctuator
[~Div] RegularExpressionLiteral
...
But notice that the syntactic parser still has to tell the lexical parser to use either InputElement[+Div] or InputElement[~Div] as the goal symbol, so you arrive back at the spec's current solution, modulo renaming.
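If it helps, the equivalence can be seen in code. In this hypothetical Python sketch (names and patterns invented), a lexical routine with a boolean Div parameter is the same thing, modulo renaming, as a pair of goal-specific routines:

import re

DIV_PUNCTUATOR = re.compile(r'/=|/')
REGEXP_LITERAL = re.compile(r'/(?:[^/\\\n]|\\.)+/[A-Za-z]*')

def lex_input_element(src, pos, div):
    # InputElement[Div]: the [+Div] / [~Div] alternatives, selected by
    # the parameter the caller (the syntactic parser) supplies.
    return (DIV_PUNCTUATOR if div else REGEXP_LITERAL).match(src, pos)

# ...which is the same, modulo renaming, as two separate goal symbols:
def lex_input_element_div(src, pos):
    return lex_input_element(src, pos, True)

def lex_input_element_regexp(src, pos):
    return lex_input_element(src, pos, False)

print(lex_input_element_div('/=4/gi', 0).group())     # '/='
print(lex_input_element_regexp('/=4/gi', 0).group())  # '/=4/gi'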
What are some other languages that use multiple goal symbols in their lexical grammar?
I think most languages don't try to define a single symbol that derives all tokens (or input elements), let alone divide it up into variants like ECMAScript's InputElementFoo, so it might be difficult to find another language with something similar in its specification.
Instead, it's pretty common to simply define rules for the syntax of different kinds of tokens (e.g. Identifier, NumericLiteral) and then reference them from the syntactic productions. So that's kind of like having multiple lexical goal symbols, but not (I would say) in the sense you were asking about.
How would we classify the ECMAScript lexical grammar?
It's basically context-free, plus some extensions.
Related
Yes, I'm one of those insane people who have a parser-generator project. Minimal-LR(1) with operator-precedence was fairly straightforward. GLR support is next, preferably without making a mess of the corner cases around precedence and associativity (P&A).
Suppose you have an R/R conflict between rules with different precedence levels. A deterministic parser can safely choose the (first) rule of highest precedence. A parser designed to handle local ambiguity might not be sure, especially if the involved rules reduce to different non-terminals.
Suppose you have an R/R conflict between rules with and without precedence characteristics. A deterministic parser can reasonably choose the former. If you ask for GLR, do you mean to entertain both, or should the former clearly dominate the latter? Or is this scenario sufficiently weird as to justify rejecting the grammar?
Suppose you have an S/R/R conflict where only some of the participating rules have precedence, and maybe the look-ahead token does or doesn't have precedence. If P&A is all about what to do in front of the lookahead, then a token without precedence should perhaps mean that all options stay viable. But is that really the intended semantic here?
Suppose you have a nonassoc declaration on a terminal, and an S/R/R conflict where only ONE of the participating production rules hits the same non-associative precedence level. Then the other rule is clearly still viable to reduce, but what of the shift? Should we take it? What if we're mid-rule in a manner that doesn't trigger the same non-associativity problem? What if the look-ahead token is higher precedence than the remaining reduce, or the remaining reduce doesn't have precedence? How can we avoid accidentally constructing an invalid parse this way? Is there some trick with the parse-items to construct a shift-state that can't go wrong, or is this kind of thing beyond the scope of GLR parsing?
Also, how should semantic predicates interact with such ugly corner cases?
The simplest-thing-that-might-work is to treat anything involving operator-precedence in the same manner as a deterministic table-generator. But is that the intended semantic? Or perhaps: what kinds of declarations might grammar authors want to exert control over these weird cases?
Traditional yacc-style precedence rules cannot be used to resolve reduce/reduce conflicts.
Yacc/bison "resolve" reduce/reduce conflicts by choosing the first production in the grammar file. This has nothing to do with precedence, and in the grammars where you would want to use a GLR parser, it is almost certainly not correct; you want the GLR parser to pursue all possible paths.
The bison GLR parser requires that ambiguity be resolved; that is, that the grammar be unambiguous. However, it has two "outs": first, it lets you use "dynamic precedence" declarations (which is a completely different concept, although it happens to use the same word); second, if that's not enough, it lets you provide your own resolution function.
Amongst other possibilities, a custom resolution function can accept both reductions, for example by inserting a branch in the AST. There are some theoretical issues with this approach for general parsing, but it works fine with real programming languages, which tend to not be ambiguous, or at least "not very ambiguous".
A typical case for dynamic precedence is implementing a (textual) rule like C++'s §9.8/1:
There is an ambiguity in the grammar involving expression-statements and declarations: An expression-statement with a function-style explicit type conversion (8.2.3) as its leftmost subexpression can be indistinguishable from a declaration where the first declarator starts with a (. In those cases the statement is a declaration.
This rule cannot be expressed by a context-free grammar -- or, at least not in a way which would be readable -- but it is trivially expressible as a dynamic precedence rule.
As its name implies, dynamic precedence is dynamic; it's a rule applied at parse time by the parser. Bison's GLR algorithm only applies these rules if forced to; the parser handles multiple possible reductions normally (by maintaining all of them as possibilities). It is forced to apply dynamic precedence only when both possible reductions in a reduce/reduce conflict reduce to the same non-terminal.
By contrast, the yacc precedence algorithm, which as I mentioned only resolves shift/reduce conflicts, is static: it is compiled at generation time into the parse automaton (in effect, by removing actions from the transition tables), so the parser no longer sees the conflict.
This algorithm has been (justifiably) criticised for a variety of reasons, one of which is the odd behaviour of non-associative declarations in corner cases. Also, precedence rules do not compose well; because they are not scoped, they might end up accidentally applying to productions for which they were not intended. Not infrequently, they facilitate grammar bugs by hiding a conflict which should have been resolved by the grammar writer.
Best practice, therefore, is to avoid corner cases :-) Static precedence should be restricted to its originally-intended use cases: simple operator precedence and, possibly, documenting the "shift preferred" heuristic which resolves the dangling-else ambiguity and certain grouped operator parses (iirc, there's a good example of this in the dragon book).
If you implement dynamic precedence -- and, honestly, there are good reasons not to -- then it should be applied to simple, easily expressed rules like the C++ rule cited above: "if it looks like a declaration, it's a declaration." Even better would be to avoid writing ambiguous grammars; that particular C++ feature leads to the infamous "most vexing parse", which has probably at some point bitten every one of us who has tried writing C++ programs.
To deal with here-documents in shell (e.g., bash), the grammar rule changes the variable need_here_doc via push_heredoc().
| LESS_LESS WORD
{
source.dest = 0;
redir.filename = $2;
$$ = make_redirection (source, r_reading_until, redir, 0);
push_heredoc ($$);
}
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n539
static void
push_heredoc (r)
REDIRECT *r;
{
if (need_here_doc >= HEREDOC_MAX)
{
last_command_exit_value = EX_BADUSAGE;
need_here_doc = 0;
report_syntax_error (_("maximum here-document count exceeded"));
reset_parser ();
exit_shell (last_command_exit_value);
}
redir_stack[need_here_doc++] = r;
}
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n2794
need_here_doc is used in read_token(), which is called by yylex(). This makes the behavior of yylex() non-autonomous.
Is it normal to design a parser that can change how yylex() behaves?
Is it because the shell language is not LALR(1), so there is no way to avoid changing the behavior of yylex() by the grammar actions?
if (need_here_doc)
gather_here_documents ();
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n3285
current_token = read_token (READ);
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n2761
Is it normal to design a parser that can change how yylex() behaves?
Sure. It might not be ideal, but it's extremely common.
The Posix shell syntax is far from the ideal candidate for a flex/bison parser, and about the only thing you can say for the bash implementation using flex and bison is that it demonstrates how flexible those tools can be if pushed to their respective limits. Here-docs are not the only place where "lexical feedback" is necessary.
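To illustrate the here-doc case, here is a toy Python sketch (invented names; bash's real implementation is nothing like this): when the token stream contains <<WORD, the "parser side" records the delimiter, and the "lexer side" must then consume the following lines as string bodies rather than tokenizing them.

def tokenize_with_heredocs(lines):
    lines = iter(lines)
    pending = []                      # plays the role of bash's redir_stack
    for line in lines:
        for word in line.split():
            if word.startswith('<<'):
                pending.append(word[2:])       # "parser" action: push_heredoc
                yield ('HEREDOC_OP', word[2:])
            else:
                yield ('WORD', word)
        while pending:                         # "lexer" consequence: the next
            delim, body = pending.pop(0), []   # lines are not tokenized at all
            for body_line in lines:
                if body_line.rstrip('\n') == delim:
                    break
                body.append(body_line)
            yield ('HEREDOC_BODY', ''.join(body))

src = ['cat <<EOF <<END\n', 'first\n', 'EOF\n', 'second\n', 'END\n']
for tok in tokenize_with_heredocs(src):
    print(tok)
# ('WORD', 'cat'), ('HEREDOC_OP', 'EOF'), ('HEREDOC_OP', 'END'),
# ('HEREDOC_BODY', 'first\n'), ('HEREDOC_BODY', 'second\n')

Note how the two stacked here-docs are gathered, in order, only after the rest of the line has been tokenized, which mirrors the need_here_doc bookkeeping in bash's parse.y.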
But even in more disciplined languages, lexical feedback can be useful. Or its alternative: writing partial parsing logic into the lexical scanner in order for it to know when the parse would require a different set of lexical rules.
Possibly the best-known (or most frequently commented-on) use of lexical feedback is the parsing of C-style cast expressions, which requires the lexer to know whether the foo in (foo) is a typename or not. (This is usually implemented by way of a symbol table shared between the parser and the lexer, but the precise implementation details are tricky.)
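The essence of that arrangement -- often called "the lexer hack" -- fits in a few lines. This is a hypothetical Python sketch, not any real compiler's code:

# Parser and lexer share one table; the parser writes, the lexer reads.
typedef_names = set()

def classify(identifier):
    # Lexer side: the token kind of an identifier depends on parser state.
    return 'TYPE_NAME' if identifier in typedef_names else 'IDENTIFIER'

typedef_names.add('foo')      # parser side: just reduced "typedef int foo;"

print(classify('foo'))        # TYPE_NAME  -> "(foo)x" lexes toward a cast
print(classify('bar'))        # IDENTIFIER -> "(bar)" is a parenthesized expr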
Here are a few other examples, which might be considered relatively benign uses of lexical feedback, although they certainly increase the coupling between lexer and parser.
Python (and Haskell) require the lexical scanner to reformulate leading whitespace into INDENT and DEDENT tokens. But if the line break occurs within parentheses, the whitespace handling is suppressed (including the NEWLINE token itself); a minimal sketch follows this list.
Ecmascript (Javascript) and other languages allow regular expression literals to be written surrounded by /s. But the / could also be a division operator or the first character in a /= mutation operator. The lexical decision depends on the parse context. (This could be guessed by the lexical scanner from the recent token history, which would count as reproducing part of the parsing logic in the lexical scanner.)
Similar to the above, many languages overload < in ways which complicate the logic in the lexical scanner. The use as a template bracket rather than a comparison operator might be dealt with in the scanner -- in C++, for example, it will depend on features like whether the preceding identifier was a template or not -- but that doesn't actually change the lexical context. However, the use of an angle bracket to indicate the start of an X/HTML literal (or template) definitely changes the lexical context. As with the regex example above, it is necessary to know whether a comparison operator would be syntactically valid at that point.
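Here is the promised sketch for the first bullet: a simplified Python rendering (invented and incomplete; tabs, continuation lines and blank-line rules are all omitted) of INDENT/DEDENT synthesis, with the suppression inside parentheses:

def layout_tokens(lines):
    indents, depth = [0], 0
    for line in lines:
        if depth == 0:                     # leading whitespace is significant
            width = len(line) - len(line.lstrip(' '))
            while width < indents[-1]:     # one DEDENT per closed block
                indents.pop()
                yield 'DEDENT'
            if width > indents[-1]:
                indents.append(width)
                yield 'INDENT'
        yield ('TEXT', line.strip())
        depth += line.count('(') - line.count(')')  # naive: ignores strings
        if depth == 0:
            yield 'NEWLINE'                # suppressed inside parentheses
    while indents[-1] > 0:                 # close blocks still open at EOF
        indents.pop()
        yield 'DEDENT'

demo = ['if x:\n', '    f(1,\n', '      2)\n', 'g()\n']
print(list(layout_tokens(demo)))
# [('TEXT', 'if x:'), 'NEWLINE', 'INDENT', ('TEXT', 'f(1,'),
#  ('TEXT', '2)'), 'NEWLINE', 'DEDENT', ('TEXT', 'g()'), 'NEWLINE']

Notice that no NEWLINE (and no indentation check) is emitted between f(1, and 2), because the line break falls inside parentheses.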
Is it because the shell language is not LALR(1), so there is no way to avoid changing the behavior of yylex() by the grammar actions?
The Posix shell syntax is most certainly not LALR(1), or even context-free. But most languages could not be parsed scannerlessly with an LALR(1) parser, and many languages turn out not to have context-free grammars if you take all syntactic considerations into account. (Cf. C-style cast expressions, above.) Perhaps shell is further from the platonic ideal than most. But then, it grew over the years from a kernel intended to be simple to type, rather than formally analysable. (No comment from me about whether this excuse can be extended to Perl, which I don't plan to discuss here.)
What I'd say in general is that languages which embed other languages (regular expressions, HTML fragments, Flex/Bison semantic actions, shell arithmetic expansions, etc., etc.) present challenges for a simplistic parser/scanner model. Despite lots of interesting work and solid experimentation, my sense is that language embedding still lacks a good implementable formal structure. And since most languages do have embedded sublanguages, there is and will continue to be a certain adhockery in their parser implementations. In part, that's what makes this field of study so much fun.
Does there exist a formal definition of the purpose, or at a clear best practice of usage, of lexical analysis (lexer) during/before parsing?
I know that the purpose of a lexer is to transform a stream of characters to a stream of tokens, but can't it happen that in some (context-free) languages the intended notion of a "token" could nonetheless depend on the context and "tokens" could be hard to identify without complete parsing?
There seems to be nothing obviously wrong with having a lexer that transforms every input character into a token and lets the parser do the rest. But would it be acceptable to have a lexer that differentiates, for example, between a "unary minus" and a usual binary minus, instead of leaving this to the parser?
Are there any precise rules to follow when deciding what shall be done by the lexer and what shall be left to the parser?
Does there exist a formal definition of the purpose [of a lexical analyzer]?
No. Lexical analyzers are part of the world of practical programming, for which formal models are useful but not definitive. A program which purports to do something should do that thing, of course, but "lexically analyze my programming language" is not a sufficiently precise requirements statement.
… or a clear best practice of usage
As above, the lexical analyzer should do what it purports to do. It should also not attempt to do anything else. Code duplication should be avoided. Ideally, the code should be verifiable.
These best practices motivate the use of a mature and well-documented scanner framework whose input language doubles as a description of the lexical grammar being analyzed. However, practical considerations based on the idiosyncrasies of particular programming languages normally result in deviations from this ideal.
There seems to be nothing obviously wrong with having a lexer that transforms every input character into a token…
In that case, the lexical analyzer would be redundant; the parser could simply use the input stream as is. This is called "scannerless parsing", and it has its advocates. I'm not one of them, so I won't enter into a discussion of pros and cons. If you're interested, you could start with the Wikipedia article and follow its links. If this style fits your problem domain, go for it.
can't it happen that in some (context-free) languages the intended notion of a "token" could nonetheless depend on the context?
Sure. A classic example is found in EcmaScript regular expression "literals", which need to be lexically analyzed with a completely different scanner. EcmaScript 6 also defines string template literals, which require a separate scanning environment. This could motivate scannerless processing, but it can also be implemented with an LR(1) parser with lexical feedback, in which the reduce action of particular marker non-terminals causes a switch to a different scanner.
But would it be acceptable to have a lexer that differentiates, for example, between a "unary minus" and a usual binary minus, instead of leaving this to the parser?
Anything is acceptable if it works, but that particular example strikes me as not particularly useful. LR (and even LL) expression parsers do not require any aid from the lexical scanner to show the context of a minus sign. (Naïve operator-precedence grammars do require such assistance, but a more carefully thought out op-prec architecture wouldn't. However, the existence of LALR parser generators more or less obviates the need for op-prec parsers.)
Generally speaking, for the lexer to be able to identify syntactic context, it needs to duplicate the analysis being done by the parser, thus violating one of the basic best practices of code development ("don't duplicate functionality"). Nonetheless, it can occasionally be useful, so I wouldn't go so far as to advocate an absolute ban. For example, many parsers for yacc/bison-like production rules compensate for the fact that a naïve grammar is LALR(2) by specially marking ID tokens which are immediately followed by a colon.
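For that last example, a hypothetical Python sketch (token names invented): the lexer peeks one token ahead and marks an ID that is immediately followed by a colon as a distinct token kind, which supplies exactly the extra token of lookahead the naïve grammar was missing:

import re

SCAN = re.compile(r"[A-Za-z_]\w*|[:|;]")

def tokens(src):
    toks = SCAN.findall(src)
    for cur, nxt in zip(toks, toks[1:] + ['$end']):   # one token of peek
        if cur[0].isalpha() or cur[0] == '_':
            # Mark "ID immediately followed by ':'" as its own token kind.
            yield ('ID_DEFINING' if nxt == ':' else 'ID', cur)
        else:
            yield (cur, cur)

print(list(tokens("expr : expr PLUS term | term ; term : NUM ;")))
# ID_DEFINING appears exactly where a new rule begins: at 'expr' and 'term'.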
Another example, again drawn from EcmaScript, is efficient handling of automatic semicolon insertion (ASI), which can be done using a lookup table whose keys are 2-tuples of consecutive tokens. Similarly, Python's whitespace-aware syntax is conveniently handled by assistance from the lexical scanner, which must be able to understand when indentation is relevant (not inside parentheses or braces, for example).
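A sketch of the pair-table idea for ASI, in Python with a tiny invented table (the real ASI rules are considerably more involved; this only shows the mechanism):

# Keyed by (previous token kind, next token kind); consulted only when a
# line break separates the two tokens. The entries here are illustrative.
ASI_AT_NEWLINE = {
    ('ID', 'ID'): True,        # a \n b      -> insert ';'
    ('ID', 'PLUS'): False,     # a \n + b    -> no insertion
    ('RETURN', 'ID'): True,    # return \n x -> restricted production
}

def needs_semicolon(prev_kind, next_kind):
    return ASI_AT_NEWLINE.get((prev_kind, next_kind), False)

print(needs_semicolon('ID', 'ID'))    # True
print(needs_semicolon('ID', 'PLUS'))  # False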
I am writing a parser in Bison for a language which has the following constructs, among others:
self-dispatch: [identifier arguments]
dispatch: [expression . identifier arguments]
string slicing: expression[expression,expression] - similar to Python.
arguments is a comma-separated list of expressions, which can be empty too. All of the above are expressions on their own, too.
My problem is that I am not sure how to parse both [method [other_method]] and [someString[idx1, idx2].toInt] or if it is possible to do this at all with an LALR(1) parser.
To be more precise, let's take the following example: [a[b]] (call method a with the result of method b). When it reaches the state [a . [b]] (the lookahead is the second [), it won't know whether to reduce a (which has already been reduced to identifier) to expression because something like a[b,c] might follow (which could itself be reduced to expression and continue with the second construct from above) or to keep it identifier (and shift it) because a list of arguments will follow (such as [b] in this case).
Is this shift/reduce conflict due to the way I expressed this grammar or is it not possible to parse all of these constructs with an LALR(1) parser?
And, a more general question, how can one prove that a language is/is not parsable by a particular type of parser?
Assuming your grammar is unambiguous (which the part you describe appears to be) then your best bet is to specify a %glr-parser. Since in most cases, the correct parse will be forced after only a few tokens, the overhead should not be noticeable, and the advantage is that you do not need to complicate either the grammar or the construction of the AST.
The one downside is that bison cannot verify that the grammar is unambiguous -- in general, this is not possible -- and it is not easy to prove. If it turns out that some input is ambiguous, the GLR parser will generate an error, so a good test suite is important.
Proving that the language is not LR(1) would be tricky, and I suspect that it would be impossible, because the language probably is recognizable with an LALR(1) parser. (Impossible to tell without seeing the entire grammar, though.) But parsing (outside of CS theory) needs to create a correct parse tree in order to be useful, and the sort of modifications required to produce an LR grammar will also modify the AST, requiring a post-parse fixup. The difficulty in creating a correct AST springs from the difference in precedence between
a[b[c],d]
and
[a[b[c],d]]
In the first (subset) case, b binds to its argument list [c] and the comma has lower precedence; in the end, b[c] and d are sibling children of the slice. In the second case (method invocation), the comma is part of the argument list and binds more tightly than the method application; b, [c] and d are siblings in a method application. But you cannot decide the shape of the parse tree until you have seen an arbitrarily long amount of input (since d could be any expression).
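To visualize this, here is one plausible rendering of the two shapes as nested Python tuples (purely illustrative; the node names are invented and the bracketed fragments are left as strings):

# a[b[c],d]   -- slice: b binds to [c]; b[c] and d are siblings of the slice
subset   = ('slice', 'a', ('method', 'b', '[c]'), 'd')

# [a[b[c],d]] -- dispatch: the comma builds the argument list first, so
#                b, [c] and d are siblings in one method application,
#                which is in turn the argument of the dispatch of a
dispatch = ('method', 'a', ('method', 'b', '[c]', 'd'))

print(subset)
print(dispatch)

The two inputs share the prefix [a[b[c], yet d ends up grouped differently in each tree, which is why no fixed amount of lookahead settles the question.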
That's all a bit hand-wavey, since "precedence" is not formally definable and there are hacks which could make it possible to adjust the tree. Since the LR property is not really composable, it is not really possible to provide a more rigorous analysis. But regardless, the GLR parser is likely to be the simplest and most robust solution.
One small point for future reference: CFGs are not just a programming tool; they also serve the purpose of clearly communicating the grammar in question. Normally, if you want to describe your language, you are better off using a clear CFG than trying to describe it informally. Of course, meaningful non-terminal names will help, and a few examples never hurt, but the essence of the grammar is in the formal description, and omitting it makes it harder for others to "be helpful".
I wish to understand how a parser works. I have learnt about LL, LR(0) and LR(1) parsing, and how to build NFAs, DFAs, parse tables, etc.
Now the problem is: I know that in some situations a lexer should extract tokens only on the parser's demand, because it's not possible to extract all the tokens in one separate pass. I don't exactly understand this kind of situation, so I'm open to any explanation about it.
The question now is: how should a lexer do its job? Should it base its recognition on the current "context", the current non-terminals supposed to be parsed? Or is it something totally different?
What about GLR parsing: is it another case where a lexer could try different terminals, or is that purely a syntactic business?
I would also like to understand what this is related to. For example, is it related to the kind of parsing technique (LL, LR, etc.), or only to the grammar?
Thanks a lot
The simple answer is that lexeme extraction has to be done in context. What one might consider to be lexemes may vary considerably across different parts of the language. For example, in COBOL, the data declaration section has 'PIC' strings and location-sensitive level numbers 01-99 that do not appear in the procedure section.
The lexer thus has to somehow know what part of the language is being processed, in order to know which lexemes to collect. This is often handled by having lexing states, each of which processes some subset of the entire language's set of lexemes (often with considerable overlap between the subsets; e.g., identifiers tend to be pretty similar in my experience). These states form a high-level finite state machine, with transitions between them when phase-changing lexemes are encountered, e.g., the keywords that indicate entry into the data declaration or procedure section of a COBOL program. Modern languages like Java and C# minimize the need for this, but most other languages I've encountered really need this kind of help in the lexer.
So-called "scannerless" parsers (you are thinking "GLR") work by getting rid of the lexer entirely; now there's no need for the lexer to produce lexemes, and no need to track lexical states :-} Such parsers work by simply writing the grammar down to the level of individual characters; typically you find grammar rules that are the exact equivalent of what you'd write as a lexeme description. The question is then: why doesn't such a parser get confused as to which "lexeme" to produce? This is where the GLR part is useful. GLR parsers are happy to process many possible interpretations of the input ("locally ambiguous parses") as long as the choice gets resolved eventually. So what really happens in the case of "ambiguous tokens" is that the grammar rules for both "tokens" produce nonterminals for their respective "lexemes", and the GLR parser continues to parse until one of the parsing paths dies out or the parser terminates with an ambiguous parse.
My company builds lots of parsers for languages. We use GLR parsers because they are very nice for handling complex languages; write the context-free grammar and you have a parser. We use lexical-state based lexeme extractors with the usual regular-expression specification of lexemes and lexical-state-transitions triggered by certain lexemes. We could arguably build scannerless GLR parsers (by making our lexers produce single characters as tokens :) but we find the efficiency of the state-based lexers to be worth the extra trouble.
As practical extensions, our lexers actually use push-down stack automata for the high-level state machine rather than mere finite state machines. This helps when one has a high-level FSA whose substates are identical, and where it is helpful for the lexer to manage nested structures (e.g., matching parentheses) to manage a mode switch (e.g., when the parentheses have all been matched).
A unique feature of our lexers: we also do a tiny bit of what scannerless parsers do. Sometimes when a keyword is recognized, our lexers will inject both a keyword and an identifier into the parser (this simulates a scannerless parser with a grammar rule for each). The parser will of course only accept what it wants "in context" and simply throw away the wrong alternative. This gives us an easy way to handle "keywords in context, otherwise interpreted as identifiers", which occurs in many, many languages.
Ideally, the tokens themselves should be unambiguous; you should always be able to tokenise an input stream without the parser doing any additional work.
This isn't always so simple, so you have some tools to help you out:
Start conditions
A lexer action can change the scanner's start condition, meaning it can activate different sets of rules.
A typical example of this is string literal lexing; when you lex a string literal, the rules for tokenising usually become completely different from those of the language containing it. This is an example of an exclusive start condition.
You can separate ambiguous lexings if you can identify two separate start conditions for them and ensure the lexer enters them appropriately, given some preceding context.
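A hand-written equivalent of start conditions might look like this hypothetical Python sketch (the rules are invented), where entering a string literal swaps in a different rule set, much like flex's BEGIN:

import re

CODE_RULES = [
    ('STRING_START', re.compile(r'"')),
    ('ID',           re.compile(r'[A-Za-z_]\w*')),
    ('WS',           re.compile(r'\s+')),
]
STRING_RULES = [
    ('STRING_END',   re.compile(r'"')),
    ('ESCAPE',       re.compile(r'\\.')),
    ('CHARS',        re.compile(r'[^"\\]+')),
]

def tokenize(src):
    condition, pos = CODE_RULES, 0            # start in the INITIAL condition
    while pos < len(src):
        for kind, pat in condition:
            if m := pat.match(src, pos):
                if kind == 'STRING_START':
                    condition = STRING_RULES  # like flex's BEGIN(STRING)
                elif kind == 'STRING_END':
                    condition = CODE_RULES    # back to BEGIN(INITIAL)
                if kind != 'WS':
                    yield (kind, m.group())
                pos = m.end()
                break
        else:
            raise SyntaxError(f'unexpected character at {pos}')

print(list(tokenize('say "hi\\n" now')))
# [('ID', 'say'), ('STRING_START', '"'), ('CHARS', 'hi'), ('ESCAPE', '\\n'),
#  ('STRING_END', '"'), ('ID', 'now')]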
Lexical tie-ins
This is a fancy name for carrying state in the lexer and modifying it in the parser. If a certain action in your parser gets executed, it modifies some state in the lexer, which results in lexer actions returning different tokens. This should be avoided where possible, because it makes both your lexer and your parser more difficult to reason about, and makes some things (like GLR parsers) impossible.
The upside is that you can do things that would otherwise require significant grammar changes with relatively minor impact on the code; you can use information from the parse to influence the behaviour of the lexer, which in turn can go some way toward solving what you see as an "ambiguous" grammar.
Logic, reasoning
It's probable that your input can be lexed in a single pass, and the above tools should come second to thinking about how you should be tokenising the input and trying to convert that into the language of lexical analysis. :)
The fact is, your input is composed of tokens (whether you like it or not!) and all you need to do is find a way to make a program understand the rules you already know.