Can't identify the invalid identifier in my FLEX code - flex-lexer

I am trying to build my own compiler which outputs the type of input the user gives, for example, abcd is an identifier, and 1242 is an integer. I have implemented it as below:
testProg.l
%{
#include <stdio.h>   /* for printf */
#define IDENTIFIER 10
#define INTEGER 11
%}
IDENTIFIER [a-zA-Z_][a-zA-Z0-9_]*
INTEGER [1-9][0-9]*|"0"
%%
{IDENTIFIER} { return IDENTIFIER; }
{INTEGER} { return INTEGER; }
%%
int main() {
    int token;
    while ((token = yylex())) {
        if (token == IDENTIFIER) { printf("IDENTIFIER"); }
        else if (token == INTEGER) { printf("INTEGER"); }
        else { printf("INVALID"); }
    }
}
This works perfectly when I run the following commands:
flex testProg.l
cc lex.yy.c -lfl
./a.out
Sample working input
sample
IDENTIFIER
1993
INTEGER
The problem arises when I try to input an invalid token, for example 12abc. This is neither an integer nor an identifier and should output "INVALID" but it outputs:
12abc
INTEGER
IDENTIFIER
What happened is that 12 and abc are taken as separate tokens instead of one. How can I avoid this?

Many languages use lexical analysers which are perfectly happy to let 12abc be an integer followed by an identifier. Why not? If that means something in the language, then that's probably what the user meant. If it doesn't mean anything, it will trigger a syntax error, so the user will be informed.
But, OK, you want to recognise that as an error. In that case you need to recognise the erroneous input as an error, and the first step is to recognise it as a token. That's easy if you remember flex's match precedences:
[[:alpha:]_][[:alnum:]_]* { return IDENTIFIER; }
[1-9][[:digit:]]*|0 { return NUMBER; }
[[:alnum:]_]+ { return BADTOKEN; }
Note that I replaced your macros with actual patterns, using named character classes for readability, and removed the redundant quotes on "0".
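To see those rules in context, here is a complete scanner they could slot into. This is a minimal sketch rather than the answer's exact code: the BADTOKEN value, the whitespace rule and the driver are assumptions added so the file builds standalone with the same flex/cc commands as in the question.
%{
#include <stdio.h>
#define IDENTIFIER 10   /* token codes as in the question */
#define NUMBER     11
#define BADTOKEN   12   /* assumed code for the error token */
%}
%%
[[:alpha:]_][[:alnum:]_]*   { return IDENTIFIER; }
[1-9][[:digit:]]*|0         { return NUMBER; }
[[:alnum:]_]+               { return BADTOKEN; }
[ \t\n]+                    { /* skip whitespace between tokens */ }
.                           { return BADTOKEN; }
%%
int main() {
    int token;
    while ((token = yylex())) {
        if (token == IDENTIFIER)    printf("IDENTIFIER\n");
        else if (token == NUMBER)   printf("NUMBER\n");
        else                        printf("INVALID\n");
    }
}
For 12abc the third rule matches all five characters, which beats the two-character match from the NUMBER rule, so the scanner reports INVALID exactly once.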

Flex parses 12abc as two separate tokens because you didn't tell it that it shouldn't.
Lex derivatives, like Flex, work by one very simple but effective algorithm:
They start at the position where the last token ended (or at the beginning of the text) and try to find a rule that matches the most characters from that point. (If there are multiple rules that match the same number of characters, the one defined first in the "*.l" file is chosen.)
That's it. Notice there is nothing about it having to match a whole word.
That's actually a good thing. It is why in most programming languages you don't need to explicitly separate tokens. You can write things like (2+30L)/2 and the lexer for that language will figure out where each token ends, without additional hints like whitespaces. (The tokens would be (, 2, +, 30, L, ), / and 2.)
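To make that concrete, a rule set along these lines (a sketch; the token names are invented for the example) splits (2+30L)/2 with no whitespace needed:
[0-9]+      { return NUMBER; }
[A-Za-z]+   { return SUFFIX; }
[-+*/()]    { return yytext[0]; }
At 30L, the first rule can match at most 30 because L cannot extend a run of digits, so the lexer emits 30 and starts the longest-match search again at L.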
If you want to disable this fancy mechanism for the specific case of putting numbers and identifiers together, you will need to create a rule that explicitly forbids it, e.g.:
{IDENTIFIER} { return IDENTIFIER; }
{INTEGER} { return INTEGER; }
[0-9A-Za-z_]+ { return ERROR; }
Notice that this new rule also matches valid identifiers and integers. However, it won't be used for them, because it comes below them in the rules list.
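Assuming an ERROR token code is defined alongside the other two (and the driver's else branch prints INVALID as before), a test run would now look like this:
12abc
INVALID
sample
IDENTIFIER
The catch-all rule matches all five characters of 12abc, which is longer than the two characters {INTEGER} could match, so the longest-match rule now works in your favour.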

Flex scanning, differentiating between string (with single spaces) and padding (more than one space)

I am having trouble getting flex to scan lines that look something like this:
DESCRIPTION This is the device description
I would like the line to be scanned such that DESCRIPTION is one token and "This is the device description" is the other.
I have been playing endlessly with my rules but cannot seem to get it to work.
From the documentation I think I want to implement a rule using
r/s
an r but only if it is followed by an s
where spaces are only accepted if they are followed by something that is not a whitespace character. I have no idea how to write this rule with flex's syntax. In my mind the rule should be something like
[a-zA-Z](" "/[a-zA-Z0-9]|[a-zA-Z0-9])* return IDENTIFIER;
But this is invalid.
I can get the lines to chop up each word, but I cannot get the rules to differentiate between one space and more than one space. Halp.
This is not really a good match for flex, since the recognition of tokens is context-dependent. You can achieve context-dependent scanning using start conditions but excessive use of start conditions is often an indication that some other scanning mechanism would be better.
Regardless of how you do it, the key is figuring out exactly how to decide on the token division. Consider the following four lines, for example:
DEVICE This is the device
MODE This is the mode
DESCRIPTION This is the device description
UNDOCUMENTED FIELD
Of course, it is possible that the corner cases represented by the third and fourth lines never show up in any of your inputs.
If the first token cannot include whitespace, then the problem is relatively simple, although you still need a start condition (and I'm going to assume you read the documentation linked above):
%x WHITE WORDS
%%
  /* Possibly should be [[:alpha:]] instead of [[:upper:]] */
[[:upper:]]+    { /* copy yytext */; BEGIN(WHITE); return KEYWORD; }
  /* Handle other possible line beginnings */
<WHITE>\n       { /* Blank descriptive text */; BEGIN(INITIAL); }
<WHITE>[ \t]+   { BEGIN(WORDS); }
<WHITE>.        { /* Something not correct in this line */; ... }
<WORDS>.+       { /* copy yytext */; BEGIN(INITIAL); return DESCRIPTION; }
<WORDS>\n       { BEGIN(INITIAL); }
If there might be whitespace in the first token but never two spaces in a row, you could replace the first pattern above with:
[[:alpha:]]+( [[:alpha:]]+)*
which will match any sequence of words (consisting only of letters) where there is exactly one space between successive words. Like the original pattern above, this will end on the first non-alphabetic character found. That error will be detected by the rules in <WHITE>, because any non-whitespace character encountered when that start condition becomes active will be handled by the start condition's default rule (the <WHITE>. rule).
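If you want to experiment with the fragments above, they can be assembled into a standalone scanner along the following lines. This is only a sketch: the token codes, the puts driver and the decision to silently resynchronize on a bad line are assumptions, not part of the original answer.
%{
#include <stdio.h>
enum { KEYWORD = 258, DESCRIPTION = 259 };  /* assumed token codes */
%}
%option noyywrap
%x WHITE WORDS
%%
[[:upper:]]+    { BEGIN(WHITE); return KEYWORD; }
\n              { /* blank line: nothing to do */ }
<WHITE>\n       { BEGIN(INITIAL); /* blank descriptive text */ }
<WHITE>[ \t]+   { BEGIN(WORDS); }
<WHITE>.        { BEGIN(INITIAL); /* malformed line; real code should report it */ }
<WORDS>.+       { BEGIN(INITIAL); return DESCRIPTION; }
<WORDS>\n       { BEGIN(INITIAL); }
%%
int main(void) {
    int t;
    while ((t = yylex()))
        puts(t == KEYWORD ? "KEYWORD" : "DESCRIPTION");
    return 0;
}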
My opinion is that you are backing the wrong horse here. lex (flex) should only be used for lexical analysis, and yacc (or bison) for syntactic analysis. Saying that a single character is not a separator but several of them are is not a job for a lexer.
Instead, lex should only report words and padding, and yacc should later re-combine words that are not separated by padding elements.
The lex part would be as simple as:
[[:alnum:]_]+ {
    // printf("WORD: >%s<\n", yytext); // for debugging
    return WORD;
}
[[:blank:]]{2,} {
    // printf("PADDING: >%s<\n", yytext);
    return PADDING;
}
and the yacc part would contain:
elt: PADDING
   | ident
   ;
ident: WORD
   | ident WORD
   ;
Actions are omitted here because they depend too much on your actual processing.
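To try the lex half on its own before wiring in yacc, a small test harness can stand in for the generated parser. This is a sketch with a few assumptions flagged: the token codes are placeholders for what yacc would normally emit into y.tab.h, and rules for single blanks and newlines are added so they are consumed rather than echoed.
%{
#include <stdio.h>
enum { WORD = 258, PADDING = 259 };  /* placeholders for y.tab.h values */
%}
%option noyywrap
%%
[[:alnum:]_]+   { return WORD; }
[[:blank:]]{2,} { return PADDING; }
[[:blank:]]     { /* a single blank just separates WORDs within one ident */ }
\n              { /* end of line */ }
%%
int main(void) {
    int t;
    while ((t = yylex()))
        puts(t == WORD ? "WORD" : "PADDING");
    return 0;
}
Running it on a line from the question prints WORD for each word and PADDING wherever two or more blanks appear, which is exactly the token stream the ident rule in the grammar relies on.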

Finding error in JavaCC parser/lexer code

I am writing a JavaCC parser/lexer which is meant to recognise all input strings in the following language L:
A string from L consists of several blocks separated by space characters.
At least one block must be present (i.e., no input consisting only of some number of white spaces is allowed).
A block is an odd-length sequence of lowercase letters (a-z).
No spaces are allowed before the first block or after the last one.
The number of spaces between blocks must be odd.
The I/O specifications include the following specification:
If the input does represent a string from L, then the word YES must be printed out to System.out, ending with the EOL character.
If the input is not in L, then only a single line with the word NO needs to be printed out to System.out, also ending with the EOL character.
In addition, a brief error message should be printed out on System.err explaining the reason why the input is not in L.
Issue:
This is my current code:
PARSER_BEGIN(Assignment)
/** A parser which determines if the user's input belongs to the language L. */
public class Assignment {
    public static void main(String[] args) {
        try {
            Assignment parser = new Assignment(System.in);
            parser.Input();
            if (parser.Input()) {
                System.out.println("YES"); // If the user's input belongs to L, print YES.
            } else if (!(parser.Input())) {
                System.out.println("NO");
                System.out.println("Empty input");
            }
        } catch (ParseException e) {
            System.out.println("NO"); // If the user's input does not belong to L, print NO.
        }
    }
}
PARSER_END(Assignment)
PARSER_END(Assignment)
/** A token which matches any lowercase letter of the English alphabet. */
TOKEN :
{
    < ID: (["a"-"z"]) >
}
/** A token which matches a single white space. */
TOKEN :
{
    < WHITESPACE: " " >
}
/** This production is the basis for the construction of strings which belong to language L. */
boolean Input() :
{}
{
    <ID>(<ID><ID>)* ((<WHITESPACE>(<WHITESPACE><WHITESPACE>)*)<ID>(<ID><ID>)*)* ("\n"|"\r") <EOF>
    {
        System.out.println("ABOUT TO RETURN TRUE");
        return true;
    }
|
    {
        System.out.println("ABOUT TO RETURN FALSE");
        return false;
    }
}
The issue that I am having is as follows:
I am trying to write code which will ensure that:
If the user's input is empty, then the text NO Empty input will be printed out.
If there is a parsing error because the input does not follow the description of L above, then only the text NO will be printed out.
At the moment, when I input the string "jjj jjj jjj", which, by definition, is in L (and I follow this with a carriage return and an EOF [CTRL + D]), the text NO Empty input is printed out.
I did not expect this to happen.
In an attempt to resolve the issue I wrote the ...TRUE and ...FALSE print statements in my production (see code above).
Interestingly enough, I found that when I inputted the same string of js, the terminal printed out the ...TRUE statement once, immediately followed by two occurrences of the ...FALSE statement.
Then the text NO Empty input was printed out, as before.
I have also used Google to try to find out if I am incorrectly using the OR symbol | in my production Input(), or if I am not using the return keyword properly. However, this has not helped.
Could I please have hint(s) for resolving this issue?
You're calling the Input method three times. The first time it will read from stdin until it reaches the end of the stream. This will successfully parse the input and return true. The other two times, the stream will be empty, so it will fail and return false.
You shouldn't call a rule multiple times unless you actually want it to be applied multiple times (which only makes sense if the rule only consumes part of the input rather than going until the end of the stream). Instead when you need the result in multiple places, just call the method once and store the result in a variable.
Or in your case you could just call it once in the if and no variable would even be needed:
Assignment parser = new Assignment(System.in);
if (parser.Input()) {
    System.out.println("YES"); // If the user's input belongs to L, print YES.
} else {
    System.out.println("NO");
    System.out.println("Empty input");
}
When the input is jjj jjj jjj followed by a newline or carriage return (but not both), your main method invokes parser.Input() three times.
The first time, your parser consumes all the input and returns true.
The second and third times, all the input having already been consumed, the parser returns false.
Once the input is consumed, the lexer will just keep returning <EOF> tokens.

Is it legitimate for a tokenizer to have a stack?

I have designed a new language for which I want to write a reasonable lexer and parser.
For the sake of brevity, I have reduced this language to a minimum so that my questions are still open.
The language has implicit and explicit strings, arrays and objects. An implicit string is just a sequence of characters that does not contain <, {, [ or ]. An explicit string looks like <salt<text>salt> where salt is an arbitrary identifier (i.e. [a-zA-Z][a-zA-Z0-9]*) and text is an arbitrary sequence of characters that does not contain the salt.
An array starts with [, followed by objects and/or strings and ends with ].
All characters within an array that don't belong to a nested array, object or explicit string belong to an implicit string, and the length of each implicit string is maximal and greater than 0.
An object starts with { and ends with } and consists of properties. A property starts with an identifier, followed by a colon, then optional whitespaces and then either an explicit string, array or object.
So the string [ name:test <xml<<html>[]</html>>xml> {name:<b<test>b>}<b<bla>b> ] represents an array with 6 items: " name:test ", "<html>[]</html>", " ", { name: "test" }, "bla" and " " (the object is notated in json).
As one can see, this language is not context free due to the explicit strings (which I don't want to give up). However, the syntax tree is unambiguous.
So my question is: is a property a token that may be returned by a tokenizer? Or should the tokenizer return T_identifier, T_colon when it reads an object property?
The real language allows even prefixes in the identifier of a property, e.g. ns/name:<a<test>a> where ns is the prefix for a namespace.
Should the tokenizer return T_property_prefix("ns"), T_property_prefix_separator, T_property_name("name"), T_property_colon or just T_property("ns/name") or even T_identifier("ns"), T_slash, T_identifier("name"), T_colon?
If the tokenizer should recognize properties (which would be useful for syntax highlighters), it must have a stack, because name: is not a property if it is inside an array. To decide whether bla: in [{foo:{bar:[test:baz]} bla:{}}] is a property or just an implicit string, the tokenizer must track when it enters and leaves an object or array.
Thus, the tokenizer would not be a finite state machine any more.
Or does it make sense to have two tokenizers: a first one that separates whitespace from alphanumeric character sequences and special characters like : or [, and a second one that uses the first to build more semantic tokens? The parser could then operate on top of the second tokenizer.
In any case, the tokenizer needs unbounded lookahead to see where an explicit string ends. Or should the detection of the end of an explicit string happen inside the parser?
Or should I use a parser generator for my undertaking? Since my language is not context free, I don't think there is an appropriate parser generator.
Thanks in advance for your answers!
flex can be requested to provide a context stack, and many flex scanners use this feature. So, while it may not fit with a purist view of how a scanner scans, it is a perfectly acceptable and supported feature. See this chapter of the flex manual for details on how to have different lexical contexts (called "start conditions"); at the very end is a brief description of the context stack. (Don't miss the sentence which notes that you need %option stack to enable the stack.) [See Note 1]
Slightly trickier is the requirement to match strings with variable end markers. flex does not have any variable match feature, but it does allow you to read one character at a time from the scanner input, using the function input(). That's sufficient for your language (at least as described).
Here's a rough outline of a possible scanner:
%option stack
%x SC_OBJECT
%%
  /* initial/array context only */
[^][{<]+                    yylval = strdup(yytext); return STRING;
  /* object context only */
<SC_OBJECT>{
  [}]                       yy_pop_state(); return '}';
  [[:alpha:]][[:alnum:]]*   yylval = strdup(yytext); return ID;
  [:/]                      return yytext[0];
  [[:space:]]+              /* Ignore whitespace */
}
  /* either context */
<*>{
  [][]                      return yytext[0]; /* char class with [] */
  [{]                       yy_push_state(SC_OBJECT); return '{';
  "<"[[:alpha:]][[:alnum:]]*"<"   {
      /* We need to save a copy of the salt because yytext could
       * be invalidated by input().
       */
      char* salt = strdup(yytext);
      char* saltend = salt + yyleng;
      char* match = salt;
      /* The string accumulator code is *not* intended
       * to be a model for how to write string accumulators.
       */
      yylval = NULL;
      size_t length = 0;
      /* change salt to what we're looking for */
      *salt = *(saltend - 1) = '>';
      while (match != saltend) {
          int ch = input();
          if (ch == EOF) {
              yyerror("Unexpected EOF");
              /* free the temps and do something */
          }
          if (ch == *match) ++match;
          else if (ch == '>') match = salt + 1;
          else match = salt;
          /* Don't do this in real code */
          yylval = realloc(yylval, ++length);
          yylval[length - 1] = ch;
      }
      /* Get rid of the terminator */
      yylval[length - yyleng] = 0;
      free(salt);
      return STRING;
  }
  .                         yyerror("Invalid character in object");
}
I didn't test that thoroughly, but here is what it looks like with your example input:
[ name:test <xml<<html>[]</html>>xml> {name:<b<test>b>}<b<bla>b> ]
Token: [
Token: STRING: -- name:test --
Token: STRING: --<html>[]</html>--
Token: STRING: -- --
Token: {
Token: ID: --name--
Token: :
Token: STRING: --test--
Token: }
Token: STRING: --bla--
Token: STRING: -- --
Token: ]
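The fragment above assumes that yylval, the token codes and yyerror come from an accompanying parser. For a standalone test that prints output in the form just shown, declarations and a driver along these lines will do; all of the names and values here are assumptions filled in for the test, not part of the scanner proper.
%{
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
enum { STRING = 256, ID };        /* assumed token codes */
char* yylval;                     /* the actions store C strings here */
void yyerror(const char* msg) { fprintf(stderr, "%s\n", msg); }
%}
%option stack noyywrap
%x SC_OBJECT
%%
  /* ... all of the rules shown above go here, unchanged ... */
%%
int main(void) {
    int t;
    while ((t = yylex())) {
        if (t == STRING)      printf("Token: STRING: --%s--\n", yylval);
        else if (t == ID)     printf("Token: ID: --%s--\n", yylval);
        else                  printf("Token: %c\n", t);
    }
    return 0;
}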
Notes
In your case, unless you wanted to avoid having a parser, you don't actually need a stack since the only thing that needs to be pushed onto the stack is an object context, and a stack with only one possible value can be replaced with a counter.
Consequently, you could just remove the %option stack and define a counter at the top of the scan. Instead of pushing the start condition, you increment the counter and set the start condition; instead of popping, you decrement the counter and reset the start condition if it drops to 0.
%%
  /* Indented text before the first rule is inserted at the top of yylex */
  int object_count = 0;
<*>[{]          ++object_count; BEGIN(SC_OBJECT); return '{';
<SC_OBJECT>[}]  if (!--object_count) BEGIN(INITIAL); return '}';
Reading the input one character at a time is not the most efficient. Since, in your case, a string terminator must start with >, it would probably be better to define a separate "explicit string" context in which you recognize [^>]+ and [>]. The second of these would do the character-at-a-time match, as in the code above, but would terminate instead of looping if it found a non-matching character other than >. However, the simple code presented may turn out to be fast enough, and in any case it was just intended to be good enough for a test run.
I think the traditional way to parse your language would be to have the tokenizer return T_identifier("ns"), T_slash, T_identifier("name"), T_colon for ns/name:
Anyway, I can see three reasonable ways you could implement support for your language:
Use lex/flex and yacc/bison. The tokenizers generated by lex/flex do not have a stack, so you should be using T_identifier and not T_context_specific_type. I haven't tried this approach, so I can't say definitively whether your language can be parsed by lex/flex and yacc/bison; my suggestion is to try it and see whether it works. You may find information about the lexer hack useful: http://en.wikipedia.org/wiki/The_lexer_hack
Implement a hand-built recursive descent parser. Note that this can be easily built without separate lexer/parser stages. So, if the lexemes depend on context it is easy to handle when using this approach.
Implement your own parser generator which turns lexemes on and off based on the context of the parser. So, the lexer and the parser would be integrated together using this approach.
I once worked for a major network security vendor where deep packet inspection was performed using approach (3), i.e. we had a custom parser generator. The reason for this is that approach (1) doesn't work for two reasons: firstly, data can't be fed to lex/flex and yacc/bison incrementally, and secondly, HTTP can't be parsed using lex/flex and yacc/bison because the meaning of the string "HTTP" depends on its location, i.e. it could be a header value or the protocol specifier. Approach (2) didn't work because data can't be fed to recursive descent parsers incrementally either.
I should add that if you want to have meaningful error messages, a recursive descent parser approach is heavily recommended. My understanding is that the current version of gcc uses a hand-built recursive descent parser.

Jison Lexer - Detect Certain Keyword as an Identifier at Certain Times

"end" { return 'END'; }
...
0[xX][0-9a-fA-F]+ { return 'NUMBER'; }
[A-Za-z_$][A-Za-z0-9_$]* { return 'IDENT'; }
...
Call
    : IDENT ArgumentList
        {{ $$ = ['CallExpr', $1, $2]; }}
    | IDENT
        {{ $$ = ['CallExprNoArgs', $1]; }}
    ;
CallArray
    : CallElement
        {{ $$ = ['CallArray', $1]; }}
    ;
CallElement
    : CallElement "." Call
        {{ $$ = ['CallElement', $1, $3]; }}
    | Call
    ;
Hello! So, in my grammar I want "res.end();" to not detect end as a keyword, but as an ident. I've been thinking for a while about this one but couldn't solve it. Does anyone have any ideas? Thank you!
edit: It's a C-like programming language.
There's not quite enough information in the question to justify the assumptions I'm making here, so this answer may be inexact.
Let's suppose we have a somewhat Lua-like language in which a.b is syntactic sugar for a["b"]. Furthermore, since the . must be followed by a lexical identifier -- in other words, it is never followed by a syntactic keyword -- we'd like to inhibit keyword recognition in this context.
That's a pretty simple rule. It's simple enough that the lexer could implement it without any semantic information at all; all that it says is that the token which follows a . must be an identifier. In this context, keywords should be treated as identifiers, and anything else other than an identifier is an error.
We can do this with start conditions. Specifically, we define a start condition which is only used after a . token:
%x selector
%%
/* White space and comment rules need to explicitly include
 * the selector condition.
 */
<INITIAL,selector>\s+     ;
/* Other rules, including keywords, are unmodified. */
"end"                     return "END";
/* The dot rule triggers a new start condition. */
"."                       this.begin("selector"); return ".";
/* Outside of the start condition, identifiers don't change state. */
[A-Za-z_]\w*              yylval = yytext; return "ID";
/* Only identifiers are valid in this start condition, and if found
 * the start condition is changed back. Anything else is an error.
 */
<selector>[A-Za-z_]\w*    yylval = yytext; this.popState(); return "ID";
<selector>.               parse_error("Expecting identifier");
Modify your parser, so it always knows what it is expecting to read next (that will be some set of tokens, you can compute this using the notion of First(x) for x being any nonterminal).
When lexing, have the lexer ask the parser what set of tokens it expects next.
Your keyword recognizer for 'end' asks the parser, and it either says "expecting 'end'", at which point the lexer simply hands over the 'end' lexeme, or it says "expecting ID", at which point it hands the parser an ID with name text "end".
This may or may not be convenient to get your parser to do. But you need something like this.
We use a GLR parser; our parser accepts multiple tokens in the same place. Our solution is to generate both the 'end' keyword and the identifier with text "end" and shove them both into the GLR parser. It can handle local ambiguity; the multiple parses caused by this proceed until the parser with the wrong assumption encounters a syntax error, and then it just vanishes, by fiat. The last parser standing is the one with the right set of assumptions. This scheme is somewhat like the first one, except that we hand the parser the choices and it decides rather than making the lexer decide.
You might be able to send your parser a "two-interpretation" lexeme, e.g., a keyword-in-context lexeme, which in essence claims it is both a keyword and/or an identifier. With a single token of lookahead internally, the parser can likely decide easily and restamp the lexeme. Not as general as the GLR solution, but it probably works in a lot of cases.

Start states in Lex / Flex

I'm using Flex and Bison for a parser generator, but having problems with the start states in my scanner.
I'm using exclusive rules to deal with commenting, but this grammar doesn't seem to match quoted tokens:
%x COMMENT
%%
"//"            { BEGIN(COMMENT); }
<COMMENT>[^\n]  ;
<COMMENT>\n     { BEGIN(INITIAL); }
"=="            { return EQUALEQUAL; }
.               ;
In this simple example the line:
// a == b
isn't matched entirely as a comment, unless I include this rule:
<COMMENT>"==" ;
How do I get round this without having to add all these tokens into my exclusive rules?
Matching C-style comments in Lex/Flex is well documented, both in the Flex manual and in many variations around the Internet.
Here is a variation on the example found in the Flex documentation:
%x IN_COMMENT
%%
<INITIAL>{
  "//"      BEGIN(IN_COMMENT);
}
<IN_COMMENT>{
  \n        BEGIN(INITIAL);
  [^\n]+    // eat comment
  "/"       // eat the lone /
}
Try adding a "+" after the [^\n] rule. I don't know why '==' is still being picked up even in an exclusive state, but apparently it is. Flex will normally match the rule that matches the most text, and adding the "+" will at least make the two rules tie in length. Putting the COMMENT rule first will cause it to be used in case of a tie.
The clue is: "The problem is this 'eat comment' rule doesn't seem to match tokens with more than one character."
So add a * to match zero or more non-newline characters. You want zero or more, because otherwise an empty comment will not match.
%x COMMENT
%%
"//"             { BEGIN(COMMENT); }
<COMMENT>[^\n]*  ;
<COMMENT>\n      { BEGIN(INITIAL); }
"=="             { return EQUALEQUAL; }
.                ;
