Flex function unput(int char): is there an equivalent in JFlex?

We know that in C flex there is a function unput(int c), which puts the character c back onto the input stream. I wonder if there is a similar function in JFlex. Thanks!

If we look at the description of unput in the flex manual, we can see what it does:
unput(c) puts the character c back onto the input stream. It will be
the next character scanned. The following action will take the current
token and cause it to be rescanned enclosed in parentheses.
{
    int i;
    /* Copy yytext because unput() trashes yytext */
    char *yycopy = strdup( yytext );
    unput( ')' );
    for ( i = yyleng - 1; i >= 0; --i )
        unput( yycopy[i] );
    unput( '(' );
    free( yycopy );
}
Note that since each unput() puts the given character back at the beginning of the input stream, pushing back strings must be done
back-to-front.
According to the JFlex manual, there is no unput, but there is yypushback:
• void yypushback(int number)
pushes number characters of the matched text back into the input
stream. They will be read again in the next call of the scanning
method. The number of characters to be read again must not be greater
than the length of the matched text. The pushed back characters will
not be included in yylength() and yytext(). Note that in Java
strings are unchangeable, i.e. an action code like
String matched = yytext();
yypushback(1);
return matched;
will return the whole matched text, while
yypushback(1);
return yytext();
will return the matched text minus the last character.
Although they are not the same, many of the uses of unput can be achieved with yypushback; however, you cannot put characters into the input stream that were never there, which you could with unput. Note that flex also has yyless(n), which is close in spirit to yypushback: it returns all but the first n characters of the match to the input stream, whereas yypushback(n) pushes back the last n characters.
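As a rough illustration of the flex side, here is a minimal sketch that uses yyless to hand part of a match back to the scanner (the patterns and messages are invented for the example). The JFlex analogue of the yyless(3) call would be yypushback(yylength() - 3).
%option noyywrap
%%
"integer"   { /* Keep "int" and push "eger" back to be rescanned. */
              yyless(3);
              printf("keyword: int\n"); }
[a-z]+      { printf("word: %s\n", yytext); }
.|\n        { /* ignore anything else */ }
%%
int main(void) { return yylex(); }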

Related

Flex confusing to transform string character by character

I want to use flex to transform a string based on simple rules. The first character stays the same, while the second and third characters might change: if the second character is a letter, it becomes the number listed in the rules below, and if the third character is a digit, it becomes a certain letter.
%%
/*^[a-z] {char *yycopy = strdup( yytext ); unput(yycopy[0]);}*/
[ajs] {putchar('1');}
[bkt] {putchar('2');}
[clu] {putchar('3');}
[dmv] {putchar('4');}
[1] {putchar('j');}
[2] {putchar('k');}
[3] {putchar('l');}
[4] {putchar('m');} /*more number rules till 9*/
%%
int yywrap(void){return 1;}
int main( int argc, char **argv )
{
    ++argv, --argc; /* skip over program name */
    if ( argc > 0 )
        yyin = fopen( argv[0], "r" );
    else
        yyin = stdin;
    while (yylex());
}
If there are different rules for characters in different positions within the string, how can I use start conditions to apply the right rules to a particular character (i.e. the rules for the second and third characters are different)?
You switch start conditions by using the BEGIN action. Flex never automatically changes the start condition, so when you need to return to the initial start condition (called INITIAL), you have to do so explicitly (BEGIN(INITIAL)).
You need to declare start condition names in the (f)lex prologue, usually with the %x command. (%s is also possible but with different semantics. See the Flex manual for details.)
You indicate that a start condition applies to a rule by starting the rule with a start condition name in angle brackets. You can put more than one start condition inside the angle brackets; separate them with commas and don't use spaces. Don't put a space after the angle brackets either; they are part of the pattern and (f)lex patterns cannot include unquoted space characters.
BEGIN is a macro and it does not require parentheses around the start condition name, but I suggest always using them anyway, so you don't have to worry about what the macro expands to. Start condition names are small integers (either enum constants or preprocessor macros) but nothing guarantees their value, so don't make assumptions.
That's about it. So you could implement your astro numerological codifier with:
%x SECOND THIRD REST
%%
[a-z]              ECHO; BEGIN(SECOND);
<SECOND>[ajs]      putchar('1'); BEGIN(THIRD);
  /* More SECOND rules */
<SECOND>.          ECHO; BEGIN(THIRD);
<THIRD>1           putchar('j'); BEGIN(REST);
  /* More THIRD rules */
<THIRD>.           ECHO; BEGIN(REST);
<SECOND,THIRD>\n   ECHO; BEGIN(INITIAL);
<REST>.*\n?        ECHO; BEGIN(INITIAL);
(I deliberately did not add any specific <REST> rules, because the fallback at the end covers everything after the third character. I also deliberately left out the anchor in the first rule, because the rules guarantee that the INITIAL start condition is only in force at the beginning of a line: the only way back to INITIAL is by consuming a newline. The optional newline in the last rule is there in case the file does not end with a newline, which occasionally happens although it's technically invalid.)

Flex lexical analyzer not behaving as expected

I'm trying to use Flex to match basic patterns and print something.
%%
^[^qA-Z]*q[a-pr-z0-9]*4\n {printf("userid1, userid2 \n"); return 1;}
%%
int yywrap(void){return 1;}
int main( int argc, char **argv )
{
    ++argv, --argc; /* skip over program name */
    if ( argc > 0 )
        yyin = fopen( argv[0], "r" );
    else
        yyin = stdin;
    while (yylex());
}
Resolved dumb question
I don't know what you are trying to do, so I'll focus on the immediate issue, which is your last pattern:
^[^qA-Z]*q[a-pr-z0-9]*4[a-pr-z0-9]*4[a-pr-z0-9]*\n
That pattern starts by matching [^qA-Z]*, which is any number of anything which is not a q nor a capital letter (A-Z). Then it matches a q.
Here it's worth considering all the things which are not a q nor a capital letter (A-Z). Obviously, that includes lower-case letters such as s (other than q). It also includes digits. And it includes any other character: punctuation, whitespace, even control characters. In particular, it includes a newline character.
So when you type
10s10<newline>
That certainly could be the start of the last pattern. The scanner hasn't yet seen a q so it doesn't know whether the pattern will eventually match, but it hasn't yet failed. So it keeps on reading more characters, including more newlines.
When you eventually type a q, the scanner can continue with the rest of the pattern. Depending on what you type next, it might or might not be able to continue. If, as seems likely, your input eventually fails to match the pattern, the lexer will fall back to the longest successful match, which is the first pattern. At that point, it will perform the first action.
Negative character classes need to be used with a bit of caution. It's easy to fall into the trap of thinking that "not ..." only includes "reasonable" input. But it includes everything. Often, as in this case, you'll want to at least exclude newlines.
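For instance, adding \n to the negated character class keeps the match from running past the end of the line. A sketch of the corrected pattern (based on the one quoted above; the action is unchanged):
^[^qA-Z\n]*q[a-pr-z0-9]*4[a-pr-z0-9]*4[a-pr-z0-9]*\n {printf("userid1, userid2 \n"); return 1;}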

Lex parsing to determine

This is an extension to a previous question. I'm trying to parse a .txt file and determine if each line is valid or invalid depending on my rules. The text files will contain an assortment of random strings, hex, integers and decimals separated by a single space, such as:
5 -0xA98F 0XA98H text hello 2.3 -12 0xabc
I'm trying to identify valid hex, integers and decimals and get an output like so.
5 valid
-0xA98F valid
0xA98H invalid
text invalid
hello invalid
2.3 valid
-12 valid
0xabc invalid
My current code however displays like so:
5 valid
-0xa98f valid
0xA98 valid <--- issue 1: just removes the H
2.3 valid <--- ignores text and hello
-12 valid
0xabc invalid
Here is the code I currently have:
%{
#include <iostream>
using namespace std;
%}
Decimal [+-]?[0-9]+\.[0-9]+?
Integers [+-]?[0-9]+
Hex [-]?[0][xX][0-9A-F]+
%%
[ \t\n] ;
{Decimal} {cout << yytext << "Valid" << endl; }
{Integers} { cout << yytext << "Valid" << endl; }
{Hex} {cout << yytext << " Valid" << endl;}
. ;
%%
int main() {
    FILE *myfile = fopen("something.txt", "r");
    if (!myfile) {
        cout << "Error" << endl;
        return -1;
    }
    yyin = myfile;
    yylex();
    fclose(yyin);
}
The key to using flex for problems like this is understanding the "maximal-munch" rule. The rule is simple: Flex always picks the action corresponding to the pattern which matches the longest string (starting with the current input point; flex never "searches" for a match.) If more than one pattern matches the same longest substring, then the first pattern in the flex description is chosen. That means that the order of rules is important.
This is described at more length in the Flex manual section on How the Input is Matched.
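As a tiny illustration of both parts of the rule, consider this sketch (mine, not from the original answer): given the input ifdef, the second rule wins because it matches more text; given the input if, both rules match two characters and the first one listed wins.
%option noyywrap
%%
"if"       printf("keyword\n");
[a-z]+     printf("identifier: %s\n", yytext);
.|\n       ; /* ignore everything else */
%%
int main(void) { return yylex(); }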
So let's suppose that you are interested in matching complete words, where "words" are non-empty sequences of arbitrary non-whitespace characters separated by whitespace. (So, for example, the line "3, 4 and 5." would contain only one valid string.)
It's easy to identify the four possibilities:
Decimal integers
Decimal floating point
Hexadecimal integers
Any other word.
We also need to ignore whitespace, other than recognizing it as a word separator.
If we put the rules in that order, we can be confident that the correct rule will be chosen for each word, because of the maximal-munch rule.
So here's the entire flex file (except for the definition of main):
%option noinput nounput noyywrap nodefault
%%
[[:space:]]+                     { /* Ignore whitespace */ }
[+-]?[[:digit:]]+                { printf("%s valid\n", yytext);   /* Decimal integer */ }
[+-]?[[:digit:]]+"."[[:digit:]]* { printf("%s valid\n", yytext);   /* Decimal point */ }
[+-]?"."[[:digit:]]+             { printf("%s valid\n", yytext);   /* Decimal point */ }
[+-]?0[xX][[:xdigit:]]+          { printf("%s valid\n", yytext);   /* Hexadecimal integer */ }
[^[:space:]]+                    { printf("%s invalid\n", yytext); /* Any word not matched by the rules above */ }
Notes
I've used ordinary printf statements here. You're free to use C++ streams, of course, but I prefer to use either stdio.h or iostreams, but not both. It might be considered cleaner to #include <stdio.h>, but in fact Flex already does that because it needs it for its own purposes.
The %option statement tells flex that you don't need yywrap (which means you don't need to provide one or link with -lfl), that you don't use input or unput (which means you can compile with -Wall without getting unused function warnings) and that you don't expect flex to need to insert a default rule (which saves you from embarrassing errors, because flex will warn you if there is anything which might not match any rule.)
I used [[:xdigit:]]+ in the hexadecimal pattern, which allows both upper- and lower-case hex digits. If that's not desired, you could replace it with [0-9A-F] as in your original code, but your examples seem to indicate that your original code was not correct. Of course, you could write the character classes out explicitly, but I find the POSIX classes more readable. See the Flex manual section on Patterns for a complete list.
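For completeness, a driver along the lines of the main in the question could go in the user-code section after a second %%. This is just a sketch (it reads a file named on the command line, or standard input if none is given):
%%
int main(int argc, char **argv) {
    if (argc > 1) {
        yyin = fopen(argv[1], "r");
        if (!yyin) { perror(argv[1]); return 1; }
    }
    return yylex(); /* yylex() returns 0 once the whole input has been scanned */
}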

Parsing RTF non-breaking space

I am making a simple parser from RTF to HTML.
I have the following raw RTF:
who\\~nursed\\~and
According to the RTF specification \~ is the keyword for a non-breaking space.
The end of a keyword is marked by a Delimiter which is defined as follows:
A space. This serves only to delimit a control word and is ignored in subsequent processing.
A numeric digit or an ASCII minus sign (-), which indicates that a numeric parameter is associated with the control word. The subsequent digital sequence is then delimited by any character other than an ASCII digit (commonly another control word that begins with a backslash). The parameter can be a positive or negative decimal number. The range of the values for the number is nominally –32768 through 32767, i.e., a signed 16-bit integer. A small number of control words take values in the range −2,147,483,648 to 2,147,483,647 (32-bit signed integer). These control words include \binN, \revdttmN, \rsidN related control words and some picture properties like \bliptagN. Here N stands for the numeric parameter. An RTF parser must allow for up to 10 digits optionally preceded by a minus sign. If the delimiter is a space, it is discarded, that is, it’s not included in subsequent processing.
Any character other than a letter or a digit. In this case, the delimiting character terminates the control word and is not part of the control word. Such as a backslash “\”, which means a new control word or a control symbol follows.
As I understand it, the last rule quoted above is the one that applies in this particular instance. But if that is the case, then my parser would read until the ~ sign and conclude that, since this is not a letter or a digit, it is not part of the keyword.
This currently results in the following output:
who~nursed~and
I have the following code for reading a keyword:
public GetKeyword(index: number): KeywordSet {
    var keywordarray: string[] = [];
    var valuearray: string[] = [];
    index++;
    while (index < this.m_input.length) {
        var remainint = this.m_input.substr(index);
        // Keep going until we hit a delimiter
        if (this.m_input[index] == " ") {
            index++;
            break;
        } else if (this.IsNumber(this.m_input[index])) {
            valuearray.push(this.m_input[index]);
        } else if (this.IsDelimiter(this.m_input[index])) {
            break;
        } else keywordarray.push(this.m_input[index]);
        index++;
    }
    var value: number = null;
    if (valuearray.length > 0) value = parseInt(valuearray.join(""));
    var keywordset = new KeywordSet(keywordarray.join(""), index, value);
    return keywordset;
}

private IsDelimiter(char: string): boolean {
    if (char == "*" || char == "'") return false;
    return !this.IsLetterOrDigit(char);
}
When GetKeyword() reaches "~" it recognises it as a delimiter, and stops reading, resulting in an empty keyword as return value.
I do not have an AST constructed for this. Don't think it is necessary for this?
The quote in your question describes the syntax of an entity called a control word, but \~ is actually a different kind of entity called a control symbol. Control symbols have a different syntax:
Control Symbol
A control symbol consists of a backslash followed by a single, non-alphabetical character. For example, \~ (backslash tilde) represents a non-breaking space. Control symbols do not have delimiters, i.e., a space following a control symbol is treated as text, not a delimiter.
See page 9 of Rich Text Format (RTF) Specification, version 1.9.1.
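The distinction is easy to express as separate lexer rules. Here is a rough flex sketch (mine, not from the spec or from the question's TypeScript) that tokenizes the two entities differently:
%option noyywrap
%%
\\[a-zA-Z]+(-?[0-9]+)?" "?   printf("control word: %s\n", yytext);
\\[^a-zA-Z\n]                printf("control symbol: %s\n", yytext);
[{}]                         printf("group delimiter: %s\n", yytext);
[^\\{}\n]+                   printf("text: %s\n", yytext);
\n                           ; /* ignore newlines */
%%
int main(void) { return yylex(); }
With this split, who\~nursed\~and comes out as text, control symbol, text, control symbol, text, so the \~ can be mapped to a non-breaking space (for HTML, &nbsp;) instead of being treated as plain text.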

Handle tab correctly when counting location information in Lex?

I have read the manual about how to track location information, but I still have a problem. It seems that Lex (2.5) counts both a space and a tab as one character, which makes the error reporting give the wrong column number. Currently, I have the editor replace tabs with spaces for the parser. But I wonder if there is a way to handle tabs correctly in Lex itself.
You could use unput to perform dirty trickery with the input stream. When you match a tab, use unput to put four spaces back into the input stream...
Manual extract:
unput(c) puts the character c back onto the input stream. It will be
the next character scanned. The following action will take the current
token and cause it to be rescanned enclosed in parentheses.
{
    int i;
    /* Copy yytext because unput() trashes yytext */
    char *yycopy = strdup( yytext );
    unput( ')' );
    for ( i = yyleng - 1; i >= 0; --i )
        unput( yycopy[i] );
    unput( '(' );
    free( yycopy );
}
Note that since each unput() puts the given character back at the beginning of the input stream, pushing back strings must be done
back-to-front.
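A sketch of that trick as a flex rule (the tab width of four is an assumption, not something from the manual):
\t      { /* Pretend the tab was four spaces, so that column counting
             sees one character per column. */
          int i;
          for (i = 0; i < 4; ++i)
              unput(' ');
        }
Remember the caveat from the extract above: unput() trashes yytext, so copy the matched text first if the action still needs it (not an issue here, since the match is just the tab itself).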

Resources