How to tokenize input based on a set of ABNF grammar rules

I have read the RFC that defines ABNF (RFC 5234) and am having difficulty understanding how a set of ABNF rules could be used to reliably extract tokens from an input string that matches the grammar. The specification never mentions tokens or ASTs, so it may not concern itself with that, but I believe that would be the ultimate goal of applying any BNF grammar, unless I am mistaken.
In the specification, they list example rules for parsing a postal-address:
postal-address = name-part street zip-part
name-part = *(personal-part SP) last-name [SP suffix] CRLF
name-part =/ personal-part CRLF
personal-part = first-name / (initial ".")
first-name = *ALPHA
initial = ALPHA
last-name = *ALPHA
suffix = ("Jr." / "Sr." / 1*("I" / "V" / "X"))
street = [apt SP] house-num SP street-name CRLF
apt = 1*4DIGIT
house-num = 1*8(DIGIT / ALPHA)
street-name = 1*VCHAR
zip-part = town-name "," SP state 1*2SP zip-code CRLF
town-name = 1*(ALPHA / SP)
state = 2ALPHA
zip-code = 5DIGIT ["-" 4DIGIT]
There is also a list of core rules, which I won't post here, describing expected common-usage rules (ALPHA, DIGIT, SP, CRLF, VCHAR, and so on).
Ultimately, what I would like to do is figure out the rules necessary for taking the input
John H. Doe
12345 Fakestreet
Springfield, IL 55555
and generating what I believe would be the correct token sequence which is:
["John", " ", "H", ".", "Doe", "\r\n",
"12345", " ", "Fakestreet", "\r\n",
"Springfield", ",", " ", "IL", " ", "55555", "\r\n"]
(I believe the spaces and CRLFs need to be returned as "tokens" because they are specified as requirements in certain rules)
Some problems I am considering:
It makes sense that "Fakestreet" should be its own token, but according to the definition it is a variable repetition of the visible-character core rule. Ideally I would not want to read out each letter as its own token ("F", "a", "k", and so on), so (assuming core rules can be treated as terminals?) any potential token string would need to be checked against the entire, theoretically infinite, rule definition 1*VCHAR to see whether it matches. Some rules are more complicated still, like zip-code's 5DIGIT ["-" 4DIGIT], but a potential token needs to be checked against that rule as well ("12345" and "12345-6789" are both valid tokens). So it seems entire rule element concatenations need to be checked completely as well, unless "12345-6789" should rather be tokenized as ["12345", "-", "6789"], which... may be correct?
I'd assume we would not want to completely check rules that reference other rules, otherwise we may end up tokenizing the entire postal-address as a single token of type "postal-address". Maybe rules that reference other rules shouldn't be checked? Maybe there is such a thing as a "terminal-rule" that includes no rule refs (excluding core rules)?
Occasionally, terminal values are combined with rule references; for instance, the definition of "personal-part" includes the literal ".". So, while we may not want to match a potential token string against the entire "personal-part" rule definition, it seems we do want to try to match it against the literal "." because it is a required token for parsing a personal-part. Maybe, in non-terminal rules, terminal values listed there should still be considered? (I sketch this idea in code below.)
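To make my question concrete, here is a rough Python sketch of what I imagine such a tokenizer might do: treat rules whose definitions reference no other user-defined rules as token classes, compile each to a regular expression, scan with longest-match, and emit the literal terminals that appear inside composite rules (SP, CRLF, ".", ",") as delimiter tokens. The names here, like "word" standing in for first-name/last-name/town-name, are my own invention:

import re

# Token classes compiled from "terminal" rules (rules that reference no
# other user-defined rules). Longest match wins; ties go to the rule
# listed first.
TERMINAL_RULES = {
    "zip-code": re.compile(r"\d{5}(?:-\d{4})?"),  # zip-code = 5DIGIT ["-" 4DIGIT]
    "state":    re.compile(r"[A-Za-z]{2}"),       # state = 2ALPHA
    "word":     re.compile(r"[A-Za-z]+"),         # first-name, last-name, town-name, ...
}
DELIMS = re.compile(r"\r\n|[ .,]")                # literal terminals from composite rules

def tokenize(text):
    pos, tokens = 0, []
    while pos < len(text):
        best_name, best = None, None
        for name, rx in TERMINAL_RULES.items():
            m = rx.match(text, pos)
            if m and m.end() > pos and (best is None or m.end() > best.end()):
                best_name, best = name, m
        if best is None:
            best = DELIMS.match(text, pos)
            if best is None:
                raise ValueError(f"no token matches at offset {pos}")
            best_name = "delim"
        tokens.append((best_name, best.group()))
        pos = best.end()
    return tokens

print(tokenize("Springfield, IL 55555\r\n"))
# [('word', 'Springfield'), ('delim', ','), ('delim', ' '),
#  ('state', 'IL'), ('delim', ' '), ('zip-code', '55555'), ('delim', '\r\n')]

Note that "IL" is typed as state rather than word only because of the tie-breaking order; that ambiguity is exactly what I am asking about.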
I realize this is a lengthy question, but it seems BNF supersets like EBNF and ABNF are being used for this kind of thing, and yet I cannot find a standard specification for how to tokenize input given an ABNF grammar.

Related

Understanding ABNF syntax "0<pchar>"

In RFC 3986, they defined the rule:
path-empty = 0<pchar>
For simplicity, let's assume pchar is defined:
pchar = 'a' / 'b' / 'c'
What does path-empty match and how is it matched?
I've read the Wikipedia page on ABNF. My guess is that it matches the empty string (regex ^(?![\s\S])). If that is the case, why even reference pchar? Is there not a simpler way to match the empty string in ABNF syntax without referencing another rule?
How could this be translated to ANTLR4?
Yes, you are correct. path-empty derives the empty string.
In ABNF, the right-hand side of a rule must contain an element, which will be anything other than spaces, newlines, and comments. See rfc5234, page 10. Given this syntax, there are several ways to define the empty string. path-empty = 0<pchar> is one way. It means "exactly zero of <pchar>". But, path-empty = "" and path-empty = 0pchar would also work. ABNF does not define whether one is preferred over the other.
Note, the rfc3986 spec uses a prose-val, i.e., <pchar>, instead of the "rulename" pchar (or even "", 0pchar, or just <empty string>). It's unclear why, but it has the same effect: path-empty derives the empty string. But <pchar> is not the same as pchar. A prose value is a "last resort" way to add a non-formal description of a rule.
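If it helps to see the "exactly zero" reading concretely, here it is checked in Python (purely illustrative; not part of the ABNF machinery):

import re

# pchar = 'a' / 'b' / 'c'; path-empty = 0<pchar> means exactly zero pchars
path_empty = re.compile(r"(?:[abc]){0}")
print(repr(path_empty.match("abc").group()))  # '' (consumes nothing)
print(repr(path_empty.match("").group()))     # '' (matches empty input too)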
In ANTLR4, the rule would just be path_empty : ;. Note, ANTLR has a different naming convention that defines a strict boundary between lexer and parser. ABNF does not have this distinction. In fact, this grammar could be converted to a single ANTLR lexer grammar, an exercise in understanding the power of ANTLR lexers.

(F) Lex, how do I match negation?

Some language grammars use negations in their rules. For example, in the Dart specification the following rule is used:
~('\'|'"'|'$'|NEWLINE)
Which means: match anything that is not one of the rules inside the parentheses. Now, I know in flex I can negate character rules (e.g. [^ab]), but some of the rules I want to negate could be more complicated than a single character, so I don't think I could use character classes for that. For example, I may need to negate the sequence '"""' for multiline strings, but I'm not sure what the way to do that in flex would be.
(TL;DR: Skip down to the bottom for a practical answer.)
The inverse of any regular language is a regular language. So in theory it is possible to write the inverse of a regular expression as a regular expression. Unfortunately, it is not always easy.
The """ case, at least, is not too difficult.
First, let's be clear about what we are trying to match.
Strictly speaking "not """" would mean "any string other than """". But that would include, for example, x""".
So it might be tempting to say that we're looking for "any string which does not contain """". (That is, the inverse of .*""".*). But that's not quite correct either. The typical usage is to tokenise an input like:
"""This string might contain " or ""."""
If we start after the initial """ and look for the longest string which doesn't contain """, we will find:
This string might contain " or "".""
whereas what we wanted was:
This string might contain " or "".
So it turns out that we need "any string which does not end with " and which doesn't contain """", which is actually the conjunction of two inverses: (~.*" ∧ ~.*""".*)
It's (relatively) easy to produce a state diagram for that: three states, tracking how many consecutive " characters have just been seen (the diagram itself is not reproduced here).
(Note that the only difference between that state diagram and the state diagram for "any string which does not contain """" is that in the latter all the states would be accepting, while here states 1 and 2 are not accepting.)
Now, the challenge is to turn that back into a regular expression. There are automated techniques for doing that, but the regular expressions they produce are often long and clumsy. This case is simple, though, because there is only one accepting state and we need only describe all the paths which can end in that state:
([^"]|\"([^"]|\"[^"]))*
This model will work for any simple string, but it's a little more complicated when the string is not just a sequence of the same character. For example, suppose we wanted to match strings terminated with END rather than """. Naively modifying the above pattern would result in:
([^E]|E([^N]|N[^D]))* <--- DON'T USE THIS
but that regular expression will match the string ENENDstuff, which shouldn't have been matched.
The real state diagram we're looking for tracks how much of END has just been seen (again, the diagram is not reproduced here), and one way of writing it as a regular expression is:
([^E]|E(E|NE)*([^EN]|N[^ED]))*
Again, I produced that by tracing all the ways to end up in state 0:
[^E]: stays in state 0
E: goes to state 1, then:
(E|NE)*: stay in state 1
[^EN]: back to state 0
N[^ED]: back to state 0 via state 2
This can be a lot of work, both to produce and to read. And the results are error-prone. (Formal validation is easier with the state diagrams, which are small for this class of problems, rather than with the regular expressions which can grow to be enormous).
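Those patterns can be spot-checked mechanically. Here is a quick sanity check using Python's re module (the patterns are transcribed verbatim; this isn't flex, but for these expressions the matching semantics agree):

import re

# "contains no three consecutive quotes and doesn't end with a quote"
not_triple = re.compile(r'([^"]|"([^"]|"[^"]))*')
print(not_triple.match('This string might contain " or "".""" tail').group(0))
# -> This string might contain " or "".   (stops before the closing """)

naive = re.compile(r'([^E]|E([^N]|N[^D]))*')           # the DON'T USE pattern
fixed = re.compile(r'([^E]|E(E|NE)*([^EN]|N[^ED]))*')
print(naive.match("ENENDstuff").group(0))  # ENENDstuff (wrongly swallows END)
print(fixed.match("ENENDstuff").group(0))  # '' (every nonempty prefix here
                                           # contains END or ends mid-E/EN)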
A practical and scalable solution
Practical Flex rulesets use start conditions to solve this kind of problem. For example, here is how you might recognize python triple-quoted strings:
%x TRIPLEQ
start \"\"\"
end \"\"\"
%%
{start}        { BEGIN( TRIPLEQ ); /* Note: no return, flex continues */ }
<TRIPLEQ>.|\n  { /* Append the next token to yytext instead of
                  * replacing yytext with the next token
                  */
                 yymore();
                 /* No return yet, flex continues */
               }
<TRIPLEQ>{end} { /* We've found the end of the string, but
                  * we need to get rid of the terminating """
                  */
                 yylval.str = malloc(yyleng - 2);        /* content + NUL */
                 memcpy(yylval.str, yytext, yyleng - 3); /* drop the """ */
                 yylval.str[yyleng - 3] = 0;
                 BEGIN(INITIAL); /* back to normal scanning; without this
                                  * the scanner would stay in TRIPLEQ */
                 return STRING;
               }
This works because the . rule in start condition TRIPLEQ will not match " if the " is part of a string matched by {end}; flex always chooses the longest match. It could be made more efficient by using [^"]+|\"|\n instead of .|\n, because that would result in longer matches and consequently fewer calls to yymore(); I didn't write it that way above simply for clarity.
This model is much easier to extend. In particular, if we wanted to use <![CDATA[ as the start and ]]> as the terminator, we'd only need to change the definitions
start "<![CDATA["
end "]]>"
(and possibly the optimized rule inside the start condition, if using the optimization suggested above.)

How to match any symbol in ANTLR parser (not lexer)?

How to match any symbol in ANTLR parser (not lexer)? Where is the complete language description for ANTLR4 parsers?
UPDATE
Is the answer "impossible"?
You first need to understand the roles of each part in parsing:
The lexer: this is the object that tokenizes your input string. Tokenizing means to convert a stream of input characters to an abstract token symbol (usually just a number).
The parser: this is the object that only works with tokens to determine the structure of a language. A language (written as one or more grammar files) defines the token combinations that are valid.
As you can see, the parser doesn't even know what a letter is. It only knows tokens. So your question is already wrong.
Having said that, it would probably help to know why you want to skip individual input letters in your parser. It looks like your base concept needs adjustment.
It depends what you mean by "symbol". To match any token inside a parser rule, use the . (DOT) meta char. If you're trying to match any character inside a parser rule, then you're out of luck, there is a strict separation between parser- and lexer rules in ANTLR. It is not possible to match any character inside a parser rule.
It is possible, but only if you have such a basic grammar that the reason to use ANTLR is negated anyway.
If you had the grammar:
text : ANY_CHAR* ;
ANY_CHAR : . ;
it would do what you (seem to) want.
However, as many have pointed out, this would be a pretty strange thing to do. The purpose of the lexer is to identify the different tokens that can be strung together in the parser to form a grammar, so your lexer can either identify the specific string "JSTL/EL" as a token, or [A-Z]'/EL', [A-Z]'/'[A-Z][A-Z], etc., depending on what you need.
The parser is then used to define the grammar, so:
phrase : CHAR* jstl CHAR* ;
jstl : JSTL SLASH QUALIFIER ;
JSTL : 'JSTL' ;
SLASH : '/' ;
QUALIFIER : [A-Z][A-Z] ;
CHAR : . ;
would accept "blah blah JSTL/EL..." as input, but not "blah blah EL/JSTL...".
I'd recommend looking at The Definitive ANTLR 4 Reference, in particular the section on "Islands in the stream" and the Grammar Reference (Ch 15), which specifically deals with Unicode.

In the PowerShell grammar, what is the `lvalueExpression` rule saying?

I was reviewing the PowerShell grammar posted here: http://www.manning.com/payette/AppCexcerpt.pdf
(I don't think it has been updated since PowerShell v1, and there are some typos. So, it's clearly not the true PowerShell Grammar, but a human-oriented document.)
In section C.2.1, it says:
<lvalueExpression> = <lvalue> [? |? <lvalue>]*
What is the meaning of the question marks? I can't tell if it means "match any character" or "match a question mark" or it's a typo.
I'm not sure what inputs this is intended to match, but maybe it's this:
$a,$b = 1, 2
in which case maybe the question mark is supposed to be a comma?
Based on its use in the preceding rule (<assignmentStatementRule> = <lvalueExpression> <AssignmentOperatorToken> <pipelineRule>), it appears that lvalueExpression in Appendix C of Windows PowerShell in Action corresponds to expression in section B.2.3 of The PowerShell Language Specification that Joey linked to. Matching it further than this is difficult, but I'll add some speculation anyway :)
The ? characters in [? |? <lvalue>]* are very likely erroneous. If it had been used to represent "the previous token is optional", then:
the [ and | tokens it was applied to should have been quoted
only [ makes sense as part of a value expression, but indexing is already covered later by the propertyOrArrayReferenceOperator rule
? is not used anywhere else in the grammar, but {0|1} is used multiple times to indicate "can appear zero or one times"
Given its similarity to [ '|' <cmdletCall> ]* at the end of the first rule in the section, it may have been a copy-and-paste error, compounded by a ‘smart quote’ round-trip encoding error. Assuming this was copied with the intent of editing later, then ?|? may have become '.' to represent multiple property accesses (but again, this is covered by the propertyOrArrayReferenceOperator rule).
Though based on the statement at the end of section C.2.1 that "[the pipeline rule] also handles parsing assignment expressions", lvalueExpression was probably intended to list all the assignable expressions besides simpleLvalue (e.g. cast-expression for [int]$x = 1, array-literal-expression for $a,$b,$c = 1,2,3, etc.).

What's valid and what's not in a URI query?

Background (question further down)
I've been Googling this back and forth, reading RFCs and SO questions, trying to crack this, but I still don't got jack.
So I guess we just vote for the "best" answer and that's it, or?
Basically it boils down to this.
3.4. Query Component
The query component is a string of information to be interpreted by the resource.
query = *uric
Within a query component, the characters ";", "/", "?", ":", "#", "&", "=", "+", ",", and "$" are reserved.
The first thing that boggles me is that *uric is defined like this
uric = reserved | unreserved | escaped
reserved = ";" | "/" | "?" | ":" | "#" | "&" | "=" | "+" | "$" | ","
This is however somewhat clarified by paragraphs such as
The "reserved" syntax class above refers to those characters that are allowed within a URI, but which may not be allowed within a particular component of the generic URI syntax; they are used as delimiters of the components described in Section 3.
Characters in the "reserved" set are not reserved in all contexts. The set of characters actually reserved within any given URI component is defined by that component. In general, a character is reserved if the semantics of the URI changes if the character is replaced with its escaped US-ASCII encoding.
This last excerpt feels somewhat backwards, but it clearly states that the reserved character set depends on context. Yet 3.4 states that all the reserved characters are reserved within a query component; however, the only thing that would change the semantics here is escaping the question mark (?), as URIs do not define the concept of a query string.
At this point I've given up on the RFCs entirely but found RFC 1738 particularly interesting.
An HTTP URL takes the form:
http://<host>:<port>/<path>?<searchpart>
Within the <path> and <searchpart> components, "/", ";", "?" are reserved. The "/" character may be used within HTTP to designate a hierarchical structure.
I interpret this, at least with regard to HTTP URLs, as RFC 1738 taking precedence over RFC 2396. Because the URI query has no notion of a query string, the interpretation of reserved characters doesn't really allow me to define query strings the way I'm used to doing by now.
Question
This all started when I wanted to pass a list of numbers together with the request for another resource. I didn't think much of it and just passed them as comma-separated values. To my surprise, though, the comma was escaped: the query page.html?q=1,2,3 turned into page.html?q=1%2C2%2C3 when encoded. It works, but it's ugly and I didn't expect it. That's when I started going through RFCs.
My first question is simply, is encoding commas really necessary?
My answer, according to RFC 2396: yes, according to RFC 1738: no
Later I found related posts regarding the passing of lists between requests, where the CSV approach was portrayed as bad. Instead, this showed up (I hadn't seen it before):
page.html?q=1;q=2;q=3
My second question, is this a valid URL?
My answer, according to RFC 2396: no, according to RFC 1738: no (; is reserved)
I don't have any issues with passing csv as long as it's numbers, but yes you do run into the risk of having to encode and decode values back and forth if the comma suddenly is needed for something else. Anyway I tried the semi-colon query string thing with ASP.NET and the result was not what I expected.
Default.aspx?a=1;a=2&b=1&a=3
Request.QueryString["a"] = "1;a=2,3"
Request.QueryString["b"] = "1"
I fail to see how this greatly differs from a csv approach as when I ask for "a" I get a string with commas in it. ASP.NET certainly is not a reference implementation but it hasn't let me down yet.
But most importantly (my third question): where is the specification for this? And what would you do, or for that matter not do?
That a character is reserved within a generic URL component doesn't mean it must be escaped when it appears within the component or within data in the component. For escaping to be required, the character must also be defined as a delimiter within the generic or scheme-specific syntax, and the character must appear within data.
The current standard for generic URIs is RFC 3986, which has this to say:
2.2. Reserved Characters
URIs include components and subcomponents that are delimited by characters in the "reserved" set. These characters are called "reserved" because they may (or may not) be defined as delimiters by the generic syntax, by each scheme-specific syntax, or by the implementation-specific syntax of a URI's dereferencing algorithm. If data for a URI component would conflict with a reserved character's purpose as a delimiter [emphasis added], then the conflicting data must be percent-encoded before the URI is formed.
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "#"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
3.3. Path Component
[...]
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
[...]
3.4 Query Component
[...]
query = *( pchar / "/" / "?" )
Thus commas are explicitly allowed within query strings and only need to be escaped in data if specific schemes define it as a delimiter. The HTTP scheme doesn't use the comma or semi-colon as a delimiter in query strings, so they don't need to be escaped. Whether browsers follow this standard is another matter.
Using CSV should work fine for string data, you just have to follow standard CSV conventions and either quote data or escape the commas with backslashes.
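To illustrate with Python's urllib.parse (an aside, not part of the answer above): the default quoting escapes commas, but marking them safe is equally valid per RFC 3986, and both forms decode identically.

from urllib.parse import quote, parse_qs

value = "1,2,3"
print(quote(value))             # 1%2C2%2C3  (commas escaped by default)
print(quote(value, safe=","))   # 1,2,3      (also a valid query value)

print(parse_qs("q=1%2C2%2C3"))  # {'q': ['1,2,3']}
print(parse_qs("q=1,2,3"))      # {'q': ['1,2,3']}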
As for RFC 2396, it also allows for unescaped commas in HTTP query strings:
2.2. Reserved Characters
Many URI include components consisting of or delimited by, certain
special characters. These characters are called "reserved", since
their usage within the URI component is limited to their reserved
purpose. If the data for a URI component would conflict with the
reserved purpose, then the conflicting data must be escaped before
forming the URI.
Since commas don't have a reserved purpose under the HTTP scheme, they don't have to be escaped in data. The note from § 2.3 about reserved characters being those that change semantics when percent-encoded applies only generally; characters may be percent-encoded without changing semantics for specific schemes and yet still be reserved.
I think the real question is: "What characters should be encoded in a query string?" And that depends mainly on two things: The validity and the meaning of a character.
Validity according to the RFC standard
In RFC3986 we can find which special characters are valid and which are not inside a query string:
// Valid:
! $ & ' ( ) * + , - . / : ; = ? @ _ ~
% (should be followed by two hex chars to be completely valid (e.g. %7C))
// Invalid:
" < > [ \ ] ^ ` { | }
space
# (marks the end of the query string, so it can't be a part of it)
extended ASCII characters (e.g. °)
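These two lists can be derived straight from the RFC 3986 ABNF quoted earlier; here is a small Python sketch doing exactly that (space and non-ASCII are invalid too but fall outside string.punctuation):

import string

# query = *( pchar / "/" / "?" ), with
# pchar = unreserved / pct-encoded / sub-delims / ":" / "@"   (RFC 3986)
unreserved = set(string.ascii_letters + string.digits + "-._~")
sub_delims = set("!$&'()*+,;=")
query_ok = unreserved | sub_delims | set(":@/?")

print("".join(sorted(set(string.punctuation) & query_ok)))
# !$&'()*+,-./:;=?@_~
print("".join(sorted(set(string.punctuation) - query_ok)))
# "#%<>[\]^`{|}   (% is only valid as part of a %XX escape)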
Deviations from the standard
Browsers and web frameworks do not always strictly follow the RFC standard. Below are some examples:
[, ] are not valid, but Chrome and Firefox do not encode these characters inside a query string. The reasoning given by Chrome devs is simply: "If other browsers and an RFC disagree, we will generally match other browsers." QueryHelpers.AddQueryString from ASP.NET Core on the other hand will encode these characters.
Other invalid characters that are not encoded by Chrome and Firefox are:
\ ^ ` { | }
' is a valid character inside a query string but will be encoded by Chrome, Firefox and QueryHelpers nevertheless. The explanation given by Firefox devs is that they knew that they don't have to encode it according to the RFC standard, but did it to reduce vulnerabilities.
Special meaning
Some characters are valid and also don't get encoded by browsers, but should still be encoded in certain cases.
+: Spaces are normally encoded as %20 but alternatively they can be encoded as +. So + inside a query string means it's an encoded space. If you want to include a character that's actually supposed to literally mean plus, then you have to use the encoded version of + which is %2B.
~: Some old Unix systems interpreted URI parts that started with ~ as a path to a home directory. So it's a good idea to encode ~ if it's not meant to denote the start of a Unix home directory path for an old system (so nowadays probably always encode).
=, &: Usually (although the RFC doesn't specify that this is required) query strings contain parameters in the format "key1=value1&key2=value2". If that's the case, and an = or & should be part of a parameter key or value rather than separating key from value or parameter from parameter, then you have to encode it. So if a parameter value should for some reason consist of the string "=&", it has to be encoded as %3D%26, which then can be used for the full key and value: "weirdparam=%3D%26".
%: Usually web frameworks figure out that %s that are not followed by two hex characters simply mean the % itself, but it's still a good idea to always encode % when it's supposed to only mean % and not indicate the start of an encoded character (e.g. %7C) because RFC3986 specifies that % is only valid when followed by two hex characters. So don't use "percentageparam=%" use "percentageparam=%25" instead.
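A short Python demonstration of the +, =/& and % cases above (using the standard urllib.parse helpers; output shown in comments):

from urllib.parse import quote_plus, parse_qs, urlencode

# '+' means an encoded space in a query; a literal plus must become %2B.
print(quote_plus("1 + 1"))              # 1+%2B+1
print(parse_qs("q=1+%2B+1"))            # {'q': ['1 + 1']}

# '=' and '&' inside a value must be encoded; urlencode does it for you.
print(urlencode({"weirdparam": "=&"}))  # weirdparam=%3D%26

# A bare '%' must become %25 so it can't be mistaken for an escape.
print(quote_plus("100%"))               # 100%25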
Encoding guidelines
Encode every character that is otherwise invalid* according to RFC3986 and every character that can have special meaning but should only be interpreted in a literal way without giving it a special meaning. You can also encode things that aren't required to be encoded, like '. Why? Because it doesn't hurt to encode more than necessary. Servers and web frameworks when parsing a query string will decode every encoded character, no matter if it was really necessary to previously encode that character or not.
The only characters of a query string that shouldn't be encoded are those that can have a special meaning and shouldn't lose it, e.g. don't encode the = of "key1=value1". To achieve that, don't apply an encoding method to the whole query string (nor to the whole URI); apply it only, and separately, to the query parameter keys and values. For example, with JS:
var url = "http://example.com?" + encodeURIComponent(myKey1) + "=" + encodeURIComponent(myValue1) + "&" + encodeURIComponent(myKey2)...;
Note that encodeURIComponent encodes a lot more characters than necessary, i.e. characters that are valid in a query string and have no special meaning there, e.g. /, ?, ...
The reason is that encodeURIComponent wasn't created for query strings alone but instead encodes characters that have special meaning outside of the query string as well, e.g. / for the path URI component. QueryHelpers.AddQueryString works in a similar manner. Under the hood it uses System.Text.Encodings.Web.DefaultUrlEncoder which is not just meant for query strings but also for isegment, ipath-noscheme and ifragment.
* You could probably get away with only regarding as invalid those characters that are both not allowed by the RFC and also always encoded by Chrome, for instance. That would be space, ", <, and >. But it's probably better to be on the safe side and encode at least everything that RFC 3986 considers invalid.
OP's questions
My first question is simply, is encoding commas really necessary? -> No, it's not necessary, but it doesn't hurt (except for ugliness); it will happen anyway with default encoding methods such as encodeURIComponent, and decoding and query string parsing should work regardless.
My second question, is this a valid URL (page.html?q=1;q=2;q=3)? -> It's RFC-valid, but your server / web framework might have a hard time parsing the query string if it expects the typical "key1=value1&key2=value2" format.
Where is specification for this? -> There isn't a single specification that covers everything because some things are implementation specific. For instance there are different ways of specifying arrays inside of query strings.
Just use ?q=1+2+3
I am answering here a fourth question :) that wasn't asked but is what this all started with: how do I pass a list of numbers as comma-separated values? It seems to me the best approach is just to pass them space-separated, where the spaces get url-form-encoded to +. Works great, as long as you know the values in the list contain no spaces (something numbers tend not to).
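A sketch of that round trip in Python (the + encoding is applied by the standard form-encoding helper):

from urllib.parse import urlencode, parse_qs

print(urlencode({"q": "1 2 3"}))             # q=1+2+3
print(parse_qs("q=1+2+3")["q"][0].split())   # ['1', '2', '3']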
page.html?q=1;q=2;q=3
is this a valid URL?
Yes. The ; is reserved here, but not by an RFC. The context that defines this component is the definition of the application/x-www-form-urlencoded media type, which is part of the HTML standard (section 17.13.4.1), and in particular the sneaky note hidden away in section B.2.2:
We recommend that HTTP server implementors, and in particular, CGI implementors support the use of ";" in place of "&" to save authors the trouble of escaping "&" characters in this manner.
Unfortunately many popular server-side scripting frameworks including ASP.NET do not support this usage.
I would like to note that page.html?q=1&q=2&q=3 is a valid url as well. This is a completely legitimate way of expressing an array in a query string. Your server technology will determine how exactly that is presented.
In Classic ASP, you check Request.QueryString("q").Count and then use Request.QueryString("q")(0) (and (1) and (2)).
Note that you saw this in your ASP.NET, too (I think it was not intended, but look):
Default.aspx?a=1;a=2&b=1&a=3
Request.QueryString["a"] = "1;a=2,3"
Request.QueryString["b"] = "1"
Notice that the semicolon is not treated as a separator, so a is defined twice, and you get its value twice, separated by a comma. Using all ampersands, Default.aspx?a=1&a=2&b=1&a=3 will yield a as "1,2,3". But there is a method to get each individual element, in case the elements themselves contain commas: Request.QueryString.GetValues("a") returns the separate values. It is simply the default property of the non-indexed QueryString that concatenates the sub-values together with comma separators.
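For comparison, Python's standard query-string parser shows the same repeated-key behavior, returning a real list instead of a comma-joined string (recent Python versions split only on & by default, so the semicolon stays in the data):

from urllib.parse import parse_qs

print(parse_qs("a=1;a=2&b=1&a=3"))  # {'a': ['1;a=2', '3'], 'b': ['1']}
print(parse_qs("a=1&a=2&b=1&a=3"))  # {'a': ['1', '2', '3'], 'b': ['1']}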
I had the same issue. The hyperlinked URL was a third-party URL that expected a list of parameters in the format page.html?q=1,2,3 ONLY, and the URL page.html?q=1%2C2%2C3 did not work. I was able to get it working using JavaScript. It may not be the best approach, but you can check out the solution here if it helps anyone.
If you are sending the ENCODED characters to a Flash/SWF file, then you should ENCODE the characters twice!! (because of the Flash parser)
