Grammar:
rule: (a b)? a c ;
Input:
a d
Question: which error message is correct at position 2 for the given input?
1. expected "b", "c".
2. expected "c".
P.S.
I am writing a parser and I face a dilemma: should the parser report "b" as expected at that position or not?
Error #1 (expected "b", "c") says that the input "a b" is expected, but because it is optional it is not strictly expected, merely possible.
I do not know whether "possible" should count as "expected".
Which error message is better and correct, #1 or #2?
Thanks for any answers.
P.S.
In the first case I define the testing marker as a position limit:
// record a failure only when it occurs past the "testing" limit
if(_inputPos > testing) {
    _failure(_inputPos, _code[cp + {{OFFSET_RESULT}}]);
}
The limit is moved inside optional expressions:
OPTIONAL_EXPRESSION:
    testing = _inputPos;  // move the limit to the start of the optional part
The "b" expression move _inputPos above the testing pos and add failure at _inputPos.
In the second case I define the testing marker as a boolean flag:
// record a failure only when we are not inside a tested (optional) branch
if(!testing) {
    _failure(_inputPos, _code[cp + {{OFFSET_RESULT}}]);
}
The "b" expression in this case not add failure because it tested (inner for optional expression).
What you think what is better and correct?
Testing defined as specific position and if expression above this position (_inputPos > testing) it add failure (even it inside optional expression).
Testing defined as flag and if this flag set that the failures not takes into account. After executing optional expression it restore (not reset!) previous value of testing (true or false).
Also failures not takes into account if rule not fails. They only reported if parsing fails.
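To make the two variants concrete, here is a minimal sketch in Python (the Parser class and every name in it are hypothetical illustrations of the two mechanisms, not my actual generated parser):

class Parser:
    """Sketch of both failure-suppression variants for rule: (a b)? a c"""

    def __init__(self, text, use_flag):
        self.text = text
        self.pos = 0
        self.failures = []            # (position, expected) pairs
        self.use_flag = use_flag      # False: position limit, True: boolean flag
        self.testing_pos = -1         # variant 1: limit of position
        self.testing_flag = False     # variant 2: "testing" flag

    def fail(self, expected):
        # Variant 1 records a failure only past the testing limit;
        # variant 2 records it only when the flag is not set.
        if self.use_flag:
            if not self.testing_flag:
                self.failures.append((self.pos, expected))
        elif self.pos > self.testing_pos:
            self.failures.append((self.pos, expected))
        return False

    def term(self, s):
        if self.text.startswith(s, self.pos):
            self.pos += len(s)
            return True
        return self.fail(s)

    def optional(self, inner):
        saved_pos, saved_limit, saved_flag = self.pos, self.testing_pos, self.testing_flag
        self.testing_pos = self.pos   # variant 1: move the limit
        self.testing_flag = True      # variant 2: set the flag
        if not inner():
            self.pos = saved_pos      # an optional expression never fails
        self.testing_pos = saved_limit    # restore (not reset!) the previous
        self.testing_flag = saved_flag    # values after the optional part
        return True

def rule(p):                          # rule: (a b)? a c
    p.optional(lambda: p.term('a') and p.term('b'))
    return p.term('a') and p.term('c')

for use_flag in (False, True):
    p = Parser('ad', use_flag)        # the input "a d", spaces omitted
    rule(p)
    print(p.failures)

The position-limit variant prints [(1, 'b'), (1, 'c')] (error #1); the flag variant prints [(1, 'c')] (error #2).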
P.S.
Edit (06 Jan 2014):
This question arose because it is related to two different problems.
First problem:
A parsing expression grammar (PEG) describes only three atomic items of input:
terminal symbol
nonterminal symbol
empty string
Such a grammar provides no operation for lexical preprocessing, and thus no such element as a token.
Second problem:
What is a grammar? Can two grammars be considered equal if they accept the same input but produce different results?
Assume we have two grammars:
Grammar 1
rule <- type? identifier
Grammar 2
rule <- type identifier / identifier
They both accept the same input but (in a PEG) produce different results.
Grammar 1 results:
{type : type, identifier : identifier}
{type : null, identifier : identifier}
Grammar 2 results:
{type : type, identifier : identifier}
{identifier : identifier}
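A rough sketch of the difference in Python (parse1 and parse2 are hypothetical hand-written illustrations, only meant to show the shape of the two result sets):

import re

WORD = re.compile(r'\s*([A-Za-z_]\w*)')

def word(s, pos):
    m = WORD.match(s, pos)
    return (m.group(1), m.end()) if m else (None, pos)

def parse1(s):
    # Grammar 1: rule <- type? identifier
    # The optional "type" slot is always present in the result.
    first, pos = word(s, 0)
    second, _ = word(s, pos)
    if second is None:
        return {'type': None, 'identifier': first}
    return {'type': first, 'identifier': second}

def parse2(s):
    # Grammar 2: rule <- type identifier / identifier
    # The second alternative yields a result without a "type" key at all.
    first, pos = word(s, 0)
    second, _ = word(s, pos)
    if second is None:
        return {'identifier': first}
    return {'type': first, 'identifier': second}

print(parse1('int x'), parse1('x'))
# {'type': 'int', 'identifier': 'x'} {'type': None, 'identifier': 'x'}
print(parse2('int x'), parse2('x'))
# {'type': 'int', 'identifier': 'x'} {'identifier': 'x'}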
Questions:
Are both grammars equal?
Is it painless to optimize grammars?
My answer to both questions is negative: not equal, and not painless.
But you may ask: "Why does this happen?"
My answer: "Because this is not a problem. This is a feature."
In a PEG parser, an expression ALWAYS consists of these parts:
ORDERED_CHOICE => SEQUENCE => EXPRESSION
And this explanation is my answer to the question "Why does this happen?"
Another problem:
A PEG parser does not recognize WHITESPACE specially, because it has no tokens and no token separators.
Now look at this grammar (abbreviated):
program <- WHITESPACE expr EOF
expr <- ruleX
ruleX <- 'X' WHITESPACE
WHITESPACE < ' '?
EOF <- ! .
Every PEG grammar is written in this manner: the first WHITESPACE at the beginning, and other WHITESPACE (often) at the end of each rule.
In this style, the optional WHITESPACE must be treated as expected input.
But WHITESPACE does not mean only a space. It may be more complex ([\t\n\r]) and may even include comments.
The main rule of error messages is the following: if it is not possible to display all expected elements (or at least one element from the full expected set), then it is more correct to display nothing at all.
More precisely, an "unexpected" error message should be displayed instead.
How would you display an expected WHITESPACE in a PEG?
Parser error: expected WHITESPACE
Parser error: expected ' ', '\t', '\n', '\r'
What about the start characters of comments? They may also be part of WHITESPACE in some grammars.
In that case an optional WHITESPACE would crowd out all other potential expected elements, because WHITESPACE is too complex to display correctly in an error message.
Is this good or bad?
I think this is not bad, but it requires some tricks to hide this nature of PEG parsers.
So in my PEG parser I do not treat the inner expression at the first position of an optional (optional & zero_or_more) expression as expected.
But all other inner expressions (those past the first position) are treated as expected.
Example 1:
List<int list; // type? ident
Here "List<int" is a "type". But missing ">" is not at the first position in optional "type?".
This failure take into account and report as "expected '>'"
This is because we not skip "type" but enter into "type" and after really optional "List" we move position from first to next real "expected" (that already outside of testing position) element.
"List" was in "testing" position.
If inner expression (inside optional expression) "fits in the limitation" not continue at next position then it not assumed as the expected input.
From this assumption has been asked main question.
You must just take into account that we are talking about PEG parsers and their error messages.
Looking at your grammar, what is clear is that after the first a there are two possible inputs: b or c. Your error message should not prioritize one over the other.
The basic idea when producing an error message for an invalid input is to find the farthest place you reached before failing (if your grammar were d | (a b)? a c, d would not be part of the error), determine all the possible inputs that could have made you advance, and say "expected '...' but got '...'". There are other approaches that try to recover the parser and force it to continue: if there is only one possible expected token, temporarily insert it into the token stream and continue as if it had been there all along. This leads to better error detection, as you can find errors beyond the point where the parser first stopped.
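As a minimal sketch of that "farthest failure" idea (hypothetical Python, not taken from any particular parser): record every failed terminal together with its position, and on overall failure report only the expectations collected at the farthest position.

text = 'ad'                        # the question's input "a d", spaces omitted
farthest = {'pos': -1, 'expected': set()}

def term(pos, tok):
    # Match the literal tok at pos; on failure record what was expected there.
    if pos is not None and text.startswith(tok, pos):
        return pos + len(tok)
    if pos is not None:
        if pos > farthest['pos']:
            farthest['pos'], farthest['expected'] = pos, set()
        if pos == farthest['pos']:
            farthest['expected'].add(tok)
    return None

def rule(pos):                     # rule: (a b)? a c
    p = term(pos, 'a')             # the optional part (a b)
    q = term(p, 'b') if p is not None else None
    if q is not None:
        pos = q
    return term(term(pos, 'a'), 'c')

if rule(0) is None:
    got = text[farthest['pos']] if farthest['pos'] < len(text) else 'end of input'
    exp = ', '.join("'%s'" % t for t in sorted(farthest['expected']))
    print("expected %s but got '%s'" % (exp, got))
# prints: expected 'b', 'c' but got 'd'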
How do I enable folding on SystemVerilog keywords in Gvim?
For example
function
Code
....
....
endfunction
I would like Gvim to create a fold from function to endfunction. How do I do that?
Here is a custom 'foldexpr' that should do what you want. It starts a fold on the line following each "function", ends it on the line preceding each "endfunction", and otherwise inherits the fold level of the previous line.
function! VimFunctionFoldExpr()
    if getline(v:lnum-1) =~ '^\s*function'
        " the previous line starts a function: open a fold at this line
        return '>1'
    elseif getline(v:lnum+1) =~ '^\s*endfunction'
        " the next line ends the function: close the fold at this line
        return '<1'
    else
        " otherwise inherit the fold level of the previous line
        return '='
    endif
endfunction
To tell Vim to use this function, set the following:
set foldmethod=expr
set foldexpr=VimFunctionFoldExpr()
You might also want to tweak your 'foldtext' setting so that it respects the indent level. There is an SE question about how to do that.
I found this nice snippet for parsing Lisp in Prolog (from here):
ws --> [W], { code_type(W, space) }, ws.
ws --> [].
parse(String, Expr) :- phrase(expressions(Expr), String).
expressions([E|Es]) -->
ws, expression(E), ws,
!, % single solution: longest input match
expressions(Es).
expressions([]) --> [].
% A number N is represented as n(N), a symbol S as s(S).
expression(s(A)) --> symbol(Cs), { atom_codes(A, Cs) }.
expression(n(N)) --> number(Cs), { number_codes(N, Cs) }.
expression(List) --> "(", expressions(List), ")".
expression([s(quote),Q]) --> "'", expression(Q).
number([D|Ds]) --> digit(D), number(Ds).
number([D]) --> digit(D).
digit(D) --> [D], { code_type(D, digit) }.
symbol([A|As]) -->
[A],
{ memberchk(A, "+/-*><=") ; code_type(A, alpha) },
symbolr(As).
symbolr([A|As]) -->
[A],
{ memberchk(A, "+/-*><=") ; code_type(A, alnum) },
symbolr(As).
symbolr([]) --> [].
However, expressions//1 uses a cut. I assume this is for efficiency. Is it possible to write this code so that it works efficiently without the cut?
I would also be interested in answers that involve Mercury's soft cut / committed choice.
The cut is not used for efficiency, but to commit to the first solution (see the comment next to the !/0: "single solution: longest input match"). If you comment out the !/0, you get for example:
?- parse("abc", E).
E = [s(abc)] ;
E = [s(ab), s(c)] ;
E = [s(a), s(bc)] ;
E = [s(a), s(b), s(c)] ;
false.
It is clear that only the first solution, consisting of the longest sequence of characters that form a token, is desired in such cases. Given the example above, I therefore disagree with "false": expression//1 is ambiguous, because number//1 and symbolr//1 are. In Mercury, you could use the determinism declaration cc_nondet to commit to a solution, if any.
You are touching a quite deep problem here. At the place of the cut you have added the comment "longest input match". But what you actually did was to commit to the first solution, which will produce the "longest input match" for the non-terminal ws//0, but not necessarily for expression//1.
Many programming languages define their tokens based on the longest input match. This often leads to very strange effects. For example, a number may be immediately followed by a letter in many programming languages. That's the case for Pascal, Haskell, Prolog and many other languages. E.g. if a>2then 1 else 2 is valid Haskell.
Valid Prolog: X is 2mod 3.
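A quick way to see the effect of maximal munch, sketched in Python (a generic illustration, not how any of those languages' lexers are actually implemented):

import re

# Each alternative consumes as many characters as it can, so a number
# token simply stops at the first non-digit. That is why "2mod" needs
# no whitespace: it lexes as the number 2 followed by the name mod.
TOKEN = re.compile(r'\d+|[A-Za-z_]\w*|\S')

print(TOKEN.findall('X is 2mod 3'))
# ['X', 'is', '2', 'mod', '3']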
Given that, it might be a good idea to define a programming language such that it does not depend on such features at all.
Of course, you would then like to optimize the grammar. But I can only recommend starting with a definition that is unambiguous in the first place.
As for efficiency (and purity):
eos([],[]).
nows --> call(eos).
nows, [W] --> [W], { code_type(W, nospace) }.
ws --> nows.
ws --> [W], {code_type(W, space)}, ws.
You could use a construct that has already found its place in Parsing Expression Grammars (PEGs) and is also available in DCGs: the negation of a DCG goal. In PEGs, the exclamation mark (!) with an argument is used for negation, i.e. ! e. In DCGs, the negation of a DCG goal is expressed by the (\+) operator, which is already used for ordinary negation as failure in ordinary Prolog clauses and queries.
So let's first explain how (\+) works in DCGs. If you have a production rule of the form:
A --> B, \+C, D.
Then this is translated to:
A(I,O) :- B(I,X), \+ C(X,_), D(X,O).
This means an attempt is made to parse the C DCG goal, but without actually consuming the input list. Now this can be used to replace the cut, if desired, and it gives a little bit more declarative feeling. To explain the idea, let's assume that we have a grammar without ws//0. The original clause set of expressions//1 would then be:
expressions([E|Es]) --> expression(E), !, expressions(Es).
expressions([]) --> [].
With negation we can turn this into the following cut-less form:
expressions([E|Es]) --> expression(E), expressions(Es).
expressions([]) --> \+ expression(_).
Unfortunately the above variant is quite inefficient, since an attempt to parse an expression is made twice: once in the first rule, and then again in the second rule for the negation. But you could do the following and only check for the negation of the beginning of an expression:
expressions([E|Es]) --> expression(E), expressions(Es).
expressions([]) --> \+ symbol(_), \+ number(_), \+ "(", \+ "'".
If you try negation, you will see that you get a relatively strict parser. This is important if you try to parse a maximum prefix of the input and want to detect some errors. Try this:
?- phrase(expressions(X),"'",Y).
You should get failure in the negation version, which checks the first symbol of the expression. In the cut version and in the cut-free version you will get success with the empty list as a result.
But you could also deal with errors in another way; I only made the error example to highlight a little how the negation version works.
In other settings, for example a CYK parser, the negation can be made quite efficient: it can use the information that is already placed in the chart.
I am curious why the comma ‹,› is a shortcut for and and not andalso in guard tests.
Since I'd call myself a “C native”, I fail to see any shortcomings of short-circuit boolean evaluation.
I compiled some test code using the to_core flag to see what code is actually generated. Using the comma, I see that the left-hand value and the right-hand value both get evaluated and then and'ed together. With andalso you get a case block within the case block and no call to erlang:and/2.
I did no benchmark tests but I daresay the andalso variant is the faster one.
To delve into the past:
Originally, guards were only , separated tests, which were evaluated from left to right until either there were no more (and the guard succeeded) or a test failed (and the guard as a whole failed). Later, ; was added to allow alternative guards in the same clause. If guards evaluated both sides of a , before testing, then someone got it wrong along the way. @Kay's example seems to imply that they do go from left to right, as they should.
Boolean operators were only allowed much later in guards.
and, together with or, xor and not, is a boolean operator and was not intended for control. They are all strict and evaluate their arguments first, like the arithmetic operators +, -, * and '/'. There exist strict boolean operators in C as well.
The short-circuiting control operators andalso and orelse were added later to simplify some code. As you have said the compiler does expand them to nested case expressions so there is no performance gain in using them, just convenience and clarity of code. This would explain the resultant code you saw.
N.B. in guards there are tests, not expressions. There is a subtle difference which means that while and and andalso are equivalent to the comma, orelse is not equivalent to the semicolon. This is left to another question. Hint: it's all about failure.
So both and and andalso have their place.
Adam Lindberg's link is right. Using the comma does generate better BEAM code than using andalso. I compiled the following code using the +to_asm flag:
a(A, B) ->
    case ok of
        _ when A, B -> true;
        _ -> false
    end.

aa(A, B) ->
    case ok of
        _ when A andalso B -> true;
        _ -> false
    end.
which generates
{function, a, 2, 2}.
{label,1}.
{func_info,{atom,andAndAndalso},{atom,a},2}.
{label,2}.
{test,is_eq_exact,{f,3},[{x,0},{atom,true}]}.
{test,is_eq_exact,{f,3},[{x,1},{atom,true}]}.
{move,{atom,true},{x,0}}.
return.
{label,3}.
{move,{atom,false},{x,0}}.
return.
{function, aa, 2, 5}.
{label,4}.
{func_info,{atom,andAndAndalso},{atom,aa},2}.
{label,5}.
{test,is_atom,{f,7},[{x,0}]}.
{select_val,{x,0},{f,7},{list,[{atom,true},{f,6},{atom,false},{f,9}]}}.
{label,6}.
{move,{x,1},{x,2}}.
{jump,{f,8}}.
{label,7}.
{move,{x,0},{x,2}}.
{label,8}.
{test,is_eq_exact,{f,9},[{x,2},{atom,true}]}.
{move,{atom,true},{x,0}}.
return.
{label,9}.
{move,{atom,false},{x,0}}.
return.
I had only looked into what is generated with the +to_core flag, but obviously there is an optimization step between to_core and to_asm.
It's an historical reason. and was implemented before andalso, which was introduced in Erlang 5.1 (the only reference I can find right now is EEP-17). Guards have not been changed because of backwards compatibility.
The boolean operators and and or always evaluate the arguments on both sides of the operator. If you want the behaviour of the C operators && and || (where the second argument is evaluated only if needed; e.g. when evaluating true orelse false, the second argument is not evaluated once the first argument is found to be true, which is not the case had or been used), go for andalso and orelse.
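As an analogy only (Python shown here, not Erlang semantics), the difference between a short-circuit operator and a strict one looks like this:

def boom():
    raise RuntimeError('right-hand side was evaluated')

# Short-circuit, like andalso/orelse: the right side is skipped once
# the left side already determines the result.
print(True or boom())      # prints True; boom() never runs

# Strict, like and/or: both operands are evaluated before the operator
# is applied.
try:
    True | boom()          # boom() raises while evaluating the arguments
except RuntimeError as e:
    print(e)               # right-hand side was evaluated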
I've started looking into REBOL, just for fun, and as a fan of programming languages, I really like seeing new ideas and even just alternative syntaxes. REBOL is definitely full of these. One thing I noticed is the use of '/' as the path operator which can be used similarly to the '.' operator in most object-oriented programming languages. I have not programmed in REBOL extensively, just looked at some examples and read some documentation, but it isn't clear to me why there's no ambiguity with the '/' operator.
x: 4
y: 2
result: x/y
In my example, this should be division, but it seems like it could just as easily be the path operator if x were an object or function refinement. How does REBOL handle the ambiguity? Is it just a matter of an overloaded operator and the type system so it doesn't know until runtime? Or is it something I'm missing in the grammar and there really is a difference?
UPDATE: I found a good piece of example code:
sp: to-integer (100 * 2 * length? buf) / d/3 / 1024 / 1024
It appears that arithmetic division requires whitespace, while the path operator requires no whitespace. Is that it?
This question deserves an answer from the syntactic point of view. In Rebol, there is in fact no "path operator". The x/y is a syntactic element called a path. By contrast, the standalone / (delimited by spaces) is not a path; it is a word (which is usually interpreted as the division operator). In Rebol you can examine syntactic elements like this:
length? code: [x/y x / y] ; == 4
type? first code ; == path!
type? second code ; == word!
, etc.
The code guide says:
White-space is used in general for delimiting (for separating symbols).
This is especially important because words may contain characters such as + and -.
http://www.rebol.com/r3/docs/guide/code-syntax.html
One acquired skill of being a REBOler is to get the hang of inserting whitespace in expressions where other languages usually do not require it :)
Spaces are generally needed in Rebol, but there are exceptions here and there for "special" characters, such as those delimiting series. For instance:
[a b c] is the same as [ a b c ]
(a b c) is the same as ( a b c )
[a b c]def is the same as [a b c] def
Some fairly powerful tools for doing introspection of syntactic elements are type?, quote, and probe. The quote operator prevents the interpreter from giving behavior to things. So if you tried something like:
>> data: [x [y 10]]
>> type? data/x/y
== integer!
>> probe data/x/y
10
The "live" nature of the code would dig through the path and give you an integer! of value 10. But if you use quote:
>> data: [x [y 10]]
>> type? quote data/x/y
== path!
>> probe quote data/x/y
data/x/y
Then you wind up with a path! whose value is simply data/x/y; it never gets evaluated.
In the internal representation, a PATH! is quite similar to a BLOCK! or a PAREN!. It just has this special distinctive lexical type, which allows it to be treated differently. Although you've noticed that it can behave like a "dot" by picking members out of an object or series, that is only how it is used by the DO dialect. You could invent your own ideas, let's say you make the "russell" command:
russell [
    x: 10
    y: 20
    z: 30
    x/y/z
    (
        print x
        print y
        print z
    )
]
Imagine that in my fanciful example, this outputs 30, 10, 20...because what the russell function does is evaluate its block in such a way that a path is treated as an instruction to shift values. So x/y/z means x=>y, y=>z, and z=>x. Then any code in parentheses is run in the DO dialect. Assignments are treated normally.
When you want to make up a fun new riff on how to express yourself, Rebol takes care of a lot of the grunt work. So for example the parentheses are guaranteed to have matched up to get a paren!. You don't have to go looking for all that yourself, you just build your dialect up from the building blocks of all those different types...and hook into existing behaviors (such as the DO dialect for basics like math and general computation, and the mind-bending PARSE dialect for some rather amazing pattern matching muscle).
But speaking of "all those different types", there's yet another weirdo situation for slash that can create another type:
>> type? quote /foo
== refinement!
This is called a refinement!, and happens when you start a lexical element with a slash. You'll see it used in the DO dialect to call out optional parameter sets to a function. But once again, it's just another symbolic LEGO in the parts box. You can ascribe meaning to it in your own dialects that is completely different...
While I didn't find a definitive written clarification, I did find that +, -, * and others are valid characters in a word, so a space is clearly required.
x*y
is a valid identifier, while
x * y
performs multiplication. It looks like the path syntax is just another case of this.