Poker hand range parser ... how do I write the grammar?

I'd like to build a poker hand-range parser, whereby I can provide a string such as the following (assume a standard 52-card deck, ranks 2-A, s = suited, o = offsuit):
"22+,A2s+,AKo-ATo,7d6d"
The parser should be able to produce the following combinations:
6 combinations for each of 22, 33, 44, 55, 66, 77, 88, 99, TT, JJ, QQ, KK, AA
4 combinations for each of A2s, A3s, A4s, A5s, A6s, A7s, A8s, A9s, ATs, AJs, AQs, AKs
12 combinations for each of ATo, AJo, AQo, AKo
1 combination of 7(diamonds)6(diamonds)
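For reference, the combination counts above follow from simple suit arithmetic: C(4,2) = 6 suit pairs for a pocket pair, 4 suited combos, and 4×4 − 4 = 12 offsuit combos. A quick illustrative sketch in Python (the function and names are mine, not part of the question):

```python
from itertools import combinations, product

SUITS = "cdhs"

def combos(rank1, rank2, kind):
    """Enumerate concrete two-card combos for a hand class.
    kind is 'pair', 'suited', 'offsuit', or 'exact'."""
    if kind == "pair":        # choose 2 of the 4 suits: C(4,2) = 6
        return [(rank1 + a, rank1 + b) for a, b in combinations(SUITS, 2)]
    if kind == "suited":      # the same suit for both cards: 4
        return [(rank1 + s, rank2 + s) for s in SUITS]
    if kind == "offsuit":     # differing suits: 4*4 - 4 = 12
        return [(rank1 + a, rank2 + b)
                for a, b in product(SUITS, SUITS) if a != b]
    return [(rank1, rank2)]   # an exact combo like 7d6d: 1
```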
I think I know parts of the grammar, but not all of it:
NM+ --> NM, N[M+1], ..., N[N-1]
NN+ --> NN, [N+1][N+1], ..., TT where T is the top rank of the deck (e.g. Ace)
NP-NM --> NM, N[M+1], ..., NP
MM-NN --> NN, [N+1][N+1], ..., MM
I don't know the expression for the grammar for dealing with suitedness.
I'm a programming newbie, so forgive this basic question: is this a grammar induction problem or a parsing problem?
Thanks,
Mike

Well, you should probably look at EBNF to express your grammar in a widely accepted manner.
I think it would look something like this:
S = Combination { ',' Combination } .
Combination = Hand ['+' | '-' Hand] .
Hand = Card Card ["s" | "o"] .
Card = rank [ color ] .
Where {} means 0 or more occurrences, [] means 0 or 1 occurrence, and | means either what's left of | or what's right of |.
So basically what this comes down to is a start symbol (S) that says that the parser has to handle from 1 to any number of combinations that are all separated by a ",".
These combinations consist of a hand description, followed optionally by either a "+", or a "-" and another hand description.
A card description consists of rank and optionally a color (spades, hearts, etc.). The fact that rank and color aren't capitalized shows that they can't be further divided into subparts (making them a terminal class).
My example doesn't fully handle the offsuit/suited possibility, mainly because in your examples the o/s sometimes comes at the very end ("AKo-ATo") and sometimes in the middle ("A2s+").
Are these examples your own creation or are they given to you from an external source (read: you can't change them)?
If you can change them I would strongly recommend placing those at one specified position of a combination (for example at the end) to make creating the grammar and ultimately the parsing a lot easier.
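To make the answer's point concrete, here is a rough Python sketch of the EBNF above. The regex and names are my own; it assumes the "s"/"o" marker always sits at the end of a hand, as recommended, and that a fully specified hand like "7d6d" gives each card an explicit suit:

```python
import re

# A hand is either two bare ranks plus an optional "s"/"o" marker,
# or two fully-suited cards (rank + suit each), per the EBNF sketch.
HAND = r"(?:[2-9TJQKA]{2}[so]?|(?:[2-9TJQKA][cdhs]){2})"
COMBO = re.compile(r"^(?P<h1>" + HAND + r")(?:(?P<plus>\+)|-(?P<h2>" + HAND + r"))?$")

def parse_range(spec):
    """Split on ',' and parse each combination into (hand, op, hand2)."""
    parsed = []
    for part in spec.split(","):
        m = COMBO.match(part)
        if not m:
            raise ValueError("bad combination: " + part)
        op = "+" if m.group("plus") else ("-" if m.group("h2") else None)
        parsed.append((m.group("h1"), op, m.group("h2")))
    return parsed
```

This only recognizes the structure; expanding each parsed combination into concrete card pairs would be a separate step.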

How to use context free grammars?

Could someone help me with using context-free grammars? Up until now I've used regular expressions to remove comments, block comments and empty lines from a string so that it can be used to count the PLOC. This seems to be extremely slow, so I was looking for a different, more efficient method.
I saw the following post: What is the best way to ignore comments in a java file with Rascal?
I have no idea how to use this, and the help doesn't get me very far either. When I try to define the line used in the post I immediately get an error.
lexical SingleLineComment = "//" ~[\n] "\n";
Could someone help me out with this and also explain a bit about how to set up such a context-free grammar and then actually extract the wanted data?
Kind regards,
Bob
First, this will help: the ~ is not part of Rascal's CFG notation; the negation of a character class is written like so: ![\n].
Using a context-free grammar in Rascal takes three steps:
Write it, like for example the syntax definition of the Func language here: http://docs.rascal-mpl.org/unstable/Recipes/#Languages-Func
Use it to parse input, like so:
// This is the basic parse command, but be careful it will not accept spaces and newlines before and after the TopNonTerminal text:
Prog myParseTree = parse(#Prog, "example string");
// you can do the same directly to an input file:
Prog myParseTree = parse(#TopNonTerminal, |home:///myProgram.func|);
// if you need to accept layout before and after the program, use a "start nonterminal":
start[Prog] myParseTree = parse(#start[TopNonTerminal], |home:///myProgram.func|);
Prog myProgram = myParseTree.top;
// shorthand for parsing stuff:
myProgram = [Prog] "example";
myProgram = [Prog] |home:///myLocation.txt|;
Once you have the tree you can start using visit and / (deep match) to extract information from it, or write recursive functions if you like. Examples can be found here: http://docs.rascal-mpl.org/unstable/Recipes/#Languages-Func, but here are some common idioms as well to extract information from a parse tree:
// produces the source location of each node in the tree:
myParseTree#\loc
// produces a set of all nodes of type Stat
{ s | /Stat s := myParseTree }
// pattern match an if-then-else and bind the three expressions and collect them in a set:
{ e1, e2, e3 | (Stat) `if <Exp e1> then <Exp e2> else <Exp e3> end` <- myExpressionList }
// collect all locations of all sub-trees (every parse tree is of a non-terminal type, which is a sub-type of Tree). It uses |unknown:///| for small sub-trees which have not been annotated for efficiency's sake, like literals and character classes:
[ t#\loc?|unknown:///| | /Tree t := myParseTree ]
That should give you a start. I'd go try out some stuff and look at more examples. Writing a grammar is a nice thing to do, but it does require some trial and error, like writing a regex, only more so.
For the grammar you might be writing, which finds source code comments but leaves the rest as "any character", you will need to use longest-match disambiguation a lot:
lexical Identifier = [a-z]+ !>> [a-z]; // means do not accept an Identifier if there is still [a-z] to add to it; so only the longest possible Identifier will match.
This kind of context-free grammar is called an "Island Grammar" metaphorically, because you will write precise rules for the parts you want to recognize (the comments are "Islands") while leaving the rest as everything else (the rest is "Water"). See https://dl.acm.org/citation.cfm?id=837160
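As an illustration of the island-grammar idea outside of Rascal, here is a hand-rolled Python sketch: the comments are the "islands" modelled precisely, everything else is "water" copied through untouched. The handling of string literals is a simplifying assumption of mine (real languages have more lexical cases):

```python
def strip_comments(src):
    """Island-grammar style scan: model // and /* */ comments precisely,
    copy everything else through. String literals are treated as water
    and copied verbatim so that "//" inside a string is not an island."""
    out, i, n = [], 0, len(src)
    while i < n:
        c = src[i]
        if c == '"':                    # water: copy a string literal verbatim
            j = i + 1
            while j < n and src[j] != '"':
                j += 2 if src[j] == '\\' else 1
            out.append(src[i:j + 1])
            i = j + 1
        elif src.startswith("//", i):   # island: line comment
            j = src.find("\n", i)
            i = n if j == -1 else j     # keep the newline itself
        elif src.startswith("/*", i):   # island: block comment
            j = src.find("*/", i + 2)
            i = n if j == -1 else j + 2
        else:
            out.append(c)
            i += 1
    return "".join(out)
```

After stripping, counting the remaining non-blank lines gives a simple PLOC estimate.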

bison and grammar: replaying the parse stack

I have not messed with building languages or parsers in a formal way since grad school and have forgotten most of what I knew back then. I now have a project that might benefit from such a thing but I'm not sure how to approach the following situation.
Let's say that in the language I want to parse there is a token that means "generate a random floating point number" in an expression.
exp: NUMBER
        { $$ = $1; }
   | NUMBER PLUS exp
        { $$ = $1 + $3; }
   | R PLUS exp
        { $$ = random() + $3; }
   ;
I also want a "list" generating operator that will reevaluate an "exp" some number of times. Maybe like:
listExp: NUMBER COLON exp
        {
            for (int i = 0; i < $1; i++) {
                print $3;
            }
        }
    ;
The problem I see is that "exp" will have already been reduced by the time the loop starts. If I have the input
2 : R + 2
then I think the random number will be generated as the "exp" is parsed and 2 added to it; let's say the result is 2.0055. Then in the list expression, I think 2.0055 would be printed out twice.
Is there a way to mark the "exp" before evaluation and then parse it as many times as the list loop count requires? The idea being to get a different random number in each evaluation.
Your best bet is to build an AST and evaluate the entire AST at the end of the parse. In-line evaluation is only possible for very simple (i.e. "calculator-like") projects.
Instead of an AST, you could construct code for a stack- or three-address- virtual machine. That's generally more efficient, particularly if you intend to execute the code frequently, but the AST is a lot simpler to construct, and executing it is a single depth-first scan.
Depending on your language design there are at least 5 different points at which a token in the language could be bound to a value. They are:
1. Pre-processor (like C #define)
2. Lexer: recognise tokens
3. Parser: recognise token structure, output AST
4. Semantic analysis: analyse AST, assign types and conversions etc.
5. Code generation: output executable code or execute code directly.
If you have a token that can occur multiple times and you want to assign it a different random value each time, then phase 4 is the place to do it. If you generate an AST, walk the tree and assign the values. If you go straight to code generation (or an interpreter) do it then.
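To illustrate the AST approach with deferred evaluation, here is a minimal sketch (Python; the class names are illustrative, not anything bison produces). The R node draws a fresh random number each time it is evaluated, not when it is parsed, so the list operator gets a different value on each pass:

```python
import random

# Minimal AST: numbers, binary '+', and an R node whose value is
# drawn at evaluation time rather than at parse time.
class Num:
    def __init__(self, v): self.v = v
    def eval(self): return self.v

class Add:
    def __init__(self, l, r): self.l, self.r = l, r
    def eval(self): return self.l.eval() + self.r.eval()

class R:
    def eval(self): return random.random()   # fresh draw on every eval

def list_exp(count, exp):
    """The '2 : R + 2' idea: evaluate the same AST `count` times,
    getting a different random draw on each pass."""
    return [exp.eval() for _ in range(count)]

ast = Add(R(), Num(2))        # corresponds to "R + 2"
results = list_exp(2, ast)    # corresponds to "2 : R + 2"
```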

Parsing an expression in Prolog and returning an abstract syntax

I have to write parse(Tkns, T) that takes in a mathematical expression in the form of a list of tokens and produces T, a term representing the abstract syntax, respecting order of operations and associativity.
For example,
?- parse( [ num(3), plus, num(2), star, num(1) ], T ).
T = add(integer(3), multiply(integer(2), integer(1))) ;
No
I've attempted to implement + and * as follows
parse([num(X)], integer(X)).
parse(Tkns, T) :-
( append(E1, [plus|E2], Tkns),
parse(E1, T1),
parse(E2, T2),
T = add(T1,T2)
; append(E1, [star|E2], Tkns),
parse(E1, T1),
parse(E2, T2),
T = multiply(T1,T2)
).
Which finds the correct answer, but also returns answers that do not follow associativity or order of operations.
ex)
parse( [ num(3), plus, num(2), star, num(1) ], T ).
also returns
mult(add(integer(3), integer(2)), integer(1))
and
parse([num(1), plus, num(2), plus, num(3)], T)
returns the equivalent of 1+2+3 and 1+(2+3) when it should only return the former.
Is there a way I can get this to work?
Edit: more info: I only need to implement +, -, *, /, and negate (-1, -2, etc.), and all numbers are integers. A hint was given that the code will be structured similarly to the grammar
<expression> ::= <expression> + <term>
| <expression> - <term>
| <term>
<term> ::= <term> * <factor>
| <term> / <factor>
| <factor>
<factor> ::= num
| ( <expression> )
Only with negate implemented as well.
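For what it's worth, that grammar maps directly onto a recursive-descent parser once the left recursion is turned into iteration, which also preserves left associativity. An illustrative sketch in Python (the token encoding below is my own simplification of the num(3)/plus/star tokens):

```python
# Recursive-descent sketch of the grammar above. Tokens are tuples
# like ("num", 3), ("plus",), ("star",), ("negate",), ("lparen",), ("rparen",).
def parse(tokens):
    pos = [0]
    def peek():
        return tokens[pos[0]][0] if pos[0] < len(tokens) else None
    def next_tok():
        t = tokens[pos[0]]; pos[0] += 1; return t
    def expression():            # <expression> ::= <term> (('+'|'-') <term>)*
        t = term()
        while peek() in ("plus", "minus"):
            op = next_tok()[0]
            t = ("add" if op == "plus" else "sub", t, term())
        return t
    def term():                  # <term> ::= <factor> (('*'|'/') <factor>)*
        f = factor()
        while peek() in ("star", "div"):
            op = next_tok()[0]
            f = ("multiply" if op == "star" else "divide", f, factor())
        return f
    def factor():                # <factor> ::= num | negate <factor> | '(' <expression> ')'
        if peek() == "negate":
            next_tok(); return ("negate", factor())
        if peek() == "lparen":
            next_tok(); e = expression(); next_tok(); return e
        return ("integer", next_tok()[1])
    return expression()
```

Note how the left-recursive rules become loops, so 1+2+3 groups as (1+2)+3, and * binds tighter than + because term sits below expression.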
Edit2: I found a grammar parser written in Prolog (http://www.cs.sunysb.edu/~warren/xsbbook/node10.html). Is there a way I could modify it to print a left-hand derivation of a grammar ("print" in the sense that the Prolog interpreter will output "T=[the correct answer]")?
Removing left recursion will drive you towards DCG based grammars.
But there is an interesting alternative way: implement bottom up parsing.
How hard is this in Prolog? Well, as Pereira and Shieber show in their wonderful book 'Prolog and Natural-Language Analysis', it can be really easy. From chapter 6.5:
Prolog supplies by default a top-down, left-to-right, backtrack parsing algorithm for DCGs. It is well known that top-down parsing algorithms of this kind will loop on left-recursive rules (cf. the example of Program 2.3). Although techniques are available to remove left recursion from context-free grammars, these techniques are not readily generalizable to DCGs, and furthermore they can increase grammar size by large factors.
As an alternative, we may consider implementing a bottom-up parsing method directly in Prolog. Of the various possibilities, we will consider here the left-corner method in one of its adaptations to DCGs.
For programming convenience, the input grammar for the left-corner DCG interpreter is represented in a slight variation of the DCG notation. The right-hand sides of
rules are given as lists rather than conjunctions of literals. Thus rules are unit clauses
of the form, e.g.,
s ---> [np, vp].
or
optrel ---> [].
Terminals are introduced by dictionary unit clauses of the form word(w,PT).
Consider completing that chapter before proceeding (look up the free book entry by title on the info page).
Now let's try writing a bottom up processor:
:- op(150, xfx, ---> ).

parse(Phrase) -->
    leaf(SubPhrase),
    lc(SubPhrase, Phrase).

leaf(Cat) --> [Word], {word(Word, Cat)}.
leaf(Phrase) --> {Phrase ---> []}.

lc(Phrase, Phrase) --> [].
lc(SubPhrase, SuperPhrase) -->
    {Phrase ---> [SubPhrase|Rest]},
    parse_rest(Rest),
    lc(Phrase, SuperPhrase).

parse_rest([]) --> [].
parse_rest([Phrase|Phrases]) -->
    parse(Phrase),
    parse_rest(Phrases).

% that's all! fairly easy, isn't it?
% here starts the grammar: replace it with your own, don't worry about left recursion
e(sum(L,R)) ---> [e(L), sum, e(R)].
e(num(N)) ---> [num(N)].

word(N, num(N)) :- integer(N).
word(+, sum).
That, for instance, yields:
phrase(parse(P), [1,+,3,+,1]).
P = e(sum(sum(num(1), num(3)), num(1)))
note the left recursive grammar used is e ::= e + e | num
Before fixing your program, look at how you identified the problem! You assumed that a particular sentence will have exactly one syntax tree, but you got two of them. So essentially, Prolog helped you to find the bug!
This is a very useful debugging strategy in Prolog: Look at all the answers.
Next is the specific way you encoded the grammar. In fact, you did something quite smart: you essentially encoded a left-recursive grammar, and nevertheless your program terminates for a list of fixed length! That's because within each recursion you require at least one element in the middle to serve as the operator, so each recursion consumes at least one element. That is fine. However, this strategy is inherently very inefficient: for each application of a rule, it has to consider all possible partitions of the list.
Another disadvantage is that you can no longer generate a sentence from a syntax tree. That is, this query does not terminate:
?- parse(S, add(add(integer(1),integer(2)),integer(3))).
There are two reasons for this: the first is that the goals T = add(...,...) occur too late; simply put them at the beginning, in front of the append/3 goals. But much more interesting is that now append/3 does not terminate. Here is the relevant failure-slice (see the link for more on this):
parse([num(X)], integer(X)) :- false.
parse(Tkns, T) :-
    (   T = add(T1,T2),
        append(E1, [plus|E2], Tkns), false,
        parse(E1, T1),
        parse(E2, T2)
    ;   false,
        T = multiply(T1,T2),
        append(E1, [star|E2], Tkns),
        parse(E1, T1),
        parse(E2, T2)
    ).
@DanielLyons already gave you the "traditional" solution, which requires all kinds of justification from formal languages. But I will stick to the grammar you encoded in your program, which, translated into DCGs, reads:
expr(integer(X)) --> [num(X)].
expr(add(L,R)) --> expr(L), [plus], expr(R).
expr(multiply(L,R)) --> expr(L), [star], expr(R).
When using this grammar with ?- phrase(expr(T),[num(1),plus,num(2),plus,num(3)]). it will not terminate. Here is the relevant slice:
expr(integer(X)) --> {false}, [num(X)].
expr(add(L,R)) --> expr(L), {false}, [plus], expr(R).
expr(multiply(L,R)) --> {false}, expr(L), [star], expr(R).
So it is this tiny part that has to be changed. Note that the rule "knows" that it wants one terminal symbol, alas, the terminal appears too late. If only it would occur in front of the recursion! But it does not.
There is a general way how to fix this: Add another pair of arguments to encode the length.
parse(T, L) :-
    phrase(expr(T, L,[]), L).
expr(integer(X), [_|S],S) --> [num(X)].
expr(add(L,R), [_|S0],S) --> expr(L, S0,S1), [plus], expr(R, S1,S).
expr(multiply(L,R), [_|S0],S) --> expr(L, S0,S1), [star], expr(R, S1,S).
This is a very general method that is of particular interest if you have ambiguous grammars, or if you do not know whether or not your grammar is ambiguous. Simply let Prolog do the thinking for you!
The correct approach is to use DCGs, but your example grammar is left-recursive, which won't work with plain DCGs. Here's a right-recursive version that would:
expression(T+E) --> term(T), [plus], expression(E).
expression(T-E) --> term(T), [minus], expression(E).
expression(T) --> term(T).
term(F*T) --> factor(F), [star], term(T).
term(F/T) --> factor(F), [div], term(T).
term(F) --> factor(F).
factor(N) --> num(N).
factor(E) --> ['('], expression(E), [')'].
num(N) --> [num(N)], { number(N) }.
The relationship between this and your sample grammar should be obvious, as should the transformation from left-recursive to right-recursive. I can't recall the details from my automata class about left-most derivations, but I think it only comes into play if the grammar is ambiguous, and I don't think this one is. Hopefully a genuine computer scientist will come along and clarify that point.
I see no point in producing an AST other than what Prolog would use. The code within parentheses on the left-hand side of each production is the AST-building code (e.g. the T+E in the first expression//1 rule). Adjust the code accordingly if this is undesirable.
From here, presenting your parse/2 API is quite trivial:
parse(L, T) :- phrase(expression(T), L).
Because we're using Prolog's own structures, the result will look a lot less impressive than it is:
?- parse([num(4), star, num(8), div, '(', num(3), plus, num(1), ')'], T).
T = 4* (8/ (3+1)) ;
false.
You can show a more AST-y output if you like, using write_canonical/1:
?- parse([num(4), star, num(8), div, '(', num(3), plus, num(1), ')'], T),
write_canonical(T).
*(4,/(8,+(3,1)))
T = 4* (8/ (3+1))
The part *(4,/(8,+(3,1))) is the result of write_canonical/1. And you can evaluate that directly with is/2:
?- parse([num(4), star, num(8), div, '(', num(3), plus, num(1), ')'], T),
Result is T.
T = 4* (8/ (3+1)),
Result = 8 ;
false.

Operator associativity using Scala Parsers

So I've been trying to write a calculator with Scala's parser, and it's been fun, except that I found that operator associativity is backwards, and that when I try to make my grammar left-recursive, even though it's completely unambiguous, I get a stack overflow.
To clarify, if I have a rule like:
def subtract: Parser[Int] = num ~ "-" ~ add ^^ { x => x._1._1 - x._2 }
then evaluating 7 - 4 - 3 comes out to be 6 instead of 0.
The way I have actually implemented this is that I am composing a binary tree where operators are non-leaf nodes, and leaf nodes are numbers. The way I evaluate the tree is left child (operator) right child. When constructing the tree for 7 - 4 - 5, what I would like for it to look like is:
      -
     / \
    -   5
   / \
  7   4
where - is the root, its children are - and 5, and the second -'s children are 7 and 4.
However, the only tree I can construct easily is
      -
     / \
    7   -
       / \
      4   5
which is different, and not what I want.
Basically, the easy parenthesization is 7 - (4 - 5) whereas I want (7 - 4) - 5.
How can I hack this? I feel like I should be able to write a calculator with correct operator precedence regardless. Should I tokenize everything first and then reverse my tokens? Is it ok for me to just flip my tree by taking all left children of right children and making them the right child of the right child's parent and making the parent the left child of the ex-right child? It seems good at a first approximation, but I haven't really thought about it too deeply. I feel like there must just be some case that I'm missing.
My impression is that I can only make an LL parser with the scala parsers. If you know another way, please tell me!
Scala's standard implementation of parser combinators (the Parsers trait) does not support left-recursive grammars. You can, however, use PackratParsers if you need left recursion. That said, if your grammar is a simple arithmetic expression parser, you most definitely do not need left recursion.
Edit
There are ways to use right recursion and still keep left associativity, and if you are keen on that, just look up arithmetic expressions and recursive descent parsers. And, of course, as, I said, you can use PackratParsers, which allow left recursion.
But the easiest way to handle associativity without using PackratParsers is to avoid using recursion. Just use one of the repetition operators to get a List, and then foldLeft or foldRight as required. Simple example:
trait Tree
case class Node(op: String, left: Tree, right: Tree) extends Tree
case class Leaf(value: Int) extends Tree
import scala.util.parsing.combinator.RegexParsers
object P extends RegexParsers {
  def expr = term ~ (("+" | "-") ~ term).* ^^ mkTree
  def term = "\\d+".r ^^ (_.toInt)

  def mkTree(input: Int ~ List[String ~ Int]): Tree = input match {
    case first ~ rest => ((Leaf(first): Tree) /: rest)(combine)
  }

  def combine(acc: Tree, next: String ~ Int) = next match {
    case op ~ y => Node(op, acc, Leaf(y))
  }
}
You can find other, more complete, examples on the scala-dist repository.
I'm interpreting your question as follows:
If you write rules like def expression = number ~ "-" ~ expression and then evaluate at each node of the syntax tree, then you find that in 3 - 5 - 4, the 5 - 4 is computed first, giving 1 as a result, and then 3 - 1 is computed, giving 2 as a result.
On the other hand, if you write rules like def expression = expression ~ "-" ~ number, the rules are left-recursive and overflow the stack.
There are four solutions to this problem:
Post-process the abstract syntax tree to convert it from a right-associative tree to a left-associative tree. If you're using actions on the grammar rules to do the computation immediately, this won't work for you.
Define the rule as def expression = repsep(number, "-") and then when evaluating the computation, loop over the parsed numbers (which will appear in a flat list) in whichever direction provides you the associativity you need. You can't use this if more than one kind of operator will appear, since the operator will be thrown away.
Define the rule as def expression = number ~ ( "-" ~ number) *. You'll have an initial number, plus a set of operator-number pairs in a flat list, to process in any direction you want (though left-to-right is probably easier here).
Use PackratParsers as Daniel Sobral suggested. This is probably your best and simplest choice.
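Option 3, the flat-list-plus-fold approach, is easy to see outside of Scala too. A minimal Python sketch of the same idea (the tokenizer and names are mine): collect the first number plus a flat list of (operator, number) pairs, then fold left so that 7 - 4 - 3 groups as (7 - 4) - 3:

```python
import re

# Tokenize into numbers and +/- operators, then fold left over the
# (op, number) pairs so subtraction associates to the left.
TOKEN = re.compile(r"\s*(\d+|[+\-])")

def eval_left(src):
    toks = TOKEN.findall(src)            # e.g. ["7", "-", "4", "-", "3"]
    acc = int(toks[0])
    pairs = zip(toks[1::2], toks[2::2])  # the flat (op, number) pairs
    for op, num in pairs:                # fold left: ((7 - 4) - 3)
        acc = acc - int(num) if op == "-" else acc + int(num)
    return acc
```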

REBOL path operator vs division ambiguity

I've started looking into REBOL, just for fun, and as a fan of programming languages, I really like seeing new ideas and even just alternative syntaxes. REBOL is definitely full of these. One thing I noticed is the use of '/' as the path operator which can be used similarly to the '.' operator in most object-oriented programming languages. I have not programmed in REBOL extensively, just looked at some examples and read some documentation, but it isn't clear to me why there's no ambiguity with the '/' operator.
x: 4
y: 2
result: x/y
In my example, this should be division, but it seems like it could just as easily be the path operator if x were an object or function refinement. How does REBOL handle the ambiguity? Is it just a matter of an overloaded operator and the type system so it doesn't know until runtime? Or is it something I'm missing in the grammar and there really is a difference?
UPDATE Found a good piece of example code:
sp: to-integer (100 * 2 * length? buf) / d/3 / 1024 / 1024
It appears that arithmetic division requires whitespace, while the path operator requires no whitespace. Is that it?
This question deserves an answer from the syntactic point of view. In Rebol, there is no "path operator", in fact. The x/y is a syntactic element called path. As opposed to that the standalone / (delimited by spaces) is not a path, it is a word (which is usually interpreted as the division operator). In Rebol you can examine syntactic elements like this:
length? code: [x/y x / y] ; == 4
type? first code ; == path!
type? second code ; == word!
and so on.
The code guide says:
White-space is used in general for delimiting (for separating symbols).
This is especially important because words may contain characters such as + and -.
http://www.rebol.com/r3/docs/guide/code-syntax.html
One acquired skill of being a REBOler is to get the hang of inserting whitespace in expressions where other languages usually do not require it :)
Spaces are generally needed in Rebol, but there are exceptions here and there for "special" characters, such as those delimiting series. For instance:
[a b c] is the same as [ a b c ]
(a b c) is the same as ( a b c )
[a b c]def is the same as [a b c] def
Some fairly powerful tools for doing introspection of syntactic elements are type?, quote, and probe. The quote operator prevents the interpreter from giving behavior to things. So if you tried something like:
>> data: [x [y 10]]
>> type? data/x/y
>> probe data/x/y
The "live" nature of the code would dig through the path and give you an integer! of value 10. But if you use quote:
>> data: [x [y 10]]
>> type? quote data/x/y
>> probe quote data/x/y
Then you wind up with a path! whose value is simply data/x/y, it never gets evaluated.
In the internal representation, a PATH! is quite similar to a BLOCK! or a PAREN!. It just has this special distinctive lexical type, which allows it to be treated differently. Although you've noticed that it can behave like a "dot" by picking members out of an object or series, that is only how it is used by the DO dialect. You could invent your own ideas, let's say you make the "russell" command:
russell [
x: 10
y: 20
z: 30
x/y/z
(
print x
print y
print z
)
]
Imagine that in my fanciful example, this outputs 30, 10, 20...because what the russell function does is evaluate its block in such a way that a path is treated as an instruction to shift values. So x/y/z means x=>y, y=>z, and z=>x. Then any code in parentheses is run in the DO dialect. Assignments are treated normally.
When you want to make up a fun new riff on how to express yourself, Rebol takes care of a lot of the grunt work. So for example the parentheses are guaranteed to have matched up to get a paren!. You don't have to go looking for all that yourself, you just build your dialect up from the building blocks of all those different types...and hook into existing behaviors (such as the DO dialect for basics like math and general computation, and the mind-bending PARSE dialect for some rather amazing pattern matching muscle).
But speaking of "all those different types", there's yet another weirdo situation for slash that can create another type:
>> type? quote /foo
This is called a refinement!, and happens when you start a lexical element with a slash. You'll see it used in the DO dialect to call out optional parameter sets to a function. But once again, it's just another symbolic LEGO in the parts box. You can ascribe meaning to it in your own dialects that is completely different...
While I didn't find any definitive written clarification, I did find that +, -, * and others are valid characters in a word, so division clearly requires a space.
x*y
Is a valid identifier
x * y
Performs multiplication. It looks like the path operator is just another case of this.
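The whitespace rule can be illustrated with a toy lexer (Python, purely illustrative, and ignoring Rebol details such as refinements): splitting on whitespace first makes x/y a single path element, while x / y is three separate words:

```python
# Toy whitespace-driven lexer: tokens are whatever sits between spaces.
# A token containing '/' (and longer than just '/') is a path; anything
# else, including a lone '/', is a word.
def lex(src):
    out = []
    for tok in src.split():
        if "/" in tok and len(tok) > 1:
            out.append(("path", tok.split("/")))
        else:
            out.append(("word", tok))
    return out
```

So x/y never gets a chance to be division: by the time evaluation starts, it is already a different lexical type than the standalone / word.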
