Parsing Scheme using Scala parser combinators

I'm writing a small scheme interpreter in Scala and I'm running into problems parsing lists in Scheme. My code parses lists that contain multiple numbers, identifiers, and booleans, but it chokes if I try to parse a list containing multiple strings or lists. What am I missing?
Here's my parser:
class SchemeParsers extends RegexParsers {
  // Scheme boolean #t and #f translate to Scala's true and false
  def bool : Parser[Boolean] =
    ("#t" | "#f") ^^ {case "#t" => true; case "#f" => false}
  // A Scheme identifier allows alphanumeric chars, some symbols, and
  // can't start with a digit
  def id : Parser[String] =
    """[a-zA-Z=*+/<>!\?][a-zA-Z0-9=*+/<>!\?]*""".r ^^ {case s => s}
  // This interpreter only accepts numbers as integers
  def num : Parser[Int] = """-?\d+""".r ^^ {case s => s toInt}
  // A string can have any character except ", and is wrapped in "
  def str : Parser[String] = '"' ~> """[^""]*""".r <~ '"' ^^ {case s => s}
  // A Scheme list is a series of expressions wrapped in ()
  def list : Parser[List[Any]] =
    '(' ~> rep(expr) <~ ')' ^^ {s: List[Any] => s}
  // A Scheme expression contains any of the other constructions
  def expr : Parser[Any] = id | str | num | bool | list ^^ {case s => s}
}
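For reference, a minimal driver that reproduces the failure (the object name is just for illustration; it assumes the class above compiles as-is with scala-parser-combinators on the classpath):

object ReproduceIssue extends App {
  val p = new SchemeParsers
  // a list of numbers, identifiers and booleans parses fine
  println(p.parseAll(p.expr, "(1 2 #t foo)"))
  // a list containing several strings is where it chokes
  println(p.parseAll(p.expr, """("a" "b")"""))
}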

As @Gabe correctly pointed out, you left some whitespace unhandled:
scala> object SchemeParsers extends RegexParsers {
     |
     |   private def space = regex("[ \\n]*".r)
     |
     |   // Scheme boolean #t and #f translate to Scala's true and false
     |   private def bool : Parser[Boolean] =
     |     ("#t" | "#f") ^^ {case "#t" => true; case "#f" => false}
     |
     |   // A Scheme identifier allows alphanumeric chars, some symbols, and
     |   // can't start with a digit
     |   private def id : Parser[String] =
     |     """[a-zA-Z=*+/<>!\?][a-zA-Z0-9=*+/<>!\?]*""".r
     |
     |   // This interpreter only accepts numbers as integers
     |   private def num : Parser[Int] = """-?\d+""".r ^^ {case s => s toInt}
     |
     |   // A string can have any character except ", and is wrapped in "
     |   private def str : Parser[String] = '"' ~> """[^""]*""".r <~ '"' <~ space ^^ {case s => s}
     |
     |   // A Scheme list is a series of expressions wrapped in ()
     |   private def list : Parser[List[Any]] =
     |     '(' ~> space ~> rep(expr) <~ ')' <~ space ^^ {s: List[Any] => s}
     |
     |   // A Scheme expression contains any of the other constructions
     |   private def expr : Parser[Any] = id | str | num | bool | list ^^ {case s => s}
     |
     |   def parseExpr(str: String) = parse(expr, str)
     | }
defined module SchemeParsers
scala> SchemeParsers.parseExpr("""(("a" "b") ("a" "b"))""")
res12: SchemeParsers.ParseResult[Any] = [1.22] parsed: List(List(a, b), List(a, b))
scala> SchemeParsers.parseExpr("""("a" "b" "c")""")
res13: SchemeParsers.ParseResult[Any] = [1.14] parsed: List(a, b, c)
scala> SchemeParsers.parseExpr("""((1) (1 2) (1 2 3))""")
res14: SchemeParsers.ParseResult[Any] = [1.20] parsed: List(List(1), List(1, 2), List(1, 2, 3))

The only problem with the code is your use of characters instead of strings. In the version below I removed the redundant ^^ { case s => s } and replaced the character literals with strings; I discuss the character issue further after the code.
class SchemeParsers extends RegexParsers {
  // Scheme boolean #t and #f translate to Scala's true and false
  def bool : Parser[Boolean] =
    ("#t" | "#f") ^^ {case "#t" => true; case "#f" => false}
  // A Scheme identifier allows alphanumeric chars, some symbols, and
  // can't start with a digit
  def id : Parser[String] =
    """[a-zA-Z=*+/<>!\?][a-zA-Z0-9=*+/<>!\?]*""".r ^^ {case s => s}
  // This interpreter only accepts numbers as integers
  def num : Parser[Int] = """-?\d+""".r ^^ {case s => s toInt}
  // A string can have any character except ", and is wrapped in "
  def str : Parser[String] = "\"" ~> """[^""]*""".r <~ "\""
  // A Scheme list is a series of expressions wrapped in ()
  def list : Parser[List[Any]] =
    "(" ~> rep(expr) <~ ")" ^^ {s: List[Any] => s}
  // A Scheme expression contains any of the other constructions
  def expr : Parser[Any] = id | str | num | bool | list
}
All Parsers have an implicit accept for their Elem type. So, if the basic element is a Char, as in RegexParsers, there is an implicit accept parser for single characters; that is what handles the symbols (, ) and " in your code, since you wrote them as character literals. Unlike String and Regex parsers, these character parsers do not skip leading whitespace, which is why lists of strings and nested lists fail.
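To see the difference concretely, here is a small sketch (mine, not part of the original answer) contrasting a Char literal with a String literal in a RegexParsers subclass:

import scala.util.parsing.combinator.RegexParsers

object CharVsString extends RegexParsers {
  def charParen: Parser[Char]     = '('  // via the implicit accept(Char): no whitespace skipping
  def stringParen: Parser[String] = "("  // via literal(String): leading whitespace is skipped
}

// CharVsString.parse(CharVsString.charParen, "  (")   // fails on the leading spaces
// CharVsString.parse(CharVsString.stringParen, "  (") // succeeds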
What RegexParsers does automatically is skip whitespace (defined as protected val whiteSpace = """\s+""".r, so you can override that) at the beginning of any String or Regex parser. It also takes care of moving the position cursor past the whitespace for error messages.
One consequence of this, which you may not have realized, is that a Scheme string such as " a string beginning with a space" will have its leading space removed from the parsed output, which is very unlikely to be something you want. :-)
Also, since \s includes newlines, a newline will be acceptable before any identifier, which may or may not be what you want.
You may disable whitespace skipping in your parser as a whole by overriding skipWhitespace. On the other hand, the default skipWhitespace tests whiteSpace's length, so you could potentially turn skipping on and off just by manipulating the value of whiteSpace throughout the parsing process.
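As a sketch of those two knobs (my example, assuming a current scala-parser-combinators):

import scala.util.parsing.combinator.RegexParsers

// Override the whiteSpace regex so newlines are no longer skipped.
class NoNewlineSkipping extends RegexParsers {
  override protected val whiteSpace = """[ \t]+""".r
}

// Disable automatic whitespace skipping entirely.
class NoSkipping extends RegexParsers {
  override def skipWhitespace = false
}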

Related

scala parser. output of log parser utility

I am using the log parser utility to trace the parsing.
Scala code:
import scala.util.parsing.combinator.JavaTokenParsers

object Arith extends JavaTokenParsers with App {
  def expr: Parser[Any] = log(term ~ rep("+" ~ term | "-" ~ term))("expr")
  def term: Parser[Any] = factor ~ rep("*" ~ factor | "/" ~ factor)
  def factor: Parser[Any] = floatingPointNumber | "(" ~ expr ~ ")"
  println(parseAll(expr, "2 * (3 + 7)"))
}
Output:
trying expr at scala.util.parsing.input.CharSequenceReader@13a317a
trying expr at scala.util.parsing.input.CharSequenceReader@14c1103
expr --> [1.11] parsed: ((3~List())~List((+~(7~List()))))
expr --> [1.12] parsed: ((2~List((*~(((~((3~List())~List((+~(7~List())))))~)))))~List())
[1.12] parsed: ((2~List((*~(((~((3~List())~List((+~(7~List())))))~)))))~List())
The input is printed as scala.util.parsing.input.CharSequenceReader@13a317a. Is there a way to print a string representation of the input, like "2 * (3 + 7)"?
Overriding log solved my problem.
Scala Snippet:
import scala.util.parsing.combinator.JavaTokenParsers

object Arith extends JavaTokenParsers with App {

  override def log[T](p: => Parser[T])(name: String): Parser[T] = Parser { in =>
    def prt(x: Input) = x.source.toString.drop(x.offset)
    println("trying " + name + " at " + prt(in))
    val r = p(in)
    println(name + " --> " + r + " next: " + prt(r.next))
    r
  }

  def expr: Parser[Any] = log(term ~ rep("+" ~ term | "-" ~ term))("expr")
  def term: Parser[Any] = factor ~ rep("*" ~ factor | "/" ~ factor)
  def factor: Parser[Any] = floatingPointNumber | "(" ~ expr ~ ")"

  println(parseAll(expr, "(3+7)*2"))
}
Output:
trying expr at (3+7)*2
trying expr at 3+7)*2
expr --> [1.5] parsed: ((3~List())~List((+~(7~List())))) next: )*2
expr --> [1.8] parsed: (((((~((3~List())~List((+~(7~List())))))~))~List((*~2)))~List()) next:
[1.8] parsed: (((((~((3~List())~List((+~(7~List())))))~))~List((*~2)))~List())
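The same override traces whatever production you wrap in log, not just expr; for example (a hypothetical extra line, not in the original):

def term: Parser[Any] = log(factor ~ rep("*" ~ factor | "/" ~ factor))("term")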

Parser Combinators - Simple grammar

I am trying to use parser combinators in Scala on a simple grammar that I have copied from a book. When I run the following code it stops immediately after the first token has been parsed with the error
[1.3] failure: string matching regex '\z' expected but '+' found
I can see why things go wrong. The first token is an expression and therefore it is the only thing that needs to be parsed according to the grammar. However, I do not know a good way to fix it.
object SimpleParser extends RegexParsers
{
  def Name = """[a-zA-Z]+""".r
  def Int = """[0-9]+""".r

  def Main:Parser[Any] = Expr

  def Expr:Parser[Any] =
    (
        Term
      | Term <~ "+" ~> Expr
      | Term <~ "-" ~> Expr
    )

  def Term:Parser[Any] =
    (
        Factor
      | Factor <~ "*" ~> Term
    )

  def Factor:Parser[Any] =
    (
        Name
      | Int
      | "-" ~> Int
      | "(" ~> Expr <~ ")"
      | "let" ~> Name <~ "=" ~> Expr <~ "in" ~> Expr <~ "end"
    )

  def main(args: Array[String])
  {
    var input = "2 + 2"
    println(input)
    println(parseAll(Main, input))
  }
}
Factor <~ "*" ~> Term means Factor.<~("*" ~> Term), so the whole right part is dropped.
Use Factor ~ "*" ~ Term ^^ { case f ~ _ ~ t => ??? } or rep1sep:
scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.util.parsing.combinator.RegexParsers
object SimpleParser extends RegexParsers
{
  def Name = """[a-zA-Z]+""".r
  def Int = """[0-9]+""".r

  def Main:Parser[Any] = Expr
  def Expr:Parser[Any] = rep1sep(Term, "+" | "-")
  def Term:Parser[Any] = rep1sep(Factor, "*")

  def Factor:Parser[Any] =
    (
        "let" ~> Name ~ "=" ~ Expr ~ "in" ~ Expr <~ "end" ^^ { case n ~ _ ~ e1 ~ _ ~ e2 => (n, e1, e2) }
      | Int
      | "-" ~> Int
      | "(" ~> Expr <~ ")"
      | Name
    )
}
SimpleParser.parseAll(SimpleParser.Main, "2 + 2")
// Exiting paste mode, now interpreting.
import scala.util.parsing.combinator.RegexParsers
defined module SimpleParser
res1: SimpleParser.ParseResult[Any] = [1.6] parsed: List(List(2), List(2))
The second part of parser def Term:Parser[Any] = Factor | Factor <~ "*" ~> Term is useless. The first part, Factor, can parse (with non-empty next) any Input that the second part, Factor <~ "*" ~> Term, is able to parse.
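If you also want something more structured back than nested lists, chainl1 is another option; a rough sketch (mine, with placeholder (operator, left, right) tuples instead of a real AST, and without the let form):

import scala.util.parsing.combinator.RegexParsers

object ChainParser extends RegexParsers {
  def Name = """[a-zA-Z]+""".r
  def Num  = """[0-9]+""".r

  // Builds a left-associative (operator, left, right) tuple; swap in AST nodes as needed.
  private val binop: String => (Any, Any) => Any = op => (l, r) => (op, l, r)

  def Expr: Parser[Any] = chainl1(Term, ("+" | "-") ^^ binop)
  def Term: Parser[Any] = chainl1(Factor, "*" ^^ binop)
  def Factor: Parser[Any] = Name | Num | "-" ~> Num | "(" ~> Expr <~ ")"
}

// e.g. ChainParser.parseAll(ChainParser.Expr, "2 + 2 * 3")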

How to use a ParseKit grammar to parse a simple boolean expression language

Trying to build a grammar that will parse simple bool expressions.
I am running into an issue when there are multiple expressions.
I need to be able to parse 1..n and/or'ed expressions.
Each example below is a complete expression:
(myitem.isavailable("1234") or myitem.ispresent("1234")) and
myitem.isready("1234")
myitem.value > 4 and myitem.value < 10
myitem.value = yes or myotheritem.value = no
Grammar:
#start = conditionalexpression* | expressiontypes;
conditionalexpression = condition expressiontypes;
expressiontypes = expression | functionexpression;
expression = itemname dot property condition value;
functionexpression = itemname dot functionproperty;
itemname = Word;
propertytypes = property | functionproperty;
property = Word;
functionproperty = Word '(' value ')';
value = Word | QuotedString | Number;
condition = textcondition;
dot = '.';
textcondition = 'or' | 'and' | '<' | '>' | '=';
Developer of ParseKit here.
Here is a ParseKit grammar that matches your example input:
#start = expr;
expr = orExpr;
orExpr = andExpr orTerm*;
orTerm = 'or' andExpr;
// 'and' should bind more tightly than 'or'
andExpr = relExpr andTerm*;
andTerm = 'and' relExpr;
// relational expressions should bind more tightly than 'and'/'or'
relExpr = callExpr relTerm*;
relTerm = relOp callExpr;
// func calls should bind more tightly than relational expressions
callExpr = primaryExpr ('(' argList ')')?;
argList = Empty | atom (',' atom)*;
primaryExpr = atom | '(' expr ')';
atom = obj | literal;
// member access should bind most tightly
obj = id member*;
member = ('.' id);
id = Word;
literal = Number | QuotedString | bool;
bool = 'yes' | 'no';
relOp = '<' | '>' | '=';
To give you an idea of how I arrived at this grammar:
I realized that your language is a simple, composable expression language.
I remembered that XPath 1.0 is also a relatively simple expression language with an easily available and readable grammar.
I visited the XPath 1.0 spec online and quickly scanned the XPath basic language grammar. That served as a quick jumping-off point for designing your language grammar. If you ignore the path-expression part of XPath expressions, XPath is a very good template for a basic expression language.
My grammar above successfully parses all of your example inputs (see below). Hope this helps.
[foo, ., bar, (, "hello", ), or, (, bar, or, baz, >, bat, )]foo/./bar/(/"hello"/)/or/(/bar/or/baz/>/bat/)^
[myitem, ., value, >, 4, and, myitem, ., value, <, 10]myitem/./value/>/4/and/myitem/./value/</10^
[myitem, ., value, =, yes, or, myotheritem, ., value, =, no]myitem/./value/=/yes/or/myotheritem/./value/=/no^
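For comparison with the rest of this page, the same precedence layering carries over almost directly to Scala parser combinators; a rough sketch (my own, with untyped Any results and no AST, not part of the ParseKit answer):

import scala.util.parsing.combinator.JavaTokenParsers

object BoolExprParser extends JavaTokenParsers {
  // 'and' binds more tightly than 'or'; relational operators bind more tightly than both
  def expr: Parser[Any]     = rep1sep(andExpr, "or")
  def andExpr: Parser[Any]  = rep1sep(relExpr, "and")
  def relExpr: Parser[Any]  = callExpr ~ rep(("<" | ">" | "=") ~ callExpr)
  def callExpr: Parser[Any] = primary ~ opt("(" ~> repsep(atom, ",") <~ ")")
  def primary: Parser[Any]  = atom | "(" ~> expr <~ ")"
  def atom: Parser[Any]     = obj | literal
  def obj: Parser[Any]      = ident ~ rep("." ~> ident)  // member access binds most tightly
  def literal: Parser[Any]  = floatingPointNumber | stringLiteral | "yes" | "no"
}

// e.g. BoolExprParser.parseAll(BoolExprParser.expr,
//        "myitem.value > 4 and myitem.value < 10")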

Scala Parse floating point numbers with StandardTokenParsers

This is a grammar for a system of first-order ODEs:
system ::= equation { equation }
equation ::= variable "=" (arithExpr | param) "\n"
variable ::= algebraicVar | stateVar
algebraicVar ::= identifier
stateVar ::= algebraicVar'
arithExpr ::= term { "+" term | "-" term }
term ::= factor { "*" factor | "/" factor }
factor ::= algebraicVar
| powerExpr
| floatingPointNumber
| functionCall
| "(" arithExpr ")"
powerExpr ::= arithExpr {"^" arithExpr}
Notes:
An identifier should be a valid Scala identifier.
A stateVar is an algebraicVar followed by one apostrophe (x' denotes the first derivative of x with respect to time).
I haven't coded anything for functionCall yet, but I mean something like Cos[Omega].
This is what I have so far:
package tests

import scala.util.parsing.combinator.lexical.StdLexical
import scala.util.parsing.combinator.syntactical.StandardTokenParsers
import scala.util.parsing.combinator._
import scala.util.parsing.combinator.JavaTokenParsers
import token._

object Parser1 extends StandardTokenParsers {

  lexical.delimiters ++= List("(", ")", "=", "+", "-", "*", "/", "\n")
  lexical.reserved ++= List(
    "Log", "Ln", "Exp",
    "Sin", "Cos", "Tan",
    "Cot", "Sec", "Csc",
    "Sqrt", "Param", "'")

  def system: Parser[Any] = repsep(equation, "\n")
  def equation: Parser[Any] = variable ~ "=" ~ ("Param" | arithExpr)
  def variable: Parser[Any] = stateVar | algebraicVar
  def algebraicVar: Parser[Any] = ident
  def stateVar: Parser[Any] = algebraicVar ~ "\'"
  def arithExpr: Parser[Any] = term ~ rep("+" ~ term | "-" ~ term)
  def term: Parser[Any] = factor ~ rep("*" ~ factor | "/" ~ factor)
  def factor: Parser[Any] = algebraicVar | floatingPointNumber | "(" ~ arithExpr ~ ")"
  def powerExpr: Parser[Any] = arithExpr ~ rep("^" ~ arithExpr)

  def main(args: Array[String]) {
    val code = "x1 = 2.5 * x2"
    equation(new lexical.Scanner(code)) match {
      case Success(msg, _) => println(msg)
      case Failure(msg, _) => println(msg)
      case Error(msg, _) => println(msg)
    }
  }
}
However this line doesn't work:
def factor: Parser[Any] = algebraicVar | floatingPointNumber | "(" ~ arithExpr ~ ")"
Because I haven't defined floatingPointNumber anywhere. First I tried mixing in JavaTokenParsers, but then I get conflicting definitions. The reason I'm trying to use StandardTokenParsers instead of JavaTokenParsers is to be able to use a set of predefined keywords with
lexical.reserved ++= List(
"Log", "Ln", "Exp",
"Sin", "Cos", "Tan",
"Cot", "Sec", "Csc",
"Sqrt", "Param", "'")
I asked this on the Scala-user mailing list (https://groups.google.com/forum/?fromgroups#!topic/scala-user/KXlfGauGR9Q) but I haven't received enough replies. Thanks a lot for helping.
Given that mixing in JavaTokenParsers doesn't work, you might try mixing in RegexParsers instead and copying just the definition of floatingPointNumber from the source for JavaTokenParsers.
That definition, at least in this version, is simply a regex:
def floatingPointNumber: Parser[String] =
"""-?(\d+(\.\d*)?|\d*\.\d+)([eE][+-]?\d+)?[fFdD]?""".r

EBNF to Scala parser combinator

I have the following EBNF that I want to parse:
PostfixExp -> PrimaryExp ( "[" Exp "]"
                         | . id "(" ExpList ")"
                         | . length )*
And this is what I got:
def postfixExp: Parser[Expression] = (
  primaryExp ~ rep(
      "[" ~ expression ~ "]"
    | "." ~ ident ~ "(" ~ repsep(expression, ",") ~ ")"
    | "." ~ "length") ^^ {
    case primary ~ list => list.foldLeft(primary)((prim, post) =>
      post match {
        case "[" ~ length ~ "]" =>
          ElementExpression(prim, length.asInstanceOf[Expression])
        case "." ~ function ~ "(" ~ arguments ~ ")" =>
          CallMethodExpression(prim, function.asInstanceOf[String], arguments.asInstanceOf[List[Expression]])
        case _ => LengthExpression(prim)
      }
    )
  })
But I would like to know if there is a better way, preferably without having to resort to casting (asInstanceOf).
I would do it like this:
type E = Expression

def postfixExp = primaryExp ~ rep(
    "[" ~> expr <~ "]" ^^ { e => ElementExpression(_: E, e) }
  | "." ~ "length" ^^^ LengthExpression
  | "." ~> ident ~ ("(" ~> repsep(expr, ",") <~ ")") ^^ flatten2 { (f, args) =>
      CallMethodExpression(_: E, f, args)
    }
) ^^ flatten2 { (e, ls) => collapse(ls)(e) }

def expr: Parser[E] = ...

def collapse(ls: List[E => E])(e: E) = {
  ls.foldLeft(e) { (e, f) => f(e) }
}
I shortened expression to expr for brevity and added the type alias E for the same reason.
The trick that I'm using here to avoid the ugly case analysis is to return a function value from within the inner production. This function takes an Expression (which will be the primary) and then returns a new Expression based on the first. This unifies the two cases of dot-dispatch and bracketed expressions. Finally, the collapse method is used to merge the linear List of function values into a proper AST, starting with the specified primary expression.
Note that LengthExpression is just returned as a value (using ^^^) from its respective production. This works because the companion object of a case class (assuming that LengthExpression is indeed a case class) extends the corresponding function type, delegating to the constructor. Thus, the function represented by LengthExpression takes a single Expression and returns a new instance of LengthExpression, precisely satisfying our needs for the higher-order tree construction.
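For completeness, a hypothetical set of AST case classes that would satisfy both snippets above (the class names come from the question; the field names are my guess):

sealed trait Expression
case class ElementExpression(target: Expression, index: Expression) extends Expression
case class CallMethodExpression(target: Expression, method: String, args: List[Expression]) extends Expression
case class LengthExpression(target: Expression) extends Expression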
