How to make a sub parser with Parsec? - parsing

I would like to parse several lists of commands, indented or formatted as an array, with Parsec. For example, my lists will be formatted like this:
Command1 arg1 arg2 Command1 arg1 arg2 Command1 arg1 arg2
Command2 arg1 Command3 arg1 arg2 arg3
Command3 arg1 arg2 arg3
Command4
Command3 arg1 arg2 arg3 Command2 arg1
Command4
Command4
Command5 arg1 Command2 arg1
These commands are supposed to be parsed column by column with state changes in the parser.
My idea is to gather the commands into separate lists of strings and parse these strings with a sub parser (executed inside the main parser).
I inspected the API of the Parsec library but I didn't find a function to do that.
I considered using runParser, but this function only extracts the result of the parser and not its state.
I also considered writing a function inspired by runParsecT and mkPT to make my own parser, but constructors such as ParsecT or initialPos are not available (not exported by the library).
Is it possible to run a subparser inside a parser with Parsec?
If not, can a library such as megaparsec solve my problem?

Not a complete answer, more a question for clarification:
Is it necessary to build a list of strings?
I would prefer to parse the input and convert it into a more specific datatype. That way you can use the type guarantees of Haskell.
I would begin by defining a datatype for my commands:
data Command = Command1 Argtype1
             | Command2 Argtype2
             | Command3 Argtype1 Argtype2

data Argtype1 = Arg1 | Arg2 | ArgX
data Argtype2 = Arg2_1 | Arg2_2
After that you can parse the input and put it in datatypes.
At the end of the parsing you can mappend the results (for lists, that means adding at the front with the (:) operator).
You end up with a value of type [Command].
With that you can work further.
For parsing the text you can follow the introduction to the package megaparsec at
(https://markkarpov.com/megaparsec/parsing-simple-imperative-language.html)
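A minimal sketch of that approach with Parsec, using the Command datatype above (the concrete textual syntax such as "Command1 arg1" and "arg2_1" is an assumption based on the question's example, not something the question fixes):

import Text.Parsec
import Text.Parsec.String (Parser)

-- Assumed syntax: a command name, then space-separated argument keywords.
argtype1P :: Parser Argtype1
argtype1P =  (Arg1 <$ try (string "arg1"))
         <|> (Arg2 <$ try (string "arg2"))
         <|> (ArgX <$      string "argX")

argtype2P :: Parser Argtype2
argtype2P =  (Arg2_1 <$ try (string "arg2_1"))
         <|> (Arg2_2 <$      string "arg2_2")

commandP :: Parser Command
commandP =  try (Command1 <$> (string "Command1" *> spaces1 *> argtype1P))
        <|> try (Command2 <$> (string "Command2" *> spaces1 *> argtype2P))
        <|>     (Command3 <$> (string "Command3" *> spaces1 *> argtype1P)
                          <*> (spaces1 *> argtype2P))
  where
    spaces1 = skipMany1 (char ' ')

A whole line would then be something like many1 (commandP <* skipMany (char ' ')), giving the [Command] mentioned above.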
Or do you mean something completely different? Perhaps that every line (containing some commands) as a whole shall be one input of a state machine, and the state machine changes in relation to the commands? Then I wonder why the state machine should be implemented as a parser.

As a starting point, the simplest answer to "How to make a sub parser" is using the monadic bind, applicative <*>, alternative <|>, and the combinators provided by the library. Assuming that each command belongs to a single type (as in Hans Kruger's answer), and with an arbitrary number of columns, the following might make a good template.
import Text.Parsec
import Text.Parsec.Char
import Data.List (transpose)

cmdFileParser :: Parsec String u [[CommandType]]
cmdFileParser = sepBy cmdLineParser sepParser
  where
    sepParser = newline -- from Text.Parsec.Char

cmdLineParser :: Parsec String u [CommandType]
cmdLineParser = sepBy cmdParser sepParser
  where
    sepParser = tab

cmdParser :: Parsec String u CommandType
cmdParser = parseCommand1
        <|> parseCommand2
        <|> parseCommand3
        <|> etc -- one alternative per command
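For concreteness, one such alternative could look like the sketch below; the shape of CommandType and the argument syntax are assumptions, since the question does not pin them down.

data CommandType = Cmd1 String String | Cmd2 String -- assumed shape
  deriving Show

parseCommand1 :: Parsec String u CommandType
parseCommand1 = try $ -- try, because the command names share a common prefix
  Cmd1 <$> (string "Command1" *> sep *> arg)
       <*> (sep *> arg)
  where
    sep = skipMany1 (char ' ')
    arg = many1 (noneOf " \t\n")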
Then, after the parsing, transpose the [[CommandType]] to group the commands by column:
main = do
  ...
  let ret = runParser cmdFileParser ()
                      "debug string telling what was parsed"
                      stringToParse
  case ret of
    Left e     -> putStrLn "wasn't parsed"
    Right cmds -> doSomethingWith (transpose cmds)
I would say that the above is a typical approach. There are variations, of course. For instance, if you know there should be only three columns, you might use the following instead of the cmdLineParser above:
cmdLineParser :: Parsec String u (CommandType, CommandType, CommandType)
cmdLineParser = (\a b c -> (a, b, c)) <$> ct <*> ct <*> cmdParser
  where
    ct = cmdParser <* tab
I would say that using getState is atypical. When I first started using Parsec, I remember getting something like what I think you are after working, but it wasn't pretty. Of course, if you really want to just return the strings you can always parse for any char except your newlines and tabs.
cmdParser :: Parsec String u String
cmdParser = many (noneOf "\n\t")
Be careful using the above, though. I've been burned by many before, where it takes too much or always succeeds. So I don't have high confidence that that exact formulation will get you the command string. Also, if you just parse the command as a string and then reparse it in your main, you will be parsing twice!
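That said, if you really do want to run a sub parser from inside the outer parser, a minimal sketch (assuming a cmdParser for a single command, roughly as above) is to capture the cell as a String and run a nested runParser on it; note that this is exactly the parse-twice situation:

-- Hedged sketch: capture one cell, then run a second Parsec parser on that
-- String from inside the outer parser; an inner failure becomes an outer one.
cellP :: Parsec String u CommandType
cellP = do
  raw <- many1 (noneOf "\n\t")
  case runParser cmdParser () "sub parser" raw of
    Left err  -> parserFail (show err)
    Right cmd -> return cmd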

Related

Parse String to Datatype in Haskell

I'm taking a Haskell course at school, and I have to define a Logical Proposition datatype in Haskell. Everything so far works fine (definition and functions), and I've declared it as an instance of Ord, Eq and Show. The problem comes when I'm required to define a program which interacts with the user: I have to parse the input from the user into my datatype:
type Var = String

data FProp = V Var
           | No FProp
           | Y FProp FProp
           | O FProp FProp
           | Si FProp FProp
           | Sii FProp FProp
where the formula: ¬q ^ p would be: (Y (No (V "q")) (V "p"))
I've been researching, and found that I can declare my datatype as an instance of Read.
Is this advisable? If it is, can I get some help in order to define the parsing method?
Not a complete answer, since this is a homework problem, but here are some hints.
The other answer suggested getLine followed by splitting at words. It sounds like you instead want something more like a conventional tokenizer, which would let you write things like:
(Y
(No (V q))
(V p))
Here’s one implementation that turns a string into tokens that are either a string of alphanumeric characters or a single, non-alphanumeric printable character. You would need to extend it to support quoted strings:
import Data.Char

type Token = String

tokenize :: String -> [Token]
{- Here, a token is either a string of alphanumeric characters, or else one
 - non-spacing printable character, such as "(" or ")".
 -}
tokenize [] = []
tokenize (x:xs) | isSpace x       = tokenize xs
                | not (isPrint x) = error $
                    "Invalid character " ++ show x ++ " in input."
                | not (isAlphaNum x) = [x] : tokenize xs
                | otherwise = let (token, rest) = span isAlphaNum (x:xs)
                              in  token : tokenize rest
It turns the example into ["(","Y","(","No","(","V","q",")",")","(","V","p",")",")"]. Note that you have access to the entire repertoire of Unicode.
The main function that evaluates this interactively might look like:
main = interact ( unlines . map show . map evaluate . parse . tokenize )
Where parse turns a list of tokens into a list of ASTs and evaluate turns an AST into a printable expression.
As for implementing the parser, your language appears to have similar syntax to LISP, which is one of the simplest languages to parse; you don’t even need precedence rules. A recursive-descent parser could do it, and is probably the easiest to implement by hand. You can pattern-match on parse ("(":xs) =, but pattern-matching syntax can also implement lookahead very easily, for example parse ("(":x1:xs) = to look ahead one token.
If you’re calling the parser recursively, you would define a helper function that consumes only a single expression, and that has a type signature like :: [Token] -> (AST, [Token]). This lets you parse the inner expression, check that the next token is ")", and proceed with the parse. However, externally, you’ll want to consume all the tokens and return an AST or a list of them.
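A hedged sketch of that shape for the FProp type above, using the Token list from the tokenizer (it only handles the V and Y cases, and the error handling is crude, so treat it as an illustration only):

-- Parse a single expression, returning it together with the remaining tokens.
parseExpr :: [Token] -> (FProp, [Token])
parseExpr ("(" : "V" : v : ")" : ts) = (V v, ts)
parseExpr ("(" : "Y" : ts) =
  let (a, ts')  = parseExpr ts   -- first operand
      (b, ts'') = parseExpr ts'  -- second operand
  in case ts'' of
       ")" : rest -> (Y a b, rest)
       _          -> error "expected )"
parseExpr ts = error ("unexpected input: " ++ show ts)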
The stylish way to write a parser is with monadic parser combinators. (And maybe someone will post an example of one.) The industrial-strength solution would be a library like Parsec, but that’s probably overkill here. Still, parsing is (mostly!) a solved problem, and if you just want to get the assignment done on time, using a library off the shelf is a good idea.
The read part of a REPL interpreter typically looks like this:
repl :: ForthState -> IO () -- parser definition
repl state
  = do putStr "> "        -- puts a > character to indicate it's waiting for input
       input <- getLine   -- this is what you're looking for, to read a line
       if input == "quit" -- allows the user to quit the interpreter
         then do putStrLn "Bye!"
                 return ()
         else let (is, cs, d, output) = eval (words input) state
                  -- your grammar definition is somewhere down the chain,
                  -- when eval is called on the input
              in do mapM_ putStrLn output
                    repl (is, cs, d, [])

main = do putStrLn "Welcome to your very own interpreter!"
          repl initialForthState -- runs the parser, starting with read
Your eval method will have various loops, stack manipulations, conditionals, etc. to actually figure out what the user input. Hope this helps you with at least the reading-input part.

Grouping lines with Parsec

I have a line-based text format I want to parse with Parsec†. A line either starts with a pound sign and specifies a key-value pair separated by a colon, or it is a URL that is described by the preceding tags.
Here's a short example:
#foo:bar
#faz:baz
https://example.com
#foo:beep
https://example.net
For simplicity's sake, I'm going to store everything as String. A Tag is a type Tag = (String, String), for example ("foo", "bar"). Ultimately, I'd like to group these as ([Tag], URL).
However, I'm struggling to figure out how to parse either [one or more tags] or [one URL].
My current approach looks like this:
import qualified System.Environment as Env
import qualified Text.Megaparsec as M
import qualified Text.Megaparsec.Text as M

type Tag = (String, String)

data Segment = Tags [Tag] | URL String
  deriving (Eq, Show)

tagP :: M.Parser Tag
tagP = M.char '#'
    *> ((,) <$> M.someTill M.printChar (M.char ':')
            <*> M.someTill M.printChar M.eol)
    M.<?> "Tag starting with #"

urlP :: M.Parser String
urlP = M.someTill M.printChar M.eol M.<?> "Some URL"

parser :: M.Parser Segment
parser = (Tags <$> M.many tagP) M.<|> (URL <$> urlP)

main :: IO ()
main = do
  fname <- head <$> Env.getArgs
  res <- M.parseFromFile (parser <* M.eof) fname
  print res
If I try to run this on the above sample, I get a parsing error like this:
3:1:
unexpected 'h'
expecting Tag starting with # or end of input
Clearly my use of many in combination with <|> is incorrect. Since the tag parser won't consume any input from the URL parser it cannot be related to backtracking. How do I need to change this to get to the desired result?
The full example is available on GitHub.
† I'm actually using MegaParsec here for better error messages but I think the problem is quite generic and not about any particular implementation of parser combinators.
What you're doing works just fine; only, at the moment you parse a single segment (i.e., either only tags or only a URL), and that doesn't consume the whole input. It's eof that's causing the error.
Simply use one more many or some, to allow for multiple segments:
main :: IO ()
main = do
  fname <- head <$> Env.getArgs
  res <- M.parseFromFile (M.many parser <* M.eof) fname
  print res
#cocreature answered this for me on Twitter.
As leftaroundabout pointed out here, there are two separate mistakes in my code:
The parser itself misuses <|> while it should just sequentially parse the lines and skip to the next parser if it doesn't consume any input.
The invocation (parseFromFile) only applies the parser function a single time and would fail as soon as it got to the second block.
We can fix the parser and introduce grouping in one go:
parser :: M.Parser ([Tag], String)
parser = liftA2 (,) (M.many tagP) urlP
Afterwards, we just need to apply the change suggested by leftaroundabout:
...
res <- M.parseFromFile (M.many parser <* M.eof) fname
Running this leads to the desired result:
[([("foo","bar"),("faz","baz")],"https://example.com"),([("foo","beep")],"https://example.net")]

Insert a character into parser combinator character stream in Haskell

This question is related to both Parsec and uu-parsinglib. When we write parser combinators, they process character streams from the compiler. Is it somehow possible to parse a character and put it back (or push another character back) into the input stream?
For example, I want to parse the input "test + 5", parse the t, e, s, t, and after recognizing the test pattern, put for example a v character back into the character stream, so that while continuing the parsing process we are matching against v + 5.
I do not want to use this for any particular case right now; I want to learn the possibilities in depth.
I'm not sure if it's possible with these parsers directly, but in general you can accomplish it by combining parsers with some streaming that allows injecting leftovers.
For example, using attoparsec-conduit you can turn a parser into a conduit using
sinkParser :: (AttoparsecInput a, MonadThrow m)
           => Parser a b -> Consumer a m b
where Consumer is a special kind of conduit that doesn't produce any output, only receives input and returns a final value.
Since conduits support leftovers, you can create a helper method that converts a parser that optionally returns a value to be pushed into the stream into a conduit:
import Data.Attoparsec.Types
import Data.Conduit
import Data.Conduit.Attoparsec
import Data.Functor

reinject :: (AttoparsecInput a, MonadThrow m)
         => Parser a (Maybe a, b) -> Consumer a m b
reinject p = do
  (lo, r) <- sinkParser p
  maybe (return ()) leftover lo
  return r
Then you convert standard parsers to conduits using sinkParser and these special parsers using reinject, and then combine conduits instead of parsers.
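For instance, a hypothetical usage sketch building on the imports above (runBoth, parserA, and parserB are assumptions, not part of the original answer) could run two parsers in sequence over one input, with the leftover pushed by the first being consumed by the second:

runBoth :: (AttoparsecInput a, MonadThrow m)
        => Parser a (Maybe a, b) -> Parser a c -> a -> m (b, c)
runBoth parserA parserB input =
  yield input $$ do
    x <- reinject parserA   -- may push a leftover back into the stream
    y <- sinkParser parserB -- sees the reinjected leftover first
    return (x, y)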
I think the simplest way to achieve this is to build a multi-layered parser. Think of a lexer + parser combination. This is a clean approach to this problem.
You have to separate the two kinds of parsing. The search-and-replace parsing goes into the first parser and the build-the-AST parsing into the second. Or you can create an intermediate token representation.
import Text.Parsec
import Text.Parsec.String

parserLvl1 :: Parser String
parserLvl1 = many (try (string "test" >> return 'v') <|> anyChar)

parserLvl2 :: Parser Plus
parserLvl2 = do text1 <- many (noneOf "+")
                char '+'
                text2 <- many (noneOf "+")
                return $ Plus text1 text2

data Plus = Plus String String
  deriving Show

wholeParse :: String -> Either ParseError Plus
wholeParse source = do res1 <- parse parserLvl1 "lvl1" source
                       res2 <- parse parserLvl2 "lvl2" res1
                       return res2
Now you can parse your example. wholeParse "test+5" results in Right (Plus "v" "5").
Possible variations:
Create a class and an instance for combining wrapped parser stages. (Possibly carrying parser state.)
Create an intermediate representation, a stream of tokens (see the sketch below)
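A sketch of the second variation (the Token type and tokensLvl1 below are assumptions, not part of the original answer): the first stage produces a token stream in which "test" has already been turned into the token it stands for, and the second stage would then work on [Token] instead of String.

data Token = TTest | TPlus | TNum Int
  deriving Show

tokensLvl1 :: Parser [Token]
tokensLvl1 = many (lexeme tok) <* eof
  where
    lexeme p = p <* spaces
    tok = choice
      [ TTest       <$  try (string "test")
      , TPlus       <$  char '+'
      , TNum . read <$> many1 digit
      ]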
This is easily done in uu-parsinglib using the pSwitch function. But the question is why you want to do so? Because the v is missing from the input? In that case uu-parsinglib will perform error correction automatically so you do not need something like this. Otherwise you can write
pSwitch :: (st1 -> (st2, st2 -> st1)) -> P st2 a -> P st1 a
pInsert_v = pSwitch (\st1 -> (prepend v st1, id)) (pSucceed ())
It depends on your actual state type how the v is actually added, so you will have to define the function prepend yourself. I do not know e.g. how such an insertion would influence the current position in the file etc.
Doaitse Swierstra

Using Parsec to write a Read instance

Using Parsec, I'm able to write a function of type String -> Maybe MyType with relative ease. I would now like to create a Read instance for my type based on that; however, I don't understand how readsPrec works or what it is supposed to do.
My best guess right now is that readsPrec is used to build a recursive parser from scratch to traverse a string, building up the desired datatype in Haskell. However, I already have a very robust parser that does that very thing for me. So how do I tell readsPrec to use my parser? What is the "operator precedence" parameter it takes, and what is it good for in my context?
If it helps, I've created a minimal example on Github. It contains a type, a parser, and a blank Read instance, and reflects quite well where I'm stuck.
(Background: The real parser is for Scheme.)
However, I already have a very robust parser that does that very thing for me.
It's actually not that robust: your parser has problems with superfluous parentheses; it won't parse
((1) (2))
for example, and it will throw an exception on some malformed inputs, because
singleP = Single . read <$> many digit
may use read "" :: Int.
That out of the way, the precedence argument is used to determine whether parentheses are necessary in some place, e.g. if you have
infixr 6 :+:
data a :+: b = a :+: b
data C = C Int
data D = D C
you don't need parentheses around a C 12 as an argument of (:+:), since the precedence of application is higher than that of (:+:), but you'd need parentheses around C 12 as an argument of D.
So you'd usually have something like
readsPrec p = needsParens (p >= precedenceLevel) someParser
where someParser parses a value from the input without enclosing parentheses, and needsParens True thing parses a thing between parentheses, while needsParens False thing parses a thing optionally enclosed in parentheses [you should always accept more parentheses than necessary, ((((((1)))))) should parse fine as an Int].
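A minimal sketch of such a needsParens helper in Parsec (the name and implementation are assumptions following the description above; it is not a library function):

import Text.Parsec
import Text.Parsec.String (Parser)

-- needsParens True requires at least one pair of enclosing parentheses;
-- needsParens False accepts any number of them, including none.
needsParens :: Bool -> Parser a -> Parser a
needsParens True  p = parens (needsParens False p)
needsParens False p = parens (needsParens False p) <|> p

parens :: Parser a -> Parser a
parens p = char '(' *> spaces *> p <* spaces <* char ')'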
Since the readsPrec p parsers are used to parse parts of the input as parts of the value when reading lists, tuples etc., they must return not only the parsed value, but also the remaining part of the input.
With that, a simple way to transform a parsec parser to a readsPrec parser would be
withRemaining :: Parser a -> Parser (a, String)
withRemaining p = (,) <$> p <*> getInput

parsecToReadsPrec :: Parser a -> Int -> ReadS a
parsecToReadsPrec parsecParser prec input
  = case parse (withRemaining $ needsParens (prec >= threshold) parsecParser) "" input of
      Left _       -> []
      Right result -> [result]
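With that, a hypothetical Read instance is just a matter of wiring the pieces together (myTypeParser and threshold are assumed names for your parser and the precedence cut-off):

instance Read MyType where
  readsPrec = parsecToReadsPrec myTypeParser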
If you're using GHC, it may however be preferable to use a ReadPrec / ReadP parser (built using Text.ParserCombinators.ReadP[rec]) instead of a parsec parser and define readPrec instead of readsPrec.

understanding attoparsec

attoparsec was suggested to me for parsing a file; now I need to understand how to use it.
somebody gave me this piece of code:
import Control.Applicative ((<$>), (<*>), (<*))
import Data.Attoparsec (maybeResult)
import qualified Data.Attoparsec.Char8 as A
import qualified Data.ByteString.Char8 as B
import qualified Data.Map as M

type Environment = M.Map String String

environment :: A.Parser Environment
environment = M.fromList <$> A.sepBy entry A.endOfLine

parseEnvironment = maybeResult . flip A.feed B.empty . A.parse environment

spaces = A.many $ A.char ' '

entry = (,) <$> upTo ':' <*> upTo ';'

upTo delimiter = B.unpack <$> A.takeWhile (A.notInClass $ delimiter : " ")
                          <* (spaces >> A.char delimiter >> spaces)
It works very well, but I do not know why:
What is the reason for using flip? Is it not easier to put the arguments of A.feed in a different order? And why is there B.empty?
Is there a tutorial about this that I can study?
Thanks in advance.
There's an explanation of the need for feed in the answers to this StackOverflow question. As Bryan O'Sullivan (the creator of Attoparsec) says there:
If you write an attoparsec parser that consumes as much input as possible before failing, you must tell the partial result continuation when you've reached the end of your input. You can do this by feeding it an empty bytestring.
I'll admit that I wrote the code in question, and I actually didn't use pointfree in this case. Simple composition just makes sense to me here: you run the parser (A.parse environment), you tell it you're done (flip A.feed B.empty), and you convert to a Maybe as a kind of basic error handling (maybeResult). In my opinion this feels cleaner than the pointed version:
parseEnvironment b = maybeResult $ A.feed (A.parse environment b) B.empty
The rest is I think fairly idiomatic applicative parsing, although I'm not sure why I would have used >> instead of *>.

Resources