Writing to and then reading from *standard-input* in Common Lisp

This is a very simple question. I am trying to solve HackerRank problems but don't fully understand how I can write to *standard-input* in order to run the code on my own computer.
The task asks you to sum an array, given the length of the array (N) followed by the array itself, all on *standard-input*.
HackerRank supplies the values on *standard-input*, so it would be easiest if I could store values in the input and then read them back.
My question is: how can I write to *standard-input*? That would make it a lot easier to work on my own computer instead of in the cloud.

Use with-input-from-string.
(with-input-from-string (s "4 3 2")
  (let ((a (read s))
        (b (read s))
        (c (read s)))
    (format t "~a, ~a, ~a~%" a b c)))
You could also just read from a file, but reading from a string is much easier for making different test cases.
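If you want your solution to keep reading from *standard-input* exactly as it does on HackerRank, you can also rebind *standard-input* to a string stream around the call. A minimal sketch (sum-array and the test input are mine, not part of the original task):
(defun sum-array ()
  ;; Reads N and then N integers from *standard-input*, prints their sum.
  (let* ((n (read))
         (xs (loop repeat n collect (read))))
    (format t "~a~%" (reduce #'+ xs))))

;; Feed "5 1 2 3 4 5" as if it arrived on standard input; prints 15.
(with-input-from-string (in "5 1 2 3 4 5")
  (let ((*standard-input* in))
    (sum-array)))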

Pure Common Lisp does not provide streams that you can easily write to, have the output buffered, and then read back. Pure Common Lisp also does not provide extensible streams. But there is an extension called Gray streams (proposed by David N. Gray as ANSI CL issue STREAM-DEFINITION-BY-USER), which makes it possible to implement pipe streams with a FIFO buffer.
For an example of such a pipe, see cl-plumbing.

Related

Curried functions and function application in Z3

I'm working on a language very similar to STLC that I'm converting to Z3 propositions, hence a few (sub)questions about how Z3 treats (uninterpreted) functions:
- Functions are naturally curried in my language, and as I'm converting the terms of my language recursively, I'd like to be able to build the corresponding Z3 AST recursively as well. That is, when I have a term f x y, I'd like to first apply f to x and then apply that to y. Is there a way to do this? The API I've found so far (Z3_mk_func_decl/Z3_mk_app) seems to require me to collect all arguments first and apply them all at once.
- Is there a reasonable way to represent something like (if b then f else g) x?
In both cases, I'm totally fine with functions being uninterpreted and restricting the reasoning to things like "b = True /\ f x = 0 => (if b then f else g) x = 0 holds".
SMTLib (as described in http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf) is a many-sorted first-order logic. All functions (uninterpreted or not) must be applied to all of their arguments, and you cannot have any form of currying. Also, you cannot do higher-order if-then-else, i.e., the branches of an if-then-else have to be first-order values. (However, they can be arrays, and you can imagine "faking" functions with arrays. But that's beside the point.)
It should be noted that the next iteration of SMTLib (v3) will be based on a higher-order logic, at which point features like you're asking might become available. See: http://smtlib.cs.uiowa.edu/version3.shtml. Of course, this is still a proposal and it'll take a while before it's settled and actual solvers start implementing it faithfully. Eventually that'll happen, but I wouldn't expect it in the very near term.
Aside: Since you mentioned STLC (simply typed lambda calculus), I presume you're familiar with functional languages like Haskell. If that's the case, you might want to look into using SBV: https://hackage.haskell.org/package/sbv. It provides a framework for doing some of these things by carefully translating them away behind the scenes. Here's an example:
Prelude Data.SBV> sat $ \b -> (ite b (uninterpret "f") (uninterpret "g")) (0::SInteger) .== (0::SInteger)
Satisfiable. Model:
  s0 = True :: Bool

  f :: Integer -> Integer
  f _ = 0

  g :: Integer -> Integer
  g _ = 2
Here we created two functions and used the ite construct to "merge" them, and got the solver to return us a model. Behind the scenes, SBV will fully saturate these applications and let you "pretend" you're programming in a higher-order sense, just like in STLC or Haskell. Of course, the devil is in the details and there are limitations to the approach, but modeling STLC in Haskell is a classic pastime for many people, and doing it symbolically using SBV can be a fun exercise.

Infinite stream without elements

We just learned about streams in one of my programming courses. I have already worked my way through this week's assigned tasks except this one: Define an infinite stream that has no elements. I already skimmed through chapter 3.5 of the SICP book, but I still don't really get what the trick is. Some hints would be awesome, ty ^^
The answer depends on the exact type of streams you are using.
Basically there are two different types of streams:
- empty streams
- streams consisting of an element followed by a stream
Therefore you need two different types of values to represent the two types.
In SICP they write:
There is a distinguishable object, the-empty-stream, which cannot be
the result of any cons-stream operation, and which can be identified
with the predicate stream-null?
One way to define a distinguishable (unique) object in Scheme is to allocate
a new cons cell - its address is unique to that cell.
(define the-empty-stream (list 'this-is-the-empty-stream))
(define (stream-null? x) (eq? x the-empty-stream))
In Racket one would simply use a struct to represent the empty stream
(struct the-empty-stream ())
(define stream-null? the-empty-stream?)
Functions such as stream->list need to handle both types of streams.
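For example, here is a sketch of stream->list in the SICP style (assuming the book's stream-car and stream-cdr; it only terminates for finite streams):
(define (stream->list s)
  (if (stream-null? s)
      '()                                  ; the empty stream: we are done
      (cons (stream-car s)                 ; non-empty: take one element
            (stream->list (stream-cdr s)))))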
Back to your question: If you are using SICP streams the answer is simple.
The expression the-empty-stream will give you an empty stream.
If you are defining your own, you will need to choose a way to represent
an empty stream - and then carefully modify all the stream functions you
have to handle the empty stream too.

Is there an established way to write parsers that can reconstruct their exact input?

Say I want to parse a file in language X. Really, I'm only interested in a small part of the information within. It's easy enough to write a parser in one of Haskell's many eDSLs for that purpose (e.g. Megaparsec).
data Foo = Foo Int -- the information I'm after.
parseFoo :: Parsec Text Foo
parseFoo = ...
That readily gives rise to a function getFoo :: Text -> Maybe Foo.
But now I would also like to modify the source of the Foo information, i.e. basically I want to implement
changeFoo :: (Foo -> Foo) -> Text -> Text
with the properties
changeFoo id ≡ id
getFoo . changeFoo f ≡ fmap f . getFoo
It's possible to do that by changing the result of the parser to something like a lens
parseFoo :: Parsec Text (Foo, Foo -> Text)
parseFoo = ...
but that makes the definition a lot more cumbersome – I can't just gloss over irrelevant information anymore, but need to store the matched text of every string subparser and manually reassemble it.
This could be somewhat automated by keeping the string reassemblage in a StateT layer around the parser monad, but I couldn't just use the existing primitive parsers.
Is there an existing solution for this problem?
Is this a case of "bidirectional transformation"? E.g., http://ceur-ws.org/Vol-1571/
In particular, "Invertible Syntax Descriptions: Unifying Parsing and Pretty Printing" by Rendel and Ostermann,
http://dblp.org/rec/conf/haskell/RendelO10 , Haskell Symposium 2010 (cf. http://lambda-the-ultimate.org/node/4191 )
A solution implemented in Haskell? I don't know of one; they may exist.
In general, though, one can store enough information to regenerate a legal version of the program that resembles the original to an arbitrary degree, by storing "formatting" information with collected tokens. In the limit, the format information is the original string for the token; any approximation of that will give successively less accurate answers.
If you keep whitespace as explicit tokens in the parse tree, in the limit you can even regenerate that. Whether that is useful likely depends on the application. In general, I think this is overkill.
Details on what/how to capture and how to regenerate can be found in my SO answer: Compiling an AST back to source code
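To make the token-with-formatting idea concrete, here is a minimal Haskell sketch (the Chunk type and all names are hypothetical, not from any existing library): irrelevant input is kept verbatim, so rendering unmodified chunks reproduces the source exactly, which gives changeFoo id ≡ id for free.
import Data.Text (Text)
import qualified Data.Text as T

data Foo = Foo Int

-- The parser would produce a [Chunk]: the parts we care about, plus
-- every stretch of input we don't, stored verbatim.
data Chunk
  = Irrelevant Text      -- original text, never inspected
  | FooChunk Foo         -- the information we're after

render :: [Chunk] -> Text
render = T.concat . map go
  where
    go (Irrelevant t)     = t
    go (FooChunk (Foo n)) = T.pack (show n)  -- assumes Foo's syntax is just a number

changeFoo :: (Foo -> Foo) -> [Chunk] -> [Chunk]
changeFoo f = map go
  where
    go (FooChunk x) = FooChunk (f x)
    go c            = c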

DrRacket - I got a hint that local needs to be used here...but how?

Design at-0. The function consumes a list of functions from numbers to numbers and produces the list of results of applying these functions to 0.
Actually, the straightforward solution I can think of uses map, not local. Now, of course, if you're using a student language that doesn't support map, that's a different story. Anyway, here's a skeletal solution for you:
(define (at-0 funcs)
  (map (lambda <???>
         (<???> <???>))
       funcs))
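Once the blanks are filled in, a test like the following (check-expect is from the student languages; the example functions are my own) should pass:
;; (add1 0) = 1, (sub1 0) = -1, and ((lambda (n) (* n n)) 0) = 0
(check-expect (at-0 (list add1 sub1 (lambda (n) (* n n))))
              (list 1 -1 0))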

"Sub-parsers" in pipes-attoparsec

I'm trying to parse binary data using pipes-attoparsec in Haskell. The reason pipes (proxies) are involved is to interleave reading with parsing, to avoid high memory use for large files. Many binary formats are based on blocks (or chunks), and their sizes are often described by a field in the file. I'm not sure what a parser for such a block is called, but that's what I mean by "sub-parser" in the title. The problem I have is to implement them in a concise way without a potentially large memory footprint. I've come up with two alternatives that each fail in some regard.
Alternative 1 is to read the block into a separate bytestring and start a separate parser for it. While concise, a large block will cause high memory use.
Alternative 2 is to keep parsing in the same context and track the number of bytes consumed. This tracking is error-prone and seems to infest all the parsers that compose into the final blockParser. For a malformed input file it could also waste time by parsing further than indicated by the size field before the tracked size can be compared.
import Control.Proxy.Attoparsec
import Control.Proxy.Trans.Either
import Data.Attoparsec as P
import Data.Attoparsec.Binary
import qualified Data.ByteString as BS

parser = do
  size <- fromIntegral <$> anyWord32le
  -- alternative 1 (ignore the Either for simplicity):
  Right result <- parseOnly blockParser <$> P.take size
  return result
  -- alternative 2
  (result, trackedSize) <- blockParser
  when (size /= trackedSize) $ fail "size mismatch"
  return result

blockParser = undefined

main = withBinaryFile "bin" ReadMode go where
  go h = fmap print . runProxy . runEitherK $ session h
  session h = printD <-< parserD parser <-< throwParsingErrors <-< parserInputD <-< readChunk h 128
  readChunk h n () = runIdentityP go where
    go = do
      c <- lift $ BS.hGet h n
      unless (BS.null c) $ respond c *> go
I like to call this a "fixed-input" parser.
I can tell you how pipes-parse will do it. You can see a preview of what I'm about to describe in pipes-parse in the parseN and parseWhile functions of the library. Those are actually for generic inputs, but I wrote similar ones for example String parsers as well here and here.
The trick is really simple: you insert a fake end-of-input marker where you want the parser to stop, run the parser (which will fail if it hits the fake end-of-input marker), then remove the end-of-input marker.
Obviously, that's not as easy as I make it sound, but it's the general principle. The tricky parts are:
- Doing it in such a way that it still streams. The one I linked doesn't do that yet, but the way you do this in a streaming way is to insert a pipe upstream that counts bytes flowing through it and then inserts the end-of-input marker at the correct spot.
- Not interfering with existing end-of-input markers.
This trick can be adapted for pipes-attoparsec, but I think the best solution would be for attoparsec to directly include this feature. However, if that solution is not available, then we can restrict the input that is fed to the attoparsec parser.
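At the plain attoparsec level, restricting the input fed to a sub-parser can be sketched with the incremental interface (using the modern Data.Attoparsec.ByteString module name): take the block's bytes and then feed the sub-parser an empty chunk, which attoparsec interprets as end of input. This sketch still reads the whole block into memory, like alternative 1; it only illustrates the end-of-input trick, not the streaming version:
import qualified Data.Attoparsec.ByteString as A
import qualified Data.ByteString as BS

-- Run a sub-parser on exactly the next n bytes.  Feeding BS.empty to the
-- incremental result is attoparsec's way of signalling end of input, so
-- the sub-parser cannot read past its block.
fixedInput :: Int -> A.Parser a -> A.Parser a
fixedInput n sub = do
  block <- A.take n
  case A.feed (A.parse sub block) BS.empty of
    A.Done rest r
      | BS.null rest -> return r
      | otherwise    -> fail "sub-parser did not consume the whole block"
    _ -> fail "sub-parser failed or wanted more input"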
Ok, so I finally figured out how to do this and I've codified this pattern in the pipes-parse library. The pipes-parse tutorial explains how to do this, specifically in the "Nesting" section.
The tutorial only explains this for datatype-agnostic parsing (i.e. a generic stream of elements), but you can extend it to work with ByteStrings, too.
The two key tricks that make this work are:
- Fixing StateP to be global (in pipes-3.3.0)
- Embedding the sub-parser in a transient StateP layer so that it uses a fresh leftovers context
pipes-attoparsec is going to release an update soon that builds on pipes-parse, so that you can use these tricks in your own code.
