SMT let expression binding scope - z3

I'm using a simple let expression to shorten my SMT formula. I want bindings to be able to use previously defined bindings, as in the code below; but if I swap in the commented-out line so that n refers to s, it doesn't work:
;;;;;;;;;;;;;;;;;;;;;
;                   ;
; This is our state ;
;                   ;
;;;;;;;;;;;;;;;;;;;;;
(declare-datatypes ((State 0))
  (((rec
     (myArray String)
     (index Int))))
)

;;;;;;;;;;;;;;;;;;;;;;;;;;
;                        ;
; This is our function f ;
;                        ;
;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-fun f ((in State)) State
  (let (
        (s (myArray in))
        (n (str.len (myArray in))))
        ;;;;;;;;;; (n (str.len s)))
    (rec (str.substr s 1 n) 1733))
)
I looked at the documentation here, and it's not clear whether it's indeed forbidden to have bindings refer to other (previously defined) bindings:
The whole let construct is entirely equivalent to replacing each new
parameter by its expression in the target expression, eliminating the
new symbols completely (...)
I guess it's a "shallow" replacement?

From Section 3.6.1 of http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf:
Let. The let binder introduces and defines one or more local variables in parallel. Semantically, a term of the form

(let ((x1 t1) ... (xn tn)) t)    (3.3)

is equivalent to the term t[t1/x1, ..., tn/xn] obtained from t by simultaneously replacing each free occurrence of xi in t by ti, for each i = 1, ..., n, possibly after a suitable renaming of t's bound variables to avoid capturing any variables in t1, ..., tn. Because of the parallel semantics, the variables x1, ..., xn in (3.3) must be pairwise distinct.

Remark 3 (No sequential version of let). The language does not have a sequential version of let. Its effect is achieved by nesting lets, as in (let ((x1 t1)) (let ((x2 t2)) t)).
As indicated in Remark 3, if you want to refer to an earlier definition you have to nest the let-expressions.
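Applied to the function from the question, the nested form would look something like this (a sketch; untested):

(define-fun f ((in State)) State
  (let ((s (myArray in)))
    (let ((n (str.len s)))
      (rec (str.substr s 1 n) 1733))))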

Related

How can I define a function in z3 Python API since the new SMT-LIB standard?

The new SMT-LIB standard allows for a function definition command of the form:
(define-fun f ((x1 σ1) ... (xn σn)) σ t)
The spec clarifies that this is semantically equivalent to
(declare-fun f (σ1 ... σn) σ)
(assert (forall ((x1 σ1) ... (xn σn)) (= (f x1 ... xn) t)))
At the moment I would define a function using the Python z3 API as follows:
s = z3.Solver()
f = z3.Function("f", [σ1 ... σn, σ])
s.add(z3.ForAll([x1, ...,xn], t == f(x1, ..., xn)))
Is that the canonical way of doing it, or is there a more straightforward or efficient way of handling this?
Typically, one simply uses a Python function instead. The function returns the result in terms of its symbolic inputs, in effect unrolling the definition before z3 even sees it.
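A minimal sketch of that idea (the function f and the constraint are just placeholders):

from z3 import *

# f exists only as a Python function; z3 never sees a named definition,
# only the fully expanded term it returns.
def f(x):
    return x + 1

x = Int('x')
s = Solver()
s.add(f(x) == 5)   # z3 receives the constraint x + 1 == 5
print(s.check())   # sat
print(s.model())   # [x = 4]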
One exception to this is when you want to define a recursive function whose termination depends on a symbolic argument. Note that you can use the same facility even if your function isn't recursive, which can solve your problem of definitions that come from other sources. For details see RecAddDefinition.
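A minimal sketch of that facility (the factorial definition here is only an illustration):

from z3 import *

fac = RecFunction('fac', IntSort(), IntSort())
n = Int('n')
# register the recursive definition with z3
RecAddDefinition(fac, [n], If(n <= 0, 1, n * fac(n - 1)))

s = Solver()
x = Int('x')
s.add(x == 4, fac(x) == 24)
print(s.check())   # sat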

How to refresh, remake, lexical bindings on a lambda?

I am trying to see how to rebind a lexical binding, or redefine the
closure of a lambda. The expected usage of next-noun is just to call it as many times as desired with no arguments. It should return a random noun from the list, but one that has not been returned yet until the list is exhausted.
Here is the toy example I am using:
#lang racket

(define nouns `(time
                year
                people
                way
                day
                man))

(define (next-noun)
  (let* ([lst-nouns (shuffle nouns)]
         [func-syn
          `(λ ()
             (let* ([n (car lst-nouns)]
                    [lst-nouns (if (null? (cdr lst-nouns))
                                   (shuffle nouns)
                                   (cdr lst-nouns))])
               (set! next-noun (eval func-syn))
               n))])
    ((eval func-syn))))
When trying to run it I get this error:
main.rkt>
main.rkt> (next-noun)
; lst-nouns: undefined;
; cannot reference an identifier before its definition
; in module: "/home/joel/projects/racket/ad_lib/main.rkt"
This confuses me, since there should be a binding for lst-nouns any time (eval func-syn) is run. What's going on?
You don't need to use eval here, at all. It's making the solution more complex (and insecure) than needed. Besides, the "looping" logic is incorrect, because you're not updating the position in lst-nouns, and anyway it gets redefined every time the procedure is called. Also, see the link shared by Sorawee to understand why eval can't see local bindings.
In Scheme we try to avoid mutating state whenever possible, but for this procedure I think it's justified. The trick is to keep the state that needs to be updated inside a closure; this is one way to do it:
(define nouns '(time
                year
                people
                way
                day
                man))

; notice that `next-noun` gets bound to a `lambda`
; and that `lst-nouns` was defined outside of it,
; so it's the same for all procedure invocations
(define next-noun
  ; store the list position in a closure outside the lambda
  (let ((lst-nouns '()))
    ; define `next-noun` as a no-args procedure
    (λ ()
      ; if the list is empty, reset it with a shuffled copy of the original
      (when (null? lst-nouns)
        (set! lst-nouns (shuffle nouns)))
      ; obtain the current element
      (let ((noun (car lst-nouns)))
        ; advance to the next element
        (set! lst-nouns (cdr lst-nouns))
        ; return the current element
        noun))))
@PetSerAl proposed a more idiomatic solution in the comments. My guess is that you want to implement this from scratch, for learning purposes - but in real life we would do something like this, using Racket's generators:
(require racket/generator)

(define next-noun
  (infinite-generator
    (for-each yield (shuffle nouns))))
Either way it works as expected - repeatedly calling next-noun will return all the elements in nouns until the list is exhausted, at which point it will be reshuffled and the iteration will restart:
(next-noun)
=> 'day
(next-noun)
=> 'time
...
Your issue is with eval. eval does not have access to the lexical environment from where it is called; rather, it has at most the top-level bindings. E.g.

(define x 12)
(let ((x 10))
  (eval '(+ x x))) ; ==> 24
eval is almost always the wrong solution and can often be replaced with closures and called directly or with apply. Here is what I would have done:
(define (shuffle-generator lst)
  (define shuffled (shuffle lst))
  (define (next-element)
    (when (null? shuffled)
      (set! shuffled (shuffle lst)))
    (begin0
      (car shuffled)
      (set! shuffled (cdr shuffled))))
  next-element)
(define next-int15 (shuffle-generator '(1 2 3 4 5)))
(define random-bool (shuffle-generator '(#t #f)))
(random-bool) ; ==> #f
(next-int15) ; ==> 5
(next-int15) ; ==> 4
(next-int15) ; ==> 2
(next-int15) ; ==> 1
(next-int15) ; ==> 3
(next-int15) ; ==> 3
(random-bool) ; ==> #t
(random-bool) ; ==> #t
The returned values are random, so this is just what I got on my first run. Instead of naming next-element one could simply return the lambda, but the name gives information about what it does and the debugger will show the name, e.g.:
next-int15 ; ==> #<procedure:next-element>

How to tell whether parentheses are necessary or not?

I have written a parser in Haskell, which parses formulas in the form of string inputs and produces a Haskell data type defined by the BNF below.
formula ::= true
          | false
          | var
          | formula & formula
          | ∀ var . formula
          | (formula)

var ::= letter { letter | digit }*
Now I would like to create an instance of Show so that I can nicely print the formulas defined by my types (I don't want to use deriving (Show)). My question is: How do I define my function so that it can tell when parentheses are necessary? I don't want too many parentheses, nor too few.
For example, given the formula ∀ X . (X & Y) & (∀ Y . Y) & false which, when parsed, produces the data structure
And (And (Forall "X" (And (Var "X") (Var "Y"))) (Forall "Y" (Var "Y"))) False
we have
Too few parentheses: ∀ X . X & Y & ∀ Y . Y & false
Too many parentheses: (∀ X . (((X) & (Y)))) & (∀ Y . (Y)) & (false)
Just right: ∀ X . (X & Y) & (∀ Y . Y) & false
Is there a way to gauge how many parenthesis are necessary so that the semantics is never ambiguous? I appreciate any feedback.
Untested pseudocode:
instance Show Formula where
  showsPrec _p True  = "True"
  showsPrec _p False = "False"
  showsPrec p (And f1 f2) = showParen (p > 5) $
    showsPrec 5 f1 . (" & " ++) . showsPrec 5 f2
  showsPrec p (Forall x f) = showParen (p > 8) $
    ("forall " ++ x ++) . showsPrec 8 f
  ...
(I should probably use showString instead of those ++ above. It should work anyway, I think.)
Above, the integer p represents the precedence of the context where we are showing the current formula. For example, if we are showing f inside f & ... then p will have the precedence level of &.
If we need to print a symbol in a context which has higher precedence, we need to add parentheses. E.g. if f is a | b we can't write a | b & ..., otherwise it is interpreted as a | (b & ...). We need to put parentheses around a | b. This is done by the showParen (p > ...).
When we recurse, we pass the precedence level of the symbol at hand to the subterms.
Above, I chose the precedence levels randomly. You need to adjust them to your tastes. You should also check that the levels you choose play well with the standard libraries. E.g. printing Just someFormula should not generate things like Just a & b, but should add parentheses.
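For a concrete, runnable variant of the same idea, here is a sketch that assumes a Formula type with constructors FTrue, FFalse, Var, And and Forall (names invented for illustration); the precedence levels are again arbitrary choices:

data Formula
  = FTrue
  | FFalse
  | Var String
  | And Formula Formula
  | Forall String Formula

instance Show Formula where
  showsPrec _ FTrue   = showString "true"
  showsPrec _ FFalse  = showString "false"
  showsPrec _ (Var x) = showString x
  showsPrec p (And f1 f2) = showParen (p > 6) $
    showsPrec 6 f1 . showString " & " . showsPrec 7 f2    -- '&' treated as left-associative
  showsPrec p (Forall x f) = showParen (p > 2) $
    showString ("forall " ++ x ++ " . ") . showsPrec 2 f  -- body extends as far right as possible

-- ghci> show (And (Forall "X" (And (Var "X") (Var "Y"))) FFalse)
-- "(forall X . X & Y) & false"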

What is the closure of a left-recursive LR(0) item with epsilon transitions?

Let's say I have this grammar:
A: ε
 | B 'a'

B: ε
 | B 'b'
What is considered to be the closure of the item A: • B 'a'?
In other words, how do I deal with the epsilon transitions when figuring out closures?
This is pretty straightforward. Included in the closure of
A = ... <dot> X ... ;
are all the rules
X = <dot> R1 R2 R3 ... ;
where first(R1) is not empty. For each (nonempty) token K in first(R1), you'll need to (transitively!) include
R1 = <dot> K ... ;
etc. but presumably you are already clear on this.
Your specific question is: what happens if R1 can be empty? Then you also
need to include
X = R1 <dot> R2 ... ;
Similarly for R2 being empty, if R1 can be empty, and similarly for Ri being empty if R1 .. Ri-1 can be empty. In extreme circumstances, all the Ri can be empty (lots of optional subclauses in your grammar), and you can end up including
X = R1 R2 ... Rn <dot> ;
Note that determining that first(R1) "can be empty" is itself a transitive closure question.
The GLR parser generator that I built for DMS precomputes first_can_be_empty using Warshall's algorithm and then uses that in the closure construction.
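Applied to the grammar in the question, and following the rule above of also advancing the dot past symbols that can be empty, the closure of A: • B 'a' would contain:

A: • B 'a'
B: •
B: • B 'b'
A: B • 'a'    (since B can derive ε)
B: B • 'b'    (same reason)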

Compose example in Paul Graham's ANSI Common Lisp

Can anybody explain an example in Paul Graham's ANSI Common Lisp page 110?
The example tries to explain the use of &rest and lambda to create functional programming facilities. One of them is a function to compose functional arguments. I cannot find anything explaining how it works. The code is as follows:
(defun compose (&rest fns)
  (destructuring-bind (fn1 . rest) (reverse fns)
    #'(lambda (&rest args)
        (reduce #'(lambda (v f) (funcall f v))
                rest
                :initial-value (apply fn1 args)))))
The usage is:
(mapcar (compose #'list #'round #'sqrt)
        '(4 9 16 25))
The output is:
((2) (3) (4) (5))
Lines 2 and 6 look especially like magic to me.
The compose function returns a closure that calls each of the functions from last to first, passing on the result of each function call to the next.
The closure resulting from calling (compose #'list #'round #'sqrt) first calculates the square root of its argument, rounds the result to the nearest integer, then creates a list of the result. Calling the closure with, say, 3 as the argument is equivalent to evaluating (list (round (sqrt 3))).
The destructuring-bind evaluates the (reverse fns) expression to get the arguments of compose in reverse order, and binds the first item of the resulting list to the fn1 local variable and the rest of the resulting list to the rest local variable. Hence fn1 holds the last item of fns, #'sqrt.
The reduce calls each of the fns functions with the accumulated result. The :initial-value (apply fn1 args) provides the initial value to reduce and supports calling the closure with multiple arguments. Without the requirement of multiple arguments, compose can be simplified to:
(defun compose (&rest fns)
  #'(lambda (arg)
      (reduce #'(lambda (v f) (funcall f v))
              (reverse fns)
              :initial-value arg)))
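For example, a quick check of this simplified version:

(funcall (compose #'list #'round #'sqrt) 4) ; => (2)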
destructuring-bind combines destructors with binding. A destructor is a function that lets you access a part of a data structure. car and cdr are simple destructors to extract the head and tail of a list. getf is a general destructor framework. Binding is most commonly performed by let. In this example, fns is (#'list #'round #'sqrt) (the arguments to compose), so (reverse fns) is (#'sqrt #'round #'list). Then
(destructuring-bind (fn1 . rest) '(#'sqrt #'round #'list)
  ...)

is equivalent to

(let ((tmp '(#'sqrt #'round #'list)))
  (let ((fn1 (car tmp))
        (rest (cdr tmp)))
    ...))
except that it doesn't bind tmp, of course. The idea of destructuring-bind is that it's a pattern matching construct: its first argument is a pattern that the data must match, and symbols in the pattern are bound to the corresponding pieces of the data.
So now fn1 is #'sqrt and rest is (#'round #'list). The compose function returns a function: (lambda (&rest args) ...). Now consider what happens when you apply that function to some argument such as 4. The lambda can be applied, yielding
(reduce #'(lambda (v f) (funcall f v))
        '(#'round #'list)
        :initial-value (apply #'sqrt '(4)))
The apply function applies fn1 to the argument list; here args is the one-element list (4), so this is just (sqrt 4), which is 2.0. In other words, we have
(reduce #'(lambda (v f) (funcall f v))
        '(#'round #'list)
        :initial-value 2.0)
Now the reduce function does its job, which is to apply #'(lambda (v f) (funcall f v)) successively, first to #'round and then to #'list, starting with 2.0. This is equivalent to

(funcall #'list (funcall #'round 2.0))
→ (funcall #'list 2)
→ (2)
Okay, here goes:
It takes the functions given and reverses the list (in your example, it becomes (#'sqrt #'round #'list)), then sticks the first item into fn1 and the rest into rest. We have: fn1 = #'sqrt, and rest = (#'round #'list).
Then it performs a fold, using (apply sqrt args) (where args are the values given to the resulting lambda) as the initial value, and with each iteration grabbing the next function from rest to call.
For the first iteration you end up with (round (apply sqrt args)), and for the second iteration you end up with (list (round (apply sqrt args))).
Interestingly, only the initial function (sqrt in your case) is allowed to take multiple arguments. The rest of the functions are called with single arguments only, even if any particular function in the chain does a multiple-value return.
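For instance, with the original compose above, only the last function passed to compose sees all of the arguments:

(funcall (compose #'list #'+) 1 2 3) ; => (6)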
This example stumped me for a day. I could finally understand it by renaming some of the arguments and commenting each line before it made sense. Below is what helped me explain it to myself.
In the book example, using the call:
(mapcar (compose #'list #'round #'sqrt) '(4 9 16 25))
the functions parameter becomes (#'LIST #'ROUND #'SQRT):
(defun compose (&rest functions)
  (destructuring-bind (fx . fxs) (reverse functions)
    ;; fx  becomes #'SQRT
    ;; fxs becomes '(#'ROUND #'LIST)
    #'(lambda (&rest args) ; This is the function returned as the result.
        ;; The args parameter will be (4) on mapcar's first
        ;; iteration over the (4 9 16 25) list passed in the call:
        ;; (mapcar (compose #'list #'round #'sqrt) '(4 9 16 25)) => ((2) (3) (4) (5))
        ;; or e.g. the (4) in (apply (compose #'list #'sqrt) '(4)) => (2.0)
        ;; Note that args is not ((#'ROUND #'LIST)).
        (reduce #'(lambda (x y) (funcall y x))
                ;; fxs is '(#'ROUND #'LIST) - captured in the closure, since the
                ;; binding is no longer otherwise available once compose has returned.
                fxs
                ;; The initial value is (apply #'SQRT '(4)) => 2.0.
                ;; In Paul Graham's example, mapcar passes
                ;; each square number individually.
                ;; The reversed parameter order in the second lambda
                ;; first invokes (ROUND 2.0) => 2
                ;; and then invokes (LIST 2) => (2).
                :initial-value (apply fx args)))))
