I'm trying to simplify this boolean expression:
(not (and (or (not e) (not f) (not h))
          (or (not f) (not h) d)
          (or (not e) (not h) c)
          (or (not h) d c)
          (or (not e) (not f) (not g) a)
          (or (not f) d (not g) a)
          (or (not e) (not f) a i)
          (or (not f) d a i)
          (or (not e) c (not g) a)
          (or d c (not g) a)
          (or (not e) c a i)
          (or d c a i)
          (or (not e) (not f) a b)
          (or (not f) d a b)
          (or (not e) c a b)
          (or d c a b)))
It should reduce to a boolean expression in CNF, like this:
(and
(or (not a) h)
(or (not b) (not i) g h)
(or (not c) f)
(or (not d) e))
I've been trying to achieve this using the "ctx-solver-simplify" and "tseitin-cnf" tactics. If only "ctx-solver-simplify" is applied, no simplification is performed for this case. When both tactics are applied via then("tseitin-cnf", "ctx-solver-simplify"), the result contains lots of auxiliary variables, which is also not the reduced form I expected.
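For reference, here is roughly what I am running, as a minimal z3py sketch (the expression above rebuilt with the Python bindings; the SMTLib tactic invocation is analogous):

from z3 import *

a, b, c, d, e, f, g, h, i = Bools('a b c d e f g h i')
expr = Not(And(Or(Not(e), Not(f), Not(h)),
               Or(Not(f), Not(h), d),
               Or(Not(e), Not(h), c),
               Or(Not(h), d, c),
               Or(Not(e), Not(f), Not(g), a),
               Or(Not(f), d, Not(g), a),
               Or(Not(e), Not(f), a, i),
               Or(Not(f), d, a, i),
               Or(Not(e), c, Not(g), a),
               Or(d, c, Not(g), a),
               Or(Not(e), c, a, i),
               Or(d, c, a, i),
               Or(Not(e), Not(f), a, b),
               Or(Not(f), d, a, b),
               Or(Not(e), c, a, b),
               Or(d, c, a, b)))

goal = Goal()
goal.add(expr)
print(Then('tseitin-cnf', 'ctx-solver-simplify')(goal))  # subgoals contain tseitin auxiliaries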
Is there a way to simplify this expression to the expected output?
EDIT:
I asked the same question in the Z3 GitHub repo and got a really nice working response. Here is the issue:
https://github.com/Z3Prover/z3/issues/4822
Z3's simplification engine is not a good choice for these sorts of problems. It'll indeed "simplify" the expression, but it'll almost never match what a human would consider simple. Many "obvious" simplifications will not be applied, as they simply do not matter from the SAT solver's perspective. You can search Stack Overflow for many similar questions, and while z3 tactics can be used to great effect, they are not designed for what you are trying to do.
If you're looking for "minimal" (in a heuristic sense) simplification, you should look at other tools. For instance, sympy comes with a set of routines that does a decent job of conversion to canonical forms and simplification: https://stackoverflow.com/a/64626278/936310
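As a quick illustration, here is a sketch using sympy's simplify_logic (shown on a small slice of your expression to keep it short; for the full 9-variable expression you may need force=True, since simplify_logic limits itself to 8 variables by default):

from sympy import symbols
from sympy.logic.boolalg import simplify_logic

a, c, d, e, f, h = symbols('a c d e f h')
# a small slice of the original expression, just to show the API
expr = ~((~e | ~f | ~h) & (~f | ~h | d) & (~e | ~h | c) & (~h | d | c))
print(simplify_logic(expr, form='cnf'))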
Related
I'd like to use Z3 to solve problems that are most naturally expressed in terms of atoms (symbols), sets, predicates, and first order logic. For example (in pseudocode):
A = {a1, a2, a3, ...}  # A is a set
B = {b1, b2, b3, ...}
C = {c1, c2, c3, ...}

def p = (a:A, b:B, c:C) -> Bool  # p is an unspecified predicate
def q = (a:A, b:B, c:C) -> Bool

# Predicates can be defined in terms of other predicates:
def teaches = (a:A, b:B) -> there_exists c:C
                such_that [ p(a, b, c) OR q(a, b, c) ]

constraint1 = forall b:B there_exists a:A
                such_that teaches(a, b)

solve(constraint1)
What are good ways to express atoms, sets, predicates, relations, and first order quantifiers in Z3 (or other SMTs)?
Is there a standard idiom for this? Must it be done manually? Is there perhaps a translation library (not necessarily specific to Z3) that can convert them?
I believe Alloy uses SMT to implement predicate logic and relations, but Alloy seems designed more for interactive use to explore consistency of models, rather than to find specific solutions for problems.
"Alloy seems designed more for interactive use to explore consistency of models, rather than to find specific solutions for problems."
IMHO, Alloy shines when it comes to validating your own way of thinking. You model something, and through the visualization of several instances you can sometimes come to realize that what you modeled is not exactly what you'd hoped for.
In that sense, I agree with you.
Yet, Alloy can also be used to find specific solutions to problems. You can over-constrain a model so that only one instance can be found (i.e., your solution).
It also works quite well when your domain space remains relatively small.
Here's your model translated into Alloy:
sig A, B, C {}

pred teaches(a: A, b: B) {
  some c: C | a->b->c in REL.q or a->b->c in REL.p
}

// I'm a bit rusty, so... that's my inelegant take on defining an "unspecified predicate"
one sig REL {
  q: A->B->C,
  p: A->B->C
}

fact constraint1 {
  all b: B | some a: A | teaches[a, b]
}

run {}
If you want to define the atoms in sets A, B, C yourself and refer to them in predicates, you could always over-constrain this model as follows:
abstract sig A, B, C {}

one sig A1, A2 extends A {}
one sig B1 extends B {}
one sig C1, C2, C3 extends C {}

pred teaches(a: A, b: B) {
  some c: C | a->b->c in REL.q or a->b->c in REL.p
}

one sig REL {
  q: A->B->C,
  p: A->B->C
} {
  // here you could, for example, define the content of p and q yourself
  q = A1->B1->C2 + A2->B1->C3
  p = A1->B1->C3 + A1->B1->C2
}

fact constraint1 {
  all b: B | some a: A | teaches[a, b]
}

run {}
Modeling predicate logic in SMTLib is indeed possible, though it can be a bit cumbersome compared to a regular theorem prover like Isabelle/HOL, and interpreting the results can require a fair amount of squinting.
Having said that, here's a direct encoding of your sample problem using SMTLib:
(declare-sort A)
(declare-sort B)
(declare-sort C)
(declare-fun q (A B C) Bool)
(declare-fun p (A B C) Bool)
(assert (forall ((b B))
          (exists ((a A))
            (exists ((c C)) (or (p a b c) (q a b c))))))
(check-sat)
(get-model)
A few notes:
declare-sort creates an uninterpreted sort. It's essentially a non-empty set of values. (It can be infinite as well; there are no cardinality assumptions made, aside from the fact that it's not empty.) For your specific problem, it doesn't seem to matter what this sort actually is, since you didn't use any of its elements directly. If you do, you might also want to try a "declared" sort, i.e., a data-type declaration. This can be an enumeration, or something even more complicated, depending on the problem. For the current question as posed, an uninterpreted sort works just fine.
declare-fun tells the solver that there's an uninterpreted function with that name and signature. Otherwise it neither defines nor constrains it in any way. You can add "axioms" about such functions to be more specific about how they behave.
Quantifiers are supported, as you see with forall and exists in how your constraint1 is encoded. Note that SMTLib isn't that suitable for code-reuse, and one usually programs in a higher-level binding. (Bindings from C/C++/Java/Python/Scala/O'Caml/Haskell etc. are provided, with similar but varying degrees of support and features.) Otherwise, it should be easy to read.
We finally issue check-sat and get-model to ask the solver to create a universe where all the asserted constraints are satisfied. If so, it'll print sat and produce a model. Otherwise, it'll print unsat if there's no such universe, or it can print unknown (or loop forever!) if it cannot decide. Quantifiers are difficult for SMT solvers to deal with, and heavy use of them will no doubt lead to unknown as the answer. This is an inherent limitation, since first-order predicate calculus is only semi-decidable.
When I run this specification through z3, I get:
sat
(
;; universe for A:
;; A!val!1 A!val!0
;; -----------
;; definitions for universe elements:
(declare-fun A!val!1 () A)
(declare-fun A!val!0 () A)
;; cardinality constraint:
(forall ((x A)) (or (= x A!val!1) (= x A!val!0)))
;; -----------
;; universe for B:
;; B!val!0
;; -----------
;; definitions for universe elements:
(declare-fun B!val!0 () B)
;; cardinality constraint:
(forall ((x B)) (= x B!val!0))
;; -----------
;; universe for C:
;; C!val!0 C!val!1
;; -----------
;; definitions for universe elements:
(declare-fun C!val!0 () C)
(declare-fun C!val!1 () C)
;; cardinality constraint:
(forall ((x C)) (or (= x C!val!0) (= x C!val!1)))
;; -----------
(define-fun q ((x!0 A) (x!1 B) (x!2 C)) Bool
(and (= x!0 A!val!0) (= x!2 C!val!0)))
(define-fun p ((x!0 A) (x!1 B) (x!2 C)) Bool
false)
)
This takes a bit of squinting to understand fully. The first set of values tells you how the solver constructed a model for the uninterpreted sorts A, B, and C: with witness elements and cardinality constraints. You can ignore this part for the most part, though it does contain useful information. For instance, it tells us that A is a set with two elements (named A!val!0 and A!val!1), as is C, while B has only one element. Depending on your constraints, you'll get different sets of elements.
For p, we see:
(define-fun p ((x!0 A) (x!1 B) (x!2 C)) Bool
false)
This means p is always False; i.e., it's the empty set, regardless of what arguments are passed to it.
For q we get:
(define-fun q ((x!0 A) (x!1 B) (x!2 C)) Bool
(and (= x!0 A!val!0) (= x!2 C!val!0)))
Let's rewrite this a little more simply:
q (a, b, c) = a == A0 && c == C0
where A0 and C0 are shorthand for the members A!val!0 and C!val!0 of the sorts A and C, respectively; see the sort declarations above. So it says q is True whenever a is A0 and c is C0, and it doesn't matter what b is.
You can convince yourself that this model does indeed satisfy the constraint you wanted.
To sum up: modeling these problems in z3 is indeed possible, though a bit clumsy, and heavy use of quantifiers can make the solver loop forever or return unknown. Interpreting the output can be a bit cumbersome, though you'll realize that the models follow a similar schema: first the uninterpreted sorts, and then the definitions for the predicates.
Side note
As I mentioned, programming z3 in SMTLib is cumbersome and error-prone. Here's the same program done using the Python interface:
from z3 import *
A = DeclareSort('A')
B = DeclareSort('B')
C = DeclareSort('C')
p = Function('p', A, B, C, BoolSort())
q = Function('q', A, B, C, BoolSort())
dummyA = Const('dummyA', A)
dummyB = Const('dummyB', B)
dummyC = Const('dummyC', C)
def teaches(a, b):
    return Exists([dummyC], Or(p(a, b, dummyC), q(a, b, dummyC)))
constraint1 = ForAll([dummyB], Exists([dummyA], teaches(dummyA, dummyB)))
s = Solver()
s.add(constraint1)
print(s.check())
print(s.model())
This has some idiosyncrasies of its own, though hopefully it'll provide a starting point for your explorations should you choose to program z3 in Python. Here's the output:
sat
[p = [else -> And(Var(0) == A!val!0, Var(2) == C!val!0)],
q = [else -> False]]
Which has the exact same info as the SMTLib output, though written slightly differently.
Function definition style
Note that we defined teaches as a regular Python function. This is the usual style in z3py programming, as the expression it produces gets substituted in as calls are made. You can also create a z3-level function, like this:
teaches = Function('teaches', A, B, BoolSort())
s.add(ForAll([dummyA, dummyB],
             teaches(dummyA, dummyB) == Exists([dummyC], Or(p(dummyA, dummyB, dummyC), q(dummyA, dummyB, dummyC)))))
Note that this style of definition relies on quantifier instantiation internally, instead of the general function-definition facilities of SMTLib. So you should generally prefer the Python-function style, as it translates to "simpler" internal constructs; it is also much easier to define and use.
One case where you need the z3 function definition style is if the function you're defining is recursive and its termination relies on a symbolic argument. For a discussion of this, see: https://stackoverflow.com/a/68457868/936310
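To illustrate that last point, here is a sketch of the z3-function style applied to a genuinely recursive definition, using RecFunction and RecAddDefinition from the Python API (available in recent z3 releases; the factorial function here is just a stand-in example):

from z3 import *

# Register a recursive definition with the solver itself, instead of
# writing a quantified equation over an uninterpreted function.
fac = RecFunction('fac', IntSort(), IntSort())
n = Int('n')
RecAddDefinition(fac, [n], If(n <= 0, 1, n * fac(n - 1)))

s = Solver()
s.add(fac(5) == 120)
print(s.check())  # expect sat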
I was trying to understand the concept of dynamic/static scope with deep and shallow binding. Below is the code:
(define x 0)
(define y 0)
(define (f z) (display (+ z y)))
(define (g f) (let ((y 10)) (f x)))
(define (h) (let ((x 100)) (g f)))
(h)
I understand that with dynamic scoping, the called function uses the bindings of the calling function. So using dynamic binding I should get the answer 110. Using static scoping I would get the answer 0. But I got these results without considering shallow or deep binding. What are shallow and deep binding, and how will they change the result?
There's an example in these lecture notes (6. Names, Scopes, and Bindings) that explains the concepts, though I don't like their pseudo-code:
thres: integer

function older(p: person): boolean
    return p.age > thres

procedure show(p: person, c: function)
    thres: integer
    thres := 20
    if c(p)
        write(p)

procedure main(p)
    thres := 35
    show(p, older)
As best I can tell, this would be the following in Scheme (with some, I hope, more descriptive names):
(define cutoff 0) ; a

(define (above-cutoff? person)
  (> (age person) cutoff))

(define (display-if person predicate)
  (let ((cutoff 20)) ; b
    (if (predicate person)
        (display person))))

(define (main person)
  (let ((cutoff 35)) ; c
    (display-if person above-cutoff?)))
With lexical scoping the cutoff in above-cutoff? always refers to binding a.
With dynamic scoping as it's implemented in Common Lisp (and most actual languages with dynamic scoping, I think), the value of cutoff in above-cutoff?, when used as the predicate in display-if, will refer to binding b, since that's the most recent one on the stack in that case. This is shallow binding.
So the remaining option is deep binding, and it has the effect of having the value of cutoff within above-cutoff? refer to binding c.
Now let's take a look at your example:
(define x 0)
(define y 0)
(define (f z) (display (+ z y)))
(define (g f) (let ((y 10)) (f x)))
(define (h) (let ((x 100)) (g f)))
(h)
I'm going to add some newlines so that commenting is easier, and use a comment to mark each binding of each of the variables that gets bound more than once.
(define x 0) ; x0
(define y 0) ; y0
(define (f z) ; f0
  (display (+ z y)))
(define (g f) ; f1
  (let ((y 10)) ; y1
    (f x)))
(define (h)
  (let ((x 100)) ; x1
    (g f)))
Note the f0 and f1 there. These are important, because with deep binding, a function passed as an argument is bound to the dynamic environment that is current at the moment it is passed. That matters here, because f is passed as a parameter to g within h. So, let's cover all the cases:
With lexical scoping, the result is 0. I think this is the simplest case.
With dynamic scoping and shallow binding the answer is 110. (The value of z is 100, and the value of y is 10.) That's the answer that you already know how to get.
Finally, with dynamic scoping and deep binding, you get 100. Within h, you pass f as a parameter, and the current scope is captured to give us a function (lambda (z) (display (+ z 0))), which we'll call ff for the sake of convenience. Once you're in g, the call to the local variable f is actually a call to ff, which is called with the current value of x (from x1, 100), so you're printing (+ 100 0), which is 100.
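If it helps to experiment, here is a rough Python simulation of the shallow and deep cases for your example. The dyn dictionary and the binding helper are ad-hoc stand-ins for a dynamic environment, not a real language feature:

import contextlib

dyn = {'x': 0, 'y': 0}  # the global dynamic bindings: x0, y0

@contextlib.contextmanager
def binding(name, value):
    """Push a dynamic binding for name, popping it on exit."""
    old = dyn[name]
    dyn[name] = value
    try:
        yield
    finally:
        dyn[name] = old

def f(z):                        # f0: y is free
    print(z + dyn['y'])

def g(fn):
    with binding('y', 10):       # y1
        fn(dyn['x'])

def h_shallow():
    with binding('x', 100):      # x1
        g(f)                     # shallow: f reads y at call time, prints 110

def h_deep():
    with binding('x', 100):      # x1
        y_at_pass = dyn['y']     # deep: capture y (= 0) when f is passed
        g(lambda z: print(z + y_at_pass))  # prints 100

h_shallow()
h_deep()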
Comments
As I said, I think deep binding is sort of unusual, and I don't know whether many languages actually implement it. You could think of it as taking the function, checking whether it has any free variables, and then filling them in with values from the current dynamic environment. I don't think this actually gets used much in practice, and that's probably why you've received some comments asking about these terms. I do see that it could be useful in some circumstances, though. For instance, in Common Lisp, which has both lexical and dynamic (called 'special') variables, many of the system configuration parameters are dynamic. This means that you can do things like this to print in base 16 (since *print-radix* is a dynamic variable):
(let ((*print-radix* 16))
(print value))
But if you wanted to return a function that would print things in base 16, you can't do:
(let ((*print-radix* 16))
(lambda (value)
(print value)))
because someone could take that function, let's call it print16, and do:
(let ((*print-radix* 10))
(print16 value))
and the value would be printed in base 10. Deep binding would avoid that issue. That said, you can also avoid it with shallow binding; you just return
(lambda (value)
(let ((*print-radix* 16))
(print value)))
instead.
All that said, I think that this discussion gets kind of strange when it's talking about "passing functions as arguments". It's strange because in most languages, an expression is evaluated to produce a value. A variable is one type of expression, and the result of evaluating a variable is the value of that variable. I emphasize "the" there, because that's how it is: a variable has a single value at any given time. This presentation of deep and shallow binding instead gives a variable a different value depending on where it is evaluated. That seems pretty strange. What I think would make much more sense is if the discussions were about what you get back when you evaluate a lambda expression. Then you could ask "what will the values of the free variables in the lambda expression be"? The answer, in shallow binding, is "whatever the dynamic values of those variables are when the function is called later." The answer, in deep binding, is "whatever the dynamic values of those variables are when the lambda expression is evaluated."
Then we wouldn't have to consider "functions being passed as arguments" at all. That whole notion is bizarre, because what happens when you pass a function as a parameter (capturing its dynamic environment) and whatever you're passing it to then passes it somewhere else? Is the dynamic environment supposed to get re-bound?
Related Questions and Answers
Dynamic Scoping - Deep Binding vs Shallow Binding
Shallow & Deep Binding - What would this program print?
Dynamic/Static scope with Deep/Shallow binding (exercises) (The answer to this question mentions that "Dynamic scope with deep binding is much trickier, since few widely-deployed languages support it.")
I am wondering how the substitution model can be used to show certain things about infinite streams. For example, say you have a stream that puts n in the nth spot and so on inductively. I define it below:
(define all-ints
  (lambda ((n <integer>))
    (stream-cons n (all-ints (+ 1 n)))))
(define integers (all-ints 1))
It is pretty clear that this does what it is supposed to, but how would someone go about proving it? I decided to use induction. Specifically, induction on k where
(last (stream-to-list integers k))
provides the last value of the first k values of the stream provided, in this case integers. I define stream-to-list below:
(define stream-to-list
  (lambda ((s <stream>) (n <integer>))
    (cond ((or (zero? n) (stream-empty? s)) '())
          (else (cons (stream-first s)
                      (stream-to-list (stream-rest s) (- n 1)))))))
What I'd like to prove, specifically, is the property that k = (last (stream-to-list integers k)) for all k > 1.
Getting the base case is fairly easy and I can do that, but how would I go about showing the "inductive case" as thoroughly as possible? Since computing the item in the (k+1)th spot requires that the previous k items also be computed, I don't know how this could be shown. Could someone give me some hints?
In particular, if someone could explain how, exactly, streams are interpreted using the substitution model, I'd really appreciate it. I know they have to be different from the other constructs a regular student would have learned before streams, because they delay computation, and I feel like that means they can't be evaluated completely. In turn this would mean, I think, that the substitution model's eval/apply pattern would not be followed.
stream-cons is a special form. It is equivalent to wrapping both arguments in lambdas, making them thunks, like this:
(stream-cons n (all-ints (+ 1 n))) ; ==>
(cons (lambda () n) (lambda () (all-ints (+ n 1))))
These procedures close over their lexical scope, so here n is the initial value, while forcing the tail calls all-ints again in a new lexical scope, giving a new n that is then captured by the next stream-cons. The procedures stream-first and stream-rest look something like this:
(define (stream-first s)
  (if (null? (car s))
      '()
      ((car s)))) ; force the head thunk

(define (stream-rest s)
  (if (null? (cdr s))
      '()
      ((cdr s)))) ; force the tail thunk
Now, all of this is a half truth. In fact these are not purely functional, since they mutate (memoize) the value so that the same value is not computed twice; but this is not a problem for the substitution model, since side effects are off limits there anyway. To get a feel for how it's really done, see the SICP wizards in action. Notice that the original streams only delayed the tail, while modern stream libraries delay both head and tail.
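If a non-Scheme analogue helps, here is a small Python sketch of the same idea (the stream_* names are illustrative, not a real library): a stream is a pair of memoized thunks, and the memoization is exactly the hidden side effect mentioned above:

def memoize(thunk):
    """Wrap a zero-argument function so its body runs at most once."""
    done, value = False, None
    def force():
        nonlocal done, value
        if not done:
            value, done = thunk(), True
        return value
    return force

def stream_cons(head_thunk, tail_thunk):
    return (memoize(head_thunk), memoize(tail_thunk))

def stream_first(s):
    return s[0]()  # force the head

def stream_rest(s):
    return s[1]()  # force the tail

def all_ints(n):
    return stream_cons(lambda: n, lambda: all_ints(n + 1))

integers = all_ints(1)
print(stream_first(stream_rest(integers)))  # => 2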
I am experimenting with Z3, combining the theories of arithmetic, quantifiers and equality. This does not seem to be very efficient; in fact, it seems more efficient to replace the quantifiers with all their instantiated ground instances when possible. Consider the following example, in which I have encoded the unique-names axiom for a function f that takes two arguments of sort Obj and returns an uninterpreted sort S. This axiom states that each unique list of arguments to f returns a unique object:
(declare-datatypes () ((Obj o1 o2 o3 o4 o5 o6 o7 o8)))
(declare-sort S 0)
(declare-fun f (Obj Obj) S)
(assert (forall ((o11 Obj) (o12 Obj) (o21 Obj) (o22 Obj))
          (=> (not (and (= o11 o21) (= o12 o22)))
              (not (= (f o11 o12) (f o21 o22))))))
Although this is a standard way of defining such an axiom in logic, implementing it like this is computationally very expensive. It contains 4 quantified variables, each of which can take 8 values, which results in 8^4 = 4096 ground instances. It takes Z3 0.69s and 2016 quantifier instantiations to prove this. When I write a simple script that generates the instances of this formula:
(assert (distinct (f o1 o1) (f o1 o2) .... (f o8 o7) (f o8 o8)))
It takes 0.002s to generate these axioms, and another 0.01s (or less) to prove the result in Z3. When we increase the number of objects in the domain, or the number of arguments to the function f, this difference increases rapidly, and the quantified case quickly becomes infeasible.
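The generator script itself is trivial; in z3py terms it amounts to something like this sketch (modeling Obj as an enumeration sort):

from z3 import *

S = DeclareSort('S')
Obj, objs = EnumSort('Obj', ['o%d' % i for i in range(1, 9)])
f = Function('f', Obj, Obj, S)

s = Solver()
# ground instance of the unique-names axiom: all 64 applications of f differ
s.add(Distinct(*[f(x, y) for x in objs for y in objs]))
print(s.check())  # expect sat, and quickly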
This makes me wonder: when we have a bounded domain, why would we use quantifiers in Z3 in the first place? I know that SMT uses heuristics to find solutions, but I get the feeling that it still cannot compete in efficiency with a simple domain-specific grounder that feeds the grounded instances to SMT, which is then nothing more than SAT solving. Is my intuition correct?
Your intuition is correct. The heuristics for handling quantifiers in Z3 are not tuned for problems where universal variables range over finite/bounded domains.
In this kind of problem, using quantifiers is a good option only if a very small percentage of the instances are needed to show that a problem is unsatisfiable.
I usually suggest that users expand these quantifiers using the programmatic API.
Here are two related posts. They contain links to Python code that implements this approach.
Does Z3 take a longer time to give an unsat result compared to a sat result?
Quantifier Vs Non-Quantifier
Here is one of the code fragments:
from z3 import *

VFunctionAt = Function('VFunctionAt', IntSort(), IntSort(), IntSort())
s = Solver()
s.add([VFunctionAt(V, S) >= 0 for V in range(1, 5) for S in range(1, 9)])
print(s)
In this example, I'm essentially encoding forall V in [1,4] S in [1,8] VFunctionAt(V,S) >= 0.
Finally, your encoding (assert (distinct (f o1 o1) (f o1 o2) .... (f o8 o7) (f o8 o8))) is way more compact than expanding the quantifier 4096 times. However, even if we use a naive encoding (just expand the quantifier 4096 times), it is still faster to solve the expanded version.
Does
(=> (f g))
always mean the same thing as
(or (not f) g))
?
The two expressions behave differently in my model: using => gives me UNSAT, while using the other variant does not yield any result (timeout). I would be content just having a list of operators and their meanings. I am aware of the SMTLIB standard, but the documents don't explicitly talk about the meanings of operators. Specifically, '=>' seems to double as an alias for the 'ite' (if-then-else) operator if used in a ternary expression, and I'm quite confused about that.
I set the AUFLIA logic, if that's relevant.
I'm looking for a simple yes or no answer first, and proper documentation about SMT2 (maybe a book) second.
I have this rather large model, generated from Daniel Jackson's marksweep model for alloy4, for those of you who are willing to see for yourselves.
Your expressions as written are not well-formed: you want (=> f g), not (=> (f g)), and (or (not f) g) has an extra closing parenthesis. That aside: yes, => means 'implies', and (=> f g) is equivalent to (or (not f) g). Note also that in SMTLib => is right-associative, so the "ternary" (=> a b c) simply abbreviates (=> a (=> b c)); it is not an alias for ite.
If in doubt, you could prove it using Z3. The below query is unsat:
(declare-const p Bool)
(declare-const q Bool)
(define-fun conjecture () Bool
  (= (=> p q)
     (or (not p) q)))
(assert (not conjecture))
(check-sat)