I'd like to use Z3 to solve problems that are most naturally expressed in terms of atoms (symbols), sets, predicates, and first order logic. For example (in pseudocode):
A = {a1, a2, a3, ...} # A is a set
B = {b1, b2, b3...}
C = {c1, c2, c3...}
def p = (a:A, b:B, c:C) -> Bool # p is unspecified predicate
def q = (a:A, b:B, c:C) -> Bool
# Predicates can be defined in terms of other predicates:
def teaches = (a:A, b:B) -> there_exists c:C
such_that [ p(a, b, c) OR q(a, b, c) ]
constraint1 = forall b:B there_exists a:A
such_that teaches(a, b)
solve(constraint1)
What are good ways to express atoms, sets, predicates, relations, and first order quantifiers in Z3 (or other SMTs)?
Is there a standard idiom for this? Must it be done manually? Is there perhaps a translation library (not necessarily specific to Z3) that can convert them?
I believe Alloy uses SMT to implement predicate logic and relations, but Alloy seems designed more for interactive use to explore consistency of models, rather than to find specific solutions for problems.
"Alloy seems designed more for interactive use to explore consistency of models, rather than to find specific solutions for problems."
IMHO, Alloy shines when it comes to validating your own way of thinking. You model something, and through the visualization of several instances you can sometimes come to realize that what you modeled is not exactly what you'd hoped for.
In that sense, I agree with you.
Yet Alloy can also be used to find specific solutions to problems. You can constrain a model so tightly that only one instance can be found (i.e., your solution).
It also works quite well when your domain stays relatively small.
Here's your model translated into Alloy:
sig A, B, C {}

pred teaches(a: A, b: B) {
  some c: C | a->b->c in REL.q or a->b->c in REL.p
}

// I'm a bit rusty, so this is my inelegant take on defining an "unspecified predicate"
one sig REL {
  q: A -> B -> C,
  p: A -> B -> C
}

fact constraint1 {
  all b: B | some a: A | teaches[a, b]
}

run {}
If you want to define the atoms in the sets A, B, C yourself and refer to them in predicates, you could always over-constrain this model as follows:
abstract sig A, B, C {}

one sig A1, A2 extends A {}
one sig B1 extends B {}
one sig C1, C2, C3 extends C {}

pred teaches(a: A, b: B) {
  some c: C | a->b->c in REL.q or a->b->c in REL.p
}

one sig REL {
  q: A -> B -> C,
  p: A -> B -> C
} {
  // here you could, for example, define the contents of q and p yourself
  q = A1->B1->C2 + A2->B1->C3
  p = A1->B1->C3 + A1->B1->C2
}

fact constraint1 {
  all b: B | some a: A | teaches[a, b]
}

run {}
Modeling predicate logic in SMTLib is indeed possible; though it might be a bit cumbersome compared to a regular theorem prover like Isabelle/HOL etc. And interpreting the results can require a fair amount of squinting.
Having said that, here's a direct encoding of your sample problem using SMTLib:
(declare-sort A)
(declare-sort B)
(declare-sort C)
(declare-fun q (A B C) Bool)
(declare-fun p (A B C) Bool)
(assert (forall ((b B))
          (exists ((a A))
            (exists ((c C)) (or (p a b c) (q a b c))))))
(check-sat)
(get-model)
A few notes:
declare-sort creates an uninterpreted sort. It's essentially a non-empty set of values. (It can be infinite as well; no cardinality assumptions are made, aside from the fact that it's not empty.) For your specific problem, it doesn't seem to matter what this sort actually is, since you don't use any of its elements directly. If you do, you might also want to try a "declared" sort, i.e., a data-type declaration. This can be an enumeration, or something even more complicated, depending on the problem. For the current question as posed, an uninterpreted sort works just fine.
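For instance, if you did want to refer to the elements of A by name, you could replace (declare-sort A) with an enumeration along these lines (just a sketch; the element names a1, a2, a3 are made up):
(declare-datatypes () ((A a1 a2 a3)))
; a1, a2, a3 are now distinct constants of sort A that can appear directly in assertions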
declare-fun tells the solver that there's an uninterpreted function with that name and signature, but otherwise neither defines nor constrains it in any way. You can add "axioms" about such functions to be more specific about how they behave.
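For example, the following axioms would constrain p and q a bit (purely illustrative; the original question leaves both predicates unspecified):
(assert (exists ((a A) (b B) (c C)) (p a b c)))                       ; p holds for at least one triple
(assert (forall ((a A) (b B) (c C)) (=> (p a b c) (not (q a b c)))))  ; p and q never overlap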
Quantifiers are supported, as you see from the forall and exists in how your constraint1 is encoded. Note that SMTLib isn't well suited to code reuse, and one usually programs in a higher-level binding. (Bindings for C/C++/Java/Python/Scala/OCaml/Haskell etc. are provided, with similar but varying degrees of support and features.) Otherwise, it should be easy to read.
We finally issue check-sat and get-model, to ask the solver to create a universe where all the asserted constraints are satisfied. If so, it'll print sat and will have a model. Otherwise, it'll print unsat if there's no such universe; or it can also print unknown (or loop forever!) if it cannot decide. Quantifiers are difficult for SMT solvers to deal with, and heavy use of them will no doubt lead to unknown as the answer. This is an inherent limitation stemming from the semi-decidability of first-order predicate calculus.
When I run this specification through z3, I get:
sat
(
;; universe for A:
;; A!val!1 A!val!0
;; -----------
;; definitions for universe elements:
(declare-fun A!val!1 () A)
(declare-fun A!val!0 () A)
;; cardinality constraint:
(forall ((x A)) (or (= x A!val!1) (= x A!val!0)))
;; -----------
;; universe for B:
;; B!val!0
;; -----------
;; definitions for universe elements:
(declare-fun B!val!0 () B)
;; cardinality constraint:
(forall ((x B)) (= x B!val!0))
;; -----------
;; universe for C:
;; C!val!0 C!val!1
;; -----------
;; definitions for universe elements:
(declare-fun C!val!0 () C)
(declare-fun C!val!1 () C)
;; cardinality constraint:
(forall ((x C)) (or (= x C!val!0) (= x C!val!1)))
;; -----------
(define-fun q ((x!0 A) (x!1 B) (x!2 C)) Bool
(and (= x!0 A!val!0) (= x!2 C!val!0)))
(define-fun p ((x!0 A) (x!1 B) (x!2 C)) Bool
false)
)
This takes a bit of squinting to understand fully. The first block of declarations tells you how the solver constructed a model for the uninterpreted sorts A, B, and C: with witness elements and cardinality constraints. You can mostly ignore this part, though it does contain useful information. For instance, it tells us that A is a set with two elements (named A!val!0 and A!val!1), as is C, while B has only one element. Depending on your constraints, you'll get different sets of elements.
For p, we see:
(define-fun p ((x!0 A) (x!1 B) (x!2 C)) Bool
false)
This means p is always False; i.e., it is the empty set, regardless of the arguments passed to it.
For q we get:
(define-fun q ((x!0 A) (x!1 B) (x!2 C)) Bool
(and (= x!0 A!val!0) (= x!2 C!val!0)))
Let's rewrite this a little more simply:
q (a, b, c) = a == A0 && c == C0
where A0 and C0 stand for A!val!0 and C!val!0 from the sort declarations above. So q is True whenever a is A0 and c is C0, and it doesn't matter what b is.
You can convince yourself that this model does indeed satisfy the constraint you wanted.
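If you'd rather have the solver do that check, here's one way (a sketch, hard-coding the universe z3 happened to pick: two elements for A, one for B, two for C):
(declare-datatypes () ((A A0 A1)))
(declare-datatypes () ((B B0)))
(declare-datatypes () ((C C0 C1)))
(define-fun p ((a A) (b B) (c C)) Bool false)
(define-fun q ((a A) (b B) (c C)) Bool (and (= a A0) (= c C0)))
(assert (not (forall ((b B)) (exists ((a A)) (exists ((c C)) (or (p a b c) (q a b c)))))))
(check-sat) ; unsat: no b violates the constraint, so this model does satisfy it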
To sum up: modeling these problems in z3 is indeed possible, though a bit clumsy, and heavy use of quantifiers can make the solver loop forever or return unknown. Interpreting the output can be a bit cumbersome, though you'll notice that the models follow a similar schema: first the uninterpreted sorts, and then the definitions of the predicates.
Side note
As I mentioned, programming z3 in SMTLib is cumbersome and error-prone. Here's the same program done using the Python interface:
from z3 import *

A = DeclareSort('A')
B = DeclareSort('B')
C = DeclareSort('C')

p = Function('p', A, B, C, BoolSort())
q = Function('q', A, B, C, BoolSort())

# placeholder constants, used as the bound variables of the quantifiers below
dummyA = Const('dummyA', A)
dummyB = Const('dummyB', B)
dummyC = Const('dummyC', C)

def teaches(a, b):
    return Exists([dummyC], Or(p(a, b, dummyC), q(a, b, dummyC)))

constraint1 = ForAll([dummyB], Exists([dummyA], teaches(dummyA, dummyB)))

s = Solver()
s.add(constraint1)
print(s.check())
print(s.model())
This has some idiosyncrasies of its own, but hopefully it'll provide a starting point for your explorations should you choose to program z3 in Python. Here's the output:
sat
[p = [else -> And(Var(0) == A!val!0, Var(2) == C!val!0)],
q = [else -> False]]
Which has the exact same info as the SMTLib output, though written slightly differently.
Function definition style
Note that we defined teaches as a regular Python function. This is the usual style in z3py programming, as the expression it produces gets substituted as calls are made. You can also create a z3-level function instead, like this:
teaches = Function('teaches', A, B, BoolSort())
s.add(ForAll([dummyA, dummyB],
             teaches(dummyA, dummyB) == Exists([dummyC],
                                               Or(p(dummyA, dummyB, dummyC),
                                                  q(dummyA, dummyB, dummyC)))))
Note that this style of definition relies on quantifier instantiation internally, instead of the general function-definition facilities of SMTLib. So you should generally prefer the Python-function style, as it translates to "simpler" internal constructs; it is also much easier to define and use.
One case where you need the z3 function definition style is if the function you're defining is recursive and its termination relies on a symbolic argument. For a discussion of this, see: https://stackoverflow.com/a/68457868/936310
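For completeness: z3py also exposes RecFunction/RecAddDefinition for writing such recursive definitions directly. A minimal sketch (my own illustration, not taken from the linked answer):
from z3 import *

# a recursive factorial, defined via z3's recursive-function facility
fact = RecFunction('fact', IntSort(), IntSort())
n = Int('n')
RecAddDefinition(fact, [n], If(n <= 0, IntVal(1), n * fact(n - 1)))

s = Solver()
s.add(fact(5) == 120)
print(s.check())   # sat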
Related
I know the procedure: simply transform each clause l1 ∨ ⋯ ∨ ln into a conjunction of n − 2 clauses.
I am trying to use Z3, via an SMT file, to show values satisfying both the original and the resulting formula, in order to show that they are equisatisfiable.
To clarify, can you give an example of this procedure? Thank you.
Equivalence checking is done by asking the solver if there's an assignment that can distinguish two formulas. If there's no such assignment, then you can conclude the formulas are equivalent, or in the SAT context, equi-satisfiable.
Here's a simple example:
from z3 import *
a, b, c = Bools('a b c')
fml1 = Or(And(a, b), And(a, c))
fml2 = And(a, Or(b, c))
s = Solver()
s.add(Distinct(fml1, fml2))
print(s.check())
Now if fml1 is an arbitrary SAT formula, and fml2 is a 3-SAT converted version (I'm not saying the above are SAT and 3-SAT conversions, but substitute the result of your algorithm here), then we'd expect that the SAT solver cannot distinguish them, i.e., the formula Distinct(fml1, fml2) would be unsatisfiable. Indeed, we get:
unsat
establishing that they are the same.
If you are using SMTLib only, then the template to use is:
(declare-fun a () Bool)
(declare-fun b () Bool)
(declare-fun c () Bool)
(assert (distinct (or (and a b) (and a c))
(and a (or b c))))
(check-sat)
How does the forall statement work in SMT? I could not find information about its usage. Can you explain it simply? Here is an example from
https://rise4fun.com/Z3/Po5:
(declare-fun f (Int) Int)
(declare-fun g (Int) Int)
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (forall ((x Int))
(! (= (f (g x)) x)
:pattern ((g x)))))
(assert (= (g a) c))
(assert (= (g b) c))
(assert (not (= a b)))
(check-sat)
For general information on quantifiers (and everything else SMTLib) see the official SMTLib document:
http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
Quoting from Section 3.6.1:
Exists and forall quantifiers. These binders correspond to the usual universal and existential quantifiers of first-order logic, except that each variable they quantify is also associated with a sort. Both binders have a non-empty list of variables, which abbreviates a sequential nesting of quantifiers. Specifically, a formula of the form

(forall ((x1 σ1) (x2 σ2) · · · (xn σn)) ϕ)     (3.1)

has the same semantics as the formula

(forall ((x1 σ1)) (forall ((x2 σ2)) (· · · (forall ((xn σn)) ϕ) · · · )))     (3.2)

Note that the variables in the list ((x1 σ1) (x2 σ2) · · · (xn σn)) of (3.1) are not required to be pairwise disjoint. However, because of the nested quantifier semantics, earlier occurrences of the same variable in the list are shadowed by the last occurrence, making those earlier occurrences useless. The same argument applies to the exists binder.
If you have a quantified assertion, the solver has to find a satisfying instance that makes that formula true. For a forall quantifier, this means it has to find a model in which the assertion is true for all assignments to the quantified variables of the relevant sorts. Likewise, for exists the model needs to exhibit a particular value satisfying the assertion.
Top-level exists quantifiers are usually left out in SMTLib: by skolemization, declaring a top-level variable fills that need, and also has the advantage of showing up automatically in models. (That is, any top-level declared variable is automatically existentially quantified.)
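For instance, instead of asserting a top-level exists, one simply declares a constant and asserts the body (a small sketch):
; instead of: (assert (exists ((x Int)) (> (* x x) 10)))
(declare-fun x () Int)
(assert (> (* x x) 10))
(check-sat)
(get-model) ; the witness chosen for x shows up directly in the model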
Using forall will typically make the logic semi-decidable. So you're likely to get unknown as an answer if you use quantifiers, unless some heuristic can find a satisfying assignment. Similarly, while the syntax allows for nested quantifiers, most solvers will have a very hard time dealing with them. Patterns can help, but they remain hard to use to this day. To sum up: if you use quantifiers, then SMT solvers are no longer decision procedures: they may or may not terminate.
If you are using the Python interface for z3, also take a look at: https://ericpony.github.io/z3py-tutorial/advanced-examples.htm. It does contain some quantification examples that can clarify things for you. (Even if you don't use the Python interface, I heartily recommend going over that page to see what the capabilities are. They more or less translate to SMTLib directly.)
Hope that gets you started. Stack-overflow works the best if you ask specific questions, so feel free to ask clarification on actual code as you need.
Semantically, a quantifier forall x: T . e(x) is equivalent to e(x_1) && e(x_2) && ..., where the x_i are all the values of type T. If T has infinitely many (or statically unknown many) values, then it is intuitively clear that an SMT solver cannot simply turn a quantifier into the equivalent conjunction.
The classical approach in this case is patterns (also called triggers), pioneered by Simplify and available in Z3 and others. The idea is rather simple: users annotate a quantifier with a syntactic pattern that serves as a heuristic for when (and how) to instantiate the quantifier.
Here is an example (in pseudo-code):
assume forall x :: {foo(x)} foo(x) ==> false
Here, {foo(x)} is the pattern, indicating to the SMT solver that the quantifier should be instantiated whenever the solver gets a ground term foo(something). For example:
assume forall x :: {foo(x)} foo(x) ==> 0 < x
assume foo(y)
assert 0 < y
Since the ground term foo(y) matches the trigger foo(x) when the quantified variable x is instantiated with y, the solver will instantiate the quantifier accordingly and learn 0 < y.
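Translated to SMT-LIB, that example might look as follows (a sketch; making foo an uninterpreted predicate over Int is my own choice):
(declare-fun foo (Int) Bool)
(declare-const y Int)
(assert (forall ((x Int)) (! (=> (foo x) (< 0 x)) :pattern ((foo x)))))
(assert (foo y))
(assert (not (< 0 y))) ; negate what we want to prove
(check-sat)            ; unsat: the instantiation x := y yields 0 < y, contradicting the negation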
Patterns and quantifier triggering are difficult, though. Consider this example:
assume forall x :: {foo(x)} (foo(x) || bar(x)) ==> 0 < x
assume bar(y)
assert 0 < y
Here, the quantifier won't be instantiated because the ground term bar(y) does not match the chosen pattern.
The previous example shows that patterns can cause incompleteness. However, they can also cause termination problems. Consider this example:
assume forall x :: {f(x)} !f(x) || f(f(x))
assert f(y)
The pattern now admits a matching loop, which can cause nontermination. The ground term f(y) allows the solver to instantiate the quantifier, which yields the ground term f(f(y)). Unfortunately, f(f(y)) matches the trigger (instantiate x with f(y)), which yields f(f(f(y))), and so on.
Patterns are dreaded by many and indeed tricky to get right. On the other hand, working out a triggering strategy (given a set of quantifiers, find patterns that allow the right instantiations, but ideally not more than those) ultimately "only" requires logical reasoning and discipline.
Good starting points are:
* https://rise4fun.com/Z3/tutorial/, section "Quantifiers"
* http://moskal.me/smt/e-matching.pdf
* https://dl.acm.org/citation.cfm?id=1670416
* http://viper.ethz.ch/tutorial/, section "Quantifiers"
Z3 also offers Model-Based Quantifier Instantiation (MBQI), an approach to quantifiers that doesn't use patterns. As far as I know, it is unfortunately also much less well documented, but the Z3 tutorial has a short section on MBQI as well.
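In SMT-LIB, MBQI is controlled via a solver option; for z3 (the option name :smt.mbqi is z3-specific) a minimal example would be:
(set-option :smt.mbqi true)
(declare-fun f (Int) Int)
(assert (forall ((x Int)) (>= (f x) 0)))
(assert (= (f 3) 5))
(check-sat)  ; sat: MBQI finds an interpretation for f without any patterns
(get-model)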
I have a set of symbolic variables:
int a, b, c, d, e;
A set of unknown functions, constrained by a number of axioms:
f1(a, b) = f2(c, b)
f1(d, e) = f1(e, d)
f3(b, c, e) = f1(b, e)
c = f1(a, b)
b = d
Here functions f1, f2, f3 are unknown, but fixed. So it is not a theory of uninterpreted functions.
I want to prove validity of the following assertions:
c = f2(f1(a, b), b)
f3(d, f2(c, b), e) = f1(e, b)
using substitutions based on the axiomatic equalities above.
Is there a theory for such problems that would use only the provided equalities to derive the answer, rather than coming up with an interpretation for the functions?
If so, what is the name of the theory, and what SMT solver does support it?
Can it be mixed with other theories, like linear arithmetic?
This is still uninterpreted functions, because if there exist functions that satisfy your axioms then this would be sat in the theory of uninterpreted functions. Similarly, if such functions do not exist, then this is unsat in uninterpreted functions. So what you are picturing is satisfiable if and only if the problem in uninterpreted functions is satisfiable, so the two theories are isomorphic, i.e. the same.
Given that you're trying to prove that certain theorems are valid based on your axioms, it shouldn't matter how the solver represents a satisfiable result, because a sat result would correspond to a counterexample to validity. To prove your theorems using an SMT solver, you should assert your axioms, assert the negation of the theorem, and then look for an unsatisfiable result. See this question for a more detailed explanation of the connection between satisfiability and validity.
To prove your first theorem using Z3, the following suffices in SMT-LIB 2:
(declare-fun a () Int)
(declare-fun b () Int)
(declare-fun c () Int)
(declare-fun f1 (Int Int) Int)
(declare-fun f2 (Int Int) Int)
(assert (= (f1 a b) (f2 c b)))
(assert (= c (f1 a b)))
(assert (not (= c (f2 (f1 a b) b))))
(check-sat)
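The second theorem goes through in exactly the same way; here's a sketch of a corresponding encoding (my own transliteration of the axioms, expecting unsat):
(declare-fun a () Int)
(declare-fun b () Int)
(declare-fun c () Int)
(declare-fun d () Int)
(declare-fun e () Int)
(declare-fun f1 (Int Int) Int)
(declare-fun f2 (Int Int) Int)
(declare-fun f3 (Int Int Int) Int)
(assert (= (f1 a b) (f2 c b)))
(assert (= (f1 d e) (f1 e d)))
(assert (= (f3 b c e) (f1 b e)))
(assert (= c (f1 a b)))
(assert (= b d))
(assert (not (= (f3 d (f2 c b) e) (f1 e b))))
(check-sat)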
I am experimenting with Z3, combining the theories of arithmetic, quantifiers, and equality. This does not seem to be very efficient; in fact, it seems to be more efficient to replace the quantifiers with all instantiated ground instances when possible. Consider the following example, in which I have encoded the unique-names axiom for a function f that takes two arguments of sort Obj and returns an uninterpreted sort S. This axiom states that each unique list of arguments to f returns a unique object:
(declare-datatypes () ((Obj o1 o2 o3 o4 o5 o6 o7 o8)))
(declare-sort S 0)
(declare-fun f (Obj Obj) S)
(assert (forall ((o11 Obj) (o12 Obj) (o21 Obj) (o22 Obj))
  (=> (not (and (= o11 o21) (= o12 o22)))
      (not (= (f o11 o12) (f o21 o22))))))
Although this is a standard way of defining such an axiom in logic, implementing it like this is computationally very expensive. It contains 4 quantified variables, each of which can take 8 values, so it gives rise to 8^4 = 4096 ground instances. It takes Z3 0.69s and 2016 quantifier instantiations to prove this. When I write a simple script that generates the instances of this formula:
(assert (distinct (f o1 o1) (f o1 o2) .... (f o8 o7) (f o8 o8)))
It takes 0.002s to generate these axioms, and another 0.01s (or less) to prove it in Z3. When we increase the number of objects in the domain, or the number of arguments to the function f, this difference increases rapidly, and the quantified case quickly becomes infeasible.
This makes me wonder: when we have a bounded domain, why would we use quantifiers in Z3 in the first place? I know that SMT uses heuristics to find solutions, but I get the feeling that it still cannot compete in efficiency with a simple domain-specific grounder that feeds the grounded instances to SMT, which is then nothing more than SAT solving. Is my intuition correct?
Your intuition is correct. The heuristics for handling quantifiers in Z3 are not tuned for problems where universal variables range over finite/bounded domains.
In this kind of problem, using quantifiers is a good option only if a very small percentage of the instances are needed to show that a problem is unsatisfiable.
I usually suggest that users expand these quantifiers using the programmatic API.
Here are two related posts. They contain links to Python code that implements this approach.
Does Z3 take a longer time to give an unsat result compared to a sat result?
Quantifier Vs Non-Quantifier
Here is one of the code fragments:
from z3 import *

VFunctionAt = Function('VFunctionAt', IntSort(), IntSort(), IntSort())
s = Solver()
s.add([VFunctionAt(V, S) >= 0 for V in range(1, 5) for S in range(1, 9)])
print(s)
In this example, I'm essentially encoding forall V in [1,4] S in [1,8] VFunctionAt(V,S) >= 0.
Finally, your encoding (assert (distinct (f o1 o1) (f o1 o2) .... (f o8 o7) (f o8 o8))) is much more compact than expanding the quantifier 4096 times. However, even with the naive encoding (just expanding the quantifier 4096 times), it is still faster to solve the expanded version.
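For the unique-names axiom from the question, a grounding script in the same spirit might look like this (a sketch; the sort and function names follow the question):
from z3 import *

# the finite domain of 8 objects, an uninterpreted result sort, and the function f
Obj, objs = EnumSort('Obj', ['o%d' % i for i in range(1, 9)])
S = DeclareSort('S')
f = Function('f', Obj, Obj, S)

s = Solver()
# grounded unique-names axiom: all 64 applications of f are pairwise distinct
s.add(Distinct([f(x, y) for x in objs for y in objs]))
print(s.check())   # sat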
Does
(=> (f g))
always mean the same thing as
(or (not f) g))
?
The two expressions behave differently in my model. While using => gives me UNSAT, using the other variant does not yield any result (timeout). I would be content just having a list of operators and their meanings. I am aware of the SMTLIB standard, but the documents don't explicitly talk about the meanings of operators. Specifically '=>' seems to double as an alias for the 'ite' (if_then_else) operator if used in a ternary expression and I'm quite confused about that.
I set the AUFLIA logic, if that's relevant.
I'm looking for a simple yes or no answer first, and proper documentation on SMT2 (maybe a book) second.
I have this rather large model, generated from Daniel Jackson's marksweep model for alloy4, for those of you who are willing to see for yourselves.
Your expressions as written are not well-formed ((=> (f g)) applies => to a single parenthesized argument, and (or (not f) g)) has an unbalanced parenthesis), but the intent is clear.
=> indeed means 'implies'. In other words, (=> f g) is equivalent to (or (not f) g).
If in doubt, you could prove it using Z3. The below query is unsat:
(declare-const p Bool)
(declare-const q Bool)
(define-fun conjecture () Bool
(= (=> p q)
(or (not p) q)))
(assert (not conjecture))
(check-sat)
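As for the ternary use mentioned in the question: in SMT-LIB, => is right-associative, so (=> a b c) abbreviates (=> a (=> b c)); it is not an if-then-else (that operator is ite). A quick sanity check:
(declare-const a Bool)
(declare-const b Bool)
(declare-const c Bool)
(assert (not (= (=> a b c) (=> a (=> b c)))))
(check-sat) ; unsat: the two forms are the same formula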