I'm wondering what the difference is between these two encodings of the same list axiom:
(define-sort T1 () Int)
(declare-fun list_length ( (List T1) ) Int)
(assert (forall ( (i T1) (l (List T1)) )
  (ite (= l (as nil (List T1)))
       (= (list_length l) 0)
       (= (list_length (insert i l)) (+ 1 (list_length l))))))
and
(define-sort T1 () Int)
(declare-fun list_length ( (List T1) ) Int)
(assert (= (list_length (as nil (List T1))) 0))
(assert (forall ( (i T1) (l (List T1)) )
  (= (list_length (insert i l)) (+ 1 (list_length l)))))
For this benchmark:
(declare-const a T1)
(declare-const b T1)
(assert (not
  (= (list_length (insert b (insert a (as nil (List T1))))) 2)))
(check-sat)
Somehow z3 is able to reason about the second version but not the first (where it seems to just loop forever).
Edit: same with CVC4, where the first version returns unknown.
First-order logic with quantifiers is only semi-decidable. In the SMT context, this means there is no decision procedure that can answer every query correctly as sat/unsat.
(Theoretical aside, not that it's that important: if you completely ignore efficiency considerations, there are algorithms that will correctly answer all unsatisfiable queries, since proof search for first-order logic is complete; but no algorithm can correctly deduce sat in all cases. On satisfiable inputs, such a procedure may simply loop forever. But this is a digression.)
So, to deal with quantifiers, SMT solvers usually employ a technique known as E-matching. Essentially, when they form a ground term mentioning uninterpreted functions, they try to instantiate quantified axioms to match them and rewrite accordingly. This technique can be quite effective in practice and scales well with typical software verification problems, but it obviously is not a panacea. For details, see this paper: https://pdfs.semanticscholar.org/4eb2/c5e05ab5c53f20c6050f8252a30cc23561be.pdf.
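To make this concrete for the benchmark above: with the second encoding, the ground goal term produces a chain of matches (a hand-worked illustration of what the e-matcher does, not solver output):
; goal term:   (list_length (insert b (insert a nil)))
; matches the insert axiom with i := b, l := (insert a nil), yielding
;   (= (list_length (insert b (insert a nil))) (+ 1 (list_length (insert a nil))))
; the new term (list_length (insert a nil)) matches again with i := a, l := nil:
;   (= (list_length (insert a nil)) (+ 1 (list_length nil)))
; and (list_length nil) is fixed to 0 by the first assertion, giving length 2.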
Regarding your question: with the ite form of the axiom, the e-matching algorithm simply fails to find the proper substitution to instantiate your axiom. For efficiency reasons, the e-matcher really only looks at almost "exact" matches. (Take this with a grain of salt; it's smarter than that, but not by much.) Being too smart here hardly ever pays off in practice, since you can end up generating way too many matches and exploding your search space. As usual, it's a balance between practicality, efficiency, and covering as many cases as possible.
Z3 allows specifying patterns to guide that search to a certain extent, but patterns are rather tricky to use and fragile. (I'd have pointed you to the right place in the documentation for patterns, alas the z3 documentation site is down for the time being, as you yourself noticed!) You might want to play around with them to see if they give you better results; a sketch follows below. But the rule of thumb is to keep your quantified axioms as simple and obvious as possible, and your second variant does precisely that, compared to the first. For this particular problem, definitely split the axiom into two parts and assert both separately to cover the nil/insert cases. Combining them into one rule simply exceeds the capabilities of the current e-matcher. (Note also that the two encodings are not strictly equivalent: in the ite form, the insert equation is only asserted when l is not nil, so, e.g., the length of (insert a nil) is never pinned down to 1. The split form avoids this pitfall too.)
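For reference, this is how one would attach an explicit pattern to the insert axiom of the second encoding (a sketch; whether it actually helps in a given setting is something to experiment with):
(assert (forall ( (i T1) (l (List T1)) )
  (! (= (list_length (insert i l)) (+ 1 (list_length l)))
     :pattern ((list_length (insert i l))))))
The :pattern annotation tells the e-matcher to instantiate the axiom exactly when a ground term of the shape (list_length (insert i l)) appears.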
Given this formula,
(p & (x < 0)) | (~p & (x > 0)).
How could I get these 2 "parametric" models in Z3:
{p=true, x<0}
{p=false, x>0}
When I submit this SMTLIB program to Z3,
(declare-const p Bool)
(declare-const x Int)
(assert (or (and p (< x 0)) (and (not p) (> x 0))))
(check-sat)
(get-model)
(assert (or (not p) (not (= x -1))))
(check-sat)
(get-model)
(exit)
it gives me concrete models instead (e.g. {p=true, x=-1}, {p=true, x=-2}, ...).
You can't.
SMT solvers do not produce non-concrete models; that's just not how they work. What you want is essentially some form of "simplification" on steroids, and while you can use an SMT solver to help you simplify expressions, you'll have to build a tool on top of it that understands the kinds of simplifications you'd like to see. Bottom line: what you'd consider "simple" as a person and what an automated SMT solver sees as "simple" are usually quite different from each other; and given the lack of normal forms over arbitrary theories, you cannot expect them to do a good job.
If these sorts of simplifications are what you're after, you might want to look at symbolic math packages, such as sympy, Mathematica, etc.
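That said, for this particular formula you can recover the two classes yourself by case-splitting on p; a hedged sketch (the constraint on x in each class is then read off the formula itself, not the model):
(declare-const p Bool)
(declare-const x Int)
(assert (or (and p (< x 0)) (and (not p) (> x 0))))
(push)
(assert p)
(check-sat)      ; sat: the p=true class, where the formula forces (< x 0)
(pop)
(push)
(assert (not p))
(check-sat)      ; sat: the p=false class, where the formula forces (> x 0)
(pop)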
I'm confused and struggling to understand how two different input formats for Z3's fixedpoint engine are related. Short example: suppose I want to prove the existence of negative numbers. I declare a function that returns 1 for non-negative numbers and 0 for negative ones, and then ask the solver to fail if there are arguments for which the function returns 0. But there is one restriction: I want the solver to respond sat when there exists at least one negative number, and unsat if all numbers are non-negative.
This is trivial using the declare-rel and query format:
(declare-rel f (Int Int))
(declare-rel fail ())
(declare-var n Int)
(declare-var m Int)
(rule (=> (< n 0) (f n 0)))
(rule (=> (>= n 0) (f n 1)))
(rule (=> (and (f n m) (= m 0)) fail))
(query fail)
But it becomes tricky when using the pure SMT-LIB2 format (with forall). For example, the straightforward
(set-logic HORN)
(declare-fun f (Int Int) Bool)
(declare-fun fail () Bool)
(assert (forall ((n Int))
  (=> (< n 0) (f n 0))))
(assert (forall ((n Int))
  (=> (>= n 0) (f n 1))))
(assert (forall ((n Int) (m Int))
  (=> (and (f n m) (= m 0)) fail)))
(assert (not fail))
(check-sat)
returns unsat. Unsurprisingly, changing (= m 0) to (= m 1) gives the same result. We can get sat only by implying fail from (= m 2). The problem is that I can't understand how to query the solver using this format.
As I understand it at the moment, with the forall form we can only ask for ∀-solutions, i.e. the answer sat means that the solver managed to find an interpretation (or invariant) satisfying all assertions for all values, and unsat means that no such functions exist. In other words, it tries to prove the property, putting the 'proof' (the invariant) into the model (obviously, only when sat).
On the contrary, when querying a solution in the declare-rel format, the solver searches for a solution for some variables, just as if the constraints were under an ∃-quantifier. In other words, it gives a counter-example. It can only print the invariant in the unsat case.
I have a couple of questions:
1. Am I understanding this correctly? I feel like I'm missing some key ideas. In particular, a general idea of how to express (query ...) in terms of (assert (forall ...)) would be really helpful (and would answer question 2 automatically).
2. Is there a way to solve such ∃-constraints (outputting sat when a counterexample is found) in the pure SMT-LIB2 format? If yes, then how?
First of all, the format that uses "declare-rel", "declare-var", "rule" and "query" is a custom extension of SMT-LIB2. The "declare-var" feature is convenient for omitting bound variables from multiple rules. It also allows formulating Datalog rules with stratified negation, and the semantics of this is what you would expect from stratified negation. By convention, it uses "sat" to indicate that a query has a derivation, and "unsat" to indicate that no derivation exists for the query.
It turns out that standard SMT-LIB2 can express pretty much what you want for Horn clauses without negation. Rules become implications, and queries are implications of the form (=> query false), or, as you wrote it, (not query).
A derivation in the custom format corresponds to a proof of the empty clause (e.g., a proof of "query", which then proves "false"). So the existence of a derivation means that the SMT-LIB2 assertions are "unsat". Conversely, if there is an interpretation (a model) for the Horn clauses, then such a model establishes that there is no derivation; the clauses are "sat".
In other words:
"sat" for datalog extension <=> "unsat" for SMT-LIB2 formulation
"unsat" for datalog extension <=> "sat" for SMT-LIB2 formulation
The advantage of using the pure SMT-LIB2 format, when it applies, is that there are no special syntax extensions. These are plain SMT formulas, and others who wish to solve this class of formulas don't have to write special extensions; they just have to ensure that solvers that are tuned for Horn clauses recognize the appropriate class of formulas. (Z3's implementation of the HORN fragment does allow some flexibility in writing down Horn clauses: you can have disjunctions in the bodies and you can have curried implications; see the sketch below.)
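For instance, both of the following are accepted in the HORN fragment (a hedged illustration reusing f and fail from the example above; the bound 100 is made up):
; a disjunction in the body
(assert (forall ((n Int)) (=> (or (< n 0) (> n 100)) (f n 0))))
; a curried implication, (=> a (=> b c)) instead of (=> (and a b) c)
(assert (forall ((n Int) (m Int)) (=> (f n m) (=> (= m 0) fail))))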
There is one drawback of the SMT-LIB2 format that the rule-based format helps with: when there is a derivation of the query, the rule-based format has pragmas for printing the elements of a tuple. Note that, in general, the query relation can take arguments; this feature is useful for finite-domain relations. Your example above uses integers, so the relations are not finite domain, but the examples in the online tutorial contain finite-domain instances.
Now, a derivation of a query also corresponds to a resolution proof. You can extract a resolution proof in the SMT-LIB2 case, but I have to say it is rather convoluted, and I have not found a way to use it effectively. The "duality" engine for Horn clauses generates derivations in a more accessible format than the default proof format of Z3. Either way, users are likely to run into obstacles if they try to work with the proof certificates, because they are rarely used. The rule-based format does have another feature that assembles a set of predicates with instances that correspond to a derivation trail; it is easier to eyeball this output.
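For completeness, this is how one would request the (low-level) proof object in the SMT-LIB2 encoding; a hedged sketch:
(set-option :produce-proofs true)   ; must come before the assertions
; ... the HORN script from above goes here ...
(check-sat)   ; unsat
(get-proof)   ; prints Z3's resolution-style proof term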
How do I specify initial 'soft' values for the model? This initial model is the result of solving a similar query, and it is likely that this model has correct pieces, or may even be true for the current query.
Currently I am simulating this with an incremental solving and hard/soft constraints:
(define-fun trans_assumed ((a Int)) Int
  ; an initial model, which may be (partially) true;
  ; hypothetical placeholder body, matching the values used in the answer below
  (ite (= a 0) 1 10))
(declare-fun trans_sought (Int) Int)
(declare-const p Bool)
(assert (=> p (forall ((a Int)) (= (trans_assumed a) (trans_sought a)))))
(check-sat p) ; in hope that trans_assumed values will be used as initial below
; add here the main constraints for trans_sought function
(check-sat) ; Z3 will use trans_assumed as a starting point for trans_sought
Does this really specify initial values for trans_sought to be trans_assumed?
Incremental mode of solving is slow compared to sequential. Any better ways of introducing initial values?
I think this is a good approach, but you may consider using more Boolean variables. Right now, it is an "all or nothing" approach. In your script, when (check-sat p) is executed, Z3 will look for a model where trans_assumed and trans_sought have the same interpretation. If such a model does not exist, it will return with the unsat core containing p. When the second (check-sat) is executed, Z3 is free to assign p to false, and the universal quantifier becomes essentially a don't-care. That is, trans_assumed and trans_sought can be completely different.
If you use multiple Boolean variables to control the interpretation of trans_sought, you will have more flexibility.
If the rest of your problem is quantifier free, you should consider dropping the universal quantifier. This can be done if you only care about the value of trans_sought in a finite number of points.
Suppose we have that trans_assumed(0) = 1 and trans_assumed(1) = 10. Then, we can write:
(assert (=> p0 (= (trans_sought 0) 1)))
(assert (=> p1 (= (trans_sought 1) 10)))
In this encoding, we can query (check-sat p0 p1), (check-sat p0), (check-sat p1), and so on.
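Putting it together, a minimal self-contained version of this idea might look as follows; the "main constraint" is a hypothetical stand-in, and note that modern SMT-LIB2 spells the assumption-based check (check-sat-assuming (p0 p1)):
(declare-fun trans_sought (Int) Int)
(declare-const p0 Bool)
(declare-const p1 Bool)
; soft constraints taken from the assumed model
(assert (=> p0 (= (trans_sought 0) 1)))
(assert (=> p1 (= (trans_sought 1) 10)))
; hypothetical quantifier-free main constraint
(assert (< (trans_sought 0) (trans_sought 1)))
(check-sat p0 p1)   ; try to keep both assumed values
(check-sat p0)      ; relax the second one
(check-sat p1)      ; relax the first one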
(This is my second try to get help. If the question/approach does not make sense or is unclear, please just let me know. I would also appreciate any small hint or reference that can help me understand the behaviour of Z3 with my SBAs.)
I am working on bounded verification of relational specifications using Z3's UFBV logic. The problem I am currently investigating requires the falsification of all possible models (because of a negative use of a reachability predicate), which kills solver performance at higher bounds.
Because only a part of the possible models is actually interesting (not isomorphic to the others), I am trying to introduce symmetry breaking techniques known from the SAT area.
However, while the use of what I call symmetry breaking axioms (SBAs) can improve the performance of Z3 in some cases, in general the behaviour of the solver becomes unstable.
One of my approaches (I think the most promising one) is based on breaking the symmetry on relations w.r.t. their domains. For each domain D of a relation R and each atom a \in D, it introduces axioms that enforce an order on the binary representations of R^{M} and R^{M[a+1/a]}, where M is a model of the specification. For homogeneous relations the axioms are relaxed.
Let R \subseteq A x A be a relation. My relaxed symmetry breaking axioms for R look like this:
;; SBA(R, A)_upToDiag
(assert
  (forall ( (ai A) (aj A) )
    (=> (bvult ai aj)
        (=> (forall ((x A))
              (=> (bvult x aj)
                  (= (R ai x) (R (bvadd ai (_ bv1 n)) x))))
            (=> (R ai aj)
                (R (bvadd ai (_ bv1 n)) aj))))))
;; SBA(R, A)_diag
(assert
  (forall ( (ai A) )
    (=> (forall ((x A))
          (=> (bvult x ai)
              (= (R ai x) (R (bvadd ai (_ bv1 n)) x))))
        (=> (R ai ai)
            (R (bvadd ai (_ bv1 n)) (bvadd ai (_ bv1 n)))))))
My problem is that the effect of using these SBAs is not stable/consistent. It differs from bound to bound and from one specification to another. Whether I use all of the SBAs or only one also affects the performance.
In the SAT context, the success of the so-called symmetry breaking predicate (SBP) approach relies on the backtracking capability of the SAT solver, which (somehow) guarantees that when the solver backtracks, it will prune the search space using, amongst others, the SBPs.
What are the differences (if any) in the context of Z3?
How can I force the solver to use these axioms to prune the search space (when it backtracks)?
Would the use of (quantifier) patterns for my SBAs help?
Regards,
Aboubakr Achraf El Ghazi
In Z3 3.2, there are two main engines for handling quantified formulas: E-matching and MBQI (model-based quantifier instantiation). E-matching is only effective on unsatisfiable formulas; Z3 will not be able to show that a formula is satisfiable using this engine. MBQI is more expensive, but it can show that several classes of formulas (containing quantifiers) are satisfiable. The Z3 guide describes these two engines (and other options). To use Z3 effectively on nontrivial problems, it is very useful to understand how these two engines work.
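A toy example of the difference (hedged; g is a made-up function): MBQI can answer sat here by constructing an interpretation for g, which E-matching alone cannot do:
(declare-fun g (Int) Int)
(assert (forall ((x Int)) (>= (g x) 0)))
(check-sat)   ; sat with MBQI (e.g., the model g(x) = 0); unknown if MBQI is disabled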
Symmetry breaking is usually a very effective way to reduce the search space. It is hard to pinpoint exactly what is going on in your problem, but I can see the following explanations for the unstable behavior:
MBQI is having a hard time creating a model that satisfies the SBAs. Although the SBAs prune the search space, if the problem is satisfiable, Z3 will try to build an interpretation (model) that satisfies them. So, in this case, the SBAs are just overhead. This is particularly true if the input formula is very easy to satisfy but becomes hard when you add the SBAs. You can confirm this hypothesis by using the option MBQI_TRACE=true. Z3 will display messages such as "[mbqi] failed k!18", where k![line-number] is the quantifier id. You can assign your own ids using the tag :qid. Here is an example:
(assert (forall ((x T) (y T))
  (! (=> (and (subtype x y)
              (subtype y x))
         (= x y))
     :qid antisymmetry)))
BTW, you can disable the MBQI module using MBQI=false.
In future versions of Z3, we are planning to add an option to disable MBQI for some quantified formulas. This feature may be useful for SBAs.
Another explanation is that E-matching is creating too many instances of the SBAs. You can confirm that using the option QI_PROFILE=true. Z3 will dump information such as:
[quantifier_instances] antisymmetry : 12 : 1 : 2.00
The first number is the number of generated instances. If that is the source of the problem, one solution is to assign restrictive patterns to the SBAs that generate too many instances. For example, Z3 will use (R ai aj) as a pattern for SBA(R, A)_upToDiag, and this kind of pattern may create a quadratic number of instances; a sketch of an explicit, more restrictive pattern follows below. Another experiment consists in disabling E-matching, e.g., with the options
AUTO_CONFIG=false EMATCHING=false MBQI=true
You may also try to disable relevancy propagation in the configuration above, option: RELEVANCY=0.
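Coming back to restrictive patterns, here is a hedged sketch of SBA(R, A)_diag with an explicit :pattern annotation (using a hypothetical 4-bit sort A and relation R standing in for yours); the axiom is then instantiated only when a ground term of the shape (R ai ai) exists:
(define-sort A () (_ BitVec 4))   ; hypothetical width; plays the role of n
(declare-fun R (A A) Bool)
(assert
  (forall ((ai A))
    (! (=> (forall ((x A))
             (=> (bvult x ai)
                 (= (R ai x) (R (bvadd ai (_ bv1 4)) x))))
           (=> (R ai ai)
               (R (bvadd ai (_ bv1 4)) (bvadd ai (_ bv1 4)))))
       :pattern ((R ai ai)))))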
Finally, another option is to generate the instances of the SBAs that you believe are useful, and remove the quantified formulas.