I am interested in testing the "practical" impact of the decision/instantiation procedure (including its implementation) discussed in [1].
I need:
1) A "tool" that takes an SMT benchmark and returns a (possibly complete) instantiated version of it, applying the strategy. If this is not possible,
2) The Z3 version implementing this strategy and an option for switching it on and off.
Can you help me on that?
[1] Complete Instantiation for Quantified Formulas in Satisfiability Modulo Theories
As far as I know, there is no tool that will return the instantiated version of an SMT benchmark.
Z3 instantiates the quantifiers on demand using Model-Based Quantifier Instantiation (MBQI), described in Section 6 of [1]. The actual loop in the latest Z3 is more complicated than the one described in that section.
Here are some notes on how to enable/disable the MBQI module.
First, we should disable automatic configuration using the command
(set-option :auto-config false)
Z3 4.x uses MBQI and E-matching for handling quantifiers. To disable both of them, we use the commands:
(set-option :ematching false)
(set-option :mbqi false)
To enable them, we should use:
(set-option :ematching true)
(set-option :mbqi true)
With these options you can check the effect of MBQI and E-matching on different problems. Note that if we use only E-matching, Z3 will return unknown for any satisfiable problem that contains quantifiers.
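For example, a minimal script (the quantified assertion is illustrative, not taken from the question) that relies on MBQI alone might look like this:

```smt2
; Disable automatic configuration so the settings below take effect.
(set-option :auto-config false)
; Rely on MBQI only; E-matching is switched off.
(set-option :ematching false)
(set-option :mbqi true)

(declare-fun f (Int) Int)
; A satisfiable quantified assertion: f is the identity on [0, 10].
(assert (forall ((x Int)) (=> (and (<= 0 x) (<= x 10)) (= (f x) x))))
(check-sat) ; MBQI should answer sat; E-matching alone would report unknown
```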
The MBQI module is implemented in the files src/smt/smt_model_finder.cpp and src/smt/smt_model_checker.cpp. The file src/smt/smt_model_finder.cpp essentially converts a model produced for the quantifier-free formulas into a model that may potentially satisfy the universally quantified formulas. Note that the class auf_solver is the one that actually "solves" the set constraints and "builds" the projection functions described in [1].
If we want to trace the actual instances generated by the MBQI module, we can modify the method void model_checker::assert_new_instances() in src/smt/smt_model_checker.cpp. Note that this method already has some tracing commands sending data to tout (the trace output). We can replace tout with std::cout to get the information on the standard output.
For example, if we add the following piece of code, then whenever a universal quantifier q is instantiated (by the MBQI module) with some bindings, Z3 will display the information in the standard output:
std::cout << "[New-instance]\n" << mk_pp(q, m_manager) << "\n";
std::cout << "[Bindings]\n";
for (unsigned i = 0; i < num_decls; i++) {
    expr * b = inst->m_bindings[i];
    std::cout << mk_pp(b, m_manager) << "\n";
}
std::cout << "[End-New-Instance]\n";
Related
Using Z3's Horn clause solver:
If the answer is SAT, one can get a satisfying assignment to the unknown predicates (which, in most applications, correspond to inductive invariants of some kind of transition system or procedure call system).
If the answer is unsat, this means there exists an unfolding of the Horn clauses and an assignment to the universally quantified variables in the Horn clauses such that at least one of the safety conditions (the clauses with a false head) is violated. This constitutes a concrete witness of why the system has no solution.
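As a small illustration (constructed here, not taken from the question), a HORN problem whose unknown predicate inv admits such a satisfying assignment:

```smt2
(set-logic HORN)
(declare-fun inv (Int) Bool)
; Initial states: x = 0.
(assert (forall ((x Int)) (=> (= x 0) (inv x))))
; Transition: x := x + 1 while x < 10.
(assert (forall ((x Int)) (=> (and (inv x) (< x 10)) (inv (+ x 1)))))
; Safety (false head): x never exceeds 10.
(assert (forall ((x Int)) (=> (and (inv x) (> x 10)) false)))
(check-sat) ; sat: e.g. inv(x) := x <= 10 is an inductive invariant
```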
I suspect that if Z3 can conclude unsat, then it has some form of such a witness internally (and this is anyway the case in PDR, if I remember correctly). Is there a way to print it out?
Maybe I misread the documentation, but I can't find a way. (get-proof) prints something unreadable, and besides, (set-option :produce-proofs true) makes some problems intractable.
The refutation that Z3 produces for HORN logic problems is in the form of a tree of unit-resulting resolution steps. The counterexample you're looking for is hiding in the conclusions of the unit-resolution steps. These conclusions (the last arguments of the rules) are ground facts that correspond to program states (or procedure summaries or whatever) in the counterexample. The variable bindings that produce these facts can be found in "quant-inst" rules.
Obviously, this is not human-readable, and in fact it is pretty hard to read even by machine. For Boogie I implemented a more regular format, but it is currently only available with the duality engine and only for the fixedpoint format using "rule" and "query". You can get this using the following command:
(query :engine duality :print-certificate true)
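For reference, a hedged sketch of the fixedpoint format mentioned above, using "rule" and "query" (the predicate names and clauses are illustrative, and the exact attribute syntax may vary; the duality engine is only available in some Z3 versions):

```smt2
(declare-var x Int)
(declare-rel inv (Int))
(declare-rel err ())
; inv holds initially (x = 0) and is preserved by incrementing x.
(rule (=> (= x 0) (inv x)))
(rule (=> (and (inv x) (< x 10)) (inv (+ x 1))))
; err is derivable if inv ever holds for some x > 10.
(rule (=> (and (inv x) (> x 10)) err))
; Ask whether err is derivable; the duality engine can print a certificate.
(query err :engine duality :print-certificate true)
```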
In a (rather large) Z3 problem, we have a few axioms of the shape:
forall xs :: ( (P(xs) ==> (exists ys :: Q(xs,ys))) && ((exists zs :: Q(xs,zs)) ==> P(xs)) )
All three quantifiers (including the existentials) have explicit triggers provided (omitted here). When running the problem and gathering quantifier statistics, we observed the following data (amongst many other instantiations):
[quantifier_instances] k!244 : 804 : 3 : 4
[quantifier_instances] k!232 : 10760 : 29 : 30
Here, line 244 corresponds to the end of the outer forall quantifier, and line 232 to the end of the first inner exists. Furthermore, there are no reported instantiations of the second inner exists (which I believe Z3 will pull out into a forall); given the triggers this is surprising.
My understanding is that existentials in this inner position should be skolemised by a function (depending on the outer quantifier). It's not clear to me what quantifier statistics mean for such existentials.
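For reference, the axiom shape above might look roughly as follows in SMT-LIB (the sorts, predicate names, and trigger terms are illustrative, not taken from the actual file):

```smt2
(declare-fun P (Int) Bool)
(declare-fun Q (Int Int) Bool)
(assert (forall ((xs Int))
  (! (and
       (=> (P xs) (exists ((ys Int)) (! (Q xs ys) :pattern ((Q xs ys)))))
       (=> (exists ((zs Int)) (! (Q xs zs) :pattern ((Q xs zs)))) (P xs)))
     :pattern ((P xs)))))
```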
Here are my specific questions:
Are quantifier statistics meaningful for existential quantifiers (those which remain as existentials - i.e. in positive positions)? If so, what do they mean?
Does skolemisation of such an existential happen once and for all, or each time the outer quantifier is instantiated? Why is a substantially higher number reported for this quantifier than for the outer forall?
Does Z3 apply some internal rewriting of this kind of (A==>B)&&(B==>A) assertion? If so, how does that affect quantifier statistics for quantifiers in A and B?
From our point of view, understanding question 2 is most urgent, since we are trying to investigate how the generated existentials affect the performance of the overall problem.
The original smt file is available here:
https://gist.github.com/anonymous/16e489ce5c513e8c4bc6
and a summary of the statistics generated (with Z3 4.4.0, but we observed the same with 4.3.2) is here:
https://gist.github.com/anonymous/ce7b96acf712ac16299e
The answer to all these questions is 'it depends', mainly on what else appears in the problem and which options are set. For instance, if there are only bit-vector variables, then Skolemization will indeed be performed during preprocessing, once and for all, but this is not the case for all other theories or theory combinations.
Briefly looking at your SMT2 file, it seems to me that all existentials appear on the left-hand side of implications, i.e., they are in fact negated (and actually rewritten into universals somewhere along the line), so those statistics do make sense for the existentials appearing in this particular problem.
If I give Z3 a formula like p | q, I would expect Z3 to return p=true, q=don't care (or with p and q switched), but instead it seems to insist on assigning values to both p and q (even though I don't have completion turned on when calling Eval()). Besides being surprised at this, my question is: what if p and q are not simple propositional variables but expensive expressions, and I know that typically either p or q will be true? Is there an easy way to ask Z3 to return a "minimal" model and not waste its time trying to satisfy both p and q? I already tried MkITE, but that makes no difference. Or do I have to use some kind of tactic to enforce this?
thanks!
PS. I wanted to add that I have turned off AUTO_CONFIG, yet Z3 is trying to assign values to constants in both branches of the or: e.g., in the snippet below I want it to assign either to path2_2 and path2_1 or to path2R_2 and path2R_1, but not both:
(or (and (select a!5 path2_2) a!6 (select a!5 path2_1) a!7)
(and (select a!5 path2R_2) a!8 (select a!5 path2R_1) a!9))
Z3 has a feature called relevancy propagation. It is described in this article. It does what you want. Note that, in most cases, relevancy propagation has a negative impact on performance. In our experiments, it is only useful for problems containing quantifiers (quantifier reasoning is so expensive that it pays off). By default, Z3 uses relevancy propagation on problems that contain quantifiers; otherwise, it does not.
Here is an example of how to turn it on when the problem does not have quantifiers (the example is also available online here):
from z3 import Bools, Solver, Or

x, y = Bools('x y')
s = Solver()
s.set(auto_config=False, relevancy=2)
s.add(Or(x, y))
print(s.check())
print(s.model())
I am currently experimenting with Z3 as a bounded engine for specifications written in Alloy (a relational logic/language). I am using UFBV as the target language.
I detected a problem when using the Z3 option (set-option :pull-nested-quantifiers true).
For two semantically identical SMT specifications Spec1 and Spec2, Z3 times out (200 sec) for proving Spec1 but proves Spec2.
The only difference between Spec1 and Spec2 is that they have different function identifiers (because I use Java hash names). Could this be related to a bug?
The second observation I would like to share and discuss is the "iff" operator in the context of UFBV. This operator is not supported if (set-logic UFBV) is set. My solution was to use "=" instead, but this does not work well if the operands contain deeply nested quantifiers and "pull-nested-quantifiers" is set. The other, safer solution is to use double implication.
Now the question:
Is there any other, better solution for modeling "iff" in UFBV? I think that using double implication will, in general, lose potentially usable semantic information for improvements/simplifications.
At http://i12www.ira.uka.de/~elghazi/tmp/
you can find: spec1 and spec2, the two (I think) semantically identical SMT specifications, and spec3, an SMT specification using "=" to model "iff", for which Z3 times out.
The default strategy for the UFBV logic is not effective for your problems. In fact, Z3's general default strategy (used when no logic is specified) solves all of them in less than 1 sec. To force Z3 to use it, you just need to comment out the following lines in your script:
; (set-logic UFBV)
; (set-option :pull-nested-quantifiers true)
; (set-option :macro-finder true)
If the warning messages are bothering you, you can add:
(set-option :print-warning false)
That being said, I will try to address the issues you raised.
Do identifier names affect the behavior of Z3? Yes, they do.
Starting at version 3.0, we started using a total order on Z3 expressions for performing operations such as: sorting the arguments of associative-commutative operators.
This total order is based on the identifier names.
Ironically, this modification was motivated by user feedback. In previous versions, we used an internal ID for performing operations such as sorting, and for breaking ties in many different heuristics. However, these IDs are based on the order in which Z3 creates/deletes expressions, which in turn depends on the order in which users declare symbols. So Z3 2.x behavior could be affected by trivial modifications such as removing unused declarations.
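To illustrate the difference (this is a pure-Python sketch, not Z3's actual code), compare normalizing the arguments of a commutative operator by identifier name with normalizing by internal creation IDs:

```python
# Illustrative sketch: two ways to pick a canonical argument order
# for a commutative operator such as +.

def normalize_by_name(op, args):
    """Z3 3.0+ style: sort arguments lexicographically by identifier name."""
    return (op, tuple(sorted(args)))

def normalize_by_id(op, args, creation_id):
    """Z3 2.x style: sort arguments by internal creation ID, which
    depends on the order in which symbols were declared."""
    return (op, tuple(sorted(args, key=lambda a: creation_id[a])))

# Name-based order depends only on the names themselves.
print(normalize_by_name("+", ["y", "x"]))   # ('+', ('x', 'y'))
print(normalize_by_name("+", ["b", "x"]))   # ('+', ('b', 'x'))

# ID-based order depends on declaration order, so removing an unused
# declaration earlier in the file could silently change the normal form.
ids = {"y": 0, "x": 1}
print(normalize_by_id("+", ["x", "y"], ids))  # ('+', ('y', 'x'))
```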
Regarding iff: it is not part of the SMT-LIB 2.0 standard. In SMT-LIB 2.0, = is used for both formulas and terms. To make sure Z3 is fully compliant with the SMT-LIB 2.0 standard, whenever users specify an SMT-LIB supported logic (or a soon-to-be-supported one such as UFBV), Z3 only "loads" the symbols defined in it. When a logic is not specified, Z3 assumes the user is using the "Z3 logic", which contains all theories supported by Z3 and many extra aliases, such as: iff for Boolean =, if for ite, etc.
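For example, the following sketch (constructed here for illustration) is accepted by Z3 only because no set-logic command is given; the standard-compliant spelling is =:

```smt2
; No (set-logic ...) command: Z3 accepts its extended "Z3 logic".
(declare-const p Bool)
(declare-const q Bool)
(assert (iff p q)) ; Z3 alias for Boolean equality; not SMT-LIB 2.0
(assert (= p q))   ; the SMT-LIB 2.0 compliant form
(check-sat)
```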
I know that Z3 cannot check the satisfiability of formulas that contain recursive functions. But, I wonder if Z3 can handle such formulas over bounded data structures. For example, I've defined a list of length at most two in my Z3 program and a function, called last, to return the last element of the list. However, Z3 does not terminate when asked to check the satisfiability of a formula that contains last.
Is there a way to use recursive functions over bounded lists in Z3?
(Note that this related to your other question as well.) We looked at such cases as part of the Leon verifier project. What we are doing there is avoiding the use of quantifiers and instead "unrolling" the recursive function definitions: if we see the term length(lst) in the formula, we expand it using the definition of length by introducing a new equality: length(lst) = if(isNil(lst)) 0 else 1 + length(tail(lst)). You can view this as a manual quantifier instantiation procedure.
If you're interested in lists of length at most two, doing the manual instantiation for all terms, then doing it once more for the new list terms should be enough, as long as you add the term:
isCons(lst) => (isCons(tail(lst)) => isNil(tail(tail(lst))))
for each list. In practice you of course don't want to generate these equalities and implications manually; in our case, we wrote a program that is essentially a loop around Z3 adding more such axioms when needed.
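A hedged SMT-LIB sketch of this unrolling for lists of length at most two (the datatype and function names are illustrative, and the declare-datatypes syntax is the older Z3 form):

```smt2
(declare-datatypes () ((Lst nil (cons (head Int) (tail Lst)))))
(declare-fun length (Lst) Int)
(declare-const lst Lst)

; Unroll the definition of length twice for the term length(lst).
(assert (= (length lst)
           (ite (= lst nil) 0 (+ 1 (length (tail lst))))))
(assert (= (length (tail lst))
           (ite (= (tail lst) nil) 0 (+ 1 (length (tail (tail lst)))))))
; Bound the structure: lists have length at most two.
(assert (=> (not (= lst nil))
            (=> (not (= (tail lst) nil)) (= (tail (tail lst)) nil))))
(check-sat)
```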
A very interesting property (closely related to your question) is that for some functions (such as length), successive unrollings give you a complete decision procedure. I.e., even if you don't constrain the size of the data structures, you will eventually be able to conclude SAT or UNSAT (for the quantifier-free case).
You can find more details in our paper Satisfiability Modulo Recursive Programs, or I'm happy to give more here.
You may be interested in the work of Erik Reeber on SULFA, the "Subclass of Unrollable List Formulas in ACL2". He showed in his PhD thesis how a large class of list-oriented formulas can be proven by unrolling function definitions and applying SAT-based methods. He proved decidability for the SULFA class using these methods.
See, e.g., http://www.cs.utexas.edu/~reeber/IJCAR-2006.pdf .