I see that I can create goals, add them to a tactic, and create a solver from the tactic.
What is the advantage of this approach over simply creating a z3::solver instance and adding my expressions to it?
Tactics have a different purpose. You can create a goal that contains your assertions/constraints and then you run a Tactic ON the goal, the result of which will be a new set of (sub-)goals, i.e., new assertions/constraints. Solvers determine satisfiability and won't produce new (sub-)goals.
Tactics can be converted into solvers, such that the resulting solver will run the tactic, and if the result is conclusive (trivial sat/unsat), it will return that result. If the sub-goals produced by the tactic are not conclusive, it will return "unknown".
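For concreteness, here is a small sketch using the Z3 C++ API (z3++.h); the particular tactics (simplify, solve-eqs) and the constraint on x are just illustrative:

#include <z3++.h>
#include <iostream>
using namespace z3;

int main() {
    context c;
    expr x = c.int_const("x");

    // Run a tactic directly on a goal: the result is a set of new (sub-)goals.
    goal g(c);
    g.add(x > 1 && x < 10);
    tactic t = tactic(c, "simplify") & tactic(c, "solve-eqs");
    apply_result r = t(g);
    for (unsigned i = 0; i < r.size(); ++i)
        std::cout << r[i] << "\n";        // transformed assertions, not sat/unsat

    // Turn the same tactic into a solver: now check() reports sat/unsat/unknown.
    solver s = t.mk_solver();
    s.add(x > 1 && x < 10);
    std::cout << s.check() << "\n";
    return 0;
}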
I am currently dealing with a situation where the assertions given to Z3 contain a large number of inequalities and equalities. They depend on each other in such a way that it is most efficient to start solving the formula by assigning values to the variables used in the equalities.
Is there a way to alter the heuristics of Z3 such that the solver always chooses to "start" at these formulas?
My guess would be to use a tactic which initially processes a goal containing the mentioned equalities. It would then continue with the other assertions, restarting the whole process if necessary.
However, I'm not sure how to go about implementing this - how can I create custom goals from sets of formulas?
You can try asserting the first set of formulas you want, then issue check-sat, issue the next set, issue check-sat; repeat as necessary. You can also use push-pop to go back to these points if you want.
By issuing multiple check-sats this way, you would be forcing the solver to explore the formulas you asserted up to that point. Whether this would actually achieve what you want of course depends on exactly what your formulas look like and how much the solver can derive at each check-sat call.
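As a rough C++ sketch of this pattern (the variables and constraints are made up for illustration):

#include <z3++.h>
#include <iostream>
using namespace z3;

int main() {
    context c;
    expr x = c.int_const("x"), y = c.int_const("y"), z = c.int_const("z");
    solver s(c);

    // Assert the equalities first and force the solver to process them.
    s.add(x == 2 * y);
    s.add(y == z + 1);
    std::cout << s.check() << "\n";   // first check-sat

    s.push();                         // remember this point
    s.add(x + y + z > 10);            // now the inequalities
    std::cout << s.check() << "\n";   // second check-sat
    s.pop();                          // backtrack to the equalities-only state
    return 0;
}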
I am a beginner with SMT solvers and I am trying to use them for a variation on program synthesis. Anyway, what the problem boils down to is finding a sequence of applied operations (a composition of previously defined functions) which, for the given input, gives the requested output.
Is there any existing practice of using SMT solvers for finding out in which order to compose functions in order to reach a specific output? If you have any reading material for me I am happy to read up.
I began using Z3 for the task, but if there is any reason to choose another SMT solver, shoot!
Thanks.
You'll need to define constants that describe which operations are to be applied. First, define a compound operation that switches based on which operation to use:
int operation; //constant, constrain it to [0, 2]
Expr result =
    operation == 0 ? applyFunction0(inputExpr) :
    operation == 1 ? applyFunction1(inputExpr) :
                     applyFunction2(inputExpr);
Very rough pseudo code for what expression to build. The ?: operator maps to ITE in Z3.
That way Z3 can find a suitable value for operation to pick one concrete operation. You can obtain the concrete value from the model.
You can iterate this approach to apply multiple operations in sequence.
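A possible concrete rendering of this with the Z3 C++ API, using three toy arithmetic operations (increment, double, negate) as stand-ins for your previously defined functions:

#include <z3++.h>
#include <iostream>
using namespace z3;

int main() {
    context c;
    expr input = c.int_val(3);        // the given input
    expr op = c.int_const("op");      // which operation to apply

    // The ?: chain from the pseudocode becomes nested ite() terms.
    expr result = ite(op == 0, input + 1,      // applyFunction0
                  ite(op == 1, input * 2,      // applyFunction1
                               -input));       // applyFunction2

    solver s(c);
    s.add(op >= 0 && op <= 2);
    s.add(result == 6);               // the requested output
    if (s.check() == sat)
        std::cout << "op = " << s.get_model().eval(op) << "\n";  // expect op = 1
    return 0;
}

For a sequence of operations, introduce one selector constant per step and chain the ite expressions, feeding the result of one step into the next.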
I notice that the Z3 C++ (and C) API allows you to supply the logic to be used.
I have two questions about this that I couldn't answer by looking online:
Are these supposed to be the standard SMT-LIB logics, e.g., QF_LRA?
When are these worth supplying, i.e., when will Z3 actually use this information?
My context is mainly quantifier-free with no bit-vectors, but everything else is possible. I am using the SMT solver incrementally, and I can always work out at the start which logic I will be in.
Z3 will also try to figure out what the logic is (when run with default options), but it doesn't have custom tactics for all combinations of theories (see default_tactic.cpp and smt_strategic_solver.cpp). When you are not sure what Z3 will decide to do, it's best to set the logic right up front, so that you will get errors if you try to use things that are not in that logic. Z3 will also use that information to set up the smt kernel, e.g., enabling various preprocessors and solver features, and choosing heuristics (see, e.g., smt_setup.cpp).
Try it out and see!
Usually it does make a big difference. Setting the logic means the solver will use a specialized tactic to solve the formula, instead of going through the generic loop. Z3 will also try to guess the logic, but it's usually better to just provide it upfront.
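For reference, with the C++ API the logic can be supplied when the solver is constructed; QF_LRA here is just an example:

#include <z3++.h>
#include <iostream>
using namespace z3;

int main() {
    context c;
    solver s(c, "QF_LRA");            // standard SMT-LIB logic name
    expr x = c.real_const("x");
    s.add(x > 1 && x < 2);
    std::cout << s.check() << "\n";   // sat
    return 0;
}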
In the current version there is a problem with "ctx-solver-simplify": in the example http://rise4fun.com/Z3/CqRv, Z3 gives the wrong answer. I replaced "ctx-solver-simplify" with "simplify", as in http://rise4fun.com/Z3/x9X4.
I am wondering: what is the difference between the two tactics "simplify" and "ctx-solver-simplify"?
The tactic simplify only performs "local simplifications". For every term t, we have that simplify(t) is a new term equivalent to t. Moreover, the result of simplify(t) does not depend on the context where t occurs. By context, I mean the assertion F where t occurs and all other assertions. Since simplify is local, it is very efficient. The implementation is essentially based on a bottom-up application of simplification rules. Moreover, since the result of simplify(t) does not depend on contextual information, we can cache it. Thus, even if t occurs N times in a formula F, we only need to simplify it once. All builtin solvers in Z3 apply this kind of simplification, so tactics such as simplify have been extensively tested.
The tactic ctx-solver-simplify uses the context where t occurs to apply simplifications. The basic idea is to simplify a formula F by traversing it using a solver S. The solver S essentially contains the "context". Whenever S.check() returns unsat, we know the current context is inconsistent, so we can replace the current formula with false. The tactic ctx-solver-simplify is much more expensive. First, it performs many calls to S.check(), each of which is potentially very expensive. It is also much harder to cache intermediate results: Z3 may have to simplify a subformula t many times because it occurs in different contexts.
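A small C++ sketch that applies both tactics to the same goal; the Boolean assertions are made up so that the contextual redundancy is visible:

#include <z3++.h>
#include <iostream>
using namespace z3;

int main() {
    context c;
    expr x = c.bool_const("x"), y = c.bool_const("y");
    goal g(c);
    g.add(x);
    g.add(x || y);    // redundant given the context x

    apply_result local = tactic(c, "simplify")(g);
    apply_result ctxed = tactic(c, "ctx-solver-simplify")(g);
    std::cout << "simplify:            " << local[0] << "\n";
    std::cout << "ctx-solver-simplify: " << ctxed[0] << "\n";
    return 0;
}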
The bug you reported in your question has been fixed. The fix will be available in the next release (version 4.1). If you need it, we can provide you with a pre-release version of Z3 4.1.
I know that Z3 cannot check the satisfiability of formulas that contain recursive functions. But I wonder if Z3 can handle such formulas over bounded data structures. For example, I've defined a list of length at most two in my Z3 program and a function, called last, that returns the last element of the list. However, Z3 does not terminate when asked to check the satisfiability of a formula that contains last.
Is there a way to use recursive functions over bounded lists in Z3?
(Note that this is related to your other question as well.) We looked at such cases as part of the Leon verifier project. What we are doing there is avoiding the use of quantifiers and instead "unrolling" the recursive function definitions: if we see the term length(lst) in the formula, we expand it using the definition of length by introducing a new equality: length(lst) = if(isNil(lst)) 0 else 1 + length(tail(lst)). You can view this as a manual quantifier instantiation procedure.
If you're interested in lists of length at most two, doing the manual instantiation for all terms, then doing it once more for the new list terms should be enough, as long as you add the term:
isCons(lst) => (isCons(tail(lst)) => isNil(tail(tail(lst))))
for each list. In practice you of course don't want to generate these equalities and implications manually; in our case, we wrote a program that is essentially a loop around Z3 adding more such axioms when needed.
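As a rough sketch of this unrolling with the Z3 C++ API: here length is modeled over an uninterpreted List sort with hand-declared isNil/isCons/tail functions (all of these declarations are assumptions made for illustration; Z3's built-in algebraic datatypes would work as well):

#include <z3++.h>
#include <iostream>
using namespace z3;

int main() {
    context c;
    sort L = c.uninterpreted_sort("List");
    func_decl isNil  = c.function("isNil", L, c.bool_sort());
    func_decl isCons = c.function("isCons", L, c.bool_sort());
    func_decl tail   = c.function("tail", L, L);
    func_decl length = c.function("length", L, c.int_sort());

    expr lst = c.constant("lst", L);
    solver s(c);

    // Each node is either nil or cons, never both.
    s.add(isNil(lst) != isCons(lst));
    s.add(isNil(tail(lst)) != isCons(tail(lst)));

    // Unroll the recursive definition of length twice.
    s.add(length(lst) ==
          ite(isNil(lst), c.int_val(0), 1 + length(tail(lst))));
    s.add(length(tail(lst)) ==
          ite(isNil(tail(lst)), c.int_val(0), 1 + length(tail(tail(lst)))));
    s.add(implies(isNil(tail(tail(lst))), length(tail(tail(lst))) == 0));

    // The boundedness axiom from above: at most two elements.
    s.add(implies(isCons(lst),
                  implies(isCons(tail(lst)), isNil(tail(tail(lst))))));

    // A quantifier-free query over the bounded list.
    s.add(length(lst) == 2);
    std::cout << s.check() << "\n";   // expect: sat
    return 0;
}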
A very interesting property (closely related to your question) is that for some functions (such as length), successive unrollings give you a complete decision procedure; i.e., even if you don't constrain the size of the data structures, you will eventually be able to conclude SAT or UNSAT (for the quantifier-free case).
You can find more details in our paper Satisfiability Modulo Recursive Programs, or I'm happy to give more details here.
You may be interested in the work of Erik Reeber on SULFA, the "Subclass of Unrollable List Formulas in ACL2." He showed in his PhD thesis how a large class of list-oriented formulas can be proven by unrolling function definitions and applying SAT-based methods. He proved decidability for the SULFA class using these methods.
See, e.g., http://www.cs.utexas.edu/~reeber/IJCAR-2006.pdf .