Can SMT/Z3 be used for optimization?

Can an SMT solver efficiently find a solution (or an assignment) for the pseudo-Boolean problem described as follows:
\sum_{i=1}^{m} f_i(x_1, x_2, \ldots, x_n) \cdot w_i
where f_i(x_1, x_2, \ldots, x_n) is a Boolean function, and w_i is a weight of Int type.
For your convenience, I have highlighted the contents on pages 1 and 3, which are enough to specify the pseudo-Boolean problem.

SMT solvers typically address the question: given a logical formula, optionally using functions and predicates from underlying theories (such as the theory of arithmetic, the theory of bit-vectors, arrays), is the formula satisfiable or not.
They typically don't expose a way for you to specify objective functions
and typically don't have built-in optimization procedures.
Some special cases are formulas that only use Booleans, or a combination of Booleans and either bit-vectors or integers. Pseudo-Boolean constraints can be formulated with either integers or encoded (with some care taking overflow semantics into account) using bit-vectors, or they can be encoded directly into SAT. For some formulas using bounded integers that fall in the class of pseudo-Boolean problems, Z3 will try automatic reductions into bit-vectors. This applies only to benchmarks in the SMT-LIB2 format tagged as QF_LIA, or if you explicitly invoke a tactic that performs this reduction (the "qflia" tactic should apply).
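To make the integer encoding concrete, here is a minimal Z3Py sketch; the Boolean functions and weights are made up for illustration:

from z3 import *

x1, x2, x3 = Bools('x1 x2 x3')
fs = [And(x1, x2), Or(x2, Not(x3)), Xor(x1, x3)]   # hypothetical Boolean functions f_i
ws = [3, 5, 2]                                      # hypothetical integer weights w_i

# Encode each Boolean function as 0/1 and constrain the weighted sum.
total = Sum([If(f, w, 0) for f, w in zip(fs, ws)])

s = Solver()
s.add(total >= 7)          # a pseudo-Boolean constraint over the integers
if s.check() == sat:
    print(s.model())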
While Z3 does not directly expose objective functions, the question of augmenting
SMT solvers with objective functions is actively pursued in the research community.
One approach suggested by Nieuwenhuis and Oliveras in SAT 2006 was to build in
solving for the "weighted max SMT" problem as a custom theory. Yices comes with built-in
features for weighted max SMT; Z3 does not, but it is possible to write a custom
theory that performs the backtracking search of a weighted max SMT solver. Nothing
comes out of the box.
Sometimes people try to specify objective functions using quantified formulas.
In theory one could hope that quantifier elimination procedures can then solve
for the objective.
This is generally pretty bad when it comes to performance. Quantifier elimination
is overkill here, and the routines (that we have) will not be efficient.

For your problem, if you want to find an optimized (maximum or minimum) result for the sum, yes, Z3 has this ability. You can use the Optimize class of the Z3 library instead of the Solver class. The class provides methods for maximization and minimization respectively. You pass the expression to be optimized, and the Optimize object's model will give you the solution. This worked for me with the C# API from the Microsoft.Z3 library. For your convenience, here is a snippet:
Context ctx = new Context();                  // from the Microsoft.Z3 library
Optimize opt = ctx.MkOptimize();              // use Optimize instead of Solver
opt.MkMaximize(/* expression to maximize */);
opt.MkMinimize(/* expression to minimize */);
opt.Assert(/* any constraints you need */);
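A corresponding Z3Py sketch that maximizes a weighted sum of Boolean terms, encoded as 0/1 integers (the functions and weights are made up for illustration):

from z3 import *

x1, x2, x3 = Bools('x1 x2 x3')
fs = [And(x1, x2), Or(x2, Not(x3))]    # hypothetical Boolean functions f_i
ws = [3, 5]                            # hypothetical integer weights w_i

opt = Optimize()                       # Optimize instead of Solver
total = Sum([If(f, w, 0) for f, w in zip(fs, ws)])
h = opt.maximize(total)                # register the objective
if opt.check() == sat:
    print(opt.model(), h.value())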

Related

Skolem functions in SMT and ATPs

While reading "Extending Sledgehammer with SMT solvers" I read the following:
In the original Sledgehammer architecture, the available lemmas were rewritten
to clause normal form using a naive application of distributive laws before the relevance filter was invoked. To avoid clausifying thousands of lemmas on each invocation, the clauses were kept in a cache. This design was technically incompatible with the (cache-unaware) smt method, and it was already unsatisfactory for ATPs, which include custom polynomial-time clausifiers.
My understanding of SMT so far is as follows: SMT solvers don't work over clauses. Instead, they try to build a model for the quantifier-free part of a problem. The search is refined by instantiating quantifiers according to some set of active terms. Thus, indeed no clausal form is needed for SMT solvers.
We rewrote the relevance filter so that it operates on arbitrary HOL formulas, trying to simulate the old behavior. To mimic the penalty associated with Skolem functions in the clause-based code, we keep track of polarities and detect quantifiers that give rise to Skolem functions.
What's the penalty associated with Skolem functions? I could understand that they are not good for SMT solvers, but here it seems that they are bad for ATPs too...
First, SMT solvers do work over clauses, and there is definitely some (non-naive) normalization internally (e.g., miniscoping). But you do not need to do the normalization before calling the SMT solver (especially since it will be more naive and generate a larger number of clauses).
Anyway, Section 6.6.7 explains why skolemization was done on the Isabelle side. To summarize: it is not possible to introduce polymorphic constants in a proof in Isabelle; hence it must be done before starting the proof.
It seems likely that, when writing the paper, not changing the filtering led to worse performance and, hence, the penalty was added. However, I could not find the relevant code simulating clausification in Sledgehammer, so I don't believe that this happens anymore.

Assumptions in Z3 or Z3Py

Is there a way to express assumptions in Z3 (I am using the Z3Py library) such that the engine does not check their validity but takes them as underlying theories, just like in theorem proving?
For example, let's say that I have two unary functions with an argument of type Real. I would like to tell the Z3 engine that for all input values, f1(t) is equal to f2(t).
Encoded in Z3Py that would look something like the following:
t = Real('t')
assumption1 = ForAll(t, f1(t) == f2(t))
The problem with the presented code is that my assertion set is quite big and I use quantifiers (I am trying to prove satisfiability of a real-time system). If I add the above assertion to the set of the other assertions the checking procedure does not terminate.
In fact, all assertions you add to Z3 are treated as what you call assumptions. Z3 checks satisfiability of the assertions, it does not check validity. To check validity of a formula F, you assert (not F) and check for satisfiability of (not F). If (not F) is unsat, then F is valid. If you have background axioms, you are essentially checking validity of Background => F, so you can check satisfiability of Background & (not F).
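A minimal Z3Py sketch of this pattern; the background axiom is the one from the question, and the goal formula F is made up for illustration:

from z3 import *

f1 = Function('f1', RealSort(), RealSort())
f2 = Function('f2', RealSort(), RealSort())
t, u = Reals('t u')

background = ForAll(t, f1(t) == f2(t))   # the background axiom / assumption
F = f1(u) <= f2(u)                       # a goal we would like to be valid

s = Solver()
s.add(background)
s.add(Not(F))         # check satisfiability of Background & (not F)
print(s.check())      # unsat means Background => F is valid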
Whether Z3 terminates on your query depends on which combination of theories and quantifiers you use. The more features your queries combine, the tougher it is.
For formulas over pure linear arithmetic or polynomial real arithmetic (called LRA, LIA and NRA in the SMT-LIB classification, see smtlib.org), Z3 uses specialized decision procedures that have recently been added.
Yes, that's possible just as you describe it, but you will end up with quantifiers, which does of course mean that you're solving a harder problem and Z3 will behave differently (it's possible you end up using completely different solvers that don't even share much source code).
For the particular example given, it's possible to eliminate the quantifier cheaply because it has the form of a function definition (ForAll x . f(x) = ...), i.e., we can just replace all occurrences of f with the right hand side and then the quantifier is trivially satisfied. In Z3, this is done by the macro finder, which may be applied as a tactic (with name "macro-finder"), or if you are using the "smt" tactic (implicitly via others or directly), then you can set smt.macro_finder=true.
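A small Z3Py sketch of the macro-finder route, assuming functions as in the question:

from z3 import *

f1 = Function('f1', RealSort(), RealSort())
f2 = Function('f2', RealSort(), RealSort())
t = Real('t')

# Run the macro finder before the main SMT solver;
# alternatively, set the smt.macro_finder=true option on a plain Solver.
s = Then(Tactic('macro-finder'), Tactic('smt')).solver()
s.add(ForAll(t, f1(t) == f2(t)))   # has the shape of a definition of f1
s.add(f1(3) > f2(3))               # contradicts that definition
print(s.check())                   # expected: unsat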

When should I use a function instead of a variable in Z3?

I have written two Sudoku solvers in Z3, once using 81 variables, and once using a function that maps x and y coordinates to the number in square[x][y].
I guess one could also use an Array instead.
What is the difference between having a Python list of Z3 variables, having a Z3 array, or having a function in Z3?
When should I use which?
There is no generally applicable answer to this question. There is usually more than one way to model a problem, and it's never clear which one will be the best in practice. As a general rule it makes sense to keep it within one theory, as that avoids costly cross-theory reasoning; i.e., stick with bit-vectors or (bounded) integers, but don't try to translate integers to bit-vectors (e.g., the int2bv term is essentially treated as uninterpreted by Z3). Also, it's known that there are better solutions for solving problems with arrays than the one implemented in Z3, so if they are not really necessary it helps to eliminate them.
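To make the three modelling options concrete, a small Z3Py sketch (the names are made up; only the flavour of each encoding matters):

from z3 import *

# 1. A Python list of Z3 Int variables: stays within plain integer arithmetic.
cells = [[Int('c_%d_%d' % (r, c)) for c in range(9)] for r in range(9)]

# 2. An uninterpreted function from coordinates to values: adds reasoning
#    about uninterpreted functions (congruence) on top of arithmetic.
cell_f = Function('cell_f', IntSort(), IntSort(), IntSort())

# 3. A Z3 array: adds the array theory on top.
cell_a = Array('cell_a', IntSort(), IntSort())

# The same constraint ("this cell holds a digit") in each style:
s = Solver()
s.add(1 <= cells[0][0], cells[0][0] <= 9)
s.add(1 <= cell_f(0, 0), cell_f(0, 0) <= 9)
s.add(1 <= Select(cell_a, 0), Select(cell_a, 0) <= 9)
print(s.check())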

Custom theory solver for order theory?

My program, a bounded synthesizer of reactive finite-state systems, produces SMT queries to annotate a product automaton of the (uninterpreted) system and a specification. Essentially it is model checking with uninterpreted functions; if the annotation exists, then the model found by Z3 satisfies the spec. The queries contain:
a datatype (to encode the states of the system and of the specification automaton),
>= and > (greater-or-equal and strictly greater), to specify a ranking function on the states of the automaton system*spec (in other words, an ordering of the states of that automaton), which is used to search for lassos with bad states,
uninterpreted functions with Boolean domain and range;
all clauses are Horn clauses.
An example is https://dl.dropboxusercontent.com/u/444947/posts/full_arbiter2.smt2
('forall' is used to encode "don't care" inputs to functions)
Currently the queries use the strictly-greater > operator from integer arithmetic (that is, the ranking function has Int range).
Question: is it worth developing a custom theory solver in Z3 for such queries? It could exploit a DFS-based search for lassos, which might be faster than the integer theory solver (or the diff-neg tactic).
Or does Z3 already handle this efficiently? (Here "efficiently" means comparable to a graph-based search for lassos.)
Arithmetic is not the bottleneck of your benchmark.
We can check that by profiling Z3 with
valgrind --tool=callgrind z3 full_arbiter2.smt2
and then inspecting the result with kcachegrind.
Valgrind and kcachegrind are available in most Linux distros.
So, I don't think you will get a significant performance improvement if you implement a solver for order theory.
One bottleneck is the datatype theory. You may get a performance boost if you encode the types Q and T using bit-vectors. Another bottleneck is quantifier reasoning. Have you tried expanding the quantifiers before invoking Z3?
In Z3, the qe (quantifier elimination) tactic will essentially expand Boolean quantifiers.
I got a small speedup by replacing
(check-sat)
with
(check-sat-using (then qe smt))
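For reference, the same combination can be driven from the Python API; a hypothetical sketch, assuming the benchmark file is available locally:

from z3 import *

# Mirror (check-sat-using (then qe smt)).
s = Then(Tactic('qe'), Tactic('smt')).solver()
s.add(parse_smt2_file('full_arbiter2.smt2'))   # the benchmark from the question
print(s.check())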

Quantifier elimination for multilinear rational arithmetic in Z3

As far as I understand, Z3, when encountering quantified linear real/rational arithmetic, applies a form of quantifier elimination described in Bjørner, IJCAR 2010 and more recent work by Bjørner and Monniaux (that's what qe_sat_tactic.cpp says, at least).
I was wondering
Whether it still works if the formula is multilinear, in the sense that the "constants" are symbolic. E.g., ∀x, ax ≤ b ⇒ ax ≤ 0 can be dealt with by separating the cases a < 0, a = 0 and a > 0. This is possible using Weispfenning's virtual substitution approach, but I don't know what ended up being implemented in Z3 (that is, whether it implements the general approach or the one restricted to constant coefficients).
Whether it is possible, in Z3, to output the result of elimination instead of just solving for one model. There might be a Z3 tactic to do so but I don't know how this is supposed to be requested.
Whether it is possible, in Z3, to perform elimination as described above, then use the new nonlinear solver to obtain a model. Again, a succession of tactics might do the trick, but I don't know how this is supposed to be requested.
Thanks.
After long travels (including a trip where I met David at a conference), here is a short summary to answer the questions as they are posed.
There is no specific support for multi-linear forms.
The 'qe' tactic produces results of elimination, but may as a side-effect decide satisfiability.
This is a very interesting problem to investigate, but it is not supported out of the box.
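On the second point, a minimal Z3Py sketch of obtaining the result of elimination rather than a single model (using a purely linear example, since multilinear forms are not specifically supported):

from z3 import *

x, y = Reals('x y')
g = Goal()
g.add(Exists(x, And(x >= 0, y == 2*x + 1)))

# Applying the 'qe' tactic yields the quantifier-free residue as a goal.
print(Tactic('qe')(g))   # expected: something equivalent to y >= 1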

Resources