I want to find the n-dimensional point x = (x1, ..., xn) in integer space that satisfies some properties, while also maximizing the minimum distance between x and any element of a collection of m pre-defined, constant n-dimensional points z1 = (z11, ..., z1n), ..., zm = (zm1, ..., zmn). Is there a way to do this using Z3?
Sure. See: https://rise4fun.com/Z3/tutorial/optimization
The above link talks about the SMT-LIB interface, but the same functionality is also available from the Python interface (and from most other bindings to Z3).
Note that optimization is largely for linear properties. If you have non-linear terms, you might want to reformulate them so that a linear counterpart can be optimized instead. Even with non-linear terms you might get good results; it is impossible to know without trying.
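For instance, here is a minimal Z3Py sketch of the question above, assuming Manhattan (L1) distance so the objective stays linear; the fixed points, the dimension, and the box constraint are made-up example data:

from z3 import Int, If, Sum, Optimize, sat

def zabs(e):
    return If(e >= 0, e, -e)  # |e| as an If-term, keeping the encoding linear

zs = [(0, 0), (4, 1), (2, 5)]            # the m fixed points (example data)
x = [Int('x%d' % i) for i in range(2)]   # the point we are solving for
d = Int('d')                             # lower bound on all distances

opt = Optimize()
for xi in x:
    opt.add(0 <= xi, xi <= 5)            # stand-in for "some properties"
for z in zs:
    opt.add(d <= Sum([zabs(xi - zi) for xi, zi in zip(x, z)]))
opt.maximize(d)                          # maximize the minimum distance
if opt.check() == sat:
    print(opt.model())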
My team has been using the Z3 solver to perform passive learning. Passive learning entails obtaining from a set of observations a model consistent with all observations in the set. We consider models of different formalisms, the simplest being Deterministic Finite Automata (DFA) and Mealy machines. For DFAs, observations are just positive or negative samples.
The approach is very simplistic. Given the formalism and observations, we encode each observation into a Z3 constraint over (uninterpreted) functions which correspond to functions in the formalism definition. For DFAs for example, this definition includes a transition function (trans: States X Inputs -> States) and an output function (out: States -> Boolean).
Encoding say the observation (aa, +) would be done as follows:
out(trans(trans(start,a),a)) == True
where start is the initial state. To construct a model, we add all the observation constraints to the solver. We also add a constraint which limits the number of states in the model. We solve the constraints for a limit of 1, 2, 3... states until the solver can find a solution. The solution is a minimum-state model that is consistent with the observations.
I posted a code snippet using Z3Py which does just this. Predictably, our approach is not scalable (the problem is NP-complete). I was wondering if there are any (small) tweaks we could make to improve scalability (e.g., by trying out different sorts, strategies, ...)?
We have already tried arranging all observations into a Prefix Tree and using this tree in encoding, but scalability was only marginally improved. I am well aware that there are much more scalable SAT-based approaches to this problem (reducing it to a graph coloring problem). We would like to see how far a simple SMT-based approach can take us.
So far, what I have found is that the best sorts for defining inputs and states are those created with DeclareSort. It also helps if we eliminate quantifiers from the state-size constraint. Interestingly enough, incremental solving did not really help, but it could be that I am not using it properly (I am an utter novice in SMT theory).
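For illustration, here is a minimal Z3Py sketch of the encoding described above, using DeclareSort for states and inputs and a quantifier-free state-size constraint; the single observation ("aa", +) and the two-state limit are made-up example data:

from z3 import DeclareSort, Function, BoolSort, Const, Consts, Or, Solver

State = DeclareSort('State')
Input = DeclareSort('Input')
trans = Function('trans', State, Input, State)   # transition function
out = Function('out', State, BoolSort())         # output (acceptance) function

start = Const('start', State)
a = Const('a', Input)

s = Solver()
s.add(out(trans(trans(start, a), a)) == True)    # observation ("aa", +)

# Quantifier-free state-size constraint for a limit of 2 states:
# every state term occurring in the observations equals one of them.
states = Consts('s0 s1', State)
for term in [start, trans(start, a), trans(trans(start, a), a)]:
    s.add(Or([term == sk for sk in states]))

print(s.check())  # sat: the model is a 2-state DFA consistent with the data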
Thanks! BTW, I am unsure how viable/useful this test is as a benchmark for SMT solvers.
I am trying to devise ways to improve the performance of Z3 on my problems. I am aware of the CAV'06 paper and the tech report. Do relevant parts of Z3 v4.3.1 differ from what is described in these documents, and if so, in what ways? Also, what strategy does Z3 follow by default for deciding when to check the consistency, in Linear Real Arithmetic, of the theory atoms corresponding to the decided (and propagated) propositional literals?
Linear arithmetic is implemented in the files at src/smt/theory_arith*.
See http://z3.codeplex.com/SourceControl/latest#src/smt/theory_arith_core.h
Regarding the paper you pointed out: the ideas are used in the implementation. However, the actual code contains many extensions for linear integer arithmetic, nonlinear arithmetic, and proof generation. If you only care about linear real arithmetic, you should focus on theory_arith.h and theory_arith_core.h. The file theory_arith_aux.h also contains useful functionality.
Can an SMT solver efficiently find a solution (or an assignment) for the pseudo-Boolean problem described as follows:
\sum_{i=1}^{m} w_i \cdot f_i(x_1, x_2, \ldots, x_n)

where each f_i(x_1, x_2, ..., x_n) is a Boolean function and each w_i is a weight of Int type.
For your convenience, I highlight the contents of pages 1 and 3, which are enough for specifying the pseudo-Boolean problem.
SMT solvers typically address the question: given a logical formula, optionally using functions and predicates from underlying theories (such as the theories of arithmetic, bit-vectors, and arrays), is the formula satisfiable or not? They typically don't expose a way for you to specify objective functions, and typically don't have built-in optimization procedures.
Some special cases are formulas that only use Booleans, or a combination of Booleans and either bit-vectors or integers. Pseudo-Boolean constraints can be formulated with either integers or encoded (with some care, taking overflow semantics into account) using bit-vectors, or they can be encoded directly into SAT. For some formulas using bounded integers that fall into the class of pseudo-Boolean problems, Z3 will try automatic reductions into bit-vectors. This applies only to benchmarks in the SMT-LIB2 format tagged as QF_LIA, or if you explicitly invoke a tactic that performs this reduction (the "qflia" tactic should apply).
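As a small illustration of the integer formulation mentioned above, here is a Z3Py sketch (the coefficients and the bound are made-up example data): each Boolean contributes If(b, 1, 0) to a linear sum, so a pseudo-Boolean constraint becomes a linear integer constraint.

from z3 import Bools, If, Solver, sat

x1, x2, x3 = Bools('x1 x2 x3')

def b2i(b):
    return If(b, 1, 0)   # map a Boolean to 0/1

s = Solver()
s.add(2 * b2i(x1) + 3 * b2i(x2) + b2i(x3) <= 4)  # e.g. 2*x1 + 3*x2 + x3 <= 4
if s.check() == sat:
    print(s.model())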
While Z3 does not directly expose objective functions, the question of augmenting SMT solvers with objective functions is actively pursued in the research community. One approach, suggested by Nieuwenhuis and Oliveras at SAT 2006, was to build solving for the "weighted max SMT" problem into the solver as a custom theory. Yices comes with built-in features for weighted max SMT; Z3 does not. It is possible to write a custom theory that performs the backtracking search of a weighted max SMT solver, but nothing comes out of the box.
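As an aside, later releases of Z3 do expose weighted max SMT through soft constraints in the Optimize engine; a minimal Z3Py sketch, assuming a reasonably recent Z3 build:

from z3 import Bools, Not, Implies, Optimize, sat

a, b = Bools('a b')
opt = Optimize()
opt.add(Implies(a, b))            # hard constraint
opt.add_soft(a, weight=2)         # soft constraint with weight 2
opt.add_soft(Not(b), weight=1)    # soft constraint with weight 1
if opt.check() == sat:
    print(opt.model())            # model maximizing the total satisfied weight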
Sometimes people try to specify objective functions using quantified formulas. In theory, one could hope that quantifier elimination procedures could then solve for the objective. This is generally pretty bad when it comes to performance: quantifier elimination is overkill, and the routines (that we have) will not be efficient.
For your problem, if you want to find an optimized (maximum or minimum) result of the sum, then yes, Z3 has this ability. You can use the Optimize class of the Z3 library instead of the Solver class. The class provides methods for maximization and minimization; you pass the SMT variable to be optimized, and the Optimize class's model will give you the solution. This actually works with the C# API using the Microsoft.Z3 library. For your convenience, I am attaching a snippet:
Context ctx = new Context();                  // Z3 context
Optimize opt = ctx.MkOptimize();              // initializing the optimizer
opt.Assert(/* anything you need to assert */);
opt.MkMaximize(/* your variable */);          // objective to maximize
opt.MkMinimize(/* your variable */);          // or: objective to minimize
if (opt.Check() == Status.SATISFIABLE)
    Console.WriteLine(opt.Model);             // read back the optimal solution
I'm pretty new to the field of machine learning (though I find it extremely interesting), and I wanted to start a small project where I'd be able to apply some of it.
Let's say I have a dataset of persons, where each person has N different attributes (only discrete values, each attribute can be pretty much anything).
I want to find clusters of people who exhibit the same behavior, i.e. who have a similar pattern in their attributes ("look-alikes").
How would you go about this? Any thoughts to get me started?
I was thinking about using PCA: since we can have an arbitrary number of dimensions, it could be useful to reduce them. K-means? I'm not sure in this case. Any ideas on what would be best adapted to this situation?
I do know how to code all those algorithms, but I'm truly missing some real world experience to know what to apply in which case.
K-means using the n-dimensional attribute vectors is a reasonable way to get started. You may want to play with your distance metric to see how it affects the results.
The first step in pretty much any clustering algorithm is to find a suitable distance function. Many algorithms, such as DBSCAN, can then be parameterized with this distance function (at least in a decent implementation; some, of course, only support Euclidean distance...). So start by considering how to measure object similarity!
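For purely discrete attributes, one candidate distance is the Hamming distance (the fraction of attributes on which two people differ); here is a small sketch, assuming scikit-learn and made-up data, using DBSCAN with a precomputed distance matrix:

import numpy as np
from sklearn.cluster import DBSCAN

people = np.array([[0, 1, 2], [0, 1, 2], [3, 4, 5]])   # discrete attribute codes
dist = np.array([[np.mean(p != q) for q in people] for p in people])
labels = DBSCAN(eps=0.4, min_samples=2, metric='precomputed').fit_predict(dist)
print(labels)   # e.g. [0, 0, -1]: two look-alikes and one outlier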
In my opinion you should also try the expectation-maximization algorithm (also called EM). On the other hand, you must be careful when using PCA, because this algorithm may reduce away the dimensions that are relevant to clustering.
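Here is a small sketch of the PCA + k-means pipeline discussed above, assuming scikit-learn and one-hot encoding of the discrete attributes; the data is made up, and, as warned above, PCA may discard cluster-relevant dimensions:

import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

people = [['red', 'tall'], ['red', 'short'], ['blue', 'tall'], ['blue', 'short']]
X = OneHotEncoder().fit_transform(people).toarray()   # discrete -> 0/1 vectors
X2 = PCA(n_components=2).fit_transform(X)             # reduce dimensionality
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X2)
print(labels)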
Has anyone tried proving Z3 with Z3 itself?
Is it even possible to prove that Z3 is correct, using Z3 itself? More theoretically: is it possible to prove that a tool X is correct, using X itself?
The short answer is: “no, nobody tried to prove Z3 using Z3 itself” :-)
The sentence "we proved program X to be correct" is very misleading. The main problem is: what does it mean to be correct? In the case of Z3, one could say that Z3 is correct if, at least, it never returns "sat" for an unsatisfiable problem, or "unsat" for a satisfiable one. This definition may be refined by including additional properties, such as: Z3 should not crash; function X in the Z3 API has property Y; etc.

After we agree on what we are supposed to prove, we have to create models of the runtime, the programming-language semantics (C++ in the case of Z3), etc. Then a tool (aka a verifier) is used to convert the actual code into a set of formulas that we check using a theorem prover such as Z3. We need the verifier because Z3 does not "understand" C++. The Verifying C Compiler (VCC) is an example of this kind of tool. Note that proving Z3 to be correct using this approach does not provide a definitive guarantee that Z3 is really correct, since our models may be incorrect, the verifier may be incorrect, Z3 itself may be incorrect, etc.
To use verifiers, such as VCC, we need to annotate the program with the properties we want to verify, loop invariants, etc. Some annotations are used to specify what code fragments are supposed to do. Other annotations are used to "help/guide" the theorem prover. In some cases, the amount of annotations is bigger than the program being verified. So, the process is not completely automatic.
Another problem is cost: the process would be very expensive, much more time consuming than implementing Z3 in the first place. Z3 has 300k lines of code, some of which is based on very subtle algorithms and implementation tricks. Yet another problem is maintenance: we are regularly adding new features and improving performance, and these modifications would affect the proof.
Although the cost may be very high, VCC has been used to verify nontrivial pieces of code such as the Microsoft Hyper-V hypervisor.
In theory, any verifier for programming language X can be used to prove itself if it is also implemented in language X.
The Spec# verifier is an example of such tool.
Spec# is implemented in Spec#, and several parts of Spec# were verified using Spec#.
Note that Spec# uses Z3 and assumes it is correct. Of course, this is a big assumption.
You can find more information about these issues and Z3 applications in the paper:
http://research.microsoft.com/en-us/um/people/leonardo/ijcar10.pdf
No, it is not possible to prove that a nontrivial tool is correct using the tool itself. This was basically stated in Gödel's second incompleteness theorem:
For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, if T includes a statement of its own consistency then T is inconsistent.
Since Z3 includes arithmetic, it cannot prove its own consistency.
Because it was mentioned in a comment above: even if the user provides invariants, Gödel's theorem still applies. This is not a question of computability; the theorem states that no such proof can exist in a consistent system.
However you could verify parts of Z3 with Z3.
Edit after 5 years:
Actually the argument is easier than Gödel's incompleteness theorem.
Let's say Z3 is correct if it only returns UNSAT for unsatisfiable formulas.
Assume we find a formula A, such that if A is unsatisfiable then Z3 is correct (and we somehow have proven this relation).
We can give this formula to Z3, but:
if Z3 returns UNSAT, it could be because Z3 is correct or because of a bug in Z3, so we have not verified anything;
if Z3 returns SAT and a countermodel, we might be able to find a bug in Z3 by analyzing the model;
otherwise, we don't know anything.
So we can use Z3 to find bugs in Z3 and to improve confidence about Z3 (to an extremely high level), but not to formally verify it.