I can explain to whatever depth is necessary, but long story short, my project uses Z3's optimizer to find the maximal solution to what is effectively a SAT problem with weights associated with the boolean variables. Separately from that, another part of my project effectively implements a rule-based saturation engine, which I've lately been considering rewriting using the fixedpoint solver. I thought I could treat the two parts of the problem separately, but it turns out there is a chicken-and-egg issue: the rule engine requires the solution to the SAT instance for its input and output to be correct and meaningful, and the SAT instance itself is derived from the output of the rule engine. Therefore, I thought I might try to combine Fixedpoint and Optimize. Is this possible? Does it even make sense?
Related
While reading "Extending Sledgehammer with SMT solvers", I came across the following:
In the original Sledgehammer architecture, the available lemmas were rewritten
to clause normal form using a naive application of distributive laws before the relevance filter was invoked. To avoid clausifying thousands of lemmas on each invocation, the clauses were kept in a cache. This design was technically incompatible with the (cache-unaware) smt method, and it was already unsatisfactory for ATPs, which include custom polynomial-time clausifiers.
My understanding of SMT so far is as follows: SMT solvers don't work over clauses. Instead, they try to build a model for the quantifier-free part of a problem. The search is refined by instantiating quantifiers according to some set of active terms. Thus, indeed, no clausal form is needed for SMT solvers.
We rewrote the relevance filter so that it operates on arbitrary HOL formulas, trying to simulate the old behavior. To mimic the penalty associated with Skolem functions in the clause-based code, we keep track of polarities and detect quantifiers that give rise to Skolem functions.
What's the penalty associated with Skolem functions? I could understand that they are not good for SMT solvers, but here it seems that they are bad for ATPs too...
First, SMT solvers do work over clauses, and there is definitely some (non-naive) normalization internally (e.g., miniscoping). But you do not need to do the normalization before calling the SMT solver (especially since your own normalization will be more naive and generate a larger number of clauses).
Anyway, Section 6.6.7 explains why skolemization was done on the Isabelle side. To summarize: it is not possible to introduce polymorphic constants in a proof in Isabelle; hence it must be done before starting the proof.
It seems likely that, when the paper was written, not changing the filtering led to worse performance and, hence, the penalty was added. However, I could not find the relevant code simulating clausification in Sledgehammer, so I don't believe this happens anymore.
Problem
I'm trying to use z3 to disprove reachability assertions on a Petri net.
So I declare N state variables v_0, ..., v_{N-1}, which are non-negative integers, one for each place of the Petri net.
My main strategy, given an atomic proposition P on states, is the following:
1. Compute (with an external engine) any "easy" positive invariants as linear constraints on the variables, of the form alpha_0 * v_0 + ... = constant with only non-negative alpha_i, then check_sat whether any state allowed by these constraints satisfies P; if unsat, conclude, else
2. Compute (externally to z3) generalized invariants, where the alpha_i can be negative as well, and check_sat; conclude if unsat, else
3. Add one non-negative variable t_i per transition of the system, and assert the Petri net state equation: any reachable state has a Parikh firing count vector (a value for each t_i) such that M0, the initial state, plus the product of the incidence matrix with this Parikh vector equals the reached state. This step introduces many new variables and many multiplications of variables by constants, but it remains a linear integer programming problem.
I separate the steps because, since I want UNSAT, any check_sat that returns UNSAT stops the procedure, and the last step in particular is very costly.
I have issues with larger models, where I get prohibitively long answer times or even the dreaded "unknown" answer, particularly when adding state equation (step 3).
Background
So besides splitting the problem into incrementally harder segments, I've tried setting the logic to QF_LRA rather than QF_LIA and declaring the variables as Real rather than Int.
This overapproximation is computationally friendly (z3 is fast on these!), but unfortunately for many models the solutions are not integers, nor is there an integer solution.
So I've tried keeping Reals but specifying that each variable is either = 0 or >= 1, to remove solutions with fractional firings below 1. This does eliminate spurious solutions, but it "kills" z3 (timeout or unknown) in many cases; the problem is obviously much harder (e.g. harder than with just integers).
Examples
I don't have a small example to show, though I can produce some easily. The problem is that if I go for QF_LIA, it becomes prohibitively slow beyond a certain number of variables. As a metric: there are many more transitions than places, so adding the state equation really drives up the variable count.
This code is generating the examples I'm asking about.
This general presentation slides 5 and 6 express the problem I'm encoding precisely, and slides 7 and 8 develop the results of what "unsat" gives us, if you want more mathematical background.
I'm generating problems from the Model Checking Contest, with up to thousands of places (primary variables) and in some cases over a hundred thousand transitions. Those are extremes; the middle range that I would really like to handle is a few thousand places and maybe 20 thousand transitions.
Reals + the "= 0 or >= 1" constraint is not a good solution even for some smaller problems. Integers are slow from the get-go.
I could try Reals first, then fall back to Integers if I get a non-integral solution. I have not tried that; although it involves pretty much killing and restarting the solver, it might be a decent approach on my benchmark set.
What I'm looking for
I'm looking for Z3 settings that can help it better deal with the problems I'm feeding it, give it some insight.
I have some a priori idea of what could solve these problems: traditionally they've been fed to ILP solvers. So I'm hoping to trigger a simplex of some sort, but maybe some conditions prevent z3 from using the "good" solution strategy in certain cases.
I've become a reasonably proficient SMT/Z3 user, but I've never played with the fine settings of :options to guide the solver.
Have any of you tried feeding what are basically ILP problems to SMT, and found option settings or particular encodings that help it deploy the right solutions? Thanks.
My team has been using the Z3 solver to perform passive learning. Passive learning entails obtaining from a set of observations a model consistent with all observations in the set. We consider models of different formalisms, the simplest being Deterministic Finite Automata (DFA) and Mealy machines. For DFAs, observations are just positive or negative samples.
The approach is very simplistic. Given the formalism and observations, we encode each observation into a Z3 constraint over (uninterpreted) functions which correspond to functions in the formalism definition. For DFAs for example, this definition includes a transition function (trans: States X Inputs -> States) and an output function (out: States -> Boolean).
Encoding, say, the observation (aa, +) would be done as follows:
out(trans(trans(start,a),a)) == True
where start is the initial state. To construct a model, we add all the observation constraints to the solver. We also add a constraint limiting the number of states in the model. We solve the constraints for a limit of 1, 2, 3, ... states until the solver can find a solution. The solution is a minimum-state model that is consistent with the observations.
I posted a code snippet using Z3Py which does just this. Predictably, our approach is not scalable (the problem is NP-complete). I was wondering if there were any (small) tweaks we could perform to improve scalability? (in the way of trying out different sorts, strategies...)
We have already tried arranging all observations into a Prefix Tree and using this tree in encoding, but scalability was only marginally improved. I am well aware that there are much more scalable SAT-based approaches to this problem (reducing it to a graph coloring problem). We would like to see how far a simple SMT-based approach can take us.
So far, what I have found is that the best Sort for defining inputs and states is DeclareSort. It also helps if we eliminate quantifiers from the state-size constraint. Interestingly enough, incremental solving did not really help, but it could be that I am not using it properly (I am an utter novice in SMT theory).
Thanks! BTW, I am unsure how viable/useful this test is as a benchmark for SMT solvers.
Since the question is a little hard to describe, I will use a small example to describe my question.
Suppose there is a propositional formula set whose elements are Boolean variables a,b and c.
When using z3 to get a truth assignment for this formula set, is there some way to set the priority of the variables? I mean that if the priority is a > b > c, then during the search z3 first assumes a is true, and if a cannot be true, it assumes b is true, and so on. In other words, if z3 gives the truth assignment (not a, b, c) under the aforementioned priority, it means a cannot be true, because a has higher priority than b. I hope I have described the question clearly.
There is no easy way to do it in the current release (v4.3.1). The only way I can see is to hack/modify the Z3 source code (http://z3.codeplex.com). We agree that setting priorities would be a useful feature for some applications; however, there are some problems.
First, Z3 applies several transformations (aka preprocessing steps) before solving a problem. Variables are created and eliminated. Thus, a case-split priority for the original problem may be meaningless for the actual problem (the one generated after applying all transformations) that is solved by Z3.
One dramatic example is a formula containing only Bit-vectors. By default, Z3 will reduce this formula into Propositional logic and invoke a Propositional SAT solver. In this reduction, all Bit-vector variables are eliminated.
Z3 is a collection of solvers and preprocessors. By default, Z3 will select a solver automatically for the user. Some of these solvers use completely different algorithms. So, the provided priority may be useless for the solver being used.
As GManNickG pointed out, it is possible to set the phase selection strategy for a particular solver. See the post provided in his comment for additional details.
Has anyone tried proving Z3 with Z3 itself?
Is it even possible to prove that Z3 is correct, using Z3?
More theoretically, is it possible to prove that tool X is correct, using X itself?
The short answer is: “no, nobody tried to prove Z3 using Z3 itself” :-)
The sentence “we proved program X to be correct” is very misleading.
The main problem is: what does it mean to be correct.
In the case of Z3, one could say that Z3 is correct if, at least, it never returns “sat” for an unsatisfiable problem, nor “unsat” for a satisfiable one.
This definition may be improved by also including additional properties such as: Z3 should not crash; the function X in the Z3 API has property Y, etc.
After we agree on what we are supposed to prove, we have to create models of the runtime, programming language semantics (C++ in the case of Z3), etc.
Then, a tool (aka verifier) is used to convert the actual code into a set of formulas that we should check using a theorem prover such as Z3.
We need the verifier because Z3 does not “understand” C++.
The Verifying C Compiler (VCC) is an example of this kind of tool.
Note that proving Z3 to be correct using this approach does not provide a definitive guarantee that Z3 really is correct, since our models may be incorrect, the verifier may be incorrect, Z3 itself may be incorrect, etc.
To use verifiers, such as VCC, we need to annotate the program with the properties we want to verify, loop invariants, etc. Some annotations are used to specify what code fragments are supposed to do. Other annotations are used to "help/guide" the theorem prover. In some cases, the amount of annotations is bigger than the program being verified. So, the process is not completely automatic.
Another problem is cost: the process would be very expensive, much more time-consuming than implementing Z3 in the first place.
Z3 has 300k lines of code, and some of this code is based on very subtle algorithms and implementation tricks.
Another problem is maintenance: we are regularly adding new features and improving performance, and these modifications would affect the proof.
Although the cost may be very high, VCC has been used to verify nontrivial pieces of code such as the Microsoft Hyper-V hypervisor.
In theory, any verifier for programming language X can be used to prove itself if it is also implemented in language X.
The Spec# verifier is an example of such tool.
Spec# is implemented in Spec#, and several parts of Spec# were verified using Spec#.
Note that Spec# uses Z3 and assumes it is correct. Of course, this is a big assumption.
You can find more information about these issues and Z3 applications on the paper:
http://research.microsoft.com/en-us/um/people/leonardo/ijcar10.pdf
No, it is not possible to prove that a nontrivial tool is correct using the tool itself. This is essentially a consequence of Gödel's second incompleteness theorem:
For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, if T includes a statement of its own consistency then T is inconsistent.
Since Z3 includes arithmetic, it cannot prove its own consistency.
Because it was mentioned in a comment above: even if the user provides invariants, Gödel's theorem still applies. This is not a question of computability; the theorem states that no such proof can exist in a consistent system.
However, you could verify parts of Z3 with Z3.
Edit after 5 years:
Actually the argument is easier than Gödel's incompleteness theorem.
Let's say Z3 is correct if it only returns UNSAT for unsatisfiable formulas.
Assume we find a formula A, such that if A is unsatisfiable then Z3 is correct (and we somehow have proven this relation).
We can give this formula to Z3, but:
- if Z3 returns UNSAT, it could be because Z3 is correct or because of a bug in Z3, so we have not verified anything;
- if Z3 returns SAT and a countermodel, we might be able to find a bug in Z3 by analyzing the model;
- otherwise, we don't know anything.
So we can use Z3 to find bugs in Z3 and to improve confidence about Z3 (to an extremely high level), but not to formally verify it.