Even for the simplest arithmetic SMT problems, the existential quantifier is required to declare symbolic variables. And the ∀ quantifier can be turned into ∃ by inverting the constraint. So I can use both of them in QF_* logics and it works.
I take it "quantifier free" means something else for such SMT logics, but what exactly?
The claim is that
∀ quantifier can be turned into ∃ by inverting the constraint
AFAIK, the following two relations hold:
∀x.φ(x) <=> ¬∃x.¬φ(x)
¬∀x.φ(x) <=> ∃x.¬φ(x)
Since a quantifier-free SMT formula φ(x) is equisatisfiable with its existential closure ∃x.φ(x), we can use the quantifier-free fragment of an SMT theory to express a (simple) negated occurrence of universal quantification, and [AFAIK] also a (simple) positive occurrence of universal quantification over trivial formulas (e.g. if [∃x.]φ(x) is unsat then ∀x.¬φ(x)¹).
¹: assuming φ(x) is quantifier-free; as @Levent Erkok points out in his answer, this approach is inconclusive when both φ(x) and ¬φ(x) are satisfiable.
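As an illustration of that unsat-based use of the quantifier-free fragment, here is a minimal z3py sketch (added for concreteness; the formula x < x is just an example, not from the original answer):

from z3 import *

x = Int('x')
s = Solver()
s.add(x < x)          # quantifier-free φ(x)
print(s.check())      # unsat, so ∀x.¬(x < x) holds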
However, we cannot, for example, find a model for the following quantified formula using the quantifier-free fragment of SMT:
[∃y.]((∀x.y <= f(x)) and (∃z.y = f(z)))
For the record, this is an encoding of the OMT problem min(y), y=f(x) as a quantified SMT formula. [related paper]
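For comparison, the quantified encoding can be handed to Z3 directly (a hedged z3py sketch; the integer sorts are illustrative, and Z3 may answer unknown depending on its quantifier-instantiation settings). The point stands that this cannot be expressed in a QF_* logic:

from z3 import *

f = Function('f', IntSort(), IntSort())
x, y, z = Ints('x y z')
s = Solver()
s.add(ForAll([x], y <= f(x)))     # y is a lower bound of f
s.add(Exists([z], y == f(z)))     # and y is attained by f
print(s.check())                  # typically sat, e.g. with f constant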
A term t is quantifier-free iff t syntactically contains no quantifiers. A quantifier-free formula φ is equisatisfiable with its existential closure
(∃x1. (∃x2. ... (∃xn. φ) ... ))
where x1, x2, . . . , xn is any enumeration of free(φ), the free variables in φ.
The set of free variables of a term t, free(t), is defined inductively as:
free(x) = {x} if x is a variable,
free((f t1 t2 ... tk)) = free(t1) ∪ free(t2) ∪ ... ∪ free(tk) for function applications,
free(∀x.φ) = free(φ) \ {x}, and
free(∃x.φ) = free(φ) \ {x}.
[source]
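The definition above translates almost literally into code. Here is a small Python sketch over a toy tuple-based term representation (the representation is hypothetical, not any real SMT API):

def free(t):
    kind = t[0]
    if kind == 'var':                      # free(x) = {x}
        return {t[1]}
    if kind == 'app':                      # union of the arguments' free variables
        return set().union(*(free(arg) for arg in t[2:]))
    if kind in ('forall', 'exists'):       # free(Qx.phi) = free(phi) \ {x}
        return free(t[2]) - {t[1]}
    raise ValueError('unknown term')

# free(forall x. p(x, y)) == {'y'}
print(free(('forall', 'x', ('app', 'p', ('var', 'x'), ('var', 'y')))))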
Patrick gave an excellent answer, but here're a few more thoughts. (I'd have put this up as a comment, but StackOverflow thinks it's too long for that!)
Notice that you cannot always play the "negate and check the opposite" trick. This only works because if the negation of a property is unsatisfiable, then the property must be true for all inputs. But it doesn't go the other way around: A property can be satisfiable, and its negation can be satisfiable as well. Simple example: x < 10. This is obviously satisfiable, and so is its negation x >= 10. So, you cannot always get rid of quantifiers by playing this trick. It only works if you want to prove something: Then you can negate it and see if that negation is unsatisfiable. If you're concerned about finding a model to a formula, the method doesn't apply.
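A small z3py sketch of the point above (added here for illustration; x*2 == x + x is just an arbitrary valid property):

from z3 import *

x = Int('x')

s1 = Solver()
s1.add(x < 10)
s2 = Solver()
s2.add(x >= 10)
print(s1.check(), s2.check())   # sat sat: the property and its negation are both satisfiable

# The negation trick is for proving: to show x*2 == x + x holds for all x,
# check that its negation is unsatisfiable.
s3 = Solver()
s3.add(x * 2 != x + x)
print(s3.check())               # unsat, hence x*2 == x + x is valid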
You can always skolemize a formula and eliminate all the existential quantifiers by replacing them with uninterpreted functions. What you then end up with is an equisatisfiable formula that has all prefix universals. Clearly, this is not quantifier free, but this is a very common trick that most tools do for you automatically.
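For instance (a hedged z3py sketch, not from the original answer): the formula ∀x.∃y. y > x can be skolemized by replacing y with an uninterpreted function sk(x), leaving only a prefix universal:

from z3 import *

x = Int('x')
sk = Function('sk', IntSort(), IntSort())   # Skolem function standing in for the inner exists

s = Solver()
s.add(ForAll([x], sk(x) > x))   # equisatisfiable with ForAll([x], Exists([y], y > x))
print(s.check())                # typically sat (e.g. sk(x) = x + 1), though Z3 may report unknown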
Where all this hurts is alternating quantifiers. Regardless of skolemization, if you have alternating quantifiers then your problem is already too difficult to deal with. The Wikipedia page on quantifier elimination is rather terse, but it gives a very good introduction: https://en.wikipedia.org/wiki/Quantifier_elimination Bottom line: Not every theory admits quantifier elimination, and even those that do might require exponential algorithms to get rid of them, causing performance issues.
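As a concrete illustration of quantifier elimination (a z3py sketch using Z3's qe tactic; the formula is just an example): over the integers, ∃x. 0 < x < y reduces to a quantifier-free constraint on y:

from z3 import *

x, y = Ints('x y')
g = Goal()
g.add(Exists([x], And(0 < x, x < y)))
print(Tactic('qe')(g))   # quantifier-free equivalent, e.g. [[2 <= y]]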
Related
I'm working on a language very similar to STLC that I'm converting to Z3 propositions, hence a few (sub)questions about how Z3 treats (uninterpreted) functions:
Functions are naturally curried in my language and as I'm converting the terms of my language recursively, I'd like to be able to build the corresponding Z3 AST recursively as well. That is, when I have a term f x y I'd like to first apply f to x and then apply that to y. Is there a way to do this? The API I've found so far (Z3_mk_func_decl/Z3_mk_app) seems to require me to collect all arguments first and apply them all at once.
Is there a reasonable way to represent something like (if b then f else g) x?
In both cases, I'm totally fine with functions being uninterpreted and restricting the reasoning to things like "b = True /\ f x = 0 => (if b then f else g) x = 0 holds".
SMTLib (as described in http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf) is a many-sorted first-order logic. All functions (uninterpreted or not) must be applied to all of their arguments, and you cannot have any form of currying. Also, you cannot do higher-order if-then-else, i.e., the branches of an if-then-else have to be first-order values. (However, they can be arrays, and you can imagine "faking" functions with arrays. But that's beside the point.)
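To make the array trick concrete (a hedged z3py sketch of the idea just mentioned; the names f, g, b, x are illustrative): model the functions as arrays so they become first-order values, use If to merge them, and prove the implication from the question by checking that its negation is unsatisfiable:

from z3 import *

f = Array('f', IntSort(), IntSort())   # stands in for the function f
g = Array('g', IntSort(), IntSort())   # stands in for the function g
b = Bool('b')
x = Int('x')

s = Solver()
# b = True /\ f x = 0  ==>  (if b then f else g) x = 0
s.add(Not(Implies(And(b, Select(f, x) == 0),
                  Select(If(b, f, g), x) == 0)))
print(s.check())   # unsat, so the implication holds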
It should be noted that the next iteration of SMTLib (v3) will be based on a higher-order logic, at which point features like you're asking might become available. See: http://smtlib.cs.uiowa.edu/version3.shtml. Of course, this is still a proposal and it'll take a while before it's settled and actual solvers start implementing it faithfully. Eventually that'll happen, but I wouldn't expect it in the very near term.
Aside: Since you mentioned STLC (simply-typed-lambda-calculus), I presume you might be familiar with functional languages like Haskell. If that's the case, you might want to look into using SBV: https://hackage.haskell.org/package/sbv. It provides a framework for doing some of these things by carefully translating them away behind the scenes. Here's an example:
Prelude Data.SBV> sat $ \b -> (ite b (uninterpret "f") (uninterpret "g")) (0::SInteger) .== (0::SInteger)
Satisfiable. Model:
s0 = True :: Bool
f :: Integer -> Integer
f _ = 0
g :: Integer -> Integer
g _ = 2
Here we created two functions and used the ite construct to "merge" them; and got the solver to return us a model. Behind the scenes, SBV will fully saturate these applications and let you "pretend" you're programming in a higher-order sense, just like in STLC or Haskell. Of course, the devil is in the details and there are limitations to the approach, but modeling STLC in Haskell is a classic pastime for many people and doing it symbolically using SBV can be a fun exercise.
This might not be beyond the scope of Z3. I know that Z3 can simplify expressions, but I wonder whether it can solve an equation instead of just giving a model.
For example, I want the following equation to always be true for any value of a. Using a ForAll quantifier in this case returns unsat.
a == b - c + 2
The solution I expect is a formula solved for one specified variable (simplify doesn't do this), like
b == a + c - 2 or c == b - a + 2
Is there any API for this? Thanks in advance.
Not sure what exactly the question is here, but it sounds like you want to get all satisfying solutions instead of a single satisfying solution. It is possible to encode that (with quantifiers), but not necessarily easy to solve; depending on the fragment of logic used within the quantifiers, Z3 may be slow, or it may simply give up relatively early (returning unknown). There is no dedicated API for this purpose, but the existing API has all the pieces required to solve the puzzle (e.g., use an uninterpreted function f to encode the solution symbolically and then get the func_interp for f from the model.)
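A hedged z3py sketch of the encoding suggested above (names are illustrative, and depending on the fragment Z3 may return unknown): introduce an uninterpreted function f(a, c) for the solved-for b, quantify over the remaining variables, and read the synthesized definition back from the model's func_interp:

from z3 import *

a, c = Ints('a c')
f = Function('f', IntSort(), IntSort(), IntSort())   # f(a, c) plays the role of b

s = Solver()
s.add(ForAll([a, c], a == f(a, c) - c + 2))
print(s.check())     # hopefully sat
print(s.model()[f])  # e.g. something equivalent to f(a, c) = a + c - 2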
For more information about getting more than one solution, see also Z3: finding all satisfying models and Z3 Enumerating all satisfying assignments.
In a (rather large) Z3 problem, we have a few axioms of the shape:
forall xs :: ( (P(xs) ==> (exists ys :: Q(xs,ys))) && ((exists zs :: Q(xs,zs)) ==> P(xs)) )
All three quantifiers (including the existentials) have explicit triggers provided (omitted here). When running the problem and gathering quantifier statistics, we observed the following data (amongst many other instantiations):
[quantifier_instances] k!244 : 804 : 3 : 4
[quantifier_instances] k!232 : 10760 : 29 : 30
Here, line 244 corresponds to the end of the outer forall quantifier, and line 232 to the end of the first inner exists. Furthermore, there are no reported instantiations of the second inner exists (which I believe Z3 will pull out into a forall); given the triggers this is surprising.
My understanding is that existentials in this inner position should be skolemised by a function (depending on the outer quantifier). It's not clear to me what quantifier statistics mean for such existentials.
Here are my specific questions:
1. Are quantifier statistics meaningful for existential quantifiers (those which remain as existentials, i.e. in positive positions)? If so, what do they mean?
2. Does skolemisation of such an existential happen once and for all, or each time the outer quantifier is instantiated? Why is a substantially higher number reported for this quantifier than for the outer forall?
3. Does Z3 apply some internal rewriting of this kind of (A==>B)&&(B==>A) assertion? If so, how does that affect quantifier statistics for quantifiers in A and B?
From our point of view, understanding question 2 is most urgent, since we are trying to investigate how the generated existentials affect the performance of the overall problem.
The original smt file is available here:
https://gist.github.com/anonymous/16e489ce5c513e8c4bc6
and a summary of the statistics generated (with Z3 4.4.0, but we observed the same with 4.3.2) is here:
https://gist.github.com/anonymous/ce7b96acf712ac16299e
The answer to all these questions is 'it depends', mainly on what else appears in the problem and what options are set. For instance, if there are only bit-vector variables, then Skolemization will indeed be performed during preprocessing, once and for all, but this is not the case for all other theories or theory combinations.
Briefly looking at your SMT2 file, it seems to me that all existentials appear in the left hand side of implications, i.e., they are in fact negated (and actually rewritten into universals somewhere along the line), so those statistics do make sense for the existentials appearing in this particular problem.
In Ulf Norell's thesis he mentions that Agda is based on Luo's UTT. However, I can't find a way to use Prop there. Is there any way to do so?
The Prop universe was available in an early version of Agda, but it has since been removed. In fact, Prop is still a keyword in Agda, but using it gives an error Prop is no longer supported. Depending on what you want to achieve, you have a few options:
You may want to take a look at Agda's proof irrelevance feature.
I have seen some people use the synonym Prop = Set. This is useful if you want to make a logical difference between propositions and more general sets, but of course it doesn't give you any of the additional axioms of Prop.
Finally, there is the type of (homotopy) propositions from HoTT, which can be defined in Agda by hProp = Σ[ X ∈ Set ] ((x y : X) → x ≡ y). This guarantees that propositions have at most one proof, but can cause quite some overhead.
If I give Z3 a formula like p | q, I would expect Z3 to return p=true, q=don't care (or with p and q switched), but instead it seems to insist on assigning values to both p and q (even though I don't have completion turned on when calling Eval()). Besides being surprised at this, my question is: what if p and q are not simple propositional variables but expensive expressions, and I know that typically either p or q will be true? Is there an easy way to ask Z3 to return a "minimal" model and not waste its time trying to satisfy both p and q? I already tried MkITE, but that makes no difference. Or do I have to use some kind of tactic to enforce this?
thanks!
PS. I wanted to add that I have turned off AUTO_CONFIG, yet Z3 is trying to assign values to constants in both branches of the or: e.g. in the snippet below I want it to assign either to path2_2 and path2_1 or to path2R_2 and path2R_1, but not both
(or (and (select a!5 path2_2) a!6 (select a!5 path2_1) a!7)
(and (select a!5 path2R_2) a!8 (select a!5 path2R_1) a!9))
Z3 has a feature called relevancy propagation. It is described in this article. It does what you want. Note that, in most cases, relevancy propagation has a negative impact on performance. In our experiments, it is only useful for problems containing quantifiers (quantifier reasoning is so expensive that it pays off). By default, Z3 will use relevancy propagation in problems that contain quantifiers. Otherwise, it will not use it.
Here is an example of how to turn it on when the problem does not have quantifiers (the example is also available online here):
from z3 import *

x, y = Bools('x y')
s = Solver()
# Disable auto-configuration and force relevancy propagation on.
s.set(auto_config=False, relevancy=2)
s.add(Or(x, y))
print(s.check())
print(s.model())