Creating List in z3 using function

I'm trying to convert this piece of pseudocode to SMT-LIB language, but I got stuck.
List function my_fun(int x)
{
    list = nil
    for (i in 1 to x):
        if (some_condition_on_i)
            list.concat(i)
    return list
}
What I've done so far is this:
(declare-const l1 (List Int))
(define-fun my_fun ((x Int)) (List Int)
  (forall ((t Int))
    (ite (and (some_condition_on_t) (< t x)) (insert t l1) l1)))
which I know is wrong and does not work. Can you help me understand how I can do this?

SMT-LIB models logic, where variables are always immutable; your code, on the other hand, appears to be imperative, i.e. variables such as list and i are mutable. This crucial difference will be the biggest challenge in encoding your program; the difficulty of reasoning about imperative programs has sparked research tools such as Dafny, Boogie, and Viper.
Here are a few pointers:
(insert t l1) represents a new list, obtained by inserting t into l1. It will not modify l1 (and there is no way to modify l1, since it is a logical variable).
A logical forall is a Boolean formula (it evaluates to true or false); it is not a statement that you can execute (e.g. for its side effects).
If the value of x were statically known (e.g. if it were 5), then you could unroll the loop (here in pseudo-code):
l0 := Nil
l1 := ite(condition(1), insert(1, l0), l0)
l2 := ite(condition(2), insert(2, l1), l1)
...
l4 := ite(condition(4), insert(4, l3), l3)
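In SMT-LIB, that unrolling might look as follows (a sketch only: condition is a hypothetical placeholder for some_condition_on_i, here "i is even", and Z3's built-in List constructors nil/insert are used):
(define-fun condition ((i Int)) Bool (= (mod i 2) 0)) ; placeholder condition
(define-const l0 (List Int) (as nil (List Int)))
(define-const l1 (List Int) (ite (condition 1) (insert 1 l0) l0))
(define-const l2 (List Int) (ite (condition 2) (insert 2 l1) l1))
(define-const l3 (List Int) (ite (condition 3) (insert 3 l2) l2))
(define-const l4 (List Int) (ite (condition 4) (insert 4 l3) l3))
(check-sat)
(get-value (l4)) ; l4 evaluates to (insert 4 (insert 2 nil))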
If the value of x isn't statically known, then you'll most likely either need a loop invariant or work with fixed points in order to account for an unknown number of loop iterations.
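If you want to avoid manual unrolling, Z3 also supports recursive function definitions (define-fun-rec). The following is only a sketch under the same placeholder condition; for a truly symbolic x, reasoning about such a definition generally still requires induction or an invariant:
(define-fun condition ((i Int)) Bool (= (mod i 2) 0)) ; placeholder condition, as above
(define-fun-rec my_fun ((i Int) (x Int)) (List Int)
  (ite (> i x)
       (as nil (List Int))
       (ite (condition i)
            (insert i (my_fun (+ i 1) x))
            (my_fun (+ i 1) x))))
; with a concrete bound the definition can be unfolded completely
(assert (= (my_fun 1 5) (insert 2 (insert 4 (as nil (List Int))))))
(check-sat) ; sat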

Related

Z3 fails to find simple sat solution after quantified bitvector assertion

I noticed that if I create my own array type that stores bitvectors and assert the first array update axiom, simple assertions afterwards fail to find a solution (my example below neither returns sat nor unsat but just keeps running):
(declare-sort MyArray)
; Indices into the array
(declare-sort Id)
; Returns the value in the array located at the specified index
(declare-fun index (MyArray Id) (_ BitVec 8))
; Updates the array so that the provided value is stored at the specified index
(declare-fun upd (MyArray Id (_ BitVec 8)) MyArray)
; First array update axiom
(assert (forall ((a MyArray) (i Id) (v (_ BitVec 8)))
  (= (index (upd a i v) i) v)))
(declare-const x Int)
(declare-const y Int)
(echo "")
(echo "Sanity check, should be sat:")
(assert (= x y))
(check-sat)
However, if I instead specify that my array stores a custom sort, z3 finds a solution very quickly:
(declare-sort MyArray)
; Indices into the array
(declare-sort Id)
; Values stored in the array
(declare-sort Elem)
; Returns the value in the array located at the specified index
(declare-fun index (MyArray Id) Elem)
; Updates the array so that the provided value is stored at the specified index
(declare-fun upd (MyArray Id Elem) MyArray)
; First array update axiom
(assert (forall ((a MyArray) (i Id) (v Elem))
  (= (index (upd a i v) i) v)))
(declare-const x Int)
(declare-const y Int)
(echo "")
(echo "Sanity check, should be sat:")
(assert (= x y))
(check-sat)
Does anyone know why this is the case? It's possible that z3 gets caught in some kind of instantiation loop (since the upd function both takes and returns the MyArray sort), but I'm surprised that it only seems to get tripped up with bitvectors as the elements. Is this related to Nikolaj's answer that the quantifier elimination tactic is currently fairly simplistic when it comes to bit-vectors?
I'm using bitvectors because my problem ultimately involves some bitvector operations (especially bvxor). Is it better just to define my own operations and essentially recreate part of the theory of bitvectors? Or is there a better way to go about this (than mixing quantifiers, bitvectors, and part of the theory of arrays)? I'm really just interested in operations on bytes, so all my bitvectors are of length 8.
I don't think there's a good reason why z3 is not terminating on the first program you gave here. Running with z3 -v:10 suggests it gets into some sort of unproductive loop that the second version avoids. I think you should report this at https://github.com/Z3Prover/z3/issues. Even though it's not strictly speaking a "bug," it's surprising behavior and the developers might want to look at it. (Please report back what you find.)
Regarding your second question: do not reinvent what z3 already has support for! Use the internal arrays; they have a custom decision procedure and have gone through years of tuning. Introducing quantified axioms will no doubt create more work than is necessary. Does something go wrong if you use the internal arrays? Why would you not use them anyhow? Only look at your own axiomatization if the built-in one isn't working well. (Even then, I'd first check with the developers to see why.)
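For instance, here is a minimal sketch of the same sanity check using Z3's built-in array theory, where select and store play the roles of index and upd and the read-over-write axiom holds by construction (Id stays an uninterpreted sort, as in the question):
(declare-sort Id)
(declare-const a (Array Id (_ BitVec 8)))
(declare-const i Id)
(declare-const v (_ BitVec 8))
; read-over-write is built in: no quantified axiom is needed
(assert (= (select (store a i v) i) v))
; bitvector operations such as bvxor work directly on the stored bytes
(assert (= (bvxor v v) #x00))
(declare-const x Int)
(declare-const y Int)
(echo "Sanity check, should be sat:")
(assert (= x y))
(check-sat) ; sat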

Z3 start values [duplicate]

How to specify initial 'soft' values for the model? This initial model is the result of solving a similar query, and it is likely that this model has correct pieces or may even be true for the current query.
Currently I am simulating this with incremental solving and hard/soft constraints:
(define-fun trans_assumed ((a Int)) Int
; an initial model, which may be (partially) true
)
(declare-fun trans_sought (Int) Int)
(declare-const p Bool)
(assert (=> p (forall ((a Int)) (= (trans_assumed a) (trans_sought a)))))
(check-sat p) ; in hope that trans_assumed values will be used as initial below
; add here the main constraints for trans_sought function
(check-sat) ; Z3 will use trans_assumed as a starting point for trans_sought
Does this really specify initial values for trans_sought to be trans_assumed?
Incremental mode of solving is slow compared to sequential. Any better ways of introducing initial values?
I think this is a good approach, but you may consider using more Boolean variables. Right now, it is an "all or nothing" approach. In your script, when (check-sat p) is executed, Z3 will look for a model where trans_assumed and trans_sought have the same interpretation. If such a model does not exist, it will return with the unsat core containing p. When the second (check-sat) is executed, Z3 is free to assign p to false, and the universal quantifier is essentially a don't care. That is, trans_assumed and trans_sought can be completely different.
If you use multiple Boolean variables to control the interpretation of trans_sought, you will have more flexibility.
If the rest of your problem is quantifier free, you should consider dropping the universal quantifier. This can be done if you only care about the value of trans_sought in a finite number of points.
Suppose we have that trans_assumed(0) = 1 and trans_assumed(1) = 10. Then, we can write:
(assert (=> p0 (= (trans_sought 0) 1)))
(assert (=> p1 (= (trans_sought 1) 10)))
In this encoding, we can query (check-sat p0 p1), (check-sat p0), and (check-sat p1).
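A self-contained sketch of this finer-grained encoding (p0 and p1 are the assumption literals for the two assumed values above; the real constraints on trans_sought would be added where indicated):
(declare-fun trans_sought (Int) Int)
(declare-const p0 Bool)
(declare-const p1 Bool)
; each literal tracks one assumed value of trans_sought
(assert (=> p0 (= (trans_sought 0) 1)))
(assert (=> p1 (= (trans_sought 1) 10)))
; ... main constraints for trans_sought go here ...
(check-sat p0 p1) ; try to keep both assumed values
(check-sat p0)    ; keep only the first
(check-sat p1)    ; keep only the second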

Avoiding quantifiers in Z3

I am experimenting with Z3 where I combine the theories of arithmetic, quantifiers and equality. This does not seem to be very efficient; in fact it seems to be more efficient to replace the quantifiers with all instantiated ground instances when possible. Consider the following example, in which I have encoded the unique names axiom for a function f that takes two arguments of sort Obj and returns an uninterpreted sort S. This axiom states that each unique list of arguments to f returns a unique object:
(declare-datatypes () ((Obj o1 o2 o3 o4 o5 o6 o7 o8)))
(declare-sort S 0)
(declare-fun f (Obj Obj) S)
(assert (forall ((o11 Obj) (o12 Obj) (o21 Obj) (o22 Obj))
  (=> (not (and (= o11 o21) (= o12 o22)))
      (not (= (f o11 o12) (f o21 o22))))))
Although this is a standard way of defining such an axiom in logic, implementing it like this is computationally very expensive. It contains 4 quantified variables, each of which can take 8 values, so it results in 8^4 = 4096 ground instances. It takes Z3 0.69s and 2016 quantifier instantiations to prove this. When I write a simple script that generates the instances of this formula:
(assert (distinct (f o1 o1) (f o1 o2) .... (f o8 o7) (f o8 o8)))
It takes 0.002s to generate these axioms, and another 0.01s (or less) to prove it in Z3. When we increase the number of objects in the domain or the number of arguments to the function f, this difference increases rapidly, and the quantified case quickly becomes infeasible.
This makes me wonder: when we have a bounded domain, why would we use quantifiers in Z3 in the first place? I know that SMT uses heuristics to find solutions, but I get the feeling that it still cannot compete in efficiency with a simple domain-specific grounder that feeds the grounded instances to SMT, which is then nothing more than SAT solving. Is my intuition correct?
Your intuition is correct. The heuristics for handling quantifiers in Z3 are not tuned for problems where universal variables range over finite/bounded domains.
In this kind of problem, using quantifiers is a good option only if a very small percentage of the instances are needed to show that a problem is unsatisfiable.
I usually suggest that users expand these quantifiers using the programmatic API.
Here are two related posts. They contain links to Python code that implements this approach.
Does Z3 take a longer time to give an unsat result compared to a sat result?
Quantifier Vs Non-Quantifier
Here is one of the code fragments:
from z3 import *

VFunctionAt = Function('VFunctionAt', IntSort(), IntSort(), IntSort())
s = Solver()
s.add([VFunctionAt(V, S) >= 0 for V in range(1, 5) for S in range(1, 9)])
print(s)
In this example, I'm essentially encoding forall V in [1,4] S in [1,8] VFunctionAt(V,S) >= 0.
Finally, your encoding (assert (distinct (f o1 o1) (f o1 o2) ... (f o8 o7) (f o8 o8))) is way more compact than expanding the quantifier 4096 times. However, even if we use a naive encoding (just expand the quantifier 4096 times), it is still faster to solve the expanded version.
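To make the grounded encoding concrete, here is a fully expanded version for a smaller domain of three objects (a sketch; the question's script generates the analogous assertion for eight objects, i.e. 64 applications of f):
(declare-datatypes () ((Obj o1 o2 o3)))
(declare-sort S 0)
(declare-fun f (Obj Obj) S)
; unique-names axiom, fully grounded: all 3*3 = 9 applications of f are pairwise distinct
(assert (distinct (f o1 o1) (f o1 o2) (f o1 o3)
                  (f o2 o1) (f o2 o2) (f o2 o3)
                  (f o3 o1) (f o3 o2) (f o3 o3)))
(check-sat) ; sat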

Does not 'check-sat' support Boolean function as assumption?

In the following example, I tried to use an uninterpreted Boolean function like "(declare-fun p (Int) Bool)" rather than a single Boolean constant for each assumption. But it does not work (it gives a compilation error).
(set-option :produce-unsat-cores true)
(set-option :produce-models true)
(declare-fun p (Int) Bool)
;(declare-const p1 Bool)
;(declare-const p2 Bool)
; (declare-const p3 Bool)
;; We assert (=> p C) to track C using p
(declare-const x Int)
(declare-const y Int)
(assert (=> (p 1) (> x 10)))
;; A Boolean constant may track more than one formula
(assert (=> (p 1) (> y x)))
(assert (=> (p 2) (< y 5)))
(assert (=> (p 3) (> y 0)))
(check-sat (p 1) (p 2) (p 3))
(get-unsat-core)
Output
Z3(18, 16): ERROR: invalid check-sat command, 'not' expected, assumptions must be Boolean literals
Z3(19, 19): ERROR: unsat core is not available
I understand that it is not possible (unsupported) to use a Boolean function. Is there any reason behind that? Is there a different way to do that?
We have this restriction because Z3 applies many simplifications before it solves a problem. Some of them will rewrite formulas and terms. The problem that is actually solved by Z3 is very often quite different from the input problem. We would have to trace back the simplified assumptions to the original assumptions, or introduce auxiliary variables. Restricting to Boolean literals avoids this issue, and makes the interface very clean. Note that this restriction does not limit the expressiveness. If you think it is too annoying to declare many Boolean variables to track different assertions, I suggest you take a look at the new Python front-end for Z3 called Z3Py. It is much more convenient to use than SMT 2.0. Here is your example in Z3Py: http://rise4fun.com/Z3Py/cL
In this example, instead of creating an uninterpreted predicate p, a "vector" (actually, a Python list) of Boolean constants is created.
The Z3Py online tutorial contains many examples.
It is also possible to implement in Z3Py the approach that creates auxiliary variables.
Here is the script that does the trick. I defined a function check_ext that does all the plumbing. http://rise4fun.com/Z3Py/B4
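For completeness, here is the question's script rewritten with the restriction respected, i.e. with the plain Boolean literals p1, p2, p3 from the commented-out declarations instead of the predicate p (a sketch of the standard SMT 2.0 workaround):
(set-option :produce-unsat-cores true)
(declare-const p1 Bool)
(declare-const p2 Bool)
(declare-const p3 Bool)
(declare-const x Int)
(declare-const y Int)
(assert (=> p1 (> x 10)))
(assert (=> p1 (> y x))) ; a literal may track more than one formula
(assert (=> p2 (< y 5)))
(assert (=> p3 (> y 0)))
(check-sat p1 p2 p3) ; unsat
(get-unsat-core)     ; e.g. (p1 p2): x > 10 and y > x contradict y < 5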
