Get UNSAT Core using Z3 from .smt2 file - z3

I need to get unsat core from z3. The contents of .smt2 file are:
(set-option :produce-unsat-cores true)
(set-logic QF_AUFBV )
(declare-fun a () (Array (_ BitVec 32) (_ BitVec 8) ) )
; Constraints
(! (assert (bvslt (concat (select a (_ bv3 32) ) (concat (select a (_ bv2 32) ) (concat (select a (_ bv1 32) ) (select a (_ bv0 32) ) ) ) ) (_ bv10 32) ) )
:named ?U0)
(! (assert (bvslt (_ bv10 32) (concat (select a (_ bv3 32) ) (concat (select a (_ bv2 32) ) (concat (select a (_ bv1 32) ) (select a (_ bv0 32) ) ) ) ) ) )
:named ?U1)
(check-sat)
(get-unsat-core)
(exit)
I am getting the following output when running z3:
unsupported
; !
unsupported
; !
sat
(error "line 11 column 15: unsat core is not available")
I am new to z3 and can't understand what is happening here (I am sure that the expression is unsat).
Thanks.

You are using ! incorrectly. The exclamation point annotates (names) formulas, not assert commands. See Section 3.9.8 of the Tutorial.
This should fix it: (assert (! (bvslt ...) :named ?U0)), i.e., the :named annotation goes inside the assert, wrapped around the formula itself.


Z3 switching between unknown/unsat by just changing the range sort of array

I am facing a weird result when using Z3. Consider these two benchmarks written in smt-lib:
(declare-fun a () (Array (_ BitVec 32) (_ BitVec 32)))
(declare-fun b () (Array (_ BitVec 32) (_ BitVec 32)))
(assert (forall ((fqv (Array (_ BitVec 32) (_ BitVec 8))))
(= (select a
(concat (select fqv #x00000003)
(concat (select fqv #x00000002)
(concat (select fqv #x00000001)
(select fqv #x00000000)))))
(select b
(concat (select fqv #x00000003)
(concat (select fqv #x00000002)
(concat (select fqv #x00000001)
(select fqv #x00000000))))))))
(assert (= false (= (select a #x00000000) (select b #x00000000))))
(check-sat)
(get-model)
and
(declare-fun a () (Array (_ BitVec 32) (_ BitVec 8)))
(declare-fun b () (Array (_ BitVec 32) (_ BitVec 8)))
(assert (forall ((fqv (Array (_ BitVec 32) (_ BitVec 8))))
(= (select a
(concat (select fqv #x00000003)
(concat (select fqv #x00000002)
(concat (select fqv #x00000001)
(select fqv #x00000000)))))
(select b
(concat (select fqv #x00000003)
(concat (select fqv #x00000002)
(concat (select fqv #x00000001)
(select fqv #x00000000))))))))
(assert (= false (= (select a #x00000000) (select b #x00000000))))
(check-sat)
(get-model)
Their only difference is in the range sort of a and b: in the first case it is a bit-vector of size 32, while in the second it is a bit-vector of size 8.
The interesting part is that (check-sat) returns unsat for the first case and unknown for the second one.
Is there an explanation for this? Here is a link where you can quickly run this experiment: https://people.csail.mit.edu/cpitcla/z3.wasm/z3.html
This looks odd indeed. I'd have expected you to get unknown in both cases, since you have a quantified assertion that ranges over an array.
The most likely reason is that there's some "internal" heuristic that kicks in for the first case, but fails to do so for the second. (So, in a sense, you're getting "lucky" for the first one, and the second one is the expected behavior given the quantified assert.)
Please report this at https://github.com/Z3Prover/z3/issues. Even though this isn't strictly a bug, I'm sure the developers would appreciate taking a look to see what internal heuristic might be improved to handle both of these cases.

Is there a way to specify the domain of a variable when defining it using an array?

My program reads the constraints from an .smt2 file, and all the variables are defined as arrays. For example:
(declare-fun x () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun y () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert (bvslt (concat (select x (_ bv3 32) ) (concat (select x (_ bv2 32) ) (concat (select x (_ bv1 32) ) (select x (_ bv0 32) ) ) ) ) (concat (select y (_ bv3 32) ) (concat (select y (_ bv2 32) ) (concat (select y (_ bv1 32) ) (select y (_ bv0 32) ) ) ) ) ) )
(check-sat)
(exit)
Some other constraints are omitted. Sometimes the solver gives a value of x as:
(store (store (store ((as const (Array (_ BitVec 32) (_ BitVec 8))) #xfe)
#x00000002
#x00)
#x00000001
#xff)
#x00000003
#x80)
According to the definition, each element of the array is one byte, so the combined value should be 0x8000fffe. This value is beyond the upper bound of a signed integer in C++. When I convert it back to int, it is a negative value. So I guess Z3 treats all variables defined as arrays as unsigned ints.
For example, if the constraint is x > y, the solver may give
x = 0x8000fffe
and
y = 0x00000001. The values satisfy the constraint in unsigned comparison, but when conducting a signed comparison, x is negative and y is positive so it is wrong. I am wondering if there is a way to tell the solver that the numbers are signed when defining them as an array?
Added 22:26:43 09/14/2019
I got two smt2 files, one is
(set-logic QF_AUFBV )
(declare-fun x () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun y () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert (bvslt (concat (select x (_ bv3 32) ) (concat (select x (_ bv2 32) ) (concat (select x (_ bv1 32) ) (select x (_ bv0 32) ) ) ) ) (concat (select y (_ bv3 32) ) (concat (select y (_ bv2 32) ) (concat (select y (_ bv1 32) ) (select y (_ bv0 32) ) ) ) ) ) )
(check-sat)
(exit)
The constraint is simply x < y.
The other one is
(set-logic QF_AUFBV )
(declare-fun x () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun y () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert (let ( (?B1 (concat (select y (_ bv3 32) ) (concat (select y (_ bv2 32) ) (concat (select y (_ bv1 32) ) (select y (_ bv0 32) ) ) ) ) ) (?B2 (concat (select x (_ bv3 32) ) (concat (select x (_ bv2 32) ) (concat (select x (_ bv1 32) ) (select x (_ bv0 32) ) ) ) ) ) ) (let ( (?B3 (bvsub ?B1 ?B2 ) ) ) (and (and (and (and (and (= false (= (_ bv0 32) ?B2 ) ) (= false (= (_ bv0 32) ?B1 ) ) ) (= false (bvslt ?B1 ?B2 ) ) ) (= false (= (_ bv0 32) ?B3 ) ) ) (= false (bvslt ?B3 ?B2 ) ) ) (= (_ bv0 32) (bvsub ?B3 ?B2 ) ) ) ) ) )
(check-sat)
(exit)
which is
[(! (0 == x)),
(! (0 == y)),
(! ( y < x)),
(! (0 ==( y - x))),
(! (( y - x) < x)),
(0 ==(( y - x) - x)) ]
These smt2 files are generated by Klee. The solver gives
x = (store (store (store ((as const (Array (_ BitVec 32) (_ BitVec 8))) #xfe)
#x00000002
#x00)
#x00000001
#xff)
#x00000003
#x80)
y = (store (store (store ((as const (Array (_ BitVec 32) (_ BitVec 8))) #xfc)
#x00000002
#x01)
#x00000001
#xff)
#x00000003
#x00)
so x=0x8000fffe and y=0x0001fffc. Converted to decimal, we have x=2147549182 and y=131068. So y-x-x is -4294967296, not 0. The solver considers this satisfied because 4294967296 is
1 00000000 00000000 00000000 00000000
in binary, where the "1" is the 33rd bit and gets truncated away. So -4294967296 is treated as 0x00000000 in a 32-bit word. This is the reason I asked this question. x and y should be signed integers, so 0x8000fffe is -2147418114 in two's complement, and y is 131068, so y-x-x is clearly not 0. In terms of signed integers, the values don't satisfy the constraints. The expression y - x - x seems to be computed with unsigned (modular) rules.
Bit-vectors have no signs
There's no notion of signed or unsigned bit-vector in SMTLib. A bit-vector is simply a sequence of bits, without any attached semantics as to how to treat it as a number.
It is the operations, however, that distinguish signedness. This is why you have bvslt and bvult for signed and unsigned less-than comparison, for instance. You might want to read the logic description here: http://smtlib.cs.uiowa.edu/theories-FixedSizeBitVectors.shtml
Long story short, all the solver is telling you is that the result contains these bits; how you interpret that as an unsigned word or a signed 2's complement number is totally up to you. Note that this perfectly matches how machine arithmetic is done in hardware, where you simply have registers that contain bit-sequences. It's the instructions that treat the values according to whatever convention they might choose to do so.
I hope that's clear; feel free to ask about a specific case; posting full programs is always helpful as well, so long as they abstract away from details and describe what you're trying to do.
Also see this earlier question that goes into a bit more detail: How to model signed integer with BitVector?
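To make the two readings concrete, here is a small pure-Python sketch (no z3 involved; the helper to_signed is just illustrative) showing how the same 32-bit pattern reads as unsigned versus signed two's complement, and why the modular arithmetic above works out:

```python
def to_signed(bits, width=32):
    """Interpret an unsigned `width`-bit value as a two's complement integer."""
    if bits >= 1 << (width - 1):
        return bits - (1 << width)
    return bits

x = 0x8000FFFE            # the value the solver reported for x
print(x)                  # unsigned reading: 2147549182
print(to_signed(x))       # signed reading:  -2147418114

y = 0x0001FFFC
# Modulo 2^32 (i.e., in 32-bit machine arithmetic), y - x - x wraps to 0,
# which is why the solver accepts this model:
print((y - x - x) % (1 << 32))   # 0
```

The same bits, two different numbers; only the operation (bvslt vs. bvult, etc.) decides which reading applies.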
Avoiding overflow/underflow
You can ask z3 to avoid overflow/underflow during bit-vector arithmetic. However, this will require adding extra assertions for each operation you want to perform, so it can get rather messy. (Also, looks like you want to use Klee; I'm not sure if Klee allows you to do this to start with.) The technique is explained in detail in this paper: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/z3prefix.pdf
In particular, you want to read through Section 5.1, where the authors describe how to "annotate" each arithmetic operation and explicitly assert that it does not overflow. For instance, to make sure an addition doesn't overflow, you first zero-extend your bit-vectors from 32 bits to 33 bits, do the addition, and check whether the 33rd bit of the result is 1. To avoid overflow, you simply write an assertion saying that bit cannot be 1. Here's an example:
; Two 32-bit variables
(declare-fun x () (_ BitVec 32))
(declare-fun y () (_ BitVec 32))
; Zero-Extend them to 33-bits
(define-fun x33 () (_ BitVec 33) (concat #b0 x))
(define-fun y33 () (_ BitVec 33) (concat #b0 y))
; Add them
(define-fun extendedAdd () (_ BitVec 33) (bvadd x33 y33))
; Get the overflow bit (bit 32 of the extended sum)
(define-fun signBit () (_ BitVec 1) ((_ extract 32 32) extendedAdd))
; Assert that the addition won't overflow:
(assert (= signBit #b0))
; Regular addition result:
(define-fun addResult () (_ BitVec 32) ((_ extract 31 0) extendedAdd))
; Now you can use addResult as the result of x+y; and you'll
; be assured that this addition will never overflow
(check-sat)
(get-model)
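The same zero-extend-and-check-the-top-bit idea can be sketched with plain Python integers (illustrative only, not the z3 API; Python ints don't wrap, which is exactly what makes the check easy to express):

```python
WIDTH = 32

def unsigned_add_overflows(x, y, width=WIDTH):
    """Add two unsigned `width`-bit values in `width`+1 bits and test the carry bit."""
    extended = x + y                  # Python ints are unbounded, so this is the full sum
    carry = (extended >> width) & 1   # bit `width` of the extended result
    return carry == 1

print(unsigned_add_overflows(0x00000001, 0x00000002))  # False: 3 fits in 32 bits
print(unsigned_add_overflows(0xFFFFFFFF, 0x00000001))  # True: the sum needs 33 bits
```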
You'd also have to check for underflow at each operation, further adding complexity.
As you can see, this can get very hairy, and the rules for multiplication are quite tricky. To simplify this, z3 provides built-in primitives for multiplication overflow-checking, called:
bvsmul_noovfl: True only if signed multiplication doesn't overflow
bvsmul_noudfl: True only if signed multiplication doesn't underflow
bvumul_noovfl: True only if unsigned multiplication doesn't overflow
There is no predicate for checking whether an unsigned multiplication can underflow, because that cannot happen. But the point remains: you have to annotate each operation and explicitly assert the relevant conditions. This is best done by a higher-level API during code generation, and some z3 bindings do support such operations. (For instance, see http://hackage.haskell.org/package/sbv-8.4/docs/Data-SBV-Tools-Overflow.html for how the Haskell layer on top of SMT solvers handles this.) If you'll do this at scale, you probably want to build some mechanism that generates these assertions for you automatically, as doing it manually would be extremely error-prone.
Or you can switch to the Int type, which never overflows! But then, of course, you're no longer modeling an actual running program but reasoning about unbounded integer values, which might be acceptable depending on your problem domain.
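To see that difference concretely, here is a two-line sketch in plain Python (illustrative; unbounded Int-style arithmetic versus 32-bit modular bit-vector arithmetic):

```python
x = 0xFFFFFFFF                   # largest unsigned 32-bit value
as_int = x + 1                   # Int semantics: mathematical integers never wrap
as_bv32 = (x + 1) % (1 << 32)    # 32-bit bit-vector semantics: wraps modulo 2^32
print(as_int, as_bv32)           # 4294967296 0
```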

AUFBV logic theory: get value of the array in decimal based format in Z3 model

How can I get the value of a variable as an Integer in the QF_AUFBV logic?
Consider the following SMT-LIB2 script, which uses the QF_AUFBV logic:
(set-logic QF_AUFBV)
(set-option :model_compress false)
(declare-fun a () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun b () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun out () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert
(= (concat (select out (_ bv3 32) ) (concat (select out (_ bv2 32) ) (concat (select out (_ bv1 32) ) (select out (_ bv0 32) ) ) ) )
;; 10<a is false
(ite (= false (bvslt (_ bv10 32) (concat (select a (_ bv3 32) ) (concat (select a (_ bv2 32) ) (concat (select a (_ bv1 32) ) (select a (_ bv0 32) ) ) ) ) ) )
;;b-15
(bvadd (_ bv4294967281 32) (concat (select b (_ bv3 32) ) (concat (select b (_ bv2 32) ) (concat (select b (_ bv1 32) ) (select b (_ bv0 32) ) ) ) ) )
;;b+15
(bvadd (_ bv15 32) (concat (select b (_ bv3 32) ) (concat (select b (_ bv2 32) ) (concat (select b (_ bv1 32) ) (select b (_ bv0 32) ) ) ) ) ))))
;;out>15
(assert
(bvsgt (concat (select out (_ bv3 32) ) (concat (select out (_ bv2 32) ) (concat (select out (_ bv1 32) ) (select out (_ bv0 32) ) ) ) ) (_ bv15 32)))
(check-sat)
(get-model)
When we use Z3 to check satisfiability, it produces the following model.
sat
(model
(define-fun b () (Array (_ BitVec 32) (_ BitVec 8))
(_ as-array k!2))
(define-fun out () (Array (_ BitVec 32) (_ BitVec 8))
(_ as-array k!0))
(define-fun a () (Array (_ BitVec 32) (_ BitVec 8))
(_ as-array k!1))
(define-fun k!0 ((x!0 (_ BitVec 32))) (_ BitVec 8)
(ite (= x!0 #x00000003) #x00
(ite (= x!0 #x00000002) #x00
(ite (= x!0 #x00000000) #x11
(ite (= x!0 #x00000001) #x00
#x00)))))
(define-fun k!1 ((x!0 (_ BitVec 32))) (_ BitVec 8)
(ite (= x!0 #x00000003) #x80
(ite (= x!0 #x00000002) #x00
(ite (= x!0 #x00000000) #x0e
(ite (= x!0 #x00000001) #x00
#x00)))))
(define-fun k!2 ((x!0 (_ BitVec 32))) (_ BitVec 8)
(ite (= x!0 #x00000003) #x00
(ite (= x!0 #x00000002) #x00
(ite (= x!0 #x00000000) #x20
(ite (= x!0 #x00000001) #x00
#x00)))))
)
Is there any way to print the value of the array in decimal format? Is it possible to use the C/C++ Z3 API to extract the value in decimal?
In the given model the value of array out is 17 and b is 32.
It's not quite clear what you're asking, but I'm guessing you want to see the values in regular decimal notation as opposed to the default hexadecimal?
If that's the case, then alas there's no direct option to make the SMTLib output use plain decimal literals for bit-vectors, since a bare decimal literal is ambiguous: you cannot tell from it alone how wide the bit-vector is. However, you can instruct the solver to print bit-vector values in the so-called bv format. Simply call:
z3 pp.bv_literals=false input.smt2
This will print the literals like this:
(_ bv128 8)
The way to read this is: the type is an 8-bit wide bit-vector, and the value is 128. This is closer to what you're asking, I suppose.
The other option is of course to post-process the output as you see fit; but that goes without saying.
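As an illustration of the post-processing route, here is a small pure-Python sketch (the helper name bytes_to_decimal is made up) that reassembles the per-byte model values the same way the script's concat chain does, with index 3 as the most significant byte:

```python
def bytes_to_decimal(byte_at):
    """byte_at maps array index -> byte value; index 3 is the most significant byte."""
    value = 0
    for i in (3, 2, 1, 0):
        value = (value << 8) | byte_at.get(i, 0)
    return value

out_bytes = {0: 0x11, 1: 0x00, 2: 0x00, 3: 0x00}   # from k!0 in the model
b_bytes = {0: 0x20, 1: 0x00, 2: 0x00, 3: 0x00}     # from k!2 in the model
print(bytes_to_decimal(out_bytes))  # 17
print(bytes_to_decimal(b_bytes))    # 32
```

This reproduces the values stated in the question: out is 17 and b is 32 in decimal.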

Understanding z3 bvsmod behavior with SSA

I am trying to learn z3, so this question might be silly.
Why do I get an unexpected value for x___0 from Z3 when I use bvsmod, as compared to bvadd, in the following code? I'm using SSA to encode the execution flow here.
Z3 instructions:
(set-option :pp.bv-literals false)
;
; The code
; x %= 5
; x * 2 == 8
; Implement SSA
; x1 = x0 % 5
; x1 * 2 == 8
;
(push)
(set-info :status unknown)
(declare-const x___0 (_ BitVec 32))
(declare-const x___1 (_ BitVec 32))
(assert (= x___1 (bvsmod x___0 (_ bv5 32))))
(assert (= (bvmul x___1 (_ bv2 32)) (_ bv8 32)))
(check-sat)
(get-model)
(pop)
;
; The code
; x += 1
; x * 2 == 8
; Implement SSA
; x1 = x0 + 1
; x1 * 2 == 8
;
(push)
(declare-const x___0 (_ BitVec 32))
(declare-const x___1 (_ BitVec 32))
(assert (= x___1 (bvadd x___0 (_ bv1 32))))
(assert (= (bvmul x___1 (_ bv2 32)) (_ bv8 32)))
(check-sat)
(get-model)
(pop)
Results:
sat
(model
(define-fun x___1 () (_ BitVec 32)
(_ bv4 32))
(define-fun x___0 () (_ BitVec 32)
(_ bv3720040335 32))
)
sat
(model
(define-fun x___1 () (_ BitVec 32)
(_ bv4 32))
(define-fun x___0 () (_ BitVec 32)
(_ bv3 32))
)
In the equation where I use bvadd, x___0 gets the value 3, which makes sense.
Why do I get the value 3720040335 in the bvsmod case, which is nowhere near the expected value, i.e., some value x with x % 5 == 4?
There's nothing wrong with the value you are getting; your encoding is just fine.
Notice that you are using 32-bit signed integers (implicitly implied by the call to bvsmod). The model gives you a 32-bit bit-vector whose unsigned decimal equivalent is 3720040335. Interpreted as a signed value, this is actually -574926961, and you can verify that (-574926961) % 5 indeed equals 4, as you requested.
Note that the solver is free to give you any model that satisfies your constraints. If you want a more specific value, you'll need to add additional constraints to encode what "simple" should formally mean.
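A quick pure-Python sanity check of this model (illustrative; note that for a positive divisor, Python's % agrees with SMT-LIB's bvsmod, whose result takes the sign of the divisor):

```python
def to_signed(bits, width=32):
    """Interpret an unsigned `width`-bit value as a two's complement integer."""
    return bits - (1 << width) if bits >= 1 << (width - 1) else bits

x0 = to_signed(3720040335)   # the model value for x___0, read as signed
print(x0)                    # -574926961
x1 = x0 % 5                  # matches (bvsmod x___0 (_ bv5 32)) for this divisor
print(x1)                    # 4
print(x1 * 2 == 8)           # True: the constraint is satisfied
```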
If you want to write the formula like that, you need quantifiers.
I suggest you use SMT expressions instead; sharing will happen for free.
Write, e.g.:
(assert (= (bvmul (bvadd x___0 (_ bv1 32)) (_ bv2 32)) (_ bv8 32)))
If you need the intermediate values, you can always later do an (eval ...)

BitVector in Z3 - functions for different bits

I have this code to check whether elements are contained in sets.
;; All is encoding the set that contains {0, 1, 2, 3, 4, 5}
(define-const All (_ BitVec 6) #b111111)
;; Empty is encoding the empty set
(define-const Empty (_ BitVec 6) #b000000)
(define-fun LT_l ((S (_ BitVec 6)) (l (_ BitVec 6))) Bool
;; True if for all x in S x < l
(= (bvand (bvshl All l) S) Empty))
(define-fun GT_l ((l (_ BitVec 6)) (S (_ BitVec 6))) Bool
;; True if for all x in S l < x
(= (bvand (bvnot (bvshl All l)) S) Empty))
(define-fun is_in ((e (_ BitVec 6)) (S (_ BitVec 6))) Bool
;; True if e is an element of the "set" S.
(not (= (bvand (bvshl (_ bv1 6) e) S) Empty)))
(define-fun is_minimal ((e (_ BitVec 6)) (S (_ BitVec 6))) Bool
(and (is_in e S)
(= (bvand (bvsub (bvshl (_ bv1 6) e) (_ bv1 6)) S) Empty)))
(define-fun LT ((L0 (_ BitVec 6)) (L1 (_ BitVec 6))) Bool
; True if forall x in L0 and forall y in L1, x < y
(or (= L0 Empty)
(= L1 Empty)
(exists ((min (_ BitVec 6))) (and (is_minimal min L1) (LT_l L0 min)))))
(declare-const consoleLock (_ BitVec 6))
(declare-const l1 (_ BitVec 6))
(declare-const l2 (_ BitVec 6))
( assert (distinct consoleLock l1 l2 ) )
( assert (or (= l1 (_ bv0 6)) (= l1 (_ bv1 6)) (= l1 (_ bv2 6)) (= l1 (_ bv4 6)) ))
( assert (or (= l2 (_ bv0 6)) (= l2 (_ bv1 6)) (= l2 (_ bv2 6)) (= l2 (_ bv4 6)) ))
( assert (or (= consoleLock (_ bv0 6)) (= consoleLock (_ bv1 6)) (= consoleLock (_ bv2 6)) (= consoleLock (_ bv4 6)) ))
(declare-const L4 (_ BitVec 6))
(declare-const L1 (_ BitVec 6))
(declare-const L0 (_ BitVec 6))
(declare-const L5 (_ BitVec 6))
(assert (LT_l L0 l1))
(assert (LT L0 L1))
(assert (GT_l L1 l1))
(assert (LT_l L4 l2))
(assert (LT L4 L5))
(assert (GT_l L5 l2))
(declare-const T1 (_ BitVec 6))
(assert (= T1 l1))
(assert (LT_l T1 l2))
(declare-const T2 (_ BitVec 6))
(assert (= T2 l2))
(assert (LT_l T2 l1))
(check-sat)
(get-model)
My problem is that I want to use this code for 8-bit and 16-bit vectors as well, but it doesn't work.
For example, if I replace every (_ BitVec 6) with (_ BitVec 8), the code above does not work correctly: the result should be unsat but it is sat.
With 6-bit vectors it works as expected.
How can I make it work for different bit-vector sizes?
We also have to adjust the constants occurring in the example: #b111111, #b000000, (_ bv1 6), etc. That being said, the SMT-LIB 2.0 format is not very convenient for writing parametric problems; I think the programmatic API is easier to use for encoding them.
Here is the same example encoded using the Z3 Python API. We can change the size of the bit-vectors by replacing SZ = 6 with SZ = 8 or SZ = 16.
from z3 import *

def All(sz):
    return BitVecVal(2**sz - 1, sz)

def Empty(sz):
    return BitVecVal(0, sz)

def LT_l(S, l):
    sz = S.size()
    return (All(sz) << l) & S == Empty(sz)

def GT_l(l, S):
    sz = S.size()
    return (~(All(sz) << l)) & S == Empty(sz)

def is_in(e, S):
    sz = S.size()
    one = BitVecVal(1, sz)
    return (one << e) & S != Empty(sz)

def is_minimal(e, S):
    sz = S.size()
    one = BitVecVal(1, sz)
    return And(is_in(e, S), ((one << e) - 1) & S == Empty(sz))

def LT(L0, L1):
    sz = L0.size()
    min = BitVec('min', sz)
    return Or(L0 == Empty(sz), L1 == Empty(sz), Exists([min], And(is_minimal(min, L1), LT_l(L0, min))))

SZ = 6
consoleLock = BitVec('consoleLock', SZ)
l1 = BitVec('l1', SZ)
l2 = BitVec('l2', SZ)
s = Solver()
s.add(Distinct(consoleLock, l1, l2))
s.add(Or(l1 == 0, l1 == 1, l1 == 2, l1 == 4))
s.add(Or(l2 == 0, l2 == 1, l2 == 2, l2 == 4))
s.add(Or(consoleLock == 0, consoleLock == 1, consoleLock == 2, consoleLock == 4))
L4 = BitVec('L4', SZ)
L1 = BitVec('L1', SZ)
L0 = BitVec('L0', SZ)
L5 = BitVec('L5', SZ)
s.add(LT_l(L0, l1))
s.add(LT(L0, L1))
s.add(GT_l(L1, l1))
s.add(LT_l(L4, l2))
s.add(LT(L4, L5))
s.add(GT_l(L5, l2))
T1 = BitVec('T1', SZ)
s.add(T1 == l1)
s.add(LT_l(T1, l2))
T2 = BitVec('T2', SZ)
s.add(T2 == l2)
s.add(LT_l(T2, l1))
print(s.check())
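As a side note, the bitmask predicates can also be sanity-checked on concrete values with plain Python integers, no solver involved (a sketch; lt_l mirrors LT_l above, masking shift results back to sz bits by hand, since Python ints don't truncate):

```python
def lt_l(S, l, sz):
    """True if every element of the set S (a bitmask) is < l: shifting the
    all-ones mask left by l clears the bits below l, so anding the shifted
    mask with S must give the empty set."""
    all_ones = (1 << sz) - 1
    return ((all_ones << l) & all_ones) & S == 0

sz = 6
S = 0b000011           # the set {0, 1}
print(lt_l(S, 2, sz))  # True: 0 and 1 are both < 2
print(lt_l(S, 1, sz))  # False: element 1 is not < 1
```

Checks like this make it easy to see why the constants must track the bit-width: the all-ones mask is what encodes the universe of the set.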
