I am trying to learn Z3, so this question might be silly.
Why do I get an unexpected value for x___0 from Z3 when I use bvsmod, compared to bvadd, in the following code? I'm using SSA to implement the execution flow here.
Z3 instructions:
(set-option :pp.bv-literals false)
;
; The code
; x %= 5
; x * 2 == 8
; Implement SSA
; x1 = x0 % 5
; x1 * 2 == 8
;
(push)
(set-info :status unknown)
(declare-const x___0 (_ BitVec 32))
(declare-const x___1 (_ BitVec 32))
(assert (= x___1 (bvsmod x___0 (_ bv5 32))))
(assert (= (bvmul x___1 (_ bv2 32)) (_ bv8 32)))
(check-sat)
(get-model)
(pop)
;
; The code
; x += 1
; x * 2 == 8
; Implement SSA
; x1 = x0 + 1
; x1 * 2 == 8
;
(push)
(declare-const x___0 (_ BitVec 32))
(declare-const x___1 (_ BitVec 32))
(assert (= x___1 (bvadd x___0 (_ bv1 32))))
(assert (= (bvmul x___1 (_ bv2 32)) (_ bv8 32)))
(check-sat)
(get-model)
(pop)
Results:
sat
(model
(define-fun x___1 () (_ BitVec 32)
(_ bv4 32))
(define-fun x___0 () (_ BitVec 32)
(_ bv3720040335 32))
)
sat
(model
(define-fun x___1 () (_ BitVec 32)
(_ bv4 32))
(define-fun x___0 () (_ BitVec 32)
(_ bv3 32))
)
In the case of the equation where I use bvadd, x___0 gets the value 3, which makes sense.
Why do I get the value 3720040335 in the bvsmod case, which is nowhere near the expected value, i.e., some small value whose remainder modulo 5 is 4?
There's nothing wrong with the value you are getting. Your encoding is just fine.
Notice that you are working with 32-bit bit-vectors, treated as signed by the call to bvsmod. The model gives you a 32-bit bit-vector value whose unsigned decimal rendering is 3720040335. Interpreted as a signed (two's complement) value, this is actually -574926961, and you can verify that (-574926961) % 5 indeed equals 4, as you requested.
Note that the solver is free to give you any model that satisfies your constraints. If you want a more specific value, you'll need to add additional constraints that formally encode what a "simple" value should mean.
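To see this concretely, here is a small Python check (not part of the original answer; plain Python standing in for the bit-vector semantics) that reinterprets the unsigned model value as a signed 32-bit integer and verifies the remainder. Note that Python's % happens to agree with bvsmod here, since both give the remainder the sign of the (positive) divisor:

```python
def to_signed32(u):
    # Reinterpret an unsigned 32-bit value as two's complement.
    return u - (1 << 32) if u >= (1 << 31) else u

x0 = 3720040335            # model value reported for x___0
s = to_signed32(x0)
assert s == -574926961     # the signed reading of the same bits
assert s % 5 == 4          # so x___1 == 4, and x___1 * 2 == 8 holds
```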
If you want to write the formula like that, you need quantifiers.
I suggest you use SMT expressions instead; sharing will happen for free.
Write, e.g.:
(assert (= (bvmul (bvadd x___0 (_ bv1 32)) (_ bv2 32)) (_ bv8 32)))
If you need the intermediate values, you can always do an (eval ...) afterwards.
My program reads the constraints from a smt2 file, and all the variables are defined as an array. For example
(declare-fun x () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun y () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert (bvslt (concat (select x (_ bv3 32) ) (concat (select x (_ bv2 32) ) (concat (select x (_ bv1 32) ) (select x (_ bv0 32) ) ) ) ) (concat (select y (_ bv3 32) ) (concat (select y (_ bv2 32) ) (concat (select y (_ bv1 32) ) (select y (_ bv0 32) ) ) ) ) ) )
(check-sat)
(exit)
Some other constraints are omitted. Sometimes the solver gives a value of x as:
(store (store (store ((as const (Array (_ BitVec 32) (_ BitVec 8))) #xfe)
#x00000002
#x00)
#x00000001
#xff)
#x00000003
#x80)
According to the definition, each element of the array is a byte, so the concatenated value here is 0x8000fffe. This value is beyond the upper bound of a signed 32-bit integer in C++; when I convert it back to int, it is a negative value. So I guess Z3 treats all variables defined via such an array as unsigned ints.
For example, if the constraint is x > y, the solver may give
x = 0x8000fffe
and
y = 0x00000001. The values satisfy the constraint under unsigned comparison, but under signed comparison x is negative and y is positive, so it is wrong. I am wondering if there is a way to tell the solver that the numbers are signed when defining them as an array?
Added 22:26:43 09/14/2019
I got two smt2 files, one is
(set-logic QF_AUFBV )
(declare-fun x () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun y () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert (bvslt (concat (select x (_ bv3 32) ) (concat (select x (_ bv2 32) ) (concat (select x (_ bv1 32) ) (select x (_ bv0 32) ) ) ) ) (concat (select y (_ bv3 32) ) (concat (select y (_ bv2 32) ) (concat (select y (_ bv1 32) ) (select y (_ bv0 32) ) ) ) ) ) )
(check-sat)
(exit)
The constraint is simply x < y.
The other one is
(set-logic QF_AUFBV )
(declare-fun x () (Array (_ BitVec 32) (_ BitVec 8) ) )
(declare-fun y () (Array (_ BitVec 32) (_ BitVec 8) ) )
(assert (let ( (?B1 (concat (select y (_ bv3 32) ) (concat (select y (_ bv2 32) ) (concat (select y (_ bv1 32) ) (select y (_ bv0 32) ) ) ) ) ) (?B2 (concat (select x (_ bv3 32) ) (concat (select x (_ bv2 32) ) (concat (select x (_ bv1 32) ) (select x (_ bv0 32) ) ) ) ) ) ) (let ( (?B3 (bvsub ?B1 ?B2 ) ) ) (and (and (and (and (and (= false (= (_ bv0 32) ?B2 ) ) (= false (= (_ bv0 32) ?B1 ) ) ) (= false (bvslt ?B1 ?B2 ) ) ) (= false (= (_ bv0 32) ?B3 ) ) ) (= false (bvslt ?B3 ?B2 ) ) ) (= (_ bv0 32) (bvsub ?B3 ?B2 ) ) ) ) ) )
(check-sat)
(exit)
which is
[(! (0 == x)),
(! (0 == y)),
(! ( y < x)),
(! (0 ==( y - x))),
(! (( y - x) < x)),
(0 ==(( y - x) - x)) ]
These smt2 files are generated by KLEE. The solver gives
x = (store (store (store ((as const (Array (_ BitVec 32) (_ BitVec 8))) #xfe)
#x00000002
#x00)
#x00000001
#xff)
#x00000003
#x80)
y = (store (store (store ((as const (Array (_ BitVec 32) (_ BitVec 8))) #xfc)
#x00000002
#x01)
#x00000001
#xff)
#x00000003
#x00)
so x=0x8000fffe and y=0x0001fffc. Converted to unsigned decimal, we have x=2147549182 and y=131068, so y-x-x is -4294967296, not 0. The solver considers this satisfied because 4294967296 in binary is
1 00000000 00000000 00000000 00000000
where the "1" is the 33rd bit, which gets truncated in 32-bit arithmetic; so -4294967296 is reduced to 0x00000000 in memory. This is the reason I asked this question. x and y should be signed integers, so 0x8000fffe read as a two's complement 32-bit value is -2147418114, while y is 131068; then y-x-x is 131068 - 2*(-2147418114) = 4294967296, which is apparently not 0 either. So in terms of mathematical integers, the values don't satisfy the constraints; the expression y - x - x seems to be computed with 32-bit wraparound rules.
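A quick Python check of the arithmetic above (using the model values from the solver output; Python's unbounded ints make the truncation explicit) shows why the constraint holds modulo 2^32:

```python
MASK = 0xFFFFFFFF          # keep only the low 32 bits, as bvsub does
x = 0x8000FFFE             # model value for x
y = 0x0001FFFC             # model value for y

# As mathematical integers, y - x - x is nowhere near zero:
assert y - x - x == -4294967296
# But modulo 2**32 (i.e., with the 33rd bit dropped) it is exactly 0:
assert (y - x - x) & MASK == 0
```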
Bit-vectors have no signs
There's no notion of signed or unsigned bit-vector in SMTLib. A bit-vector is simply a sequence of bits, without any attached semantics as to how to treat it as a number.
It is the operations, however, that distinguish signedness. This is why you have both bvslt and bvult, for signed and unsigned less-than comparisons, for instance. You might want to read the logic description here: http://smtlib.cs.uiowa.edu/theories-FixedSizeBitVectors.shtml
Long story short, all the solver is telling you is that the result contains these bits; how you interpret that as an unsigned word or a signed 2's complement number is totally up to you. Note that this perfectly matches how machine arithmetic is done in hardware, where you simply have registers that contain bit-sequences. It's the instructions that treat the values according to whatever convention they might choose to do so.
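To illustrate with plain Python (an analogy, not Z3 itself; the helper names are made up): the very same 32-bit pattern compares differently depending on whether you give it an unsigned or a signed reading, just as bvult and bvslt would:

```python
def unsigned32(u):
    # Unsigned reading: just the low 32 bits.
    return u & 0xFFFFFFFF

def signed32(u):
    # Signed (two's complement) reading of the same bits.
    u &= 0xFFFFFFFF
    return u - (1 << 32) if u >= (1 << 31) else u

x, y = 0x8000FFFE, 0x00000001
# "bvult"-style: x is a huge positive number, so x > y ...
assert unsigned32(x) > unsigned32(y)
# ... "bvslt"-style: the same bits are negative, so x < y.
assert signed32(x) < signed32(y)
```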
I hope that's clear; feel free to ask about a specific case; posting full programs is always helpful as well, so long as they abstract away from details and describe what you're trying to do.
Also see this earlier question that goes into a bit more detail: How to model signed integer with BitVector?
Avoiding overflow/underflow
You can ask z3 to avoid overflow/underflow during bit-vector arithmetic. However, this requires adding extra assertions for each operation you want to perform, so it can get rather messy. (Also, it looks like you want to use KLEE; I'm not sure whether KLEE allows you to do this in the first place.) The technique is explained in detail in this paper: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/z3prefix.pdf
In particular, you want to read through Section 5.1, where the authors describe how to "annotate" each arithmetic operation and explicitly assert that it does not overflow. For instance, if you want to make sure an addition doesn't overflow, you first zero-extend your bit-vectors from 32 bits to 33 bits, do the addition, and check whether the 33rd bit of the result is 1. To avoid overflow, you simply assert that this bit cannot be 1. Here's an example:
; Two 32-bit variables
(declare-fun x () (_ BitVec 32))
(declare-fun y () (_ BitVec 32))
; Zero-Extend them to 33-bits
(define-fun x33 () (_ BitVec 33) (concat #b0 x))
(define-fun y33 () (_ BitVec 33) (concat #b0 y))
; Add them
(define-fun extendedAdd () (_ BitVec 33) (bvadd x33 y33))
; Get the top bit of the extended sum (the carry-out)
(define-fun signBit () (_ BitVec 1) ((_ extract 32 32) extendedAdd))
; Assert that the addition won't overflow:
(assert (= signBit #b0))
; Regular addition result:
(define-fun addResult () (_ BitVec 32) ((_ extract 31 0) extendedAdd))
; Now you can use addResult as the result of x+y; and you'll
; be assured that this addition will never overflow
(check-sat)
(get-model)
You'd also have to check for underflow at each operation, further adding complexity.
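The zero-extension trick above can be prototyped with ordinary Python integers (a sketch of the paper's technique, not a Z3 API; Python ints are unbounded, so the 33rd bit simply survives the addition):

```python
def add_checked_u32(x, y):
    # Conceptually zero-extend both 32-bit operands to 33 bits and add.
    ext = (x & 0xFFFFFFFF) + (y & 0xFFFFFFFF)   # fits in 33 bits
    # Bit 32 of the extended sum is exactly the bit the SMT assertion
    # (assert (= signBit #b0)) forbids:
    overflowed = (ext >> 32) & 1 == 1
    return ext & 0xFFFFFFFF, overflowed

# 0xFFFFFFFF + 1 wraps to 0 and sets the carry bit:
assert add_checked_u32(0xFFFFFFFF, 1) == (0, True)
assert add_checked_u32(2, 3) == (5, False)
```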
As you can see, this can get very hairy, and the rules for multiplication are actually quite tricky. To simplify this, z3 provides built-in primitives for multiplication overflow checking, called:
bvsmul_noovfl: True only if signed multiplication doesn't overflow
bvsmul_noudfl: True only if signed multiplication doesn't underflow
bvumul_noovfl: True only if unsigned multiplication doesn't overflow
There is no predicate for checking whether an unsigned multiplication can underflow, because that cannot happen. But the point remains: you have to annotate each operation and explicitly assert the relevant conditions. This is best done by a higher-level API during code generation, and some z3 bindings do support such operations. (For instance, see http://hackage.haskell.org/package/sbv-8.4/docs/Data-SBV-Tools-Overflow.html for how the Haskell layer on top of SMT solvers handles this.) If you'll do this at scale, you probably want to build some mechanism that generates these assertions for you automatically, as doing it manually would be extremely error-prone.
Or you can switch to the Int type, which never overflows! But then, of course, you're no longer modeling an actual running program but reasoning about unbounded integer values, which might be acceptable depending on your problem domain.
Is there an easier way to get immediate answers when doing bit-vector ops, e.g. a = 1 million, b = 0, what is a & b? (Answer: 0.)
This method works, but I have to introduce a dummy variable to store the answer:
(declare-const a (_ BitVec 64))
(declare-const b (_ BitVec 64))
(declare-const ans (_ BitVec 64))
(assert (= a (_ bv1000000 64)))
(assert (= b (_ bv0000000 64)))
(assert (= ans (bvand a b)))
(check-sat)
(get-model)
This method is closer to what I'd like, but my code gives back a De Morgan identity:
(declare-const a (_ BitVec 64))
(declare-const b (_ BitVec 64))
(simplify (bvand a b))
You can use the model to evaluate arbitrary expressions, for instance like this:
(declare-const a (_ BitVec 64))
(declare-const b (_ BitVec 64))
(assert (= a (_ bv1000000 64)))
(assert (= b (_ bv0000000 64)))
(check-sat)
(eval (bvand a b))
says
sat
#x0000000000000000
I didn't test it, but something like (apply (then propagate-values simplify)) should do the trick.
Is there an efficient way to extract bit number i from a bit-vector when i has Int type? In other words, is there an efficient SMT script that does what the following script does?
(declare-fun int-index () Int)
(assert (and (>= int-index 0) (<= int-index 21)))
(declare-fun bv1 () (_ BitVec 22))
(define-fun getbit ((x (_ BitVec 22)) (bv-index (_ BitVec 22))) (_ BitVec 1)
((_ extract 0 0) (bvlshr x bv-index)))
(assert (= #b1 (getbit bv1 ((_ int2bv 22) int-index))))
(check-sat-using (! smt :bv.enable_int2bv true) :print_model true)
Thank you in advance.
Not really; you need to create a "big" if-then-else term that does case analysis on bv-index and then uses the (_ extract index index) function, where index has to be a constant.
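As a sketch of what that looks like mechanically, here is a small Python helper (hypothetical, not a Z3 API; the names bv1 and int-index come from the question) that emits the nested if-then-else SMT-LIB term:

```python
def getbit_ite(x_name, idx_name, width):
    """Build an SMT-LIB term selecting ((_ extract i i) x) by an Int index.

    Every (_ extract i i) has a constant index, as required; the Int
    variable only appears in the equality tests of the ite chain.
    """
    expr = "#b0"  # fallback value for out-of-range indices
    for i in range(width):
        expr = (f"(ite (= {idx_name} {i}) "
                f"((_ extract {i} {i}) {x_name}) {expr})")
    return expr

# For the 22-bit vector from the question:
term = getbit_ite("bv1", "int-index", 22)
```

The resulting term can be asserted equal to #b1 in place of the getbit call, at the cost of a linear-size case split.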
I have the following piece of code:
(declare-const L4 (_ BitVec 6))
(declare-const L1 (_ BitVec 6))
(declare-const L0 (_ BitVec 6))
(declare-const l2 (_ BitVec 6))
(assert (= l2 (_ bv8 6)))
;; All is encoding the set that contains {0, 1, 2, 3, 4, 5}
(define-const All (_ BitVec 6) #b111111)
;; Empty is encoding the empty set
(define-const Empty (_ BitVec 6) #b000000)
(define-fun LT_l ((S (_ BitVec 6)) (l (_ BitVec 6))) Bool
;; True if for all x in S x < l
(= (bvand (bvshl All l) S) Empty))
(define-fun is_in ((e (_ BitVec 6)) (S (_ BitVec 6))) Bool
;; True if e is an element of the "set" S.
(not (= (bvand (bvshl (_ bv1 6) e) S) Empty)))
(define-fun is_minimal ((e (_ BitVec 6)) (S (_ BitVec 6))) Bool
;; True if e is the minimal element of S
(and (is_in e S) ;; S contains e
;; (1 << e) - 1 represents the set of elements that are smaller than e
(= (bvand (bvsub (bvshl (_ bv1 6) e) (_ bv1 6)) S) Empty)))
;; To encode that forall x in L0 and forall y in L1. x < y
(define-fun LT ((L0 (_ BitVec 6)) (L1 (_ BitVec 6))) Bool
; True if forall x in L0 and forall y in L1, x < y
(or (= L0 Empty)
(= L1 Empty)
(exists ((min (_ BitVec 6))) (and (is_minimal min L1) (LT_l L0 min)))))
(assert (not (= L0 Empty)))
(assert (not (= L1 Empty)))
(assert (not (= L4 Empty)))
(assert (LT_l L4 l2))
(assert (LT L0 L1))
(check-sat)
(get-model)
(assert (LT L1 L0))
(check-sat)
When I run this code, the model I get is:
sat
(model
(define-fun min!0 () (_ BitVec 6)
#b000011)
(define-fun L1 () (_ BitVec 6)
#b001000)
(define-fun L0 () (_ BitVec 6)
#b000100)
(define-fun L4 () (_ BitVec 6)
#b000100)
(define-fun l2 () (_ BitVec 6)
#b001000)
)
unsat
Why is the result for min:
(define-fun min!0 () (_ BitVec 6)
#b000011)
and not #b001000, since the smallest value of L1 is that and not #b000011?
Can someone explain?
Finally, I defined the function LT_l that checks whether for all x in S, x < l. Now I want GT_l, which checks whether for all x in S, l < x. I have the following code:
(define-fun GT_l ((S (_ BitVec 6)) (l (_ BitVec 6))) Bool
(= (bvand (bvneg (bvshl (_ bv0 6) l)) S) Empty))
But this does not work. Why?
Thanks
In your example, you are representing sets using bit-vectors. For example, the bit-vector #b101000 represents the set {5, 3}. The output (define-fun L1 () (_ BitVec 6) #b001000) is essentially saying that L1 is the "set" {3}. One possible confusion is that bit-vectors are being used to represent sets and elements. The bit-vector min!0 represents an element. The output (define-fun min!0 () (_ BitVec 6) #b000011) is saying that min!0 is the value 3, and it is indeed the "minimal value" in L1.
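To make the encoding concrete, here is a small Python mirror of the SMT definitions above (a sketch: ordinary ints stand in for 6-bit vectors, and the function names match the SMT ones):

```python
def is_in(e, S):
    # True if element e is in the "set" S: bit e of S is set.
    return (S >> e) & 1 == 1

def is_minimal(e, S):
    # True if e is in S and S contains nothing smaller:
    # (1 << e) - 1 is the set of all elements below e.
    return is_in(e, S) and S & ((1 << e) - 1) == 0

# L1 = #b001000 is the *set* {3}; min!0 = #b000011 is the *element* 3:
assert is_minimal(0b000011, 0b001000)
```

Running is_minimal(3, 0b001000) confirms that the element 3 (printed as #b000011) really is the minimal element of the set #b001000.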
Following up to the previous discussion: Z3: Extracting existential model-values
Is there a difference between:
(declare-fun sx ((_ BitVec 16)) (_ BitVec 16))
(assert (forall ((y (_ BitVec 16))) (bvuge y (sx y))))
And
(declare-fun sx ((_ BitVec 16)) (_ BitVec 16))
(declare-fun y () (_ BitVec 16))
(assert (bvuge y (sx y)))
As far as Z3 is concerned? That is, will I still get the QBVF solver for the latter automatically?
Also, upon experimentation I found that if I issue:
(eval (sx #x8000))
After a (check-sat) call, it works fine (which is great). What would be better is if I could also say:
(eval (sx (get-value (y))))
Alas, for that query Z3 complains that it's an invalid function application. Is there a way to use the eval function in that manner?
Thanks!
The scripts
(declare-fun sx ((_ BitVec 16)) (_ BitVec 16))
(assert (forall ((y (_ BitVec 16))) (bvuge y (sx y))))
and
(declare-fun sx ((_ BitVec 16)) (_ BitVec 16))
(declare-fun y () (_ BitVec 16))
(assert (bvuge y (sx y)))
are not equivalent. The second is actually equisatisfiable to
(declare-fun sx ((_ BitVec 16)) (_ BitVec 16))
(assert (exists ((y (_ BitVec 16))) (bvuge y (sx y))))
Regarding the eval command, you can reference any uninterpreted constant and function symbol. Thus, you can write:
(declare-fun sx ((_ BitVec 16)) (_ BitVec 16))
(declare-fun y () (_ BitVec 16))
(assert (bvuge y (sx y)))
(check-sat)
(eval (sx y))
The command (eval (sx y)) will not work for the first script because there y is a universally quantified variable.