Conversion of signed bit-vector expression to integer throws AccessViolationException - z3

I am using the Z3 .NET API and am trying to convert a signed bit-vector expression to an integer using the following code:
using (Context context = new Context())
{
    Expr e = context.MkBV2Int(context.MkBV(-1, 32), true);
}
But I get the following exception:
Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at Microsoft.Z3.Native.LIB.Z3_mk_bv2int(IntPtr a0, IntPtr a1, Int32 a2)
at Microsoft.Z3.Native.Z3_mk_bv2int(IntPtr a0, IntPtr a1, Int32 a2) in Z3 src\src\api\dotnet\Native.cs:line 2386
at Microsoft.Z3.Context.MkBV2Int(BitVecExpr t, Boolean signed) in Z3 src\src\api\dotnet\Context.cs:line 1800
However, conversion of unsigned bit-vectors works fine.
What is the problem, and how do I fix it? Thank you very much in advance.

Thanks for reporting this problem! There was indeed a bug in the lower-level C API. This has now been fixed in the unstable branch (see here).
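For anyone who wants to sanity-check the intended semantics, here is a minimal sketch using the Python bindings (z3py) rather than the .NET API; it assumes a Z3 build that contains the fix:
from z3 import BitVecVal, BV2Int, simplify

x = BitVecVal(-1, 32)                        # the 32-bit pattern 0xFFFFFFFF
print(simplify(BV2Int(x, is_signed=True)))   # -1 (two's-complement reading)
print(simplify(BV2Int(x, is_signed=False)))  # 4294967295 (unsigned reading)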

Dafny as a SAT-QBF solver is not giving right results

I am trying to get into the habit of using Dafny as a friendly SAT-QBF solver for simple formulae, since doing this in, for instance, Z3 is too uncomfortable.
The context is that I have implemented Cooper's algorithm for quantifier elimination; when all the variables are bounded, it can be used as a decision procedure, so I want to know what result I should get before executing it.
However, I encountered a problem in Dafny.
Consider, for instance, this formula (written in Dafny):
assert forall x_1: int :: exists y_1: int :: forall x_2: int :: exists y_2: int
    :: (y_2 < y_1) && (x_2 < y_2) && (x_1 < x_2);
My Cooper implementation returns False, while Dafny reports an assertion violation (along with the typical trigger warnings), which I interpret as False too. Okay, so no problem with this one.
But if I instead consider:
assert exists x_1: int :: exists y_1: int :: exists x_2: int :: exists y_2: int
    :: (y_2 < y_1) && (x_2 < y_2) && (x_1 < x_2);
My Cooper implementation returns True, while Dafny again reports an assertion violation. I have done a manual Cooper execution (pencil and paper), and I think True is the right answer.
Any idea of what is going on?
PS: I have not tried it in Z3 yet, because I am first making attempts with other theories.
EDIT
Trigger warnings can be avoided by using a simple trick to instantiate the quantified variables: introducing an uninterpreted predicate.
method Main() {
    assert exists x_1: int {:trigger P(x_1)} :: exists y_1: int {:trigger P(y_1)}
        :: exists x_2: int {:trigger P(x_2)} :: exists y_2: int {:trigger P(y_2)}
        :: (y_2 < y_1) && (x_2 < y_2) && (x_1 < x_2);
}

predicate P(a: int)
{
    true
}
You cannot do this with Dafny. While Dafny supports quantifiers, booleans, arithmetic, and many other things (recursive functions, sets, sequences, objects and references, multi-dimensional arrays, induction, inductive and coinductive datatypes, bitvectors, greatest and least fixpoints of monotonic functions, etc.), it is not suitable for SAT-QBF (or QBF + arithmetic) benchmarks.
Dafny's errors, including the assertion violation, tell you that the verifier was not able to do the proof. It may be that the property still holds, but you'll need to supply more of the proof yourself. In other words, you should interpret the assertion violation as a "don't know" answer. Stated differently, you cannot decide (only semi-decide) formulas with Dafny.
Dafny passes quantifiers to the SMT solver via matching patterns, aka triggers. When a quantifier has no good triggers, which is what Dafny's "no trigger" warning is telling you, you may see bad performance, unstable verification, and so-called butterfly effects (where a small and seemingly unrelated part of the program causes a change in the automatic construction of other proofs). Triggers are driven by uninterpreted function symbols, which your example doesn't have at all.
If you want a readable syntax, you may be able to do what you're trying through Boogie. I have not tried that, but you could try putting Boogie in its monomorphic mode and then supplying prover options to ask for SAT-QBF or something similar (see Boogie's /help). Otherwise, if you're interested in deciding these problems, then going directly to an SMT solver is the way to go.
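That said, the purely existential formula is easy to check directly with an SMT solver. A minimal sketch in z3py (assuming the Python bindings are installed); since all the quantifiers are existential, it suffices to ask for a satisfying assignment of free variables:
from z3 import Ints, Solver

x1, y1, x2, y2 = Ints('x1 y1 x2 y2')
s = Solver()
s.add(y2 < y1, x2 < y2, x1 < x2)
print(s.check())   # sat, e.g. x1=0, x2=1, y2=2, y1=3, so the formula is True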

E2099 Overflow in conversion or arithmetic operation

I want to compare an Int64 value with a constant like this:
const
  GB = 1073741824;

if DiskFile.Size < 1*GB then
It works with 1 but not with 3:
if DiskFile.Size < 3*GB then
This post (Strange Delphi integer multiplication behavior) explains why. I agree with that explanation. The result of 2*GB cannot fit in Integer. What I don't understand is why the compiler chooses Integer instead of Int64, as in the case of:
if DiskFile.Size < 3073741824 then // almost 3 GB
that works.
Is there any way to write the last line of code in the 3*GB style (using constants), but without defining a new constant for 1GB, 2GB, 3GB, 4GB, etc.?
The first thing to be clear about here is that the integer overflow occurs in the compiler. The compiler has to evaluate your expression because it is a constant expression, and constant expressions are evaluated by the compiler.
The documentation is a little sparse (and I am being kind here) on how the compiler treats your expression. We can infer, at least empirically, that the compiler attempts to perform 3*GB in a signed integer context. That is clear from the error message.
You need to force the compiler to evaluate the expression in an Int64 context. A cast will force that:
if DiskFile.Size < Int64(3)*GB then
....
Another option is to make the constant have type Int64:
const
  GB = Int64(1073741824);
Although I think I'd write it like this:
const
  KB = Int64(1024);
  MB = 1024*KB;
  GB = 1024*MB;
So long as GB is a 64-bit type, you can revert to:
if DiskFile.Size < 3*GB then
....
I'd like to elaborate on my second paragraph above. How can we tell that the compiler performs the arithmetic in a signed 32-bit integer context? The following program suggests that this is so:
{$APPTYPE CONSOLE}

const
  C1 = 715827882; // MaxInt div 3
  C2 = C1 + 1;

begin
  Writeln(3*C1);
  Writeln(3*C2);
  Readln;
end.
The first expression, 3*C1, compiles; the second fails with E2099. The first expression does not overflow a signed 32-bit integer; the second does.
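The boundary is easy to reproduce by hand. A quick Python sketch of the same arithmetic (the names mirror the constants above):
INT32_MAX = 2**31 - 1    # 2147483647, Delphi's MaxInt
C1 = 715827882           # MaxInt div 3
C2 = C1 + 1
print(3 * C1, 3 * C1 <= INT32_MAX)   # 2147483646 True  -> compiles
print(3 * C2, 3 * C2 <= INT32_MAX)   # 2147483649 False -> E2099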
When looking at the documentation, it is unclear whether the true constant 1073741824 should be of type Integer or Cardinal. The compiler could choose either. It seems that the compiler, when presented with a choice between signed and unsigned types, chooses signed types.
But then one might imagine that the following program would behave in the same way, but with Smallint and Word taking the place of Integer and Cardinal:
{$APPTYPE CONSOLE}

const
  C1 = 10922; // High(Smallint) div 3
  C2 = C1 + 1;

begin
  Writeln(3*C1);
  Writeln(3*C2);
  Readln;
end.
But no, this program compiles. So, at this point I am giving up on the documentation, which appears to bear little relationship to the actual behaviour of the compiler.
My best guess is that an integral true constant is handled as follows:
If it is within the range of Integer, it is of type Integer.
Otherwise, if it is within the range of Cardinal, it is of type Cardinal.
Otherwise, if it is within the range of Int64, it is of type Int64.
Otherwise, if it is within the range of UInt64, it is of type UInt64.
Otherwise it is a compiler error.
Of course, all of this assumes that the compiler's rules for evaluating constant expressions follow the same rules as the rest of the language. I'm not certain that is the case.
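To make that guess concrete, here is a small Python sketch of the conjectured rules; it models my reading of the compiler's behaviour, not anything the documentation promises:
def infer_const_type(v):
    # Conjectured type inference for an integral true constant (the rules listed above).
    if -2**31 <= v <= 2**31 - 1:
        return 'Integer'
    if 0 <= v <= 2**32 - 1:
        return 'Cardinal'
    if -2**63 <= v <= 2**63 - 1:
        return 'Int64'
    if 0 <= v <= 2**64 - 1:
        return 'UInt64'
    raise ValueError('compiler error: constant out of range')

print(infer_const_type(1073741824))       # Integer
print(infer_const_type(3 * 1073741824))   # Cardinal
print(infer_const_type(2**40))            # Int64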

erlang an error occurred when evaluating an arithmetic expression

I am a beginner in Erlang.
This is my code:
-module(squsum).
-export([main/0]).

ssum(1) -> 1;
ssum(N) -> N*N + ssum(N-1).

main() ->
    {_, T} = io:fread("", "~d"),
    io:fwrite("~p~n", [ssum(T)]).
Why does it raise the following exception?
** exception error: an error occurred when evaluating an arithmetic expression
in function squsum:ssum/1 (squsum.erl, line 5)
in call from squsum:main/0 (squsum.erl, line 9)
T is a list containing the value; I guess it is symmetric with io:fwrite, which also takes a list of values. So simply use:
main() ->
    {ok, [T]} = io:fread("", "~d"),
    io:fwrite("~p~n", [ssum(T)]).
I'm not too familiar with Erlang, but if it is at all possible for T to be negative, then your recursive function would never terminate.
Similarly, if T = 0, the function would never terminate.
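The termination point is easy to see in any language. Here is a Python sketch of a guarded version of the recursion (an illustration, not a fix for the Erlang code above):
def ssum(n):
    # Guarded base case: terminates for any integer input,
    # unlike a base case that only matches n == 1.
    if n <= 0:
        return 0
    return n*n + ssum(n - 1)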
This is really a comment on Pascal's answer: just to add that io:fread/2 returns a list of the terms specified in the format string, even if the format string has just one term, in which case it returns a list with one element.
The Erlang documentation is quite good; io:fread/2 is documented at http://erlang.org/doc/man/io.html#fread-2

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looked like the following. I got some feedback about a bug when, to my horror, I put a debugger on it and found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, test.length is an unsigned integer and 0 is a signed int. So a simple cast (of either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method in iOS for performing MIN/MAX where the compiler would have warned about mismatched types? This seems like it SHOULD have been a compile-time issue, not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else has had similar issues.
Enabling -Wsign-compare, as suggested in FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: when you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
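You can reproduce that wraparound without any Objective-C. A quick Python simulation of 32-bit unsigned arithmetic (the mask width is an assumption matching a 32-bit NSUInteger):
length = 5                      # [@"short" length]
MASK32 = 2**32 - 1
print((length - 10) & MASK32)   # 4294967291, the wrapped-around unsigned value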
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
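The conversion itself can be simulated to see why the comparison flips; a small Python sketch (assuming 32-bit int and unsigned int):
def to_unsigned32(v):
    # Conversion to unsigned int per the rule quoted above: -1 becomes 2**32 - 1.
    return v % 2**32

s, u = -1, 1
print(to_unsigned32(s) < u)   # False, matching the NSLog output of 0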
It gets even more confusing if you change the type of s to long. Then you'd get the same (incorrect) result on a 32-bit platform (iOS), but in a 64-bit Mac app it would work just fine! (Explanation: long is a 64-bit type there, so it can represent all 32-bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra), you will get a warning that looks like the following:
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This lets you place casts in the right places, if necessary, and you shouldn't need to rewrite the MAX or MIN macros.

Z3 Performance with Non-Linear Arithmetic

We are running into performance problems with what I believe is the part of Z3 that handles non-linear arithmetic. Here is a simple, concrete Boogie example that, when verified with Z3 (version 4.1), takes a long time (on the order of 3 minutes) to complete.
const D: int;

function f(n: int) returns (int) { n * D }

procedure test() returns ()
{
    var a, b, c: int;
    var M: [int]int;
    var t: int;

    assume 0 < a && 1000 * a < f(1);
    assume 0 < c && 1000 * c < f(1);
    assume f(100) * b == a * c;
    assert M[t] > 0;
}
It seems that the problem is caused by an interaction of functions, range assumptions on integer variables, and multiplications of (unknown) integer values. The assertion at the end should not be provable. Instead of failing quickly, Z3 seems to find ways to instantiate lots of terms; its memory consumption grows quickly to about 300 MB, at which point it gives up.
I'm wondering if this is a bug, or whether it is possible to improve the heuristics that decide when Z3 should stop searching in the particular direction it is currently exploring.
One interesting thing is that inlining the function by using
function {:inline} f(n: int) returns (int) { n * D }
makes the verification terminate very quickly.
Background: This is a minimal test case for a problem that we see in our verifier Chalice. There, the Boogie programs get much longer, potentially with multiple assumptions of a similar kind. Often, the verification does not appear to terminate at all.
Any ideas?
Yes, the non-termination is due to nonlinear integer arithmetic. Z3 has a new nonlinear solver, but it is for nonlinear real arithmetic and can only be used on quantifier-free problems that contain only arithmetic (i.e., no uninterpreted functions like those in your example).
Thus, the old arithmetic solver is used in your example. This solver has very limited support for integer arithmetic. Your analysis of the problem is correct. Z3 has trouble finding a solution for the block of nonlinear integer constraints. Note that if we replace f(100) * b == a * c with f(100) * b <= a * c, then Z3 returns immediately with an "unknown" answer.
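For reference, here is a rough z3py rendering of the constraints. It is my approximation of what Boogie emits, not its exact output; in particular, encoding the function body as a universal axiom is an assumption, and the final array assertion is omitted since the nonlinear assumptions are what matter here:
from z3 import Int, Ints, Function, IntSort, ForAll, Solver

D = Int('D')
f = Function('f', IntSort(), IntSort())
n = Int('n')
a, b, c = Ints('a b c')

s = Solver()
s.add(ForAll([n], f(n) == n * D))   # the Boogie function definition as an axiom
s.add(0 < a, 1000 * a < f(1))
s.add(0 < c, 1000 * c < f(1))
s.add(f(100) * b == a * c)
print(s.check())   # may run for a very long time or answer unknown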
We can avoid the non-termination by limiting the amount of nonlinear arithmetic reasoning Z3 performs. The option NL_ARITH_ROUNDS=100 will use the nonlinear module at most 100 times for each Z3 query. This is not an ideal solution, but at least we avoid the non-termination.
