Dafny handling bit vectors

I'm trying to use Dafny with (unsigned) bitvectors (following this post).
The following simplified example (permalink) works fine, but when I change it to bv32, I get:
Unexpected prover response: timeout
Is this a bug, or an expected performance gap between the two?
Here is the code, to make this post self-contained:
method bitvectors()
{
  var a: bv16 := 0;
  // var a: bv32 := 0;
  ghost var atag := a;
  while (a < 0xFFFF)
    // while (a < 0xFFFFFFFF)
    invariant atag < 0xFFFF
    // invariant atag < 0xFFFFFFFF
  {
    atag := a;
    a := a + 1;
  }
}

I'm hoping someone else has a better answer... but basically this is why I stay away from bitvectors :)
I did a little bit of digging, and it seems that on this particular example Dafny gets stuck in the termination check for the loop. At the Boogie level, comparing bitvectors involves converting them to mathematical integers, and then to real numbers, and then comparing those. It's pretty common for solvers to have trouble with these conversion functions, because they cut across different theories.
Sorry I couldn't be more helpful.
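One thing that sometimes helps in cases like this (untested on this example, so treat it purely as a sketch) is to spell out the termination measure yourself as a mathematical integer, rather than leaving the prover to work out the bitvector ordering on its own:

method bitvectors32()
{
  var a: bv32 := 0;
  ghost var atag := a;
  while (a < 0xFFFFFFFF)
    invariant atag < 0xFFFFFFFF
    decreases 0xFFFFFFFF - (a as int)  // explicit integer-valued termination measure
  {
    atag := a;
    a := a + 1;
  }
}

Whether this actually avoids the timeout will depend on the Dafny and Z3 versions involved.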

Related

Plotting dual recursion equations with wxplot2d always exceeds control stack

I have the following Maxima code:
A[t] := if t = 0 then A0
        else (a+b)*A[t-1] + B[t-1] + c;
B[t] := if t = 0 then B0
        else (a-b)*B[t-1] + c;
a: 0.1;
b: 0.1;
c: 1;
A0: 100;
B0: 0;
wxplot2d(A[t], [t, 0, 100]);
The only remotely weird thing I can think of is that recursion equation A depends on recursion equation B. I would think everything else is extremely basic.
But when I run it, I always get the following error repeated multiple times and no plot.
Maxima encountered a Lisp error:
Control stack exhausted (no more space for function call frames).
This is probably due to heavily nested or infinitely recursive function
calls, or a tail call that SBCL cannot or has not optimized away.
Even when I plot from time steps 0 to 1 with wxplot2d(A[t], [t, 0, 1]);, which by my count would only be two recursions and one external function reference, I still get the same error. Is there no way to have Maxima plot these equations?
I find that the following seems to work.
myvalues: makelist ([t, A[t]], t, 0, 100);
wxplot2d ([discrete, myvalues]);
Just to be clear, A[t] := ..., with square brackets, defines what is called an array function in Maxima, which is a memoizing function (i.e. it remembers previously calculated values). An ordinary, non-memoizing function is defined as A(t) := ..., with parentheses.
Given that A and B are defined only for nonnegative integers, it makes sense for them to be memoizing functions, so there's no need to change that.
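As a tiny illustration of the difference (my own example, not from the question):

fib[n] := if n < 2 then n else fib[n-1] + fib[n-2];    /* array (memoizing) function */
fib2(n) := if n < 2 then n else fib2(n-1) + fib2(n-2); /* ordinary function: recomputes every call */
fib[25];   /* each value is computed once and remembered */
fib2(25);  /* exponentially many recursive calls */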

Parsing procedure calls for a toy language

I have a certain toy language that defines, amongst other things, procedures and procedure calls, using EBNF syntax:
program = procedure, { procedure } ;
procedure = "procedure", NAME, bracedBlock ;
bracedBlock = "{", statementlist, "}" ;
statementlist = statement, { statement } ;
statement = define | if | while | call ; // other statement kinds omitted for brevity
define = NAME, "=", expression, ";" ;
if = "if", conditionalblock, "then", bracedBlock, "else", bracedBlock ;
call = "call", NAME, ";" ;
// other definitions omitted for brevity
A tokeniser for a program in this language has been implemented, and returns a vector of tokens.
Now, parsing said program without the procedure calls is fairly straightforward: one can define a recursive descent parser using the above grammar directly, and simply parse through the tokens. Some further notes:
Each procedure may call any other procedure except itself, directly or indirectly (i.e. no recursion), and these need not necessarily be in the order of appearance in the source code (i.e. B may be defined after A, and A may call B, or vice versa).
Procedure names need to be unique, and 'reserved keywords' may be used as variable/procedure names.
Whitespace does not matter, at least amongst tokens of different type: similar to C/C++.
There is no scoping rule: all variables are global.
The concept of a 'line number' is important: each statement has one or more line numbers associated with it: define statements have only 1 line number each, for instance, whereas an if statement, which is itself a parent of two statement lists, has multiple line numbers. For instance:
LN  CODE
    procedure A {
1.    a = 5;
2.    b = 7;
3.    c = 3;
4. 5. if (b < c) then { call C; } else {
6.      call B;
      }
    }
    procedure B {
7.    d = 5;
8.    while (d > 2) {
9.      d = d + 1; }
    }
    procedure C {
10.   e = 10;
11.   f = 8;
12.   call B;
    }
Line numbers are continuous throughout the program; only procedure definitions and the else keyword aren't assigned line numbers. The line numbers are defined by the grammar, rather than by position in the source code: for instance, consider 'lines' 4 and 5.
There are some relationships that need to be set in a database given each statement and its line number, variables used, variables set, and child containers. This is a key consideration.
My question is therefore this: how can I parse these function calls, maintain the integrity of the line numbers, and set the relationships?
I have considered the 'OS' way of doing things: upon encounter of a procedure call, look ahead for a procedure that matches said called procedure, parse the callee, and unroll the call stack back to the caller. However, this ruins the line number ordering: if the above program were to be parsed this way, C would have line numbers 6 to 8 inclusive, rather than 10 to 12 inclusive.
Another solution is to parse the entire program once in order, maintain a toposort of procedure calls, and then parse a second time by following said toposort. This is problematic because of implementation details.
Is there a possibly better way to do this?
It's always tempting to try to completely process a program text in a single on-line pass. Unfortunately, it is practically never the simplest solution. Trying to do everything at once in a linear progression results in a kind of spaghetti of intertwined computations, and making it all work almost always involves unnecessary restrictions on the language which will later prove to be unfortunate.
So I'd encourage you to reconsider some of your design decisions. If you use the parser just to build up some kind of structural representation of the program -- whether it's an abstract syntax tree or a vector of three-address code, or some other alternative -- and then do further processing in a series of single-purpose passes over that structural representation, you'll likely find that the code is:
much simpler, because computations don't have to be intermingled;
more general, because each pass can be done in the most convenient order rather than restricting inputs to fit a linear ordering;
more readable and more maintainable.
Persisting data structures over multiple passes might increase storage requirements slightly. But the structures are unlikely to occupy enough storage that this will be noticeable. And it probably will not increase the computation time; indeed, it might even reduce the time because the individual passes are simpler and easier to optimise.
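For concreteness, here is a very rough sketch (in C++, with made-up type and member names, so treat it only as an illustration of the two-pass shape): pass 1 parses every procedure in source order, assigning line numbers as it goes, and stores the result in a map keyed by procedure name; pass 2 then walks the stored statements and resolves every call NAME against that map, so neither the definition order nor the call order matters.

#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// One parsed statement; nested statement lists (if/while bodies) live in children.
struct Statement {
    int lineNumber = 0;        // assigned during pass 1, strictly in source order
    std::string kind;          // "define", "if", "while", "call", ...
    std::string calleeName;    // only meaningful when kind == "call"
    std::vector<Statement> children;
};

struct Procedure {
    std::string name;
    std::vector<Statement> body;
};

// Pass 1 (parsing itself omitted) fills this map, keyed by the unique procedure name.
using Program = std::map<std::string, Procedure>;

// Pass 2: resolve every call site against the procedures collected in pass 1.
void checkCalls(const Program& program, const std::vector<Statement>& stmts) {
    for (const Statement& stmt : stmts) {
        if (stmt.kind == "call" && program.find(stmt.calleeName) == program.end())
            throw std::runtime_error("call to undefined procedure " + stmt.calleeName);
        checkCalls(program, stmt.children);   // recurse into if/while bodies
    }
}

void checkProgram(const Program& program) {
    for (const auto& entry : program)
        checkCalls(program, entry.second.body);
}

The database relationships you mention (line numbers, variables used/set, child containers) can then each be computed in their own pass over the same stored structure.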

Dafny iterator: precondition and modifies clause violated

Dafny shows multiple errors when calling MoveNext() on an iterator that does nothing:
iterator Iter()
{}

method main()
  decreases *
{
  var iter := new Iter();
  while (true)
    decreases *
  {
    var more := iter.MoveNext();
    if (!more) { break; }
  }
}
The errors are on the call to iter.MoveNext():
call may violate context's modifies clause
A precondition for this call might not hold.
There is no modifies clause for main or Iter, and there is no precondition for Iter. Why is this program incorrect?
You need the following invariant on the loop:
  invariant iter.Valid() && fresh(iter._new)
Then your program verifies. As usual, there's nothing wrong (dynamically) with your program, but you can have false positives at verification time due to missing annotations.
As far as I know, this invariant is always required when using iterators.
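For reference, here is the program from the question with that invariant added in the place where it belongs:

iterator Iter()
{}

method main()
  decreases *
{
  var iter := new Iter();
  while (true)
    invariant iter.Valid() && fresh(iter._new)
    decreases *
  {
    var more := iter.MoveNext();
    if (!more) { break; }
  }
}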
(A little) More information about iterators can be found in the Dafny Reference, in Chapter 16. (At least, enough information for me to remember the answer to this question.)

Can we bound the precision of the real terms in Z3?

In one of my SMT programs, I use a real term. I need to bound the precision of the real number to improve efficiency, as an almost infinite number of solutions is possible for this number, although only 5 or 6 digits after the decimal point are necessary. For example, the possible valuations of the real number can be the following; all of them are the same if we take the first seven digits after the decimal point.
1197325/13631488 = 0.087835238530......
19157213/218103808 = 0.087835298134......
153257613/1744830464 = 0.087835245980......
1226060865/13958643712 = 0.087835243186......
I want the SMT solver to consider all four of these numbers as a single number (so that the search space shrinks). Is there any way to control the precision of the real number?
I tried to solve the above problem programmatically (using the Z3 .NET API), as shown below. Here DelBP[j] is a real term.
{
    BoolExpr[] _Exprs = new BoolExpr[nBuses];
    for (j = 1; j <= nBuses; j++)
    {
        _Exprs[j - 1] = z3.MkEq(DelBP[j], z3.MkDiv(z3.MkInt2Real(DelBP_A[j]), z3.MkInt2Real(DelBP_B[j])));
    }
    BoolExpr Expr = z3.MkAnd(_Exprs);
    s.Assert(Expr);
    tw.WriteLine("(assert {0})", Expr.ToString());
}
{
    BoolExpr[] _Exprs = new BoolExpr[nBuses];
    for (j = 1; j <= nBuses; j++)
    {
        _Exprs[j - 1] = z3.MkAnd(z3.MkGe(DelBP_A[j], z3.MkInt(1)),
                                 z3.MkLe(DelBP_A[j], z3.MkInt(10000)));
    }
    BoolExpr Expr = z3.MkAnd(_Exprs);
    s.Assert(Expr);
    tw.WriteLine("(assert {0})", Expr.ToString());
}
{
    BoolExpr[] _Exprs = new BoolExpr[nBuses];
    for (j = 1; j <= nBuses; j++)
    {
        _Exprs[j - 1] = z3.MkAnd(z3.MkGe(DelBP_B[j], z3.MkInt(1)),
                                 z3.MkLe(DelBP_B[j], z3.MkInt(10000)));
    }
    BoolExpr Expr = z3.MkAnd(_Exprs);
    s.Assert(Expr);
    tw.WriteLine("(assert {0})", Expr.ToString());
}
However, it did not work. Can anyone help me solve this problem? Thank you in advance.
If you feel the need to control the "precision" of real numbers, then that strongly suggests Real is not the correct domain for your problem. Some ideas, depending on what you're really trying to do:
If 6 digits past the decimal point are all you care about, then you might get away with using plain Integers, multiplying everything by 1e6 and restricting all variables to be less than 1e6, or some other similar transformation (see the sketch at the end of this answer).
Keep in mind that Z3 has support for IEEE floating-point numbers these days, which are, by definition, of limited precision. So you can use those if your domain is truly the floating-point numbers as prescribed by IEEE-754.
If you're trying to generate "successive" results (i.e., by solving the problem, adding the constraint that the result should be different from the previous one, and calling Z3 again), then you can consider adding a constraint that says the new result should differ from the old by more than 1e-6 in absolute value.
Whether any of this applies depends on the precise problem you're trying to solve. If you can share some more of your problem, people might be able to come up with other ideas. But the first choice should be figuring out if Real is really the domain you want to work with.
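To make the first idea concrete with the .NET API, here is a minimal sketch (the name DelBP_scaled is made up for illustration; adapt the bounds and scaling to your model):

// Represent DelBP[j] as an integer count of millionths instead of a Real.
IntExpr delScaled = z3.MkIntConst("DelBP_scaled_" + j);
// 0 <= DelBP_scaled[j] < 1000000, i.e. the modelled value lies in [0, 1)
// with exactly six digits after the decimal point.
s.Assert(z3.MkAnd(z3.MkGe(delScaled, z3.MkInt(0)),
                  z3.MkLt(delScaled, z3.MkInt(1000000))));
// Keep the rest of the arithmetic scaled by 1e6 as well, so the whole
// problem stays in the integers and no Real terms are needed.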

Z3 Performance with Non-Linear Arithmetic

We are running into performance problems with what I believe is the part of Z3 that treats non-linear arithmetic. Here is a simple concrete Boogie example, that when verified with Z3 (version 4.1) takes a long time (on the order of 3 minutes) to complete.
const D: int;

function f(n: int) returns (int) { n * D }

procedure test() returns ()
{
    var a, b, c: int;
    var M: [int]int;
    var t: int;

    assume 0 < a && 1000 * a < f(1);
    assume 0 < c && 1000 * c < f(1);
    assume f(100) * b == a * c;
    assert M[t] > 0;
}
It seems that the problem is caused by an interaction of functions, range assumptions on integer variables, and multiplications of (unknown) integer values. The assertion at the end should not be provable. Instead of failing quickly, Z3 seems to find ways to instantiate lots of terms somehow, as its memory consumption grows fairly quickly to about 300 MB, at which point it gives up.
I'm wondering if this is a bug, or whether it is possible to improve the heuristics on when Z3 should stop searching in the particular direction it is currently pursuing.
One interesting thing is that inlining the function by using
function {:inline} f(n: int) returns (int) { n * D }
makes the verification terminate very quickly.
Background: This is a minimal test case for a problem that we see in our verifier Chalice. There, the Boogie programs get much longer, potentially with multiple assumptions of a similar kind. Often, the verification does not appear to be terminating at all.
Any ideas?
Yes, the non-termination is due to nonlinear integer arithmetic. Z3 has a new nonlinear solver, but it is for "nonlinear real arithmetic", and can only be used on quantifier-free problems that contain only arithmetic (i.e., no uninterpreted functions like in your example).
Thus, the old arithmetic solver is used in your example. This solver has very limited support for integer arithmetic. Your analysis of the problem is correct. Z3 has trouble finding a solution for the block of nonlinear integer constraints. Note that if we replace f(100) * b == a * c with f(100) * b <= a * c, then Z3 returns immediately with an "unknown" answer.
We can avoid the non-termination by limiting the amount of nonlinear arithmetic reasoning in Z3. The option NL_ARITH_ROUNDS=100 will use the nonlinear module at most 100 times for each Z3 query. This is not an ideal solution, but at least we avoid the non-termination.
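If you are driving Z3 through Boogie, the option can be passed along on the command line; something like the following should work, although the exact spelling of prover options has changed between versions, so treat this invocation as an assumption to check against your setup:

boogie /z3opt:NL_ARITH_ROUNDS=100 test.bpl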
