Dafny: types with constraints

I am trying some things in Dafny. I want to code a simple data structure that holds an uncompressed image in memory:
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
Actually using it:
method Main() {
var dat := [1,2,3];
var im := image(1, 3, dat);
}
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
leads Dafny to complain:
stdin.dfy(3,24): Error: incorrect type of datatype constructor argument (found seq, expected array)
1 resolution/type errors detected in stdin.dfy
I might also want to demand that the byte array is not null, and the size of the byte array is equal to width * height * 3 (to store three bytes representing the RGB value of that pixel).
How should I enforce this? I looked into newtype, which lets you put constraints on variables of a certain type, but that works only for numeric types.

Dafny supports both immutable sequences (which are like mathematical sequences of elements) and mutable arrays (which are, like in C and Java, pointers to elements). The error you're getting is telling you that you're calling the image constructor with a seq<byte> value where an array<byte> value is expected.
You can fix the problem by replacing your definition of dat with:
var dat := new byte[3];
dat[0], dat[1], dat[2] := 1, 2, 3;
However, the more typical thing, if you're using a datatype (which is immutable), would be to use a sequence. So, you probably want to instead change your definition of image to:
datatype image = image(width: int, height: int, data: seq<byte>)
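With that change, the original Main works essentially as-is, since [1, 2, 3] is a sequence literal. A minimal sketch (the explicit seq<byte> annotation on dat is only there to guide type inference):
method Main() {
  var dat: seq<byte> := [1, 2, 3];
  var im := image(1, 3, dat);
}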
Btw, note that Dafny allows you to name a type and one of its constructors the same, so there's no reason to name one of them with a prime (unless you want to, of course).
Another matter of style is to use a half-open interval in your definition of byte:
newtype byte = b: int | 0 <= b < 256
Since half-open intervals are prevalent in computer science, Dafny's syntax favors them. For example, for a sequence s, the expression s[52..57] denotes a subsequence of s of length 5 (that is, 57 minus 52) starting in s at index 52. One more thing, you can also leave out the type int of b if you want, since Dafny will infer it:
newtype byte = b | 0 <= b < 256
You also asked about the possibility of adding a type constraint, so that the sequence in your datatype will always be of length 3. As you discovered, you cannot do this with a newtype, because newtype (at least for now) only works with numeric types. You can (almost) use a subset type, however. This would be done as follows:
type triple = s: seq<byte> | |s| == 3
(In this example, the first vertical bar is like the one in the newtype declaration and says "such that", whereas the next two denote the length operator on sequences.) The trouble with this declaration is that types must be nonempty and Dafny isn't convinced that there are any values that satisfy the constraint of triple. Well, Dafny is not trying very hard. The plan is to add a witness clause to the type (and newtype) declaration, so that a programmer can show Dafny a value that belongs to the triple type. However, this support is waiting for some implementation changes that will allow customized initial values, so you cannot use this constraint at this time.
Not that you want it here, but Dafny would let you give a weaker constraint that admits the empty sequence:
type triple = s: seq<byte> | |s| <= 3
So, instead, if you want to talk about that an image value has a data component of length 3, then introduce a predicate:
predicate GoodImage(img: image)
{
  |img.data| == 3
}
and use this predicate in specifications like pre- and postconditions.
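For example (a minimal sketch; FirstByte is a hypothetical method name), a precondition can require the caller to pass a well-formed image, which in turn makes the indexing below verifiably safe:
method FirstByte(img: image) returns (b: byte)
  requires GoodImage(img)
{
  b := img.data[0];
}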
Program safely,
Rustan

Related

set of WideChar: Sets may have at most 256 elements

I have this line:
const
  MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
The above does not compile, with error:
[Error] Sets may have at most 256 elements
But this line does compile ok:
var WS: WideString;
if WS[1] in [WideChar('A')..WideChar('Z')] then...
And this also compiles ok:
const
  MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
...
if WS[1] in MY_SET then...
Why is that?
EDIT: My question is why if WS[1] in [WideChar('A')..WideChar('Z')] compiles, and why MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')]; compiles. Don't they also need to obey the set rules?
A valid set has to obey two rules:
Each element in a set must have an ordinal value less than 256.
The set must not have more than 256 elements.
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
Here you declare a set type (set of WideChar) whose base type has more than 256 possible elements -> compiler error.
if WS[1] in [WideChar('A')..WideChar('Z')]
Here, the compiler sees WideChar('A') as an ordinal value. This value and all other values in the set are below 256. This is ok with rule 1.
The number of unique elements is also within limits (Ord('Z') - Ord('A') + 1), so the second rule passes.
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
Here you declare a set that also fulfills the requirements as above. Note that the compiler sees this as a set of ordinal values, not as a set of WideChar.
A set can have no more than 256 elements.
Even with so few elements the set already uses 32 bytes.
From the documentation:
A set is a bit array where each bit indicates whether an element is in the set or not. The maximum number of elements in a set is 256, so a set never occupies more than 32 bytes. The number of bytes occupied by a particular set is equal to
(Max div 8) - (Min div 8) + 1
For this reason only sets of byte, (ansi)char, boolean and enumerations with fewer than 257 elements are possible.
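As a worked example of that formula: for a hypothetical set of 'A'..'Z' (ordinal values 65..90), the size is (90 div 8) - (65 div 8) + 1 = 11 - 8 + 1 = 4 bytes.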
Because WideChar uses 2 bytes, it can have 65536 possible values.
A set of WideChar would take up 8 KB, too large to be practical.
type
  Capitals = 'A'..'Z';
const
  MY_SET: set of Capitals = [WideChar('A')..WideChar('Z')];
Will compile and work the same.
It does seem a bit silly to use WideChar if your code ignores Unicode.
As written, only the English capitals are recognized; you do not take different locales into account.
In this case it would be better to use code like
if (AWideChar >= 'A') and (AWideChar <= 'Z') ....
That will work no matter how many chars fall in between.
Obviously you can encapsulate this in a function to save on typing.
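For example, a small helper along these lines (IsUpperAZ is a hypothetical name, just a sketch):
function IsUpperAZ(const C: WideChar): Boolean;
begin
  // True only for the ASCII capitals 'A'..'Z'.
  Result := (C >= 'A') and (C <= 'Z');
end;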
If you insist on having large sets, see this answer: https://stackoverflow.com/a/2281327/650492

Wrong value calculated by Delphi

I have a record declared as
T3DVector = record
  X, Y, Z: Integer;
end;
One variable V of type T3DVector holds:
V.X= -25052
V.Y= 34165
V.Z= 37730
I then try the following line. D is declared as Double.
D:= (V.X*V.X) + (V.Y*V.Y) + (V.Z*V.Z);
The return value is: -1076564467 (0xFFFFFFFFBFD4EE0D)
The following code should be equivalent:
D:= (V.X*V.X);
D:= D + (V.Y*V.Y);
D:= D + (V.Z*V.Z);
But this, however, returns 3218402829 (0x00000000BFD4EE0D), which is actually the correct value.
By looking at the high bits, I thought this was an overflow problem. When I turned on overflow checking, the first line halted with the exception "Integer overflow". This is even more confusing to me because D is Double, and I am only storing values into D.
Can anyone clarify please ?
The target of an assignment statement has no bearing on how the right side is evaluated. On the right side, all the values you have are of type Integer, so the expression is evaluated as that type.
If you want the right side evaluated as some other type, then at least one operand must have that type. You witnessed this when you broke the statement into multiple steps, incorporating D into the right side of the expression. The value of V.Y * V.Y is still evaluated as type Integer, but the result is promoted to have type Double so that it matches the type of the other operand in the addition term (D).
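Numerically, each individual square fits in an Integer (V.X*V.X = 627602704, V.Y*V.Y = 1167247225, V.Z*V.Z = 1423552900), but their sum, 3218402829, exceeds the 32-bit maximum of 2147483647; that is why only the single-line version overflows. One way to force floating-point evaluation in a single statement is to introduce a floating-point operand, for example with a 1.0 factor (a sketch, not the only possible idiom):
D := 1.0 * V.X * V.X + 1.0 * V.Y * V.Y + 1.0 * V.Z * V.Z;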
The fact that D is a double doesn't affect the type of X, Y and Z. Those are Integers, and are apparently not large enough to store the squares of such large numbers, and their multiplication is what overflows. Why don't you declare them as doubles, too?

Postgres setting bit size dynamically

The goal is to quickly extract the bit at position N.
So far I've found only this way to do that:
CREATE OR REPLACE FUNCTION test(x int, size int)
  RETURNS int AS
$BODY$
DECLARE
  y int;
BEGIN
  y = get_bit(x::bit(size)>>size-1,size-1);
  return y;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
But this fails, complaining about bit(size):
invalid input syntax for integer: "size"
So I'm forced to write the size as a constant, for example bit(3).
Is there any way to set the bit size dynamically? Alternatively, maybe there are other ways to extract a specific bit of an int/text?
You don't need bitfield types for this. To test bit 6 of the value 200:
SELECT 200 & (1<<6) != 0
You binary-AND the value with a mask that has the 6th bit set (by shifting 1 left by 6 binary places), then test whether the result is nonzero.
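If you want to keep the function interface from the question, the same test can be wrapped in a plain SQL function (a sketch; the name and signature mirror the original, but this is not the only way to write it):
CREATE OR REPLACE FUNCTION test(x int, pos int)
RETURNS int AS $$
  SELECT CASE WHEN x & (1 << pos) <> 0 THEN 1 ELSE 0 END;
$$ LANGUAGE sql IMMUTABLE;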
Alternately, cast to a suitable bit size based on the integer and use get_bit, though I think there's no reason whatsoever to do that when you can use extremely fast bitwise AND and shifts as above. Because get_bit numbers bits from the left (the leftmost bit is position 0), you have to subtract the bit position from 31 (for a 32-bit int) to get the position counted from the right:
SELECT get_bit(200::bit(32), 31 - 6);
so it's way simpler to just use bitwise operations like the first example.

mathematical 'not' of integer

Why does Delphi perform a mathematical 'not' on an integer rather than forcing a cast to a Boolean value in a while..do loop?
Example
var myint: integer;
...
myint := 1;
while not myint = 5 do
begin
  myint := myint + 1;
  showmessage('myint now is : ' + inttostr(myint));
end;
Your expression uses two operators: not and =. In order to understand how it is parsed you need to consult the table of operator precedence.
Operators                          Precedence
----------------------------------------------
@, not                             first (highest)
*, /, div, mod, and, shl, shr, as  second
+, -, or, xor                      third
=, <>, <, >, <=, >=, in, is        fourth (lowest)
----------------------------------------------
This shows that not is in the highest precedence group and, in particular, has higher precedence than =, which means that your expression is parsed as if it were:
(not myint) = 5
In this expression, because it is applied to an integral variable, not is bitwise negation. (For myint = 1, not myint is -2, and -2 = 5 is False, so the loop body never executes.)
In order to get the result that you desire you must use parentheses to indicate that you wish to perform the equality test before the not:
not (myint = 5)
Now, in this expression, (myint = 5) is a logical expression and so the not operator is now logical negation.
The answer to a question of this nature is always found in the table of operator precedence and it will pay dividends to become familiar with this table. This table will tell you how any expression that omits parentheses is parsed.
Finally as Rudy points out, your specific expression is most cleanly written using the <> operator:
myint <> 5
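Applied to the original loop, the corrected condition looks like this (a sketch):
myint := 1;
while myint <> 5 do
begin
  myint := myint + 1;
  ShowMessage('myint now is : ' + IntToStr(myint));
end;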
The reason why Delphi is doing this is that the not operator has higher precedence than the equality comparison operator. It is a unary operator which is evaluated first. In fact, it has to be evaluated before the = operator comes into action (which, of course, has the lowest precedence).
To force Delphi to calculate what you want, use parentheses:
if not (myint = 5) then....
// vs
if (not myint) = 5 then
// vs
if not myint = 5 then
As you know by now the latter two are equivalent.
BTW: Something similar (operator precedence) happens with calculations like 3 + 4 * 5. Here, the multiplication is carried out before the addition because the * operator has higher precedence than the + operator.

How to do Integer division in Dart?

I have trouble with integer division in Dart, as it gives me the error: 'Breaking on exception: type 'double' is not a subtype of type 'int' of 'c'.'
Here's the following code:
int a = 500;
int b = 250;
int c;
c = a / b; // <-- Gives warning in Dart Editor, and throws an error in runtime.
As you can see, I was expecting the result to be 2; or, even if dividing 'a' by 'b' produced a float/double value, I expected it to be converted directly to an integer value instead of throwing an error like that.
I have a workaround using .round()/.ceil()/.floor(), but this won't suffice because in my program this little operation is critical: it is called thousands of times in one game update (or, you could say, in each requestAnimationFrame).
I have not found any other solution to this yet, any idea? Thanks.
Dart version: 1.0.0_r30798
That is because Dart uses double to represent all numbers in dart2js. You can get interesting results if you play with that:
Code:
int a = 1;
a is int;
a is double;
Result:
true
true
Actually, it is recommended to use type num when it comes to numbers, unless you have strong reasons to make it int (in a for loop, for example). If you want to keep using int, use truncating division like this:
int a = 500;
int b = 250;
int c;
c = a ~/ b;
Otherwise, I would recommend to utilize num type.
Integer division is
c = a ~/ b;
you could also use
c = (a / b).floor();
c = (a / b).ceil();
if you want to define how fractions should be handled.
Short Answer
Use c = a ~/ b.
Long Answer
According to the docs, an int is a number without a decimal point, while a double is a number with a decimal point.
Both double and int are subtypes of num.
When two integers are divided using the / operator, the result is evaluated into a double. And the c variable was initialized as an integer. There are at least two things you can do:
Use c = a ~/ b.
The ~/ operator returns an int.
Use var c;. This creates a dynamic variable that can be assigned a value of any type, including double, int, String, etc.
Truncating division operator
You can use the truncating division operator ~/ to get an integer result from a division operation:
4 ~/ 2; // 2 (int)
Division operator
The regular division operator / will always return a double value at runtime (see the docs):
for (var i = 4; i == 4; i = 3) {
  i / 2; // 2 (double)
}
Runtime versus compile time
You might have noticed that I wrote a loop for the second example (for the regular division operator) instead of 4 / 2.
The reason for this is the following:
When an expression can be evaluated at compile time, it will be simplified at that stage and also be typed accordingly. The compiler would simply convert 4 / 2 to 2 at compile time, which is then obviously an int. The loop prevents the compiler from evaluating the expression.
As long as your division happens at runtime (i.e. with variables that cannot be predicted at compile time), the return types of the / (double) and ~/ (int) operators will be the types you will see for your expressions at runtime.
See this fun example for further reference.
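As an aside, here is a minimal sketch you can run to observe the runtime types yourself (output as seen on the Dart VM; dart2js may report int for both, as discussed above):
void main() {
  var a = 500;
  var b = 250;
  print((a / b).runtimeType);  // double
  print((a ~/ b).runtimeType); // int
}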
Conclusion
Generally speaking, the regular division operator / always returns a double value and truncate divide can be used to get an int result instead.
Compiler optimization might, however, cause some funky results :)
