Dafny 3 maximum int constant

Is there in Dafny 3 a maximum/minimum int constant? Something like int.MaxValue?
I need it to write a Dafny program that calculates the minimum value in a sequence.

In Dafny, the type int is meant to model mathematical integers. There is no maximum or minimum int constant.
If you want to work with a bounded range of integers, you can define it using a newtype declaration. The Dafny library also contains some standard definitions, such as 32-bit integers.
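For example, a bounded integer type can be declared like this (a sketch; the name int32 and its constants are illustrative, not standard library definitions):

newtype int32 = x | -0x80000000 <= x < 0x80000000

const INT32_MAX: int32 := 0x7FFFFFFF
const INT32_MIN: int32 := -0x80000000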
As for finding the minimum int in a sequence, you could do something along these lines:
datatype Option<T> =
  | Some(value: T)
  | None

// lb is the least element seen so far
function FindMinRec(s: seq<int>, lb: int): int {
  if |s| == 0 then lb
  else if s[0] < lb then FindMinRec(s[1..], s[0])
  else FindMinRec(s[1..], lb)
}

function FindMin(s: seq<int>): Option<int> {
  if |s| == 0 then None else Some(FindMinRec(s[1..], s[0]))
}
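A hypothetical caller (a sketch; PrintMin is an illustrative name) could pattern-match on the Option result:

method PrintMin(s: seq<int>) {
  match FindMin(s) {
    case Some(m) => print m, "\n";
    case None => print "empty sequence\n";
  }
}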

Related

Convert bytes to signed integers in Lua 5.1.5

I'm looking for how to turn bytes into a signed int using Lua 5.1.5; so far I've only been able to find solutions for Lua 5.2 onward, and they are not backwards compatible.
I have solutions for how to turn bytes into unsigned integers, like so:
payload_t.temperature=tonumber(utility.hex2str(string.sub(payload,32,33)),16)
First of all, I'll assume that you actually have a byte string rather than a hex string; if your string is a hex string, you can trivially convert it to a byte string using gsub:
function hex2bytes(str)
    -- assert that it is indeed a string of hex digit pairs
    assert(#str % 2 == 0 and not str:match"[^%x]")
    -- string.char turns each pair into the corresponding byte; the extra
    -- parentheses drop gsub's second return value (the match count)
    return (str:gsub("%x%x", function(hex) return string.char(tonumber(hex, 16)) end))
end
Now, let's convert this byte string to an integer. I'll assume little endian (least significant byte first); should your string be big endian (most significant byte first) you'll have to reverse it using str:reverse() before you read it.
Reading an unsigned integer is pretty straightforward:
function bytes2uint(str)
    local uint = 0
    for i = 1, #str do
        uint = uint + str:byte(i) * 0x100^(i-1)
    end
    return uint
end
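Should your data be big endian, a hypothetical wrapper (a sketch; bytes2uint_be is an illustrative name) can simply reverse the string first, per the note above:

function bytes2uint_be(str)
    -- most significant byte first: reverse, then reuse the little-endian reader
    return bytes2uint(str:reverse())
end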
I'll assume your integers are stored using two's complement. In that case the upper half of the 2^n values the uint can take (those with the top bit set, i.e. values >= 2^(n-1)) represent negative numbers, with 2^(n-1) mapping to the most negative value (-2^(n-1)). Thus you can simply subtract 2^n, the (exclusive) maximum value for the uint, from the unsigned value whenever it is >= 2^(n-1):
function bytes2int(str)
    local uint = bytes2uint(str)
    local max = 0x100 ^ #str
    if uint >= max / 2 then
        return uint - max
    end
    return uint
end
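Hypothetical usage (a sketch; the hex string "ffff" is just an example value):

local bytes = hex2bytes("ffff") -- "\255\255"
print(bytes2uint(bytes))        --> 65535
print(bytes2int(bytes))         --> -1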

How would one create a bitwise rotation function in Dart?

I'm in the process of creating a cryptography package for Dart (https://pub.dev/packages/steel_crypt). Right now, most of what I've done is either exposed from PointyCastle or simple-ish algorithms where bitwise rotations are unnecessary or replaceable by >> and <<.
However, as I move toward more complicated cryptography solutions, which I can handle mathematically, I'm unsure how to implement bitwise rotation in Dart with maximum efficiency. Because of the nature of cryptography, speed is emphasized and uncompromising: I need the absolute fastest implementation.
I've ported a method of bitwise rotation from Java; I'm pretty sure it is correct, but I'm unsure of its efficiency and readability. My tested implementation is below:
int INT_BITS = 64; // Dart ints are 64 bit

int leftRotate(int n, int d) {
  // In n << d, the last d bits are 0.
  // To move the first d bits of n to the end,
  // bitwise-or n << d with n >> (INT_BITS - d).
  return (n << d) | (n >> (INT_BITS - d));
}

int rightRotate(int n, int d) {
  // In n >> d, the first d bits are 0.
  // To move the last d bits of n to the front,
  // bitwise-or n >> d with n << (INT_BITS - d).
  return (n >> d) | (n << (INT_BITS - d));
}
EDIT (for clarity): Dart has no unsigned right shift, meaning that >> is an arithmetic (sign-extending) shift, which bears more significance than I might have thought. It poses a challenge that other languages don't in terms of devising an answer. The accepted answer below explains this and also shows the correct method of bitwise rotation.
As pointed out, Dart has no >>> (unsigned right shift) operator, so you have to rely on the signed shift operator.
In that case,
int rotateLeft(int n, int count) {
  const bitCount = 64; // make it 32 for JavaScript compilation.
  assert(count >= 0 && count < bitCount);
  if (count == 0) return n;
  return (n << count) |
      ((n >= 0) ? n >> (bitCount - count) : ~(~n >> (bitCount - count)));
}
should work.
This code only works for the native VM. When compiling to JavaScript, numbers are doubles, and bitwise operations are only done on 32-bit numbers.
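As a quick sanity check on the native VM (a sketch; the expected outputs assume 64-bit ints):

void main() {
  print(rotateLeft(1, 1));  // 2
  print(rotateLeft(1, 63)); // -9223372036854775808 (the rotated bit lands on the sign bit)
}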

Get result from modulo operation in iOS Swift

How does modulo of negative numbers work in Swift?
When I compute (-1 % 3) it gives -1, but the mathematical remainder is 2. What is the catch?
The Swift remainder operator % computes the remainder of
the integer division:
a % b = a - (a/b) * b
where / is the truncating integer division. In your case
(-1) % 3 = (-1) - ((-1)/3) * 3 = (-1) - 0 * 3 = -1
So the remainder always has the same sign as the dividend (unless
the remainder is zero).
This is the same definition as required e.g. in the C99 standard,
see for example
Does either ANSI C or ISO C specify what -5 % 10 should be?. See also
Wikipedia: Modulo operation for an overview of how this is handled
in different programming languages.
A "true" modulus function could be defined in Swift like this:
func mod(_ a: Int, _ n: Int) -> Int {
    precondition(n > 0, "modulus must be positive")
    let r = a % n
    return r >= 0 ? r : r + n
}

print(mod(-1, 3)) // 2
From the Language Guide - Basic Operators:
Remainder Operator
The remainder operator (a % b) works out how many multiples of b
will fit inside a and returns the value that is left over (known as
the remainder).
The remainder operator (%) is also known as a modulo operator in
other languages. However, its behavior in Swift for negative numbers
means that it is, strictly speaking, a remainder rather than a modulo
operation.
...
The same method is applied when calculating the remainder for a
negative value of a:
-9 % 4 // equals -1
Inserting -9 and 4 into the equation yields:
-9 = (4 x -2) + -1
giving a remainder value of -1.
In your case, no multiple of 3 fits into 1, so the remainder is 1 (and likewise for -1, where the remainder is -1).
If what you are really after is a number between 0 and b - 1, try using this:
infix operator %%

extension Int {
    static func %% (_ left: Int, _ right: Int) -> Int {
        if left >= 0 { return left % right }
        if left >= -right { return left + right }
        return ((left % right) + right) % right
    }
}

print(-1 %% 3) // prints 2
This will work for all values of a, unlike the previous answer, which only works if a > -b.
I prefer the %% operator over just overloading %, as it makes very clear that you are not doing a true mod function.
The reason for the if statements, instead of just using the final return line, is speed: a mod function requires a division, and divisions are more costly than a conditional.
An answer inspired by cdeerinck, which sacrifices speed for simplicity, is this:
infix operator %%

extension Int {
    static func %% (_ left: Int, _ right: Int) -> Int {
        let mod = left % right
        return mod >= 0 ? mod : mod + right
    }
}
I tested it with this little loop in a playground:
for test in [6, 5, 4, 0, -1, -2, -100, -101] {
    print(test, "%% 5", test %% 5)
}

System hangs when factorizing a float instead of an integer

I am struggling to understand the cause of this issue. To the point:
1) Passing an integer (10) to the following factorization function works immediately:
test() ->
    X = 10,
    F = factorize(X).

factorize(0) -> 1;
factorize(N) -> N * factorize(N-1).
2) Passing a float (10.0) will cause the BEAM process to hang, taking high CPU and never terminating. Notice this is a small value: I can factorize a large integer and get an almost immediate response, but the small float 10.0 causes it to hang.
test() ->
    X = 10.0,  %% <-- NOTICE THE DOT ZERO 10.0
    F = factorize(X).

factorize(0) -> 1;
factorize(N) -> N * factorize(N-1).
Question: why on Erl Earth would this hang occur with a mere multiplicative recursion over floats?
As the documentation says, there are two operations for comparing the equality of terms in Erlang, and they differ only in their handling of integers and floats:
=:= - exactly equal - considers numbers equal only if both their types and their values are the same: false = (0.0 =:= 0)
== - equal - considers numbers equal if their values are the same, even when their types differ: true = (0.0 == 0)
Pattern matching uses the first one, exactly equal, which is why your second function hangs: 10.0 counts down through 9.0, 8.0, ..., but 0.0 never matches the pattern 0, so the recursion runs past zero and never stops.
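A guarded variant (a sketch, not from the original question) terminates for floats as well, because the arithmetic comparison in the guard, unlike pattern matching, treats 0 and 0.0 alike:

factorize(N) when N =< 0 -> 1;
factorize(N) -> N * factorize(N - 1).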
Another problem with floats is their approximate value. You can never be sure you have an exact value, especially after arithmetic operations, so it is common practice to test float equality against a small epsilon:

is_zero(F) -> (F < 1.0e-10) andalso (F > -1.0e-10).

How could I test if two bit patterns differ in any N bits (position doesn't matter)

Let's say I have this bit field value: 10101001
How would I test whether any other value differs from it in any n bits, without considering the positions?
Example:
10101001
10101011 --> 1 bit different
10101001
10111001 --> 1 bit different
10101001
01101001 --> 2 bits different
10101001
00101011 --> 2 bits different
I need to make a lot of these comparisons, so I'm primarily looking for performance, but any hint is very welcome.
Take the XOR of the two fields and do a population count of the result.
If you XOR the two values together, you are left with only the bits that are different. You then just need to count the bits that are still 1, and you have your answer.
In C:
unsigned char val1 = 12;
unsigned char val2 = 123;
unsigned char xored = val1 ^ val2;
int i;
int numBits = 0;
for (i = 0; i < 8; i++)
{
    if (xored & 1) numBits++;
    xored >>= 1;
}
although there are probably faster ways to count the bits in a byte
(you could, for instance, use a lookup table with 256 entries).
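For instance, a sketch of the lookup-table variant (initBitCountTable and bitCountTable are illustrative names; the table is filled once at startup):

#include <stdio.h>

static unsigned char bitCountTable[256];

/* bit count of i = (lowest bit of i) + bit count of i/2 */
static void initBitCountTable(void)
{
    int i;
    for (i = 0; i < 256; i++)
        bitCountTable[i] = (unsigned char)((i & 1) + bitCountTable[i / 2]);
}

int main(void)
{
    initBitCountTable();
    unsigned char val1 = 12, val2 = 123;
    printf("%d\n", bitCountTable[val1 ^ val2]); /* prints 6 */
    return 0;
}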
Just like everybody else said, use XOR to determine what's different and then use one of these algorithms to count.
This gets the bit difference between the values and counts the bits three at a time:
public static int BitDifference(int a, int b) {
    // Work on an unsigned copy so the right shift brings in zeros;
    // with a signed int, a negative XOR result would loop forever.
    uint bits = (uint)(a ^ b);
    int cnt = 0;
    while (bits != 0) {
        // 0xE994 packs the bit counts of the values 0..7 into 2-bit fields.
        cnt += (0xE994 >> (int)((bits & 7) << 1)) & 3;
        bits >>= 3;
    }
    return cnt;
}
XOR the numbers, then the problem becomes a matter of counting the 1s in the result.
In Java:
Integer.bitCount(a ^ b)
Comparison is performed with XOR, as others have already answered.
Counting can be performed in several ways:
shifts and additions.
lookup in a table.
logic formulas that you can find with Karnaugh maps.
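As an illustration of the shift-and-add family, here is the classic SWAR-style count for a 32-bit value (a sketch; popcount32 is an illustrative name, and unsigned is assumed to be 32 bits wide):

unsigned popcount32(unsigned v)
{
    v = v - ((v >> 1) & 0x55555555);                /* 2-bit sums of bit pairs */
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333); /* 4-bit sums              */
    v = (v + (v >> 4)) & 0x0F0F0F0F;                /* 8-bit sums              */
    return (v * 0x01010101) >> 24;                  /* add up the byte sums    */
}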
