How to do integer division in Dart?

I have trouble with integer division in Dart, as it gives me this error: 'Breaking on exception: type 'double' is not a subtype of type 'int' of 'c'.'
Here's the code:
int a = 500;
int b = 250;
int c;
c = a / b; // <-- Gives a warning in the Dart Editor, and throws an error at runtime.
As you can see, I was expecting the result to be 2; or, even if dividing 'a' by 'b' produced a float/double value, I expected it to be converted directly to an integer value instead of throwing an error like that.
I have a workaround using .round()/.ceil()/.floor(), but that won't suffice: in my program this little operation is performance-critical, as it is called thousands of times per game update (or, you could say, per requestAnimationFrame).
I have not found any other solution to this yet. Any ideas? Thanks.
Dart version: 1.0.0_r30798

That is because, when compiled with dart2js, Dart uses JavaScript numbers (i.e. doubles) to represent all numbers. You can get interesting results if you play with that:
Code:
int a = 1;
a is int;
a is double;
Result:
true
true
Actually, it is recommended to use the num type when working with numbers, unless you have a strong reason to make it an int (in a for loop, for example). If you want to keep using int, use truncating division like this:
int a = 500;
int b = 250;
int c;
c = a ~/ b;
Otherwise, I would recommend using the num type.
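For illustration, here is a minimal sketch of the num-based variant, reusing the names from the question; on the VM the division yields a double, in dart2js it may stay integral, and num accepts either:
num a = 500;
num b = 250;
num c = a / b; // no static warning, no runtime type error; c holds 2 (or 2.0)
num d = a ~/ b; // use ~/ if you specifically need an integer result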

Integer division is
c = a ~/ b;
you could also use
c = (a / b).floor();
c = (a / b).ceil();
if you want to define how fractions should be handled.

Short Answer
Use c = a ~/ b.
Long Answer
According to the docs, an int is a number without a decimal point, while a double is a number with a decimal point.
Both double and int are subtypes of num.
When two integers are divided using the / operator, the result is a double. But the c variable was declared as an int. There are at least two things you can do:
Use c = a ~/ b.
The ~/ operator returns an int.
Use var c;. With no initializer, this creates a dynamic variable that can hold a value of any type, including double, int, String, etc.
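For illustration, a minimal sketch of both options applied to the snippet from the question:
int a = 500;
int b = 250;
int c = a ~/ b; // truncating division: c is the int 2
var d; // no initializer, so d is dynamic
d = a / b; // fine: d now holds a double (2.0)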

Truncating division operator
You can use the truncating division operator ~/ to get an integer result from a division operation:
4 ~/ 2; // 2 (int)
Division operator
The regular division operator / will always return a double value at runtime (see the docs):
for (var i = 4; i == 4; i = 3) {
  i / 2; // 2 (double)
}
Runtime versus compile time
You might have noticed that I wrote a loop for the second example (for the regular division operator) instead of 4 / 2.
The reason for this is the following:
When an expression can be evaluated at compile time, it will be simplified at that stage and also be typed accordingly. The compiler would simply convert 4 / 2 to 2 at compile time, which is then obviously an int. The loop prevents the compiler from evaluating the expression.
As long as your division happens at runtime (i.e. with variables that cannot be predicted at compile time), the return types of the / (double) and ~/ (int) operators will be the types you will see for your expressions at runtime.
See this fun example for further reference.
Conclusion
Generally speaking, the regular division operator / always returns a double value, and the truncating division operator ~/ can be used to get an int result instead.
Compiler optimization might, however, cause some funky results :)
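As a small sanity check of the runtime behaviour described above, a sketch like this prints true twice, both on the VM and with dart2js:
int a = 500;
int b = 250;
print((a / b) is double); // true: / yields a double at runtime
print((a ~/ b) is int); // true: ~/ yields an int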

Related

Dafny: types with constraints

I am trying some things in Dafny. I want to code a simple data structure that holds an uncompressed image in memory:
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
Actually using it:
method Main() {
  var dat := [1,2,3];
  var im := image(1, 3, dat);
}
This leads Dafny to complain:
stdin.dfy(3,24): Error: incorrect type of datatype constructor argument (found seq, expected array)
1 resolution/type errors detected in stdin.dfy
I might also want to demand that the byte array is not null, and that the size of the byte array equals width * height * 3 (to store three bytes representing the RGB value of each pixel).
How should I enforce this? I looked into newtype, which lets you put constraints on a type, but that works only for numeric types.
Dafny supports both immutable sequences (which are like mathematical sequences of elements) and mutable arrays (which are, like in C and Java, pointers to elements). The error you're getting is telling you that you're calling the image constructor with a seq<byte> value where an array<byte> value is expected.
You can fix the problem by replacing your definition of dat with:
var dat := new byte[3];
dat[0], dat[1], dat[2] := 1, 2, 3;
However, the more typical thing, if you're using a datatype (which is immutable), would be to use a sequence. So, you probably want to instead change your definition of image to:
datatype image = image(width: int, height: int, data: seq<byte>)
Btw, note that Dafny allows you to name a type and one of its constructors the same, so there's no reason to name one of them with a prime (unless you want to, of course).
Another matter of style is to use a half-open interval in your definition of byte:
newtype byte = b: int | 0 <= b < 256
Since half-open intervals are prevalent in computer science, Dafny's syntax favors them. For example, for a sequence s, the expression s[52..57] denotes a subsequence of s of length 5 (that is, 57 minus 52) starting in s at index 52. One more thing, you can also leave out the type int of b if you want, since Dafny will infer it:
newtype byte = b | 0 <= b < 256
You also asked about the possibility of adding a type constraint, so that the sequence in your datatype will always be of length 3. As you discovered, you cannot do this with a newtype, because newtype (at least for now) only works with numeric types. You can (almost) use a subset type, however. This would be done as follows:
type triple = s: seq<byte> | |s| == 3
(In this example, the first vertical bar is like the one in the newtype declaration and says "such that", whereas the next two denote the length operator on sequences.) The trouble with this declaration is that types must be nonempty and Dafny isn't convinced that there are any values that satisfy the constraint of triple. Well, Dafny is not trying very hard. The plan is to add a witness clause to the type (and newtype) declaration, so that a programmer can show Dafny a value that belongs to the triple type. However, this support is waiting for some implementation changes that will allow customized initial values, so you cannot use this constraint at this time.
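Once that support lands, the declaration would presumably look something like the following; this is only a sketch of the planned syntax, not something you can compile today:
type triple = s: seq<byte> | |s| == 3 witness [0, 0, 0]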
Not that you want it here, but Dafny would let you give a weaker constraint that admits the empty sequence:
type triple = s: seq<byte> | |s| <= 3
So, instead, if you want to express that an image value has a data component of length 3, introduce a predicate:
predicate GoodImage(img: image)
{
  |img.data| == 3
}
and use this predicate in specifications like pre- and postconditions.
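For example, a method that needs the three components could state it as a precondition; a minimal sketch (the method name is made up):
method UsePixel(img: image)
  requires GoodImage(img)
{
  // Safe: the precondition guarantees |img.data| == 3.
  var r, g, b := img.data[0], img.data[1], img.data[2];
}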
Program safely,
Rustan

Wrong value calculated by Delphi

I have a record declared as
T3DVector = record
  X, Y, Z: Integer;
end;
One variable V of type T3DVector holds:
V.X= -25052
V.Y= 34165
V.Z= 37730
I then try the following line. D is declared as Double.
D:= (V.X*V.X) + (V.Y*V.Y) + (V.Z*V.Z);
The return value is: -1076564467 (0xFFFFFFFFBFD4EE0D)
The following code should be equivalent:
D:= (V.X*V.X);
D:= D + (V.Y*V.Y);
D:= D + (V.Z*V.Z);
This, however, yields 3218402829 (0x00000000BFD4EE0D), which is actually the correct value.
Looking at the high bits, I thought this was an overflow problem. When I turned on overflow checking, the first line halted with the exception "Integer overflow". This is even more confusing to me because D is a Double, and I am only storing values into D.
Can anyone clarify, please?
The target of an assignment statement has no bearing on how the right side is evaluated. On the right side, all you have are of type Integer, so the expression is evaluated as that type.
If you want the right side evaluated as some other type, then at least one operand must have that type. You witnessed this when you broke the statement into multiple steps, incorporating D into the right side of the expression. The value of V.Y * V.Y is still evaluated as type Integer, but the result is promoted to have type Double so that it matches the type of the other operand in the addition term (D).
The fact that D is a Double doesn't affect the type of X, Y and Z. Those are Integers, and while each individual square still fits in an Integer, their sum does not, so the all-Integer expression overflows. Why don't you declare the fields as Doubles, too?
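If you prefer to keep the fields as Integer, you can instead force the expression itself to be evaluated in floating point; a minimal sketch:
// Multiplying by 1.0 promotes each term to a floating-point type,
// so no intermediate result is truncated to a 32-bit Integer.
D := (1.0 * V.X * V.X) + (1.0 * V.Y * V.Y) + (1.0 * V.Z * V.Z);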

2048 casted to BOOL returns 0

Consider this code
NSInteger q = 2048;
BOOL boolQ = q;
NSLog(#"%hhd",boolQ);
After execution, boolQ is equal to 0. Could someone explain why this is so?
BOOL is probably implemented as char or uint8_t/int8_t, since "hh" prints half of the half of an integer, which typically is a byte.
Converting to char takes the lowest 8 bits of 2048 (= 0x800), which gives you 0.
The proper way to convert any integer to a boolean value is:
NSInteger q = some-value;
BOOL b = !!q;
Converting an integer to a signed type too narrow to represent the value gives an implementation-defined result in C (C11 §6.3.1.3), and therefore also in the part of Objective-C which deals with C-level matters. Since the result is implementation-defined, you get whatever the compiler documents (typically just the low-order bits), expected value or not.
On the other hand, any scalar can be used as a boolean without casting: in conditions and with the ! operator, zero is false and everything else is true, and ! itself always yields the int 0 or 1 (C11 §6.5.3.3). That is what gives rise to the !! idiom suggested by alk; perhaps counterintuitively, you convert the value by not explicitly converting it, letting the ! operator perform a well-defined conversion for you.
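To see both behaviours side by side, a minimal sketch (assuming BOOL is typedef'd to signed char; on platforms where BOOL is a true bool, the plain assignment would give 1 instead):
NSInteger q = 2048;
BOOL viaAssignment = q; // low 8 bits of 0x800 are 0, so this is NO
BOOL viaNegation = !!q; // !q is 0, !0 is 1, so this is YES
NSLog(@"%hhd %hhd", viaAssignment, viaNegation); // prints 0 1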

Objective C ceil returns wrong value

NSLog(#"CEIL %f",ceil(2/3));
should return 1. However, it shows:
CEIL 0.000000
Why, and how can I fix that problem? I use ceil([myNSArray count]/3) and it returns 0 when the array count is 2.
The same rules as C apply: 2 and 3 are ints, so 2/3 is an integer divide. Integer division truncates so 2/3 produces the integer 0. That integer 0 will then be cast to a double precision float for the call to ceil, but ceil(0) is 0.
Changing the code to:
NSLog(#"CEIL %f",ceil(2.0/3.0));
will display the result you're expecting. Adding the decimal point causes the constants to be recognised as double-precision floating-point numbers (and 2.0f is how you'd write a single-precision floating-point number).
Maudicus' solution works because (float)2/3 casts the integer 2 to a float and C's promotion rules mean that it'll promote the denominator to floating point in order to divide a floating point number by an integer, giving a floating point result.
So, your current statement ceil([myNSArray count]/3) should be changed to either:
([myNSArray count] + 2)/3 // no floating point involved
Or:
ceil((float)[myNSArray count]/3) // arguably more explicit
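The first option is just the usual integer ceiling-division idiom: for non-negative n and positive d, (n + d - 1) / d rounds up without any floating point. A minimal sketch:
NSUInteger n = [myNSArray count]; // e.g. 2
NSUInteger groups = (n + 3 - 1) / 3; // (2 + 2) / 3 == 1, i.e. ceil(2/3)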
2/3 evaluates to 0 unless you cast at least one operand to a float.
So you have to be careful that your values are not truncated to ints before you want them to be.
float decValue = (float) 2/3;
NSLog(#"CEIL %f",ceil(decValue));
==>
CEIL 1.000000
For you array example
float decValue = (float) [myNSArray count]/3;
NSLog(#"CEIL %f",ceil(decValue));
It probably evaluates 2 and 3 as integers (which they obviously are), computes the result (which is 0), and then converts it to float or double (which is also 0.000000). The easiest way to fix it is to write 2.0f/3, 2/3.0f, or 2.0f/3.0f (or without the "f" if you wish, whatever you like more ;) ).
Hope it helps

Is there a substitute for Pow in BigInteger in F#?

I was using the Pow function of the BigInteger class in F# when my compiler told me:
This construct is deprecated. This member has been removed to ensure that this
type is binary compatible with the .NET 4.0 type System.Numerics.BigInteger
Fair enough I guess, but I didn't find a replacement immediately.
Is there one? Should we only use our own Pow functions? And (how) will it be replaced in .NET 4.0?
You can use the pown function
let result = pown 42I 42
pown works on any type that 'understands' multiplication and 'one'.
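For instance, a small sketch (42I above, and 2I below, are F#'s bigint literals):
let big = pown 2I 200 // 2^200 as a System.Numerics.BigInteger
let alsoInts = pown 3 10 // works for plain ints too: 59049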
If you look at F# from the perspective of being based on OCaml, then the OCaml Num module has power_num. Since OCaml's num type represents arbitrary-precision rational numbers, it can handle numbers of any size, i.e. it is not limited by the CPU register width, because the math is done symbolically. Also, since num is defined as
type num =
  | Int of int
  | Big_int of Big_int.big_int
  | Ratio of Ratio.ratio
it can also handle very small numbers without loss of precision, thanks to the Ratio case.
Since F# does not have the num type, Jack created the FSharp.Compatibility.OCaml package, which includes num.fs and is available via NuGet.
So you can get all the precision you want using this, and the num functions can handle negative exponents.
