Is there a constant for max/min int/double value in Dart?

Is there a constant in Dart that tells us the max/min int/double value?
Something like double.infinity, but double.maxValue instead?

For double there are:
double.maxFinite (1.7976931348623157e+308)
double.minPositive (5e-324)
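A quick aside (my addition, not part of the original answer): double.minPositive is the smallest positive double, not the most negative one; the most negative finite double is simply -double.maxFinite.
void main() {
  print(double.maxFinite);   // 1.7976931348623157e+308
  print(-double.maxFinite);  // -1.7976931348623157e+308 (most negative finite double)
  print(double.minPositive); // 5e-324 (smallest positive double, not the minimum)
}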
In Dart 1 there was no such number for int; the size of integers was limited only by available memory.
In Dart 2, int is limited to 64 bits, but it doesn't look like there are constants for its bounds yet.
For dart2js different rules apply:
When compiling to JavaScript, integers are therefore restricted to 53 significant bits because all JavaScript numbers are double-precision floating point values.

I found this in the dart_numerics package.

Here is the int64 max value:
const int intMaxValue = 9223372036854775807;
For Dart on the web it is 2^53 - 1:
const int intMaxValue = 9007199254740991;
"The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent integers between -(2^53 - 1) and 2^53 - 1." see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER

No, Dart does not have a built-in constant for the max value of an int, but here is how to get the max value.
Because ints are signed in Dart, they have a range (inclusive) of [-2^31, 2^31-1] if 32-bit and [-2^63, 2^63 - 1] if 64-bit. The first bit in an int is called the 'sign-bit'. If the sign-bit is 1, the int is negative; if 0, the int is non-negative. In the max int, all the bits are 1 except the sign bit, which is 0. We can most easily achieve this by writing the int in hexadecimal notation (integers preceded with '0x' are hexadecimal):
int max = 0x7fffffff; // 32-bit
int max = 0x7fffffffffffffff; // 64-bit
In hexadecimal (a.k.a. hex), each hex digit specifies a group of 4 bits: there are 16 hex digits (0-f), 2 binary digits (0-1), and 2^4 = 16. There is a compile error if more bits than the bitness are specified; if fewer are specified, the hexadecimal integer is padded with 0s until the number of bits matches the bitness. So, to indicate that all the bits are 1 except for the sign-bit, we need bitness / 4 hex characters (e.g. 16 for a 64-bit architecture). The first hex character represents the binary integer '0111' (7), which is 0x7, and every other hex character represents the binary integer '1111' (15), or 0xf.
Alternatively, you could use bit-shifting, which I will not explain, but feel free to Google it.
int bitness = ... // presumably 64
int max = (((1 << (bitness - 2)) - 1) << 1) + 1;
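As a quick sanity check (my addition, assuming the 64-bit VM), the shifting formula reproduces the hex constants above for both bitnesses:
// Computes 2^(bitness - 1) - 1 while keeping every intermediate value in range.
int maxOf(int bitness) => (((1 << (bitness - 2)) - 1) << 1) + 1;

void main() {
  print(maxOf(32) == 0x7fffffff);         // true
  print(maxOf(64) == 0x7fffffffffffffff); // true
}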

Use double.maxFinite (a getter, not a method) and convert it to an int. Note that toInt() is not a constant expression, so these can't be const:
final maxValue = double.maxFinite.toInt();
final minValue = -double.maxFinite.toInt();

Related

Lua 5.4 - How to convert 64-bit numbers to hex

I wanted to convert numbers greater than 64 bits, including numbers of up to 256 bits, from decimal to hex in Lua.
Example:
num = 9223372036854775807
num = string.format("%x", num)
num = tostring(num)
print(num) -- output is 7fffffffffffffff
but if I add just 1 to it, it returns an error, as in the example below:
num = 9223372036854775808
num = string.format("%x", num)
num = tostring(num)
print(num) -- error lua54 - bad argument #2 to 'format' (number has no integer representation)
Does anyone have any ideas?
I wanted to convert numbers greater than 64 bits, including numbers of up to 256 bits, from decimal to hex in Lua.
Well, that's not possible without involving a big integer library such as this one. Lua 5.4 has two number types: 64-bit signed integers and 64-bit floats, which are both too limited to store arbitrary 256-bit integers.
The first num in your example, 9223372036854775807, is just the upper limit of the int64 range (-2^63 to 2^63-1, both inclusive). Adding 1 to it forces Lua to cast it into a float64, which can represent numbers way larger than that at the cost of precision. You're then left with an imprecise float which has no "integer representation", as Lua tells you.
You could trivially reimplement %x yourself, but that wouldn't help you extend the precision/size of floats & ints. You need to find another number representation and find or write a bigint library to go with it. Options are:
String representation: Represent numbers as hex- or bytestrings (base 256).
Table representation: Represent numbers as lists of numbers (base 2^x where x < 64).

Dart: split an arbitrary-precision number into a sequence of bytes

Assuming I have a declaration like this: final int input = 0xA55AA9D2;, I'd like to get a list of [0xA5, 0x5A, 0xA9, 0xD2]. It is easily achievable in Java by right-shifting the input by 24, 16, 8 and 0 respectively, with a subsequent cast to byte to cut the precision to an 8-bit value.
But how do I do the same in Dart? I can't find sufficient information about number encoding (e.g. in Java a leading 1 bit means minus, but how is minus encoded here?) and transformations (e.g. how to cut precision) in order to solve this task.
P.S.: I solved this for 32-bit numbers using out.add([value >> 24, (value & 0x00FFFFFF) >> 16, (value & 0x0000FFFF) >> 8, value & 0X000000FF]); but it feels incredibly ugly, and I feel that the SDK provides more convenient means to split an arbitrary-precision number into bytes.
The biggest issue here is that a Dart int is not the same type on the VM and in a browser.
On the native VM, an int is a 64-bit two's complement number.
In a browser, when compiled to JavaScript, an int is just a non-fractional double because JavaScript only has doubles as numbers.
If your code is only running on the VM, then getting the bytes is as simple as:
int number;
List<int> bytes = List.generate(8, (n) => (number >> (8 * n)) & 0xFF);
In JavaScript, bitwise operations only work on 32-bit integers, so you could do:
List<int> bytes = List.generate(4, (n) => (number >> (8 * n)) & 0xFF);
and get the byte representation of number.toSigned(32).
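For instance (my own check, using the input value from the question and assuming the VM), the 32-bit variant yields the expected four bytes, least-significant first:
void main() {
  final number = 0xA55AA9D2;
  final bytes = List.generate(4, (n) => (number >> (8 * n)) & 0xFF);
  print(bytes); // [210, 169, 90, 165], i.e. [0xD2, 0xA9, 0x5A, 0xA5]
}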
If you want a number larger than that, I'd probably use BigInt:
var bigNumber = BigInt.from(number).toSigned(64);
var b255 = BigInt.from(255);
List<int> bytes = List.generate(8, (n) => ((bigNumber >> (8 * n)) & b255).toInt());
From the documentation to the int class:
The default implementation of int is 64-bit two's complement integers with operations that wrap to that range on overflow.
Note: When compiling to JavaScript, integers are restricted to values that can be represented exactly by double-precision floating point values. The available integer values include all integers between -2^53 and 2^53 ...
(Most modern systems use two's complement for signed integers.)
If you need your Dart code to work portably for both web and for VMs, you can use package:fixnum to use fixed-width 32- or 64-bit integers.
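For completeness, a rough sketch (my own, not from the answer above) of the same byte extraction using package:fixnum's Int64, which behaves the same on the VM and on the web:
import 'package:fixnum/fixnum.dart';

// Least-significant byte first, mirroring the List.generate examples above.
List<int> bytesOf(Int64 value) =>
    List.generate(8, (n) => ((value >> (8 * n)) & Int64(0xFF)).toInt());

void main() {
  final number = Int64(0xA55AA9D2);
  print(bytesOf(number)); // [210, 169, 90, 165, 0, 0, 0, 0]
}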

Convert first two bytes of a Lua string (in big-endian format) to an unsigned short number

I want a Lua function that takes a string argument. The string has N+2 bytes of data: the first two bytes hold the length in big-endian format, and the remaining N bytes contain the data.
Say the data is "abcd", so the string is 0x00 0x04 a b c d.
In the Lua function this string is the input argument.
How can I calculate the length in an optimal way?
So far I have tried the code below:
function calculate_length(s)
  local len = string.len(s)
  if len >= 2 then
    local first_byte = s:byte(1)
    local second_byte = s:byte(2)
    -- len = ((first_byte & 0xFF) << 8) or (second_byte & 0xFF)
    len = second_byte
  else
    len = 0
  end
  return len
end
See the commented line (how I would have done it in C).
How do I achieve that commented line in Lua?
The number of data bytes in your string s is #s-2 (assuming even a string with no data has a length of two bytes, each with a value of 0). If you really need to use those header bytes, you could compute:
len = first_byte * 256 + second_byte
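For the example string above (0x00 0x04 a b c d) that gives len = 0 * 256 + 4 = 4, matching the four data bytes of "abcd".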
When it comes to strings in Lua, a byte is a byte as this excerpt about strings from the Reference Manual makes clear:
The type string represents immutable sequences of bytes. Lua is 8-bit clean: strings can contain any 8-bit value, including embedded zeros ('\0'). Lua is also encoding-agnostic; it makes no assumptions about the contents of a string.
This is important if using the string.* library:
The string library assumes one-byte character encodings.
If the internal representation in Lua of your number is important, the following excerpt from the Lua Reference Manual may be of interest:
The type number uses two internal representations, or two subtypes, one called integer and the other called float. Lua has explicit rules about when each representation is used, but it also converts between them automatically as needed.... Therefore, the programmer may choose to mostly ignore the difference between integers and floats or to assume complete control over the representation of each number. Standard Lua uses 64-bit integers and double-precision (64-bit) floats, but you can also compile Lua so that it uses 32-bit integers and/or single-precision (32-bit) floats.
In other words, the 2 byte "unsigned short" C data type does not exist in Lua. Integers are stored using the "long long" type (8 byte signed).
Lastly, as lhf pointed out in the comments, bitwise operators were added to Lua in version 5.3, so the commented line works once the logical or is replaced by the bitwise | operator; and if lhf is the lhf, he should know ;-)

Why can I go up to this number with an integer?

I am learning C, and I read in Kernighan & Ritchie's book that ints fall within the range [-32767;32767]. I tried to verify this assertion by writing the following program, which increments a variable count from 1 until it falls into negative numbers.
#include <stdio.h>

int main(void) {
    int count = 1;
    while (count > 0) {
        count++;
        printf("%d\n", count);
    }
    return 0;
}
And surprisingly I got this output:
1
......
2147483640
2147483641
2147483642
2147483643
2147483644
2147483645
2147483646
2147483647 -> This is a lot more than 32767?!
-2147483648
I do not understand: why do I get this output? And I doubt Mr. Ritchie made a mistake ;)
You're on a 32- or a 64-bit machine, and the C compiler you are using has 32-bit integers. In two's complement binary, the highest positive integer uses 31 value bits, i.e. 2^31 - 1 = 2147483647, as you are observing.
Note that this doesn't violate K&R's claim: the standard only guarantees that an int can hold at least the range [-32767;32767].
Shorts typically go from -32768 to 32767; 2^15 - 1 is the largest short.
Ints typically go from -2147483648 to 2147483647; 2^31 - 1 is the largest int.
Basically, ints are twice the width you thought.

iOS calculating sum of filesizes always negative

I've got a strange problem here, and I'm sure it's just something small.
I receive information about files via JSON (RestKit is doing a good job).
I write the filesize of each file via Core Data to a local store.
Afterwards, within one of my view controllers, I need to sum up the file sizes of all files in the database. I fetch all files and then go through a loop (for) to sum the sizes up.
The problem is that the result is always negative!
The Core Data entity attribute filesize is of type Integer 32 (the file size is reported in bytes by the JSON).
I read the fetch result into an NSArray allPublicationsToLoad and then try to sum it up. The objects in the NSArray, of type CDPublication, have a value filesize of type NSNumber:
for (int n = 0; n < [allPublicationsToLoad count]; n = n + 1)
{
    CDPublication* thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize = allPublicationsSize + [[thePub filesize] integerValue];
    sum = [NSNumber numberWithFloat:([sum floatValue] + [[thePub filesize] floatValue])];
}
Each single filesize of the CDPublication objects is positive and correct. Only the sum of all the file sizes is negative afterwards. There are around 240 objects right now, with filesize values between 4,000 and 234,645,434,123.
Can somebody please give me a hint in the right direction!?
Is the problem that Integer 32 or NSNumber can't hold such a huge range?
Thanks
MadMaxApp
The NSNumber object can't hold such a huge number. Because of the way negative numbers are stored, the result is negative.
Negative numbers are stored using two's complement; this is done to make addition of positive and negative numbers easier. The range of numbers NSNumber can hold is split in two: the highest half (the int values for which the highest-order bit is equal to 1) is considered to be negative, and the lowest half (where the highest-order bit is equal to 0) holds the normal positive numbers. Now, if you add sufficiently large numbers, the result will be in the highest half and thus be interpreted as a negative number. Here's an illustration for the 4-bit integer situation (32 bits work exactly the same, but there would be a lot more 0s and 1s to type ;))
With 4 bits you can represent this range of signed integers:
0000 (=0)
0001 (=1)
0010 (=2)
...
0111 (=7)
1000 (=-8)
1001 (=-7)
...
1111 (=-1)
The maximum positive integer you can represent is 7 in this case. If you were to add 5 and 4, for example, you would get:
0101 + 0100 = 1001
1001 equals -7 when you represent signed integers like this (and not 9, as you would expect). That's the effect you are observing, but on a much larger scale (32 bits).
Your only option to get correct results in this case is to increase the number of bits used to represent your integers, so the result won't land in the negative range of bit combinations. If 32 bits is not enough (as in your case), you can use a long long (64 bits):
[myNumber longLongValue];
I think this has to do with int overflow: very large integers get reinterpreted as negatives when they overflow the size of int (32 bits). Use longLongValue instead of integerValue:
long long allPublicationsSize = 0;
for (int n = 0; n < [allPublicationsToLoad count]; n++) {
    CDPublication* thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize += [[thePub filesize] longLongValue];
}
This is an integer overflow issue associated with use of two's complement arithmetic. For a 32-bit integer there are exactly 2^32 (4,294,967,296) possible integer values which can be expressed. When using two's complement, the most significant bit is used as a sign bit which allows half of the numbers to represent non-negative integers (when the sign bit is 0) and the other half to represent negative numbers (when the sign bit is 1). This gives an effective range of [-2^31, 2^31-1] or [-2,147,483,648, 2,147,483,647].
To overcome this problem for your case, you should consider using a 64-bit integer. This should work well for the range of values you seem to be interested in using. Alternatively, if even 64-bit is not sufficient, you should look for big integer libraries for iOS.
