I want to convert numbers larger than 64 bits, up to 256-bit numbers, from decimal to hex in Lua.
Example:
num = 9223372036854775807
num = string.format("%x", num)
num = tostring(num)
print(num) -- output is 7fffffffffffffff
But if I add just 1 to that number, I get an error, as in the example below:
num = 9223372036854775808
num = string.format("%x", num)
num = tostring(num)
print(num) -- error lua54 - bad argument #2 to 'format' (number has no integer representation)
Does anyone have any ideas?
I want to convert numbers larger than 64 bits, up to 256-bit numbers, from decimal to hex in Lua.
Well, that's not possible without involving a big integer library such as this one. Lua 5.4 has two number types: 64-bit signed integers and 64-bit floats, which are both too limited to store arbitrary 256-bit integers.
The first num in your example, 9223372036854775807, is exactly the upper limit of the int64 range (-2^63 to 2^63-1, both inclusive). A literal one larger than that no longer fits in an integer, so Lua reads it as a float64, which can represent numbers far larger than that at the cost of precision. You're then left with an imprecise float which has no "integer representation", as Lua tells you.
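You can verify this with math.type, which reports the subtype Lua chose for a number (Lua 5.3+):
print(math.maxinteger)                -- 9223372036854775807, i.e. 2^63-1
print(math.type(9223372036854775807)) -- integer
print(math.type(9223372036854775808)) -- float: the literal doesn't fit in an int64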
You could trivially reimplement %x yourself, but that wouldn't help you extend the precision/size of floats and ints. You need another number representation, plus a bigint library (found or written yourself) to go with it. Options are (a sketch of the first follows after this list):
String representation: Represent numbers as hex strings or byte strings (base 256).
Table representation: Represent numbers as lists of numbers (base 2^x, where x < 64).
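For illustration, here is a minimal sketch of the string approach (the function name dec_to_hex is mine, not from any library): the number stays a decimal digit string, and repeated long division by 16 produces the hex digits, so no intermediate value ever exceeds what a plain Lua integer holds.
local function dec_to_hex(dec)
  -- collect the decimal digits into an array of small integers
  local digits = {}
  for c in dec:gmatch("%d") do
    digits[#digits + 1] = tonumber(c)
  end
  local hex = {}
  while #digits > 0 do
    -- long division of the digit array by 16; the remainder is the
    -- next hex digit, least significant first
    local quotient, remainder = {}, 0
    for _, d in ipairs(digits) do
      local acc = remainder * 10 + d
      if #quotient > 0 or acc >= 16 then
        quotient[#quotient + 1] = acc // 16 -- drop leading zeros
      end
      remainder = acc % 16
    end
    hex[#hex + 1] = ("0123456789abcdef"):sub(remainder + 1, remainder + 1)
    digits = quotient
  end
  return table.concat(hex):reverse()
end

print(dec_to_hex("9223372036854775808")) -- 8000000000000000
This runs in O(n^2) for n decimal digits, which is plenty fast for 256-bit numbers (at most 78 decimal digits).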
Assuming I have a declaration like this: final int input = 0xA55AA9D2;, I'd like to get a list of [0xA5, 0x5A, 0xA9, 0xD2]. This is easily achievable in Java by right-shifting the input by 24, 16, 8 and 0 respectively, with a subsequent cast to byte to cut the precision to an 8-bit value.
But how do I do the same in Dart? I can't find sufficient information about number encoding (e.g. in Java a leading 1 bit means negative, but how are negative numbers encoded here?) or about transformations (e.g. how to cut precision) to solve this task.
P.S.: I solved this for 32-bit numbers using out.add([value >> 24, (value & 0x00FFFFFF) >> 16, (value & 0x0000FFFF) >> 8, value & 0X000000FF]); but it feels incredibly ugly, and I suspect the SDK provides more convenient means to split an arbitrary-precision number into bytes.
The biggest issue here is that a Dart int is not the same type on the VM and in a browser.
On the native VM, an int is a 64-bit two's complement number.
In a browser, when compiled to JavaScript, an int is just a non-fractional double because JavaScript only has doubles as numbers.
If your code is only running on the VM, then getting the bytes is as simple as:
int number = 0xA55AA9D2; // for example, the value from the question
List<int> bytes = List.generate(8, (n) => (number >> (8 * n)) & 0xFF); // least significant byte first
In JavaScript, bitwise operations only work on 32-bit integers, so you could do:
List<int> bytes = List.generate(4, (n) => (number >> (8 * n)) & 0xFF);
and get the byte representation of number.toSigned(32).
If you want a number larger than that, I'd probably use BigInt:
var bigNumber = BigInt.from(number).toSigned(64);
var b255 = BigInt.from(255);
List<int> bytes = List.generate(8, (n) => ((bigNumber >> (8 * n)) & b255).toInt());
From the documentation to the int class:
The default implementation of int is 64-bit two's complement integers with operations that wrap to that range on overflow.
Note: When compiling to JavaScript, integers are restricted to values that can be represented exactly by double-precision floating point values. The available integer values include all integers between -2^53 and 2^53 ...
(Most modern systems use two's complement for signed integers.)
If you need your Dart code to work portably for both web and for VMs, you can use package:fixnum to use fixed-width 32- or 64-bit integers.
Is there a constant in Dart that tells us the max/min int/double value?
Something like double.infinity, but instead double.maxValue?
For double there are
double.maxFinite (1.7976931348623157e+308)
double.minPositive (5e-324)
In Dart 1 there was no such number for int; the size of integers was limited only by available memory.
In Dart 2, int is limited to 64 bits, but it doesn't look like there are such constants yet.
For dart2js, different rules apply:
When compiling to JavaScript, integers are therefore restricted to 53 significant bits because all JavaScript numbers are double-precision floating point values.
I found this in the dart_numerics package.
Here is the int64 max value:
const int intMaxValue = 9223372036854775807;
For Dart web it is 2^53 - 1:
const int intMaxValue = 9007199254740991;
"The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent integers between -(2^53 - 1) and 2^53 - 1." see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
No, Dart does not have a built-in constant for the max value of an int, but here is how to get the max value.
Because ints are signed in Dart, they have a range (inclusive) of [-2^31, 2^31-1] if 32-bit and [-2^63, 2^63 - 1] if 64-bit. The first bit in an int is called the 'sign-bit'. If the sign-bit is 1, the int is negative; if 0, the int is non-negative. In the max int, all the bits are 1 except the sign bit, which is 0. We can most easily achieve this by writing the int in hexadecimal notation (integers preceded with '0x' are hexadecimal):
int max = 0x7fffffff; // 32-bit
int max = 0x7fffffffffffffff; // 64-bit
In hexadecimal (a.k.a. hex), each hex digit specifies a group of 4 bits, since there are 16 hex digits (0-f) and 2^4 = 16. It is a compile error to specify more bits than the bitness; if fewer bits than the bitness are specified, the hexadecimal integer is padded with 0s until the number of bits equals the bitness. So, to indicate that all the bits except the sign bit are 1, we need bitness / 4 hex characters (e.g. 16 for 64-bit architecture). The first hex character represents the binary integer '0111' (7), which is 0x7, and all the other hex characters represent the binary integer '1111' (15), or 0xf.
Alternatively, you could use bit-shifting, which I will not explain, but feel free to Google it.
int bitness = ... // presumably 64
int max = (((1 << (bitness - 2)) - 1) << 1) + 1; // 2^(bitness-1) - 1, without shifting into the sign bit
Use double.maxFinite and convert it to an int:
final maxValue = double.maxFinite.toInt();
final minValue = -double.maxFinite.toInt();
I want a Lua function that takes a string argument. The string has N+2 bytes of data: the first two bytes hold the length in big-endian format, and the remaining N bytes contain the data.
Say the data is "abcd", so the string is 0x00 0x04 a b c d.
In my Lua function, this string is an input argument.
How can I calculate the length in an optimal way?
So far I have tried the code below:
function calculate_length(s)
  local len = string.len(s)
  if len >= 2 then
    local first_byte = s:byte(1)
    local second_byte = s:byte(2)
    -- len = ((first_byte & 0xFF) << 8) or (second_byte & 0xFF)
    len = second_byte
  else
    len = 0
  end
  return len
end
See the commented line (how I would have done it in C).
How do I achieve the commented line in Lua?
The number of data bytes in your string s is #s-2 (assuming even a string with no data has a length of two bytes, each with a value of 0). If you really need to use those header bytes, you could compute:
len = first_byte * 256 + second_byte
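In Lua 5.3 and later you can also let string.unpack decode the header for you; the format ">I2" reads a big-endian 2-byte unsigned integer:
local s = "\0\4abcd" -- 0x00 0x04 'a' 'b' 'c' 'd'
local len = string.unpack(">I2", s) -- ">" = big-endian, "I2" = 2-byte unsigned
print(len) -- 4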
When it comes to strings in Lua, a byte is a byte as this excerpt about strings from the Reference Manual makes clear:
The type string represents immutable sequences of bytes. Lua is 8-bit clean: strings can contain any 8-bit value, including embedded zeros ('\0'). Lua is also encoding-agnostic; it makes no assumptions about the contents of a string.
This is important if using the string.* library:
The string library assumes one-byte character encodings.
If the internal representation in Lua of your number is important, the following excerpt from the Lua Reference Manual may be of interest:
The type number uses two internal representations, or two subtypes, one called integer and the other called float. Lua has explicit rules about when each representation is used, but it also converts between them automatically as needed.... Therefore, the programmer may choose to mostly ignore the difference between integers and floats or to assume complete control over the representation of each number. Standard Lua uses 64-bit integers and double-precision (64-bit) floats, but you can also compile Lua so that it uses 32-bit integers and/or single-precision (32-bit) floats.
In other words, the 2-byte "unsigned short" C data type does not exist in Lua. Integers are stored using the "long long" type (8-byte signed).
Lastly, as lhf pointed out in the comments, bitwise operations were added to Lua in version 5.3, and if lhf is the lhf, he should know ;-)
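With those 5.3 bitwise operators, the commented C-style line translates almost directly, using | for bitwise or:
function calculate_length(s)
  if #s >= 2 then
    -- (& 0xFF is redundant here, since byte() already returns 0..255,
    -- but it mirrors the C expression)
    return ((s:byte(1) & 0xFF) << 8) | (s:byte(2) & 0xFF)
  end
  return 0
end

print(calculate_length("\0\4abcd")) -- 4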
I am trying to decode a bitstring into a decimal value. For example, I have bitstrings like
<<96,64,112,153,9:4>>. I want to convert them to decimal values, taking four bits as a digit: 96 (0110 0000) --> 60 (the first four bits are 6, the next four bits are 0), 64 --> 40, and so on. The output would be 604070999. The trailing 9:4 means that only 4 bits represent the last digit.
Can anyone help me write this function in Erlang?
If you have a binary rather than a bitstring (i.e., without the trailing 9:4 part), you can apply a hex conversion to each byte within a binary comprehension, then convert the resulting binary to an integer:
1> Bin = <<96,64,112,153>>.
<<96,64,112,153>>
2> binary_to_integer(<< <<(integer_to_binary(B,16))/binary>> || <<B:8>> <= Bin >>).
60407099
The same also works for your bitstring, taking 4 bits at a time instead of 8 in the comprehension:
3> Bits = <<96,64,112,153,9:4>>.
<<96,64,112,153,9:4>>
4> binary_to_integer(<< <<(integer_to_binary(B,16))/binary>> || <<B:4>> <= Bits >>).
604070999
But as @Hynek-Pichi-Vychodil points out in the comments, for the bitstring you don't need the integer_to_binary/2 call at all; instead you can convert each 4-bit digit to its corresponding character by adding $0, the literal for the character 0:
5> binary_to_integer(<< <<($0+B)>> || <<B:4>> <= Bits >>).
604070999
print(2^62)
print(2^63)
print(2^64)
In Lua 5.2, all numbers are doubles. The output of the above code is:
4.6116860184274e+18
9.2233720368548e+18
1.844674407371e+19
Lua 5.3 has support for integers and does automatic conversion between integer and float representation. The same code outputs:
4611686018427387904
-9223372036854775808
0
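The 2^63 and 2^64 results wrap around because they exceed the signed 64-bit integer range. You can reproduce the same wrapped values with integer shifts:
print(1 << 62) -- 4611686018427387904
print(1 << 63) -- -9223372036854775808 (the sign bit is set)
print(1 << 64) -- 0 (every bit shifted out)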
I want to get the float result. 2.0^64 works, but what if it's not a literal:
local n = io.read("*n") --user input 2
print(n^64)
One possible solution is to divide the number by 1: (n/1)^64, because in / division the operands are always converted to floats; but I'm looking for a more elegant solution.
Tested on Lua 5.3.0 (work2).
io.read("*n") always returns a float. So no surprises there.
If you need to convert an integer to a float, add 0.0 to it.
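For example:
local n = io.read("*n") -- user enters 2
n = n + 0.0             -- adding 0.0 forces the float subtype (a no-op if n is already a float)
print(n^64)             -- 1.8446744073709e+19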