I am really confused about the implementation of numbers in Lua.
The documentation on the Lua website seems quite clear (https://www.lua.org/pil/2.3.html):
The number type represents real (double-precision floating-point) numbers. Lua has no integer type, as it does not need it. There is a widespread misconception about floating-point arithmetic errors and some people fear that even a simple increment can go weird with floating-point numbers. The fact is that, when you use a double to represent an integer, there is no rounding error at all (unless the number is greater than 100,000,000,000,000). Specifically, a Lua number can represent any long integer without rounding problems. Moreover, most modern CPUs do floating-point arithmetic as fast as (or even faster than) integer arithmetic.
That makes perfect sense.
But then how can an integer overflow happen in this simple example?
$ lua
Lua 5.3.6 Copyright (C) 1994-2020 Lua.org, PUC-Rio
> 9223372036854775807 + 1
-9223372036854775808
It's simple: you're reading "Programming in Lua" online, which means you're using an outdated edition written for Lua 5.0:
This is the online version of the first edition of the book Programming in Lua, a detailed and authoritative introduction to all aspects of Lua programming written by Lua's chief architect. The first edition was aimed at Lua 5.0. It remains largely relevant for later versions, but there are some differences. All corrections listed in the errata have been made in the online version.
Lua 5.3, which you have installed, adds a 64-bit signed integer type that behaves as described by Jorge Diaz (the manual guarantees wrap-around modulo 2^64, though strictly speaking it doesn't mandate a two's-complement representation). The Lua 5.3 reference manual properly documents integers. Note: the newest Lua version is 5.4.
In Lua 5.0, 5.1 and 5.2, you would indeed observe float behavior:
$ lua5.2
Lua 5.2.4 Copyright (C) 1994-2015 Lua.org, PUC-Rio
> return ("%.20g"):format(9223372036854775807 + 1)
9223372036854775808
In Lua 5.3 and later (as in many other languages), integers are stored in 64 bits (the standard integer size on most modern systems). When a signed 64-bit integer exceeds the maximum value it can hold (2^63 - 1), it "wraps around" to the minimum value (-2^63). This is known as integer overflow.
In this particular example, you are adding 1 to the maximum representable Lua integer (9223372036854775807), which causes the value to overflow and wrap around to the minimum representable value (-9223372036854775808).
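You can see both the limits and the wrap-around behavior directly from the interpreter (Lua 5.3+, where math.maxinteger and math.mininteger are predefined):

-- the 64-bit integer limits, and overflow wrapping between them
print(math.maxinteger)                         --> 9223372036854775807
print(math.mininteger)                         --> -9223372036854775808
print(math.maxinteger + 1 == math.mininteger)  --> true (wraps modulo 2^64)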
Related
What is the number format length in bytes?
This is a "multi-type" data format. Is it 4 bytes? 8 bytes? How many? How can I get it programmatically? Does the length depend on the OS/processor type?
Here, https://www.lua.org/pil/2.3.html, the documentation says this is a double-precision type, that is, 64 bits. Am I right?
As @Roddy said, it's slightly complicated by the integer type. Moreover, it depends on how your Lua is compiled.
Basically, in Lua 5.3 there are two types: the integer type lua_Integer and the number type lua_Number. You can get their lengths programmatically from within Lua by parsing a chunk header:
local chunk = string.dump(function() end)  -- dump a trivial function to get a binary chunk
-- in the Lua 5.3 chunk header, byte 16 holds sizeof(lua_Integer)
-- and byte 17 holds sizeof(lua_Number)
print("lua_Integer", chunk:byte(16))
print("lua_Number", chunk:byte(17))
Typically both lengths will be 8 bytes. However, on some embedded platforms you can find Luas where the lua_Number type is a float (4 bytes), a 32-bit integer, or even weirder things. (Note that these byte offsets are specific to the Lua 5.3 chunk format.)
It depends on the version of Lua and, of course, on how it's compiled.
Lua 5.3 has true integers, typically 64 bits; see https://www.lua.org/manual/5.3/manual.html:
The type number uses two internal representations, or two subtypes, one called integer and the other called float. ... Standard Lua uses 64-bit integers and double-precision (64-bit) floats, but you can also compile Lua so that it uses 32-bit integers and/or single-precision (32-bit) floats.
Earlier versions always use 64-bit double-precision floating point, which can exactly represent integers up to 2^53 in magnitude. Your link, https://www.lua.org/pil/2.3.html, describes that older behavior.
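If you just want to probe this at run time without digging through luaconf.h, here is a minimal sketch (the function name numberBits is mine): it doubles a value until either adding 1 no longer changes it (float precision exhausted) or the doubling wraps (integer overflow):

-- returns 53 on a float-only Lua (doubles), 64 on Lua 5.3+ with 64-bit integers
local function numberBits()
  local n, bits = 1, 0
  while n + 1 ~= n and n ~= n * 2 do  -- stop on precision loss or on wrap
    n, bits = n * 2, bits + 1
  end
  return bits
end
print(numberBits())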
According to the Lua reference manual, for integers:
In case of overflows in integer arithmetic, all operations wrap around, according to the usual rules of two-complement arithmetic. (In other words, they return the unique representable integer that is equal modulo 2^64 to the mathematical result.)
and for floating point:
With the exception of exponentiation and float division, the arithmetic operators work as follows: If both operands are integers, the operation is performed over integers and the result is an integer. Otherwise, if both operands are numbers or strings that can be converted to numbers (see §3.4.3), then they are converted to floats, the operation is performed following the usual rules for floating-point arithmetic (usually the IEEE 754 standard), and the result is a float.
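These rules are easy to check at a Lua 5.3+ prompt; note which operations preserve the integer subtype and which always produce floats:

print(1 + 1)    --> 2   (integer op integer stays integer)
print(1 + 1.0)  --> 2.0 (mixed operands are converted to float)
print(3 // 2)   --> 1   (floor division keeps the subtype of its operands)
print(3 / 2)    --> 1.5 (float division always yields a float)
print(2 ^ 2)    --> 4.0 (exponentiation always yields a float)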
Lua as a language does not define what you ask for. The data type used for representing numbers may differ from version to version (note that the link to the free online version of "Programming in Lua" is about Lua 5.0), but primarily this is defined by the way Lua is compiled, as others already said.
Look at luaconf.h for all the details.
Regarding your actual problem (converting hex strings to numbers), you could compare the result of tonumber() on various input strings against known results:
-- count how many 0xFF bytes tonumber() can convert before precision is lost
function hexConvertibleBytes()
  local i, s = 0, ''
  repeat
    i, s = i + 1, s .. 'FF'      -- append one more 0xFF byte to the hex string
    local n = tonumber(s, 16)    -- note: n is still in scope in the until test
  until n ~= 256^i - 1           -- stop once the parsed value drifts from the exact one
  return i - 1
end
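A quick sanity check at the prompt (what it reports depends on your Lua version and how tonumber() is implemented there):

print(hexConvertibleBytes())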
We can use string.pack (Lua 5.3+) as follows:
s = string.pack("J", 0)   -- "J" packs a lua_Unsigned (same size as lua_Integer)
number_of_bytes = #s
-- likewise, "n" packs a lua_Number:
number_size = #string.pack("n", 0)
I run Lua on a CPU without dedicated floating-point HW, relying on SW emulation.
From luaconf.h I can see that some macros are set to double, but it does not clearly state when floats are used, and it's a little hard to track.
If my script does simple stuff like:
a=0
a=a+1
for...
Would that involve a floating point operations at any level?
If not, that's fine; but then what is the benefit of changing the macros to long?
(I tried, of course, but it did not work...)
All numeric operations in Lua (before 5.3, in the default configuration) are performed in floating point. There is no distinction made between floating point and integer; all values are simply numbers.
The actual C type used to store a Lua number is set in luaconf.h, and it is both allowed and even practical to change that to a suitable integral type. You start by changing LUA_NUMBER from double to int, long, or perhaps ptrdiff_t. Then you will find you need to tweak the related macros that control the conversions between strings and numbers. And, of course, you will likely need to eliminate most or all of the base math library since math.sin() and its friends and neighbors are not particularly useful over integers.
The result will be a Lua interpreter where all numbers are integers. The language will still allow you to type 3.14, but it will be stored as 3. Your code will likely not be completely portable to a Lua interpreter built with the standard configuration since a huge amount of Lua code casually assumes that floating point arithmetic is permitted, and remember that your compiled byte code will definitely not be compatible since byte code will store numbers as LUA_NUMBER.
There is the LNUM patch (used, for example, by the OpenWrt project, which relies heavily on Lua to provide its Web UI on hardware without an FPU) that allows a dual integer/floating-point representation of numbers in Lua, with conversions happening behind the scenes when required. With it, most integer computations are performed without resorting to the FPU. Unfortunately, it's only applicable to Lua 5.1; 5.2 is not supported.
I've spent many hours researching this and am pretty stuck. My question is: has the internal format of a Delphi TDateTime changed between Delphi 7 (released in 2002 or so) and today?
Scenario: I'm reading a binary logfile created by a Delphi 7 app, and the vendor tells me it's a TDateTime in the record, but decoding the bits shows it's clearly not standard IEEE 754 floating point even though the TDateTime produced by modern Delphi is.
But it's some kind of floating point with around 15 bits of exponent and 45 bits of significand (as opposed to 11 and 53 bits in IEEE 754), and the leading bit is a 1 (which in IEEE 754 indicates a negative number) for numbers that are clearly not negative, such as the current date/time.
Hints in old documentation suggested that TDateTime "read as" a double but wasn't necessarily represented internally as one, which means that the internal format would be mostly invisible except where these TDateTimes were written out in binary form.
My suspicion is that the change occurred with Delphi 8, which added .NET support, but I simply can't find any references to this anywhere. I have Perl code (!) that mostly works at picking these values apart, but I'd love to find a formal spec so I can do it properly.
Any old-timers run into this?
~~~ Steve
Nothing has changed since Delphi 7. In Delphi 7, and in fact in previous versions, TDateTime is an IEEE 754 double, measuring the number of days since the Delphi epoch.
You are going to need to get in touch with the software vendor and try to work out what this data's format really is. It would be surprising if the format was a non-IEEE754 floating point data type. Are you quite sure that it is floating point?
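If the field really is a little-endian IEEE 754 double, that hypothesis is easy to test. Here is a minimal sketch in Lua 5.3+ (to match the rest of this page); the helper name decodeTDateTime and the 25569-day offset between the Delphi epoch (1899-12-30) and the Unix epoch are the only ingredients not taken from the answer above:

-- interpret 8 raw bytes as a little-endian double counting days since 1899-12-30
local function decodeTDateTime(bytes8)
  local days = string.unpack("<d", bytes8)
  local unixSeconds = (days - 25569) * 86400  -- 25569 days from 1899-12-30 to 1970-01-01
  return os.date("!%Y-%m-%d %H:%M:%S", math.floor(unixSeconds))
end
-- if this prints a plausible date for a known record, the field is a standard double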
As for BCB3, BCB6, and D4, it's exactly the IEEE 754 double-precision floating-point format; in the VCL source file system.pas (as included in BCB6) it's defined thus:
TDateTime = type Double;
Original Message:
I need to multiply two 64-bit numbers, but Lua is losing precision with big numbers (for example, 99999999999999999 is shown as 100000000000000000). After multiplying I need a 64-bit solution, so I need a way to limit the solution to 64 bits. (I know that if the solution were precise, I could just use % 0x10000000000000000, so that would work too.)
EDIT: With Lua 5.3 and its new 64-bit integer support, this problem doesn't exist anymore. Neat.
Lua (before 5.3) uses double-precision floating point for all math, including integer arithmetic (see http://lua-users.org/wiki/FloatingPoint). This gives you 53 bits of precision, which (as you've noticed) is less than you need.
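You can see that ceiling directly in a float-only Lua (5.2 or earlier; on 5.3+ these literals are parsed as exact 64-bit integers and the first line prints false):

print(9007199254740992 == 9007199254740993)      --> true: 2^53 and 2^53+1 collapse
print(string.format("%.0f", 99999999999999999))  --> 100000000000000000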
There are a couple of different ways to get better precision in Lua. Your best bet is to find the most active such effort and piggy-back off it. In that case, your question has already been answered; check out What is the standard (or best supported) big number (arbitrary precision) library for Lua?
If your Lua distribution has packages for it, the easy answer is lmapm.
If you use LuaJIT in place of Lua, you get access to all C99 built-in types, including long long, which is usually 64 bits.
local ffi = require 'ffi'
-- Needed to parse constants that do not fit in a double:
ffi.cdef 'long long strtoll(const char *restrict str, char **restrict endptr, int base);'
local a = ffi.C.strtoll("99999999999999999", nil, 10)
print(a)
print(a * a)
=> 3803012203950112769LL (assuming the result is truncated to 64 bits)
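Note that LuaJIT also accepts 64-bit integer literals directly via its LL suffix extension, which avoids the strtoll detour; a two-line equivalent of the above:

local b = 99999999999999999LL  -- parsed exactly as a boxed int64_t
print(b * b)                   -- int64 arithmetic wraps modulo 2^64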
I'm dealing with timestamps in Lua showing the number of microseconds since the Epoch (e.g. "1247687475123456").
I would really like to be able to print that number in all its terrible glory, but Lua insists on printing it in scientific notation. I've scoured the available documentation about printing a formatted string, but the only available commands are "Print in scientific notation (%e/%E)" and "Automatically print in scientific notation if the number is very long (%g)". No options seem to be available to print the number in its normal form.
I realize that I could write a function that will take the original number, do some dividing by 10 and print the digits in a loop but that seems like an inelegant hassle. Surely there's some way of doing this that's built in to the language?
> print(string.format("%18.0f",1247687475123456))
1247687475123456
Lua as usually configured uses your platform's double-precision floating-point format to store all numbers. For most desktop platforms today, that will be the 64-bit IEEE 754 format. The conventional wisdom is that integers in the range -1E15 to +1E15 can be safely assumed to be represented exactly.
In any case, the string.format() function passes its arguments through (with some minor tweaks) to the platform's implementation of printf(). The format string understood by printf() includes %e and %E to force "scientific" notation, and %f to force plain decimal notation. In addition, %g and %G choose the shortest notation.
For example:
E:\...>lua
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
> a = 1e17/3
> print(string.format("%f",a))
33333333333333332.000000
> print(string.format("%e",a))
3.333333e+016
> print(string.format("%.0f",a))
33333333333333332
Note that if the value fits within a 32-bit signed integer range, you can also use the %d format. However, results are not well defined if the value exceeds that range, and system timestamps in microseconds will certainly exceed it. (On Lua 5.3 and later, where values can be true 64-bit integers, %d handles the full integer range.)
If 16 decimal digits is not enough, there are several choices available for increased precision.
First, it would not be difficult to package a true 64-bit integer in a userdata along with a suitable set of arithmetic metamethods. This gets discussed occasionally on the Lua mailing list, but I don't recall seeing a completed module released by anyone.
Second, one of the Lua authors has released two modules supporting arbitrary-precision arithmetic: lbc and lmapm. Both can be found on the author's site.
Third, casual searching in Google readily turns up several other math library wrappers.