Bitwise operations, wrong result in Dart2Js

I'm doing ZigZag encoding on 32-bit integers with Dart. This is the source code that I'm using:
int _encodeZigZag(int instance) => (instance << 1) ^ (instance >> 31);
int _decodeZigZag(int instance) => (instance >> 1) ^ (-(instance & 1));
The code works as expected in the DartVM.
But with dart2js, the _decodeZigZag function returns invalid results for negative numbers. For example, -10 is encoded to 19 and should be decoded back to -10, but it is decoded to 4294967286. If I run (instance >> 1) ^ (-(instance & 1)) in the JavaScript console of Chrome, I get the expected result of -10. That tells me JavaScript should be able to run this operation properly with its number model.
But Dart2Js generates the following JavaScript, which looks different from the code I tested in the console:
return ($.JSNumber_methods.$shr(instance, 1) ^ -(instance & 1)) >>> 0;
Why does Dart2Js add an unsigned right shift by 0 to the function? Without the shift, the result would be as expected.
Now I'm wondering: is it a bug in the Dart2Js compiler or the expected result? Is there a way to force Dart2Js to output the right JavaScript code?
Or is my Dart code wrong?
PS: I also tested splitting up the XOR into other operations, but Dart2Js still adds the unsigned right shift:
final a = -(instance & 1);
final b = (instance >> 1);
return (a & -b) | (-a & b);
Results in:
a = -(instance & 1);
b = $.JSNumber_methods.$shr(instance, 1);
return (a & -b | -a & b) >>> 0;

For efficiency reasons dart2js compiles Dart numbers to JS numbers. JS, however, only provides one number type: doubles. Furthermore, bit-operations in JS are always truncated to 32 bits.
In many cases (like cryptography) it is easier to deal with unsigned 32 bits, so dart2js compiles bit-operations so that their result is an unsigned 32 bit number.
Neither choice (signed or unsigned) is perfect. Initially dart2js compiled to signed 32 bits, and was only changed when we tripped over it too frequently. As your code demonstrates, this doesn't remove the problem, it just shifts it to different (hopefully less frequent) use-cases.
Non-compliant number semantics have been a long-standing bug in dart2js, but fixing it will take time and potentially slow down the resulting code. In the short term, Dart developers compiling to JS need to know about this restriction and work around it.
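In the meantime, one possible workaround is to normalize the unsigned 32-bit result yourself. This is a sketch using the dart:core method int.toSigned, which reinterprets the low 32 bits as a two's-complement value; on the VM, where the result is already signed, it should be a no-op:
int _decodeZigZag(int instance) => ((instance >> 1) ^ (-(instance & 1))).toSigned(32);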

Looks like I found equivalent code that outputs the right result. The unit tests pass on both the Dart VM and dart2js, so I will use it for now.
int _decodeZigZag(int instance) => ((instance & 1) == 1 ? -(instance >> 1) - 1 : (instance >> 1));
Dart2Js is not adding a shift this time. I would still be interested in the reason for this behavior.

Related

2^65 modulo 101 incorrect answer

This code checks whether the value a maps the values x = 1 to 100 to unique results using the formula (a^x) % 101:
local function f(a)
    found = {}
    bijective = true
    for x = 1, 100 do
        value = (a^x) % 101
        if found[value] then
            bijective = false
            break
        else
            found[value] = x
        end
    end
    return bijective
end
However, it does not produce the expected result.
It maps 2^65 % 101 to 56, which matches the value produced by 2^12 % 101, so I get a false result. However, the correct value of 2^65 % 101 is 57, and a = 2 should in fact produce all unique values, resulting in a true result.
The error described above is specifically on Lua 5.1, is this just a quirk of Lua's number typing? Is there a way to make this function work correctly in 5.1?
First of all, this is not an issue with Lua's number typing: 2^65, being a (rather small) power of two, can be represented exactly in double precision, since doubles use an exponent-mantissa representation. The mantissa can simply be set to all zeroes (the leading one is implicit) and the exponent set to 65 (plus the offset).
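Both claims are easy to check directly; a quick sketch (the printed formatting varies by Lua version):
print(2^65 % 2)         --> 0: the value itself carries no rounding error
print(2^65 + 1 == 2^65) --> true: the gap between adjacent doubles at this magnitude is 2^13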
I tried this on different Lua versions: PUC Lua 5.1 & 5.2 as well as LuaJIT have the issue; Lua 5.3 (and presumably later versions as well) is fine. Interestingly, using math.fmod(2^65, 101) returns the correct result on the older Lua versions, but 2^65 % 101 does not (it returns 0 instead).
This surprised me, so I dug into the Lua 5.1 sources. This is the implementation of math.fmod:
#include <math.h>
...
static int math_fmod (lua_State *L) {
  lua_pushnumber(L, fmod(luaL_checknumber(L, 1), luaL_checknumber(L, 2)));
  return 1;
}
This also appears to be the only place where fmod from math.h is used. The % operator, on the other hand, is implemented as documented in the reference manual:
#define luai_nummod(a,b) ((a) - floor((a)/(b))*(b))
in src/luaconf.h. You could trivially redefine it as fmod(a,b) to fix your issue. In fact Lua 5.4 does something similar and even provides an elaborate explanation in its sources!
/*
** modulo: defined as 'a - floor(a/b)*b'; the direct computation
** using this definition has several problems with rounding errors,
** so it is better to use 'fmod'. 'fmod' gives the result of
** 'a - trunc(a/b)*b', and therefore must be corrected when
** 'trunc(a/b) ~= floor(a/b)'. That happens when the division has a
** non-integer negative result: non-integer result is equivalent to
** a non-zero remainder 'm'; negative result is equivalent to 'a' and
** 'b' with different signs, or 'm' and 'b' with different signs
** (as the result 'm' of 'fmod' has the same sign of 'a').
*/
#if !defined(luai_nummod)
#define luai_nummod(L,a,b,m) \
  { (void)L; (m) = l_mathop(fmod)(a,b); \
    if (((m) > 0) ? (b) < 0 : ((m) < 0 && (b) > 0)) (m) += (b); }
#endif
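Applied to Lua 5.1, the trivial redefinition mentioned above would look like this in src/luaconf.h (a sketch; as the Lua 5.4 comment explains, plain fmod changes the sign convention of % for operands of mixed sign, so it only matches the reference-manual semantics when the operands have the same sign):
/* original: #define luai_nummod(a,b) ((a) - floor((a)/(b))*(b)) */
#define luai_nummod(a,b) (fmod(a,b))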
Is there a way to make this function work correctly in 5.1?
Yes: The easy way is to use fmod. This works for these particular numbers, since they still fit in doubles exactly, the base being 2 and the exponent moderately small, but it won't work in the general case. The better approach is to leverage modular arithmetic to keep your intermediate results small, never storing numbers significantly larger than 101^2, since (a * b) % c == ((a % c) * (b % c)) % c.
local function f(a)
    local found = {}
    local bijective = true
    local value = 1
    for x = 1, 100 do
        value = (value * a) % 101 -- value is now a^x % 101
        if found[value] then
            bijective = false
            break
        else
            found[value] = x
        end
    end
    return bijective
end
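With this version, f(2) returns true as expected, since every intermediate product stays below 101^2 and no precision is lost:
print(f(2)) --> true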

Dart double "bitwise not" is giving different result (~~-1 != -1)

I am running Dart on DartPad, and I tried running the following code:
import 'dart:math';
void main() {
  print(~0);
  print(~-1);
  print(~~-1);
}
Which resulted in the following outputs
4294967295
0
4294967295
As you can see, inverting the bits of 0 results in the maximum 32-bit value (I was expecting -1, as Dart uses two's complement), and inverting -1 results in 0, which creates the situation where inverting -1 twice does not give me -1 back.
It looks like the sign bit is being ignored when inverting 0. Why is that?
Dart compiled for the web (which includes DartPad) uses JavaScript numbers and number operations.
One of the consequences of that is that bitwise operations (~, &, |, ^, <<, >> and >>> on int) only give 32-bit results, because that's what the corresponding JavaScript operations do.
For historical reasons, Dart chooses to give unsigned 32-bit results, not two's complement numbers. So ~-1 is 0 and ~0 is the unsigned 0xFFFFFFFF, not -1.
In short, that's just how it is.
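If you need the two's-complement view when compiled for the web, one workaround (a sketch using the dart:core method int.toSigned, which reinterprets the low 32 bits as a signed value) is:
void main() {
  print((~0).toSigned(32));  // -1 on both the VM and the web
  print((~-1).toSigned(32)); // 0
}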

D: (Win32) lua_rawgeti() pushes nil

I am investigating a strange problem: on Windows, lua_rawgeti() does not push back the value to which I created a reference, but nil instead. Code:
lua_State *L = luaL_newstate();
luaL_requiref(L, "_G", luaopen_base, 1);
lua_pop(L, 1);
lua_getglobal(L, toStringz("_G"));        // push _G
int t1 = lua_type(L, -1);                 // type of _G (expect LUA_TTABLE)
auto r = luaL_ref(L, LUA_REGISTRYINDEX);  // pop _G, create a reference to it
lua_rawgeti(L, LUA_REGISTRYINDEX, r);     // push the referenced value back
int t2 = lua_type(L, -1);                 // type of the retrieved value
lua_close(L);
writefln("Ref: %d, types: %d, %d", r, t1, t2);
assert(r != LUA_REFNIL);
assert((t1 != LUA_TNIL) && (t1 == t2));
Full source and build bat: https://github.com/mkoskim/games/tree/master/tests/luaref
Compile & run:
rdmd -I<path>/DerelictLua/source/ -I<path>/DerelictUtil/source/ testref.d
64-bit Linux (_G is a table, and rawgeti places a table on the stack):
$ build.bat
Ref: 3, types: 5, 5
32-bit Windows (_G is a table, but rawgeti places nil on the stack):
$ build.bat
Ref: 3, types: 5, 0
<assertion fail>
So either luaL_ref() fails to store the reference to _G correctly, or lua_rawgeti() fails to retrieve _G correctly.
Update: I compiled the Lua library from sources and added a printf() to lua_rawgeti() (lapi.c:660) to print out the reference:
printf("lua_rawgeti(%d)\n", n);
I also added writeln() to test.d to tell me at which point we call lua_rawgeti(). It shows that D sends the reference number correctly:
lua_rawgeti(2)
lua_rawgeti(0)
Dereferencing:
lua_rawgeti(3)
Ref: 3, types: 5, 0
On Windows, I use:
DMD 2.086.0 (32-bit Windows)
lua53.dll (32-bit Windows, I have tried both lua-5.3.4 and lua-5.3.5), from here: http://luabinaries.sourceforge.net/download.html
DerelictLua newest version (commit 5549c1a)
DerelictUtil newest version (commit 8dda339)
Questions:
Is there a bug in the code that I just don't see? Are there any known quirks in using 32-bit D and Lua on Windows? There can't be any big problems with my compiler and libraries, because they compile and link together without errors, and Lua calls mostly work (e.g. opening a Lua state, pushing _G to the stack and such).
I was not able to find anything related when googling, so I am pretty sure there is something wrong in my setup (something is mismatching). It is hard for me to suspect problems in the Lua libraries, because they have been stable for quite a long time (even the 32-bit versions).
I would like to know if people have used 64-bit Windows DMD + Lua successfully. Of course, I would also appreciate hearing whether people use 32-bit Windows DMD + Lua successfully.
I am a bit out of ideas about where to look for a solution. Any ideas what to try next?
Thanks in advance!
I got an answer from the Lua mailing list: http://lua-users.org/lists/lua-l/2019-05/msg00076.html
I suspect this is a bug in DerelictLua.
Lua defines lua_rawgeti thus:
int lua_rawgeti (lua_State *L, int index, lua_Integer n);
While DerelictLua defines its binding thus:
alias da_lua_rawgeti = int function(lua_State*, int, int);
I fixed that and created a pull request to DerelictLua.
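The mismatch matters because in Lua 5.3 lua_Integer defaults to a 64-bit integer (long long), even in 32-bit builds, so the binding passed a 32-bit int where the C side expected 64 bits; presumably that is also why the 64-bit Linux build happened to work. The corrected alias would then be (a sketch of the fix described above):
alias da_lua_rawgeti = int function(lua_State*, int, lua_Integer);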

MQL4: can't work out how to get decimal value of 1/6

Can't work out why 1/6 keeps returning 0, nor how to resolve it.
Print(1/6);
Print(DoubleToString((1/6),8));
Prints 0.00000000
You need at least one double in the expression. Try: Print(1/6.0);
Why:
The code above presents a pair of integer constants (which happen to be 1 and 6 in the posted case).
The MQL4 language is a compiled language: after #define macro expansion, #include processing and similar pre-compilation steps have taken place, the compiler reads your code in the compilation phase and lexically analyses what is intended to be done.
Having seen the expression 1 / 6 in the code, the compiler knows that nothing can change this part of the code at runtime, so it reduces this static-value expression at compile time. Since both operands are integer literals, integer division applies, and 1 / 6 truncates to 0.
The compiler is hardwired to put a straight 0 into the compiled code-execution file (an .EX4 file), having decided from static code analysis not to waste a single nanosecond at runtime re-evaluating a compile-time-known value.
How to resolve it:
double numerator = 1,
       divisor   = 6;
Print( numerator / divisor );
Print( DoubleToString( numerator / divisor, 8 ) );
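For completeness, the literal one-liner from the top of this answer behaves the same way, since the 6.0 literal forces a floating-point division already at compile time:
Print( DoubleToString( 1 / 6.0, 8 ) ); // prints 0.16666667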

Modulo Power function in J

If I want to compute a^b mod c then there is an efficient way of doing this, without computing a^b in full.
However, when programming, if I write f g x then g(x) is computed regardless of f.
J provides for composing f and g in special cases, and the modulo power function is one of them. For instance, the following runs extremely quickly.
1000&|@(2&^) 10000000x
This is because the 'atop' conjunction @ tells the language to compose the functions if possible. If I remove it, the computation is unbearably slow.
If, however, I want to work with x^x, then ^~ no longer works and I get limit errors for large values. Binding those large values, however, does work.
So
999&|@(100333454&^) 100333454x
executes nice and fast but
999&|@^~ 100333454x
gives me a limit error - the RHS is too big.
Am I right in thinking that in this case, J is not using an efficient power-modulo algorithm?
Per the special code page:
m&|@^ dyad avoids exponentiation for integer arguments
m&|@(n&^) monad avoids exponentiation for integer arguments
The case for ^~ is not supported by special code.
https://github.com/Pascal-J/BN-openssl-bindings-for-J
includes bindings for OpenSSL's BN library (OpenSSL is distributed with J on Windows and pre-installed on all other compatible systems), including modexp. The x ^ x case will be especially efficient due to fewer parameters being converted from J to "C" (BN).
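Given the supported patterns above, one way to keep x^x on the fast path without external bindings is to build the bond at runtime inside an explicit verb, as the question's binding experiment suggests (a sketch with a hypothetical helper name; whether the special code still fires this way may depend on the J version):
   modpow =: 4 : '999&|@(x&^) y'   NB. bind the left argument, then apply the m&|@(n&^) form
   100333454x modpow 100333454x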
