MQL4: can't work out how to get decimal value of 1/6

Can't work out why 1/6 keeps returning 0, or how to resolve it.
Print(1/6);
Print(DoubleToString((1/6),8));
Prints 0.00000000

You need at least one double in the expression. Try: Print(1/6.0);

Why:
The code above contains a pair of integer constants (1 and 6 in the posted case).
MQL4 is a compiled language: after #define macro expansion, #include processing and similar pre-compilation steps have taken place, the compiler reads your code and analyses what it is intended to do.
Having seen the expression 1 / 6 in the code, the compiler knows that nothing can change this part of the code at runtime, so it reduces the static expression at compile time. Since both operands are integer literals, 1 / 6 is integer division, and its value is 0.
The compiler therefore puts a straight 0 into the compiled executable (the .EX4 file); it sees no reason to waste a single nanosecond at runtime re-evaluating a value that is already known at compile time.
How to resolve it:
double numerator = 1,
       divisor   = 6;
Print( numerator / divisor );
Print( DoubleToString( numerator / divisor, 8 ) );
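Any variant that brings a double into the expression before the division happens should work equally well; a few equivalent sketches:
Print( 1 / 6.0 );         // double literal as the divisor
Print( 1.0 / 6 );         // double literal as the dividend
Print( (double) 1 / 6 );  // explicit cast applied before the division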

Related

2^65 modulo 101 incorrect answer

This code checks whether the value a maps the values 1 to 100 to unique results using the formula (a^x) % 101:
local function f(a)
    found = {}
    bijective = true
    for x = 1, 100 do
        value = (a^x) % 101
        if found[value] then
            bijective = false
            break
        else
            found[value] = x
        end
    end
    return bijective
end
However, it does not produce the expected result.
It maps 2^65 % 101 to 56, which matches the value produced by 2^12 % 101, so I get a false result. The correct value for 2^65 % 101 is 57, and 2 should in fact produce all unique values, giving a true result.
The error described above occurs specifically on Lua 5.1. Is this just a quirk of Lua's number typing? Is there a way to make this function work correctly in 5.1?
First of all, this is not an issue with Lua's number typing: 2^65, being a (rather small) power of two, can be represented exactly in double precision, since doubles use an exponent-mantissa representation. The mantissa can simply be set to all zeroes (the leading one is implicit) and the exponent set to 65 (plus the offset).
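You can check the exactness claim directly from Lua (a quick sketch; any Lua whose numbers are doubles will do):
print(string.format("%.0f", 2^65))  --> 36893488147419103232, the exact value
print(2^65 == 2^65 + 1)             --> true: 1 is below the double spacing (2^13) at this magnitude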
I tried this on different Lua versions: PUC Lua 5.1 and 5.2 as well as LuaJIT have the issue; Lua 5.3 (and presumably later versions) are fine. Interestingly, using math.fmod(2^65, 101) returns the correct result on the older Lua versions, but 2^65 % 101 does not (it returns 0 instead).
This surprised me, so I dug into the Lua 5.1 sources. This is the implementation of math.fmod:
#include <math.h>
...
static int math_fmod (lua_State *L) {
  lua_pushnumber(L, fmod(luaL_checknumber(L, 1), luaL_checknumber(L, 2)));
  return 1;
}
This also appears to be the only place where fmod from math.h is used. The % operator, on the other hand, is implemented as documented in the reference manual:
#define luai_nummod(a,b) ((a) - floor((a)/(b))*(b))
in src/luaconf.h. You could trivially redefine it as fmod(a,b) to fix your issue. In fact Lua 5.4 does something similar and even provides an elaborate explanation in its sources!
/*
** modulo: defined as 'a - floor(a/b)*b'; the direct computation
** using this definition has several problems with rounding errors,
** so it is better to use 'fmod'. 'fmod' gives the result of
** 'a - trunc(a/b)*b', and therefore must be corrected when
** 'trunc(a/b) ~= floor(a/b)'. That happens when the division has a
** non-integer negative result: non-integer result is equivalent to
** a non-zero remainder 'm'; negative result is equivalent to 'a' and
** 'b' with different signs, or 'm' and 'b' with different signs
** (as the result 'm' of 'fmod' has the same sign of 'a').
*/
#if !defined(luai_nummod)
#define luai_nummod(L,a,b,m) \
{ (void)L; (m) = l_mathop(fmod)(a,b); \
if (((m) > 0) ? (b) < 0 : ((m) < 0 && (b) > 0)) (m) += (b); }
#endif
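The effect is easy to reproduce from plain Lua as well (a quick sketch under Lua 5.1, where every number is a double):
local a, b = 2^65, 101
print(a % b)                      --> 0   (luai_nummod: rounding error in floor(a/b)*b)
print(a - math.floor(a / b) * b)  --> 0   (the same formula written out by hand)
print(math.fmod(a, b))            --> 57  (correct)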
Is there a way to make this function work correctly in 5.1?
Yes: the easy way is to use fmod. That may work for these particular numbers, since they still fit in doubles thanks to the base being 2 and the exponent being moderately small, but it won't work in the general case. The better approach is to use modular arithmetic to keep your intermediate results small, never storing numbers significantly larger than 101^2, since (a * b) % c == ((a % c) * (b % c)) % c.
local function f(a)
    local found = {}
    local bijective = true
    local value = 1
    for _ = 1, 100 do
        value = (value * a) % 101 -- a^x % 101
        if found[value] then
            bijective = false
            break
        else
            found[value] = true
        end
    end
    return bijective
end

Dart double "bitwise not" is giving different result (~~-1 != -1)

So I am running Dart on DartPad and I tried running the following code:
import 'dart:math';
void main() {
  print(~0);
  print(~-1);
  print(~~-1);
}
Which resulted in the following outputs
4294967295
0
4294967295
As you can see, inverting the bits of 0 results in the max number (I was expecting -1, as Dart uses two's complement), and inverting -1 results in 0, which creates the situation where inverting -1 twice does not give me -1 back.
Looks like it's ignoring the first bit when inverting 0; why is that?
Dart compiled for the web (which includes DartPad) uses JavaScript numbers and number operations.
One of the consequences of that is that bitwise operations (~, &, |, ^, <<, >> and >>> on int) only give 32-bit results, because that's what the corresponding JavaScript operations do.
For historical reasons, Dart chooses to give unsigned 32-bit results, not two's complement numbers. So ~-1 is 0 and ~0 is the unsigned 0xFFFFFFFF, not -1.
In short, that's just how it is.
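If you want the VM-style two's-complement view back when running on the web, one option is to reinterpret the 32-bit result yourself, for example with int.toSigned (a small sketch):
void main() {
  // toSigned(32) reinterprets the low 32 bits as a signed value,
  // so the web output matches the VM again.
  print((~0).toSigned(32));   // -1
  print((~-1).toSigned(32));  // 0
  print((~~-1).toSigned(32)); // -1
}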

Why does Rust reuse memory for the same value

Example code:
fn main() {
    let mut y = &5; // 1
    println!("{:p}", y);
    {
        let x = &2; // 2
        println!("{:p}", x);
        y = x;
    }
    y = &3; // 3
    println!("{:p}", y);
}
If the third assignment contains &3, the code outputs:
0x558e7da926a0
0x558e7da926a4
0x558e7da926a8
If the third assignment contains &2 (the same value as the second assignment), it outputs:
0x558e7da926a0
0x558e7da926a4
0x558e7da926a4
If the third assignment contains &5 (the same value as the first assignment), it outputs:
0x558e7da926a0
0x558e7da926a4
0x558e7da926a0
Why does Rust not free the memory but reuse it when the assigned value is the same, yet allocate a new block of memory otherwise?
Two occurrences of the same literal number are indistinguishable. You cannot expect the addresses of two literals to be identical, and neither can you expect them to be different.
This allows the compiler (though it is in fact free to do otherwise) to emit a single 5 constant in the executable and have every &5 refer to it. Such constants may also be promoted to a static lifetime, in which case they are never allocated or deallocated during program execution; they simply always exist.
An optimizing compiler has lots of tricks for determining whether a variable can be given a constant value. Your findings are consistent with this: there is no need to emit duplicate constants when they are not needed.
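To see the static-promotion point in isolation, here is a minimal sketch (the two addresses may or may not coincide; neither outcome is guaranteed):
fn main() {
    // A literal behind & can be promoted to a 'static constant, so it is never
    // allocated or freed at runtime; the compiler may merge equal constants.
    let a: &'static i32 = &5;
    let b: &'static i32 = &5;
    println!("{:p} {:p}", a, b);
}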

Why does the F# compiler get into a twist with seq{0L..-5L..-10L}?

I'm having a bit of trouble declaring a descending sequence of int64.
What I want is this:
seq{0L..-5L..-10L};;
However, I get an error:
seq{0L..-5L..-10L};;
---^^^^^^^^^^^^^^^
stdin(5,4): error FS0739: Invalid object, sequence or record expression
Interestingly, it works with plain int:
> seq{0..-5..-10};;
val it : seq<int> = seq [0; -5; -10]
Even more interestingly, if I put spaces between .., it starts working with int64 too:
> seq{0L .. -5L .. -10L};;
val it : seq<int64> = seq [0L; -5L; -10L]
Can someone explain why the compiler gets into a twist with seq{0L..-5L..-10L}?
I agree that this is a bit of odd behavior. It is generally recommended (although not strictly required by the specification) to write spaces around .., and it works correctly in that case. So I'd recommend using:
seq { 0 .. -5 .. -10 }
seq { 0L .. -5L .. -10L }
Why is this behaving differently for int and int64? You may notice that when you write 1..-2 and 1L..-2, Visual Studio colorizes the text differently (in the first case .. has the same color as the numbers; in the other case it has the same color as a .. written with spaces).
The problem is that when the compiler sees 1., it may mean a floating-point value (1.0) or it may be the start of 1.., so this case is handled specially. For 1L. there is no such ambiguity: 1L. has to be the beginning of 1L..
So, if you write 1..-5..-10, the compiler uses the special handling and generates a sequence. If you write 1L..-5L..-10L, the compiler instead parses ..- as a unary operator applied to 5L. Writing spaces resolves the ambiguity between the unary operator and .. followed by a negative number.
For reference, a screenshot from my Visual Studio shows 10.. in green but the .. on the second line in yellow; not a particularly noticeable difference, but they are different :-)

Bitwise operations, wrong result in Dart2Js

I'm doing ZigZag encoding on 32-bit integers with Dart. This is the source code that I'm using:
int _encodeZigZag(int instance) => (instance << 1) ^ (instance >> 31);
int _decodeZigZag(int instance) => (instance >> 1) ^ (-(instance & 1));
The code works as expected in the DartVM.
But in dart2js the _decodeZigZag function returns invalid results if I pass in negative numbers. For example, -10 is encoded to 19 and should be decoded back to -10, but it is decoded to 4294967286. If I run (instance >> 1) ^ (-(instance & 1)) in the JavaScript console of Chrome, I get the expected result of -10. That tells me that JavaScript should be able to run this operation properly with its number model.
But Dart2Js generates the following JavaScript, which looks different from the code I tested in the console:
return ($.JSNumber_methods.$shr(instance, 1) ^ -(instance & 1)) >>> 0;
Why does Dart2Js add an unsigned right shift by 0 to the function? Without the shift, the result would be as expected.
Now I'm wondering: is this a bug in the Dart2Js compiler or the expected result? Is there a way to force Dart2Js to output the right JavaScript code?
Or is my Dart code wrong?
PS: I also tested splitting the XOR up into other operations, but Dart2Js still adds the right shift:
final a = -(instance & 1);
final b = (instance >> 1);
return (a & -b) | (-a & b);
Results in:
a = -(instance & 1);
b = $.JSNumber_methods.$shr(instance, 1);
return (a & -b | -a & b) >>> 0;
For efficiency reasons dart2js compiles Dart numbers to JS numbers. JS, however, only provides one number type: doubles. Furthermore bit-operations in JS are always truncated to 32 bits.
In many cases (like cryptography) it is easier to deal with unsigned 32 bits, so dart2js compiles bit-operations so that their result is an unsigned 32 bit number.
Neither choice (signed or unsigned) is perfect. Initially dart2js compiled to signed 32 bits, and this was only changed when we tripped over it too frequently. As your code demonstrates, the change doesn't remove the problem; it just shifts it to different (hopefully less frequent) use cases.
Non-compliant number semantics have been a long-standing bug in dart2js, but fixing it will take time and potentially slow down the resulting code. For the short-term future, Dart developers compiling to JS need to know about this restriction and work around it.
Looks like I found equivalent code that outputs the right result. The unit tests pass for both the Dart VM and dart2js, so I will use it for now.
int _decodeZigZag(int instance) => ((instance & 1) == 1 ? -(instance >> 1) - 1 : (instance >> 1));
Dart2Js does not add a shift this time. I would still be interested in the reason for this behavior.
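For what it's worth, a similar normalization with int.toSigned(32), which reinterprets the unsigned 32-bit result as two's complement, would likely let the original XOR form stand (a sketch based on the reasoning above):
// Fold the unsigned 32-bit result that dart2js produces back into a signed value.
int _decodeZigZag(int instance) =>
    ((instance >> 1) ^ (-(instance & 1))).toSigned(32);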
