How to define -1 as a uint64 in a match clause? - F#

let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?

> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"

Let me set aside the fact that you can't really represent a negative value with a data type that can only store positive values (and zero, of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored with all bits set.
So basically, I will assume you want a way to represent -1 as a bit pattern that is compatible with -1 as a signed value.
That value would be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F#, I don't know.

I don't know F# at all, but if it's anything like other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent non-negative values.

To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, one bit is used to say whether the number is negative, and only the other 63 contribute to the magnitude. If the number is unsigned, all 64 bits are used as part of the number itself, and the number is always non-negative (i.e., positive or 0).
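To make the two interpretations of the same 64 bits concrete, here is a small F# sketch (the standard int64 conversion function is unchecked, so it simply reinterprets the bit pattern):

let bits = 0xFFFFFFFFFFFFFFFFUL   // all 64 bits set
printfn "%d" bits                 // 18446744073709551615 (System.UInt64.MaxValue)
printfn "%d" (int64 bits)         // -1 (the unchecked conversion reinterprets the bits)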

Well, you could assign it -1 and, on most architectures, the two's-complement bit pattern would be stored there. The signed/unsigned distinction really only exists for type checking; there is no negative sign in hardware.
I have no idea whether the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
    uint64_t x = -1;                 // -1 converts to UINT64_MAX
    printf("0x%" PRIx64 "\n", x);    // 0xffffffffffffffff (plain %x would be wrong for a 64-bit value)
    return 0;
}

If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
I don't have the F# tools installed at the moment, so I cannot verify this; a sketch of the resulting match follows.
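A sketch of what that match could look like (describe is just an illustrative name; both the hex literal and System.UInt64.MaxValue name the same value, as the interactive session above shows):

let describe (x : uint64) =
    match x with
    | 0xFFFFFFFFFFFFFFFFUL -> "all bits set, i.e. -1 as a signed value"
    | _ -> "something else"

describe System.UInt64.MaxValue |> printfn "%s"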

If you want to go with a signed int instead, use an int64 literal:
-1L : int64
But you can't match a negative number against a uint, as others have stated; a minimal sketch of the signed variant follows.
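A minimal illustration (describeSigned is just an illustrative name; negative literals are legal patterns for signed types):

let describeSigned (x : int64) =
    match x with
    | -1L -> "minus one"
    | _ -> "something else"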

Related

MQL4 StringToDouble alters the value of the variable?

MQL4 documentation states that the value limits for double type variables are:
"Minimal Positive Value" = 2.2250738585072014e-308
"Maximum Value" = 1.7976931348623158e+308
See https://docs.mql4.com/basis/types/double
Why does StringToDouble() alter the value converted?
Am I doing one thing while expecting a different result?
void OnStart() {
    string s1 = "5554535251504900090807060504030201";
    double d1 = StringToDouble(s1);
    string s2 = DoubleToString(d1);
    Print("s2<",s2,">");
    printf("%099.8f",d1);
    Print("s1<",s1,">");
    return;
}
Here's what I get when I run that code:
s1<5554535251504900090807060504030201>
d1<000000000000000000000000000000000000000000000000000000005554535251504899684469244159852544.00000000>
s2<5554535251504899684469244159852544>
5554535251504900090807060504030201 amounts to roughly 5.5545E+33.
Obviously, that doesn't even come remotely close to the 1.7976931348623158e+308 limit.
What am I missing here?
Q : "What am I missing here?"
The documented facts.
MQL4 uses no more than 4-bytes to store int.
MQL4 uses no more than 8-bytes to store double.
IEEE-754 standard defines the rest - how many bits from those 64 are reserved for: exponent ( -308, 0, +308 )
sign ( +, - ) and
the rest, for normalised form of the mantissa : 0.???????...????
Argument, that an actual number is far from either "edge" of < DBL_MIN, DBL_MAX > does explain nothing about the shallow-ness of the exact number reduced-precision representation ( see DBL_EPSILON ~ 2E-16 or DBL_DIG ~ 15-significant digits, or DBL_MANT_DIG ~ 53-bits, left from a 64-bit ( 8-Byte ) storage-cell for mantissa ).
There are many numbers, that simply cannot be stored exactly, using IEEE-754 floating point number representation.
Tons of literature explain this, so feel free to dig deeper, or may use another tools, that rely on infinite-(unlimited)-precision number representation, should your use-case requires that.
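To see the digit loss directly, here is a short F# sketch (F# uses the same IEEE-754 double as MQL4; the printed value matches the question's own output, assuming a recent .NET runtime that formats doubles exactly):

let d = 5554535251504900090807060504030201.0   // 34 significant digits requested
printfn "%.0f" d   // 5554535251504899684469244159852544 -- only ~15-17 digits survive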

single, double and precision

I know that storing a single value (or a double) cannot be perfectly precise, so storing, for example, 125.12 can result in 125.1200074788. Now, in Delphi there are some useful functions like SameValue or CompareValue that take an epsilon as a parameter and say that 125.1200074788 and, for example, 125.1200087952 are equal.
But I often see code like: if aSingleVar = 0 then ... and as far as I can see this always works. Why? Why does storing, for example, 0 in a single variable keep the exact value?
Only values of the form m*2^e, where m and e are integers, can be stored in a floating point variable (and not all of them; it depends on the precision). 0 has this form, and 125.12 does not, as it equals 3128/25, and 1/25 is not an integer power of 2.
Comparing 125.12 to a single (or double) precision variable will most probably always return False, because a literal 125.12 will be treated as an extended precision number, and no single (or double) precision number has exactly that value.
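The same behaviour is easy to reproduce; a rough F# sketch (float32 playing the role of Delphi's Single, float of Double):

let s = 125.12f                   // the stored single is not exactly 125.12
printfn "%b" (float s = 125.12)   // false: widening the single does not yield the double 125.12
let zero = 0.0f                   // 0 = 0 * 2^0 is exactly representable
printfn "%b" (zero = 0.0f)        // true: comparing against 0 is reliable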
Looks like a good use for the BigDecimals unit by Rudy Velthuis. Millions of decimal places of accuracy and precision.

Is it a bug when I pass zero to the function luaO_ceillog2?

I'm reading the Lua source code, version 5.3, and I found that the function int luaO_ceillog2 (unsigned int x) in the lobject.c file doesn't treat 0 as a special case. When 0 is passed to this function, it returns 32. Is this a bug? I was confused.
luaO_ceillog2 is a function that's only used internally. Its name implies that it calculates ceil (the smallest integer not less than its argument) of log2 of the argument.
Mathematically, log_b x is only defined for positive x. So 0 is not a valid argument for this function, and I don't think this counts as a bug.
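For intuition, here is a rough F# sketch of a ceiling-log2 with the same convention (this is not Lua's table-driven implementation, and ceilLog2 is just an illustrative name):

let ceilLog2 (x : uint32) =
    // Like luaO_ceillog2, this assumes x > 0: for x = 0u the decrement
    // wraps around to 0xFFFFFFFFu and the loop reports 32.
    let mutable v = x - 1u
    let mutable l = 0
    while v > 0u do
        l <- l + 1
        v <- v >>> 1
    l

printfn "%d" (ceilLog2 5u)   // 3, since 2^3 = 8 is the smallest power of two >= 5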

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looks like the following. I got some feedback around a bug, when to my horror, I put a debugger on it and found that the MAX between -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method to iOS for performing MIN/MAX where the compiler would have warned about mismatching types? Seems like this SHOULD have been a compile time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
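The wrap-around itself is easy to demonstrate in any language with unsigned arithmetic; for instance, this small F# snippet (unsigned subtraction wraps there by default, just as NSUInteger arithmetic does):

let len = 5u                // stands in for test.length with a 32-bit NSUInteger
printfn "%d" (len - 10u)    // 4294967291, not -5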
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less then u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32 bit platform (iOS), but in a 64 bit Mac app it would work just fine! (explanation: long is a 64 bit type there, so it can represent all 32 bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place the casts in the right places if necessary, and you shouldn't need to rewrite the MAX or MIN macros.

Is there a substitute for Pow in BigInteger in F#?

I was using the Pow function of the BigInteger class in F# when my compiler told me:
This construct is deprecated. This member has been removed to ensure that this
type is binary compatible with the .NET 4.0 type System.Numerics.BigInteger
Fair enough, I guess, but I didn't find a replacement immediately.
Is there one? Should we just write our own Pow functions? And (how) will it be replaced in .NET 4.0?
You can use the pown function
let result = pown 42I 42
pown works on any type that 'understands' multiplication and 'one'.
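For example (quick illustrations of that genericity):

printfn "%A" (pown 2.0 10)   // 1024.0, a float
printfn "%A" (pown 3I 4)     // 81, a System.Numerics.BigInteger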
If you look at F# from the perspective of its OCaml roots, the OCaml Num module has power_num. Since the OCaml num type holds arbitrary-precision rational numbers, it can handle numbers of any size; it is not limited by CPU register width, because it can do the math symbolically. Also, since num is defined as
type num =
| Int of int
| Big_int of Big_int.big_int
| Ratio of Ratio.ratio
it can handle very small numbers without loss of precision, because of the Ratio type.
Since F# does not have the num type, Jack created the FSharp.Compatibility.OCaml module, which has num.fs and is available via NuGet.
So you can get all the precision you want using this, and the num functions can handle negative exponents.
