I notice that Rascal supports big integers, but I cannot find constants for infinity. Do they exist? If not, I would suggest adding them, since they are sometimes quite useful. Currently, my workaround is to define something like int pInf = 1024, but that may fail in extreme cases.
Rational numbers in Rascal actually support infinity (in the form of a zero denominator), but that's more a side effect of the implementation than a real design choice, so you may not want to count on it. I also can't guarantee that all corner cases are handled correctly.
For example,
rascal>1r0
rat: 1r0
rascal>1r0*2
rat: 1r0
rascal>-1r0
rat: -1r0
rascal>-1r0*(-2)
rat: 1r0
rascal>1 / 1r0
rat: 0r
rascal>12345678901234567890 > 1r0
bool: false
rascal>25r0
rat: 1r0
rascal>25 / 0
|stdin:///|(5,1,<1,5>,<1,6>): ArithmeticException("/ by zero")
rascal>25 / 0r
rat: 1r0
No support for infinity in Rascal.
The "Rascal" way of dealing with such variability is to introduce an algebraic data type, as in:
data Arity = inf() | fixed(int size)
Then you can use pattern matching, or the is operator, to deal with the differences:
if (arity is inf) {...}
int foo(fixed(int size)) = ...;
int foo(inf()) = ...;
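For example, a small sketch (maxArity is an illustrative name, not a library function) of a maximum over Arity that treats inf() as larger than any fixed size:
// any combination involving inf() yields inf(); otherwise take the larger size
Arity maxArity(fixed(int a), fixed(int b)) = fixed(a > b ? a : b);
default Arity maxArity(Arity _, Arity _) = inf();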
I have picked two random double numbers:
double a = 7918.52;
double b = 5000.00;
I would expect to get 2918.52 from a - b.
Well, it gives me a result of 2918.5200000000004, which seems odd.
print(a - b); // -> 2918.5200000000004
But if I change double a to 7918.54, I will get the expected result of 2918.54.
Can someone explain to me why some double values result in unexpected rounding issues and others do not?
The reason for this is floating-point arithmetic: as far as I am aware, Dart uses the IEEE 754 standard to represent doubles.
This happens in all languages that use floating-point arithmetic. You can read through similar questions regarding other programming languages, e.g. the general question about floating-point arithmetic in modern programming languages.
creativecreatorormaybenot is right; this issue comes from the Dart language itself.
The best solution I came up with was to manually round the result to the precision you expect:
import 'dart:math';

double subtract(double a, double b, int precision) {
  return (a - b).precision(precision);
}

// Dart 2.7+ extension, so precision() can be called as a method on double.
extension Precision on double {
  double precision(int fractionDigits) {
    final num mod = pow(10.0, fractionDigits);
    return (this * mod).round().toDouble() / mod;
  }
}
A simple solution would be, depending on how many decimals you expect (let's assume 2; you can make it more generic):
((a * 100).toInt() - (b * 100).toInt()) / 100
Note that toInt() truncates toward zero rather than rounding, so round() may be safer for values that are not exactly representable.
I'm working on something for an image processing task (it's not greatly relevant here), and I stumbled across behaviour of F#'s Option type regarding greater-than (>) comparisons that surprised me. I couldn't find anything that directly explained what I should expect (more on that below) on Stack Overflow, in the F# docs, or on the wider web.
The specific part I am looking at looks something like:
let sort3Elems (arr: byte option []) =
if arr.[0] > arr.[1] then swap &arr.[0] &arr.[1]
if arr.[1] > arr.[2] then swap &arr.[1] &arr.[2]
if arr.[0] > arr.[1] then swap &arr.[0] &arr.[1]
where I will be passing in arrays of four byte options (if you're wondering why this looks weird and is super-unfunctional, right now I'm deliberately trying to re-implement the non-functional-language implementation of an algorithm in a textbook). I was expecting this to cause a compiler error, where it would complain that the options couldn't be directly compared. To my surprise, this compiled fine. Intrigued, I tested it out in F# Interactive, the results of which look something like the below:
> let arr: byte option [] = Array.zeroCreate 4;;
val arr : byte option [] = [|None; None; None; None|]
> arr.[0] <- Some(127uy);;
val it : unit = ()
> arr.[2] <- Some(55uy);;
val it : unit = ()
> arr.[0] > arr.[2];;
val it : bool = true
> arr.[0] < arr.[2];;
val it : bool = false
> arr.[0] < arr.[1];;
val it : bool = false
> arr.[0] > arr.[1];;
val it : bool = true
> arr.[2] > arr.[1];;
val it : bool = true
> arr.[3] > arr.[1];;
val it : bool = false
> arr.[3] < arr.[1];;
val it : bool = false
It seems to me that essentially the comparison operators must always return true (false) when asking whether a Some is greater (less) than a None, comparing two Nones with > or < always returns false, and two Somes of the same contained type compare the contained values (assuming they can be compared, I imagine). This makes sense, though I was surprised.
Wanting to confirm this, I attempted to track something down that would explain the behaviour I should expect, but I couldn't find anything that addressed the point. The Option page in the MS F# Guide docs doesn't make any mention of it, and I couldn't find anything in places like F# for fun and profit. I couldn't even manage to find a page about Option anywhere in the MS API docs... Looking at the source for Option in the F# GitHub repo doesn't tell me anything. The best I could find was a blog post by Don Syme from years ago, which didn't actually answer my question. There were a smattering of Stack Overflow questions that discussed topics to do with the comparison operators or Option types, but I didn't find anything that dealt with the combination of the two.
So, my question is, does performing greater than/less than comparisons on the Option type return the results I have speculated about above? I'm guessing that this is reasonably common knowledge amongst F# programmers, but it was news to me. As a sub-/related question, does anyone know where I could/should have looked to for more information? Thanks.
The F# compiler automatically generates comparison for discriminated unions and record types. Since option is just a discriminated union, this means that you get automatic comparison for option values too. I'm not sure if there is a good web page documenting this, but you can find a description in section 8.15.4 of the F# specification:
8.15.4 Behavior of the Generated CompareTo Implementations

For a type T, the behavior of the generated System.IComparable.CompareTo implementation is as follows:

- Convert the y argument to type T. If the conversion fails, raise the InvalidCastException.
- If T is a reference type and y is null, return 1.
- If T is a struct or record type, invoke FSharp.Core.Operators.compare on each corresponding pair of fields of x and y in declaration order, and return the first non-zero result.
- If T is a union type, invoke FSharp.Core.Operators.compare first on the index of the union cases for the two values, and then on each corresponding field pair of x and y for the data carried by the union case. Return the first non-zero result.
As documented in the last case, the comparison for option first compares the union cases. None has a smaller index than Some, so a None value will always be smaller than any Some value. If the cases match, then None = None, and Some n and Some m are compared based on n and m.
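To illustrate, here are a few expressions you can paste into F# Interactive (values chosen to match the transcript above):
compare (None: byte option) (Some 127uy);;  // -1: None's case index is smaller
compare (Some 127uy) (Some 55uy);;          // 1: cases match, so payloads are compared
(None: byte option) > None;;                // false: None equals None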
I have a double value that I need to access from inside a background thread. I would like to use something like AtomicCmpExchange, but it does not seem to work with double. Is there any equivalent that I can use with double? I would like to avoid TMonitor.Enter / TMonitor.Exit, as I need this to be as fast as possible. I am on Android/iOS, so under FireMonkey.
You could type cast the double values into UInt64 values:
PUInt64(@dOld)^ := AtomicCmpExchange(PUInt64(@d)^, PUInt64(@dNew)^, PUInt64(@dComp)^);
Note that you need to align the variables properly, according to the platform's specifications.
As @David pointed out, comparing double values is not the same as comparing UInt64 values. There are some specific double values that behave out of the ordinary:
A NaN is normally (as specified in IEEE 754) detected by comparing a value with itself:
IsNaN := d <> d;
Footnote: Delphi's default exception handler is triggered when a NaN is compared, but other compilers may behave differently. In Delphi there is an IsNaN() function to use instead.
Likewise, the value zero can be both positive and negative, each with a special meaning. Comparing double 0 with double -0 returns true, but comparing their memory footprints returns false.
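For example, a minimal sketch along those lines (AtomicCmpExchangeDouble is an illustrative name, assuming Target is 64-bit aligned and your Delphi version provides the 64-bit AtomicCmpExchange overload):
// Bit-casts the doubles to Int64 so the CAS compares raw bit patterns
// (which is what a CAS loop wants), not floating-point equality.
function AtomicCmpExchangeDouble(var Target: Double; NewValue, Comparand: Double): Double;
var
  Old: Int64;
begin
  Old := AtomicCmpExchange(PInt64(@Target)^, PInt64(@NewValue)^, PInt64(@Comparand)^);
  Result := PDouble(@Old)^;
end;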
Maybe using the System.SyncObjs.TInterlocked class methods would be better?
Hello, well, it's all in the title. The question applies especially to values like NSTimeInterval, CGFloat, or any other variable that is a float or a double. Thanks.
EDIT: I'm asking about value assignment, not formatting in a string.
EDIT 2: The question really is: is assigning a plain 0 to a float or a double worse than anything with an f at the end?
The basic difference is:
1.0 or 1. is a double constant
1.0f is a float constant
Without a suffix, a literal with a decimal in it (123.0) will be treated as a double-precision floating-point number. If you assign or pass that to a single-precision variable or parameter, the compiler will (should) issue a warning. Appending f tells the compiler you want the literal to be treated as a single-precision floating-point number.
If you are just initializing a variable, the suffix makes no practical difference; the compiler does the cast for you.
float a = 0;    // int 0 is converted to float 0.0
float b = 0.0;  // double 0.0 is converted to float 0.0 (floating-point constants default to double)
float c = 0.0f; // float to float; .0f is the same as 0.0f
But if you use these constants in an expression, the suffix makes a big difference:
6/5 becomes 1
6/5.0 becomes 1.2 (double value)
6/5.0f becomes 1.2 (float value)
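A quick way to see this for yourself (plain C; Objective-C behaves the same):
#include <stdio.h>

int main(void) {
    printf("%d\n", 6 / 5);    // 1: both operands are ints, so the division truncates
    printf("%f\n", 6 / 5.0);  // 1.200000: the int is promoted to double
    printf("%f\n", 6 / 5.0f); // 1.200000: float math, promoted to double for printf
    return 0;
}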
If you want to find out whether there is any difference in the binary code the target CPU actually executes, you can copy one of the compile command lines from Xcode to a terminal, fix the missing environment variables, and add -S. That gives you assembly output you can compare. If you put all four variants in a small example source file, you can compare the resulting assembly afterwards, even without being fluent in ARM assembly.
From my ARM assembly experience (okay... 6 years ago and GCC) I would bet 1 ct on something like XORing a register with itself to flush its content to 0.
Whether you use 0.0, .0, or 0.0f, or even 0.f, does not make much of a difference (there are some differences with respect to double and float). You may even use (float) 0.
But there is a significant difference between 0 and any of the float notations: a bare 0 is always some type of integer, and that can force the machine to perform integer operations where you want floating-point operations instead.
I do not have a good example for zero handy, but I've got one for float/int in general, which nearly drove me crazy the other day.
I am used to 8-bit RGB colors, from my hobby as a photographer and my recent background as an HTML developer. So I found it difficult to get used to the Cocoa-style 0..1 fractions of red, green, and blue. To overcome that, I wanted to take the values I was used to and divide them by 255.
[UIColor colorWithRed:128/255 green:128/255 blue:128/255 alpha:1.0];
That should have given me a nice middle gray. But it did not; everything I tried came out either black or white.
At first I thought this was caused by some undocumented deficiency of the UI text objects I was using the colour with. It took a while to realize that these constant values forced integer division, which for fractions like these yields only 0 or 1 (128/255 truncates to 0).
This expression eventually did what I wanted to achieve:
[UIColor colorWithRed:128.0/255.0 green:128.0/255.0 blue:128.0/255.0 alpha:1.0];
You could achieve the same thing with fewer .0s attached, but it does not hurt to have more than needed; 128.0f/(float)255 would work as well.
Edit to respond to your "Edit2":
float fvar;
fvar = 0;
vs ...
fvar = .0;
In the end it does not make a difference at all: fvar will contain exactly 0.0 either way, since zero is exactly representable as a float. For compilers of the '60s and '70s I would have guessed a minor performance penalty for fvar = 0;, because the compiler would create an int 0 first, which would then have to be converted to float before the assignment. Modern compilers should optimize that away; in the end I'd have to look at the machine-code output to see whether it makes a difference.
However, with fvar = .0; you are always on the safe side.
let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent non-negative values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the most significant bit is used as the sign bit, and only the other 63 carry the magnitude; that bit says whether the number is negative. If the number is unsigned, all 64 bits are used as part of the number, and the number is always non-negative (i.e. positive or 0).
Well, you could assign it -1, and on most architectures the two's complement bit pattern would be stored. The signed/unsigned distinction really only matters for type checking; there is no negative sign in hardware.
I have no idea whether the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t x = -1;               // -1 converts to UINT64_MAX
    printf("0x%" PRIx64 "\n", x);  // 0xffffffffffffffff
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
Don't have the F# tools installed at the moment so I cannot verify this.
If you want to go with a signed int:
-1: int64
but you can't match a negative number to a uint, as others have stated.
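Putting this together, a minimal sketch (describe is an illustrative name) of matching the -1 bit pattern of a uint64:
// -1UL is not a valid uint64 literal, so match against System.UInt64.MaxValue
// (equivalently 0xFFFFFFFFFFFFFFFFUL), which is -1 reinterpreted as unsigned.
let describe (x: uint64) =
    match x with
    | System.UInt64.MaxValue -> "all bits set (the -1 bit pattern)"
    | _ -> "something else"

describe 10UL        // "something else"
describe (0UL - 1UL) // "all bits set (the -1 bit pattern)"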