What is the minimum value d such that...
f - d != f
f + d != f
...for any CGFloat f (except infinity)?
For example, it's not 1.
CGFloat f = CGFLOAT_MAX;
CGFloat d = 1;
NSLog(@"%d", f - d != f); // Prints 0
If "any CGFloat" truly means any value that a CGFloat can take, then the answer is infinity. Otherwise, f could be infinity (a legal value), and f - d would equal f for any finite value of d.
If f is limited to finite values, then d is 2^970 if CGFLOAT_IS_DOUBLE is set and 2^103 otherwise. These are the smallest values that cause a change when added to or subtracted from the largest finite value a CGFloat can hold.
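A quick sketch to verify, assuming CGFLOAT_IS_DOUBLE so that ldexp(1.0, 970) builds 2^970:
CGFloat f = CGFLOAT_MAX;
CGFloat d = ldexp(1.0, 970); // 2^970, half an ulp at CGFLOAT_MAX
NSLog(@"%d", f + d != f); // Prints 1: the sum rounds up to infinity
NSLog(@"%d", f - d != f); // Prints 1: the difference rounds down to the next double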
These are undoubtedly not the values you are looking for. As Stephen Canon notes, what are you really trying to do?
The value such that
f - d != f
f + d != f
is not fixed for all possible floating-point numbers. You've probably come to this conclusion on your own, because you can clearly see that for d=1 and f=5.0, f-d=4.0... but this doesn't work for CGFLOAT_MAX.
This is because of how floating-point numbers are stored in memory: they are stored as a significand and an exponent, so only a limited number of significant digits can be represented.
That would depend on the definition of CGFloat on your platform.
Current platforms use either float (typically when compiling for 32-bit platforms) or double (typically when compiling for 64-bit platforms).
32-bit floats use 23 bits for the fraction, which means that d would probably be around CGFLOAT_MAX/2^23.
64-bit doubles use 52 bits for the fraction, which means that d would probably be around CGFLOAT_MAX/2^52.
The minimum value for d depends on the value of f.
The problem is that CGFloat only supports a certain number of significant digits, and not all floating-point numbers are represented exactly.
The following results are not exact, but they illustrate the problem.
If f is 1.0 then d might need to be something like 0.0000001. But if f is 1000000.0 (one million) then d would need to be 0.1.
Basically the value of d must be within the significant digits of f for the result to be noticeable.
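One way to see this concretely is with nextafter from <math.h>, which returns the next representable value after f; a sketch, assuming CGFloat is double here:
double f = 1000000.0;
double ulp = nextafter(f, INFINITY) - f; // spacing of doubles around f
NSLog(@"%g", ulp); // roughly 1.2e-10; any d much smaller than this has no effect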
Consider using NSDecimalNumber, which can represent large decimal numbers exactly, eliminating this problem.
Is the integer division operator any different than casting the result of normal division?
Consider the following Dart code:
void main() {
  int a = 10;
  int b = 3;
  int result = a ~/ b;
  int result2 = (a / b).toInt();
}
Do both of those expressions compile to the same machine code? I'm assuming the answer would be either:
A) Yes, but only because the compiler is smart enough to re-write the second expression.
B) Yes, because there is no actual difference and the first expression is shorthand for the second.
or C) No, because there is a difference in the way the calculation is performed at the CPU level and the compiler doesn't optimize this for you.
x ~/ y is not equivalent to (x / y).toInt(). Integer arithmetic (including division) is significantly faster than floating-point arithmetic.
If x ~/ y is faster, then couldn't the compiler automatically make that transformation? Perhaps, but such a transformation wouldn't always generate the same results, so it wouldn't necessarily be appropriate. Integer division is typically used when the operands are integers, that is, when you start in the integer domain and want to stay in the integer domain. If the operands are integers, then the two operations can generate different results, due to the difference in precision between int (a 64-bit signed integer for the Dart VM) and double (an IEEE-754 double-precision floating-point number, which has 53 bits of precision for its significand):
void main() {
  var x = 9223372036854775801; // Close to (but not exactly) 2^63 - 1
  print(x); // Prints: 9223372036854775801
  print(x ~/ 1); // Prints: 9223372036854775801
  print((x / 1).toInt()); // Prints: 9223372036854775807
}
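For contrast, here's a quick sketch where the two forms do agree, because the values involved fit comfortably within a double's 53 bits of precision:
void main() {
  var y = 10;
  print(y ~/ 3); // Prints: 3
  print((y / 3).toInt()); // Prints: 3 (also truncates toward zero)
}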
Finally, (x / y).toInt() is simply more cumbersome to write and to read than x ~/ y, especially if it's part of a more complex arithmetic expression.
I run some organic chemistry models. A model is described by a generated ModelData.fs file, e.g.: https://github.com/kkkmail/ClmFSharp/blob/master/Clm/Model/ModelData.fs . The file has a very simple structure and using a generated model file is the only way that it can possibly work.
The referenced file is just for tests, but the real models are huge and may go close to 60 - 70 MB / 1.5M LOC. When I try to compile such files, the F# compiler, fsc.exe, just hangs and never comes back. It "eats" about 1.5 GB of memory and then churns away indefinitely at near 100% CPU. It can clearly handle smaller models, which take about 10 MB, in under a minute. So somewhere between 10 MB and 70 MB something breaks down badly in fsc.
I wonder if there are some parameter tweaks I could make to the way fsc compiles the project, in order to make it capable of handling such huge models.
The huge models that I am referring to have one parameter set as follows: let numberOfSubstances = 65643. This results in various generated arrays of that size. I wonder if this could be the source of the problem.
Thanks a lot!
I don't think you need to autogenerate all of that.
From your comments, I understand that the functions d0, d1, ... are generated from a big sparse matrix in a way that sums up all of the input array x (with coefficients), but crucially skips summing up zero coefficients, which gives you a great performance gain, because the matrix is huge. Would that be a correct assessment?
If so, I still don't think you need to generate code to do that.
Let's take a look. I will assume that your giant sparse matrix has an interface for obtaining cell values, and it looks something like this:
let getMatrixCell (i: int) (j: int) : double
let maxI: int
let maxJ: int
Then your autogeneration code might look something like this:
let generateDFunction (i: int) =
    printfn "let d%d (x: double[]) =" i
    printfn "    [|"
    for j in 0..maxJ do
        let cell = getMatrixCell i j
        if cell <> 0.0 then
            printfn "        %f * x.[%d]" cell j
    printfn "    |]"
    printfn "    |> Array.sum"
Which would result in something like this:
let d25 (x : array<double>) =
    [|
        -1.0 * x.[25]
        1.0 * x.[3]
    |]
    |> Array.sum
Note that I am simplifying here: in your example file, it looks like the functions also multiply negative coefficients by x.[i]. But maybe I'm also overcomplicating, because it looks like all the coefficients are always either 1 or -1. But that is all nonessential to my point.
Now, in the comments, it has been proposed that you don't generate functions d0, d1, ... but instead work directly with the matrix. For example, this would be a naive implementation of such suggestion:
let calculateDFunction (i: int) (x: double[]) =
    [| for j in 0..maxJ -> (getMatrixCell i j) * x.[j] |] |> Array.sum
You then argued that this solution would be prohibitively slow, because it always iterates over the whole array x, which is huge, but most of the coefficients are zero, so it doesn't have to.
And then your way of solving this issue was to use an intermediate step of generated code: you generate functions that only touch the non-zero indices, and then you compile and use those functions.
But here's the point: yes, you do need that intermediate step to skip the zero entries, but it doesn't have to be generated-and-compiled code!
Instead, you can prepare lists/arrays of the non-zero indices ahead of time:
let indices =
    [| for i in 0..maxI ->
        [ for j in 0..maxJ do
            let cell = getMatrixCell i j
            if cell <> 0.0 then yield (j, cell)
        ]
    |]
This will yield an array indices : array<(int * float) list>, where each index k corresponds to your autogenerated function dk, and contains the list of non-zero matrix indices together with their values in the matrix. For example, the function d25 I gave above would be represented by the 25th element of indices:
indices.[25] = [ (25, -1.0); (3, 1.0) ]
Based on this intermediate structure, you can then calculate any function dk:
let calculateDFunction (k: int) (x: double[]) =
    [| for (j, coeff) in indices.[k] -> coeff * x.[j] |] |> Array.sum
In fact, if performance is crucial to you (as it seems to be from the comments), you should probably do away with all those intermediate arrays: hundreds or thousands of heap allocations on each iteration are definitely not helping. You can sum with a mutable variable instead:
let calculateDFunction (k: int) (x: double[]) =
    let mutable sum = 0.0
    for (j, coeff) in indices.[k] do
        sum <- sum + coeff * x.[j]
    sum
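For completeness, here's a hypothetical usage sketch (the state vector x and its fill value are made up for illustration; numberOfSubstances is the figure quoted in the question, and maxI comes from the matrix interface above):
// Evaluate every d-function against one state vector x.
let numberOfSubstances = 65643
let x = Array.create numberOfSubstances 1.0
let derivatives = Array.init (maxI + 1) (fun k -> calculateDFunction k x)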
How to perform arithmetic with values of different widths?
In Verilog there is no problem XORing 2 bits with 8 bits, but Cryptol complains:
cryptol> let test(x: [2],y: [8]) = x ^ y
[error] at <interactive>:1:31--1:32:
Type mismatch:
Expected type: 2
Inferred type: 8
My original problem:
I would like to rotate the bytes in a 64-bit value, with the number of bytes to shift depending on a two-bit input. I'm struggling to get this working:
cryptol> let shift (v, s:[2]) = v >>> (s*16+8)
[error] at <interactive>:1:5--1:38:
Unsolved constraint:
2 >= 5
arising from
use of literal or demoted expression
at <interactive>:1:33--1:35
In the interpreter I can remove the type specification of s and then it works; however, I need to get this working from a file, and with s really being a 2-bit value.
The type of ^ is:
Cryptol> :t (^)
(^) : {a} (Logic a) => a -> a -> a
Note that it requires both arguments to be of exactly the same type. You're getting the type error because [2] is not the same as [8]; they differ in size. Unlike Verilog, Cryptol will not "pad" things implicitly, and I think Cryptol is definitely doing the right thing here. Verilog programmers can chime in with the countless bugs they have had due to implicit casting.
All such casting in Cryptol has to be explicit.
The typical way to deal with this situation in Cryptol is to use the polymorphic constant zero:
Cryptol> :t zero
zero : {a} (Zero a) => a
The value zero inhabits all types (you can ignore the Zero constraint for now), and as you can imagine, it is the "right" padding value in this case. So, you'd define your function as:
Cryptol> let test(x:[2], y:[8]) = (zero#x)^y
Cryptol> :t test
test : ([2], [8]) -> [8]
And use it like this:
Cryptol> test (1, 5)
0x04
And if you wanted to pad on the right for some reason, you'd do:
Cryptol> let test(x:[2], y:[8]) = (x#zero)^y
Cryptol> test(1,5)
0x45
This way, everything is explicit and you don't have to know all the magical rules about how things get padded to become the right size.
If you want to get real fancy, then you can do:
Cryptol> let test(x, y) = (zero#x)^(zero#y)
Cryptol> :t test
test : {n, m, i, j, a} (Logic a, Zero a, m + i == n + j, fin n,
fin m) =>
([i]a, [j]a) -> [m + i]a
Now, that type looks a bit scary, but essentially it's telling you that you can give it arguments of any sizes, and the result is valid for any size at least as large as the maximum of the two you've given. Of course, this inferred type is way more polymorphic than you probably cared for, so you can give it something more readable:
test : {m, n} (fin m, fin n) => [m] -> [n] -> [max m n]
test x y = (zero#x) ^ (zero#y)
I believe this captures your intent perfectly. Note how Cryptol will make sure your inputs are finite, and you get the maximum of the two sizes given.
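For instance, after loading that two-line definition from a file, I'd expect the earlier example to give the same result, with the padding now fixed by the signature:
Cryptol> test (1 : [2]) (5 : [8])
0x04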
Getting back to your example: Cryptol is telling you that to multiply by 16 you need at least 5 bits, and thus 2 >= 5 is not satisfiable. This is a bit cryptic, but it arises from the use of literals, which are polymorphically typed. You can use the zero trick to address the issue in the same way as before:
Cryptol> let shift (v, s:[2]) = v >>> ((zero#s)*16+8)
[warning] at <interactive>:1:32--1:38:
Defaulting type argument 'front' of '(#)' to 3
But note how Cryptol warns you about the type of the zero that's used there, since the type of >>> is polymorphic enough to allow different-size shifts/rotates:
Cryptol> :t (>>>)
(>>>) : {n, ix, a} (fin n, fin ix) => [n]a -> [ix] -> [n]a
In these cases, Cryptol will pick the smallest possible size to default to by looking at the expressions. Unfortunately, it does the wrong thing here. By picking size 3 for zero, you get a 5-bit shift amount, but your expression can produce a maximum value of 3*16+8 = 56, which requires at least 6 bits to represent. Note that Cryptol only uses the minimum size required to handle the multiplication, and does not care about overflow! This is why it's important to pay attention to such warnings.
To be clear: Cryptol did the right thing per the language rules on how type inference works, but it ended up picking a size that is just too small for what you wanted to do.
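To make the hazard concrete, here is a sketch of what the defaulted definition computes (the values below are what I'd expect from the wrap-around: in 5-bit arithmetic, 3*16+8 = 56 wraps to 56 mod 32 = 24, so for s = 3 the rotate amount is silently 24 instead of 56):
Cryptol> let bad (v, s:[2]) = v >>> ((zero#s)*16+8)
Cryptol> bad (0x0123456789abcdef, 3) // rotates by 24 ...
0xabcdef0123456789
Cryptol> 0x0123456789abcdef >>> (56 : [6]) // ... but this was intended
0x23456789abcdef01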
So, you should write your shift as follows:
Cryptol> let shift (v, s:[2]) = v >>> (((zero:[4])#s)*16+8)
Cryptol> :t shift
shift : {n, a} (fin n) => ([n]a, [2]) -> [n]a
The important thing here is to make sure the expression s*16+8 fits in the final result: since s is only 2 bits wide, the largest value is 56, as discussed above, which needs at least 6 bits to represent. This is why I chose [4] as the size of zero, making the padded shift amount 6 bits wide.
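As a quick sanity check, s = 1 should rotate right by 1*16+8 = 24 bits, so I'd expect:
Cryptol> shift (0x123456789abcdef0, 1)
0xbcdef0123456789a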
The moral of the story here is that you should always be explicit about the sizes of your bitvectors, and Cryptol will give you the right framework to express your constraints in a polymorphic way to allow for code reuse without ambiguity, avoiding many of the pitfalls of Verilog and other similar languages.
This is the first line of F# that I've tried writing, so apologies because I probably just don't know the right Google keywords to search.
I tried defining a function like so:
let sigmoid x deriv = if deriv then x * (1 - x) else 1 / (1 + System.Math.Exp(-x))
This gives me an error on the System.Math.Exp(-x):
The type 'float' does not match the type 'int'
I guess I was expecting the compiler to do type inference on this function and define x as a float. What am I missing here?
Here is the whole thing that I'm trying to plug in:
let sigmoid x deriv = if deriv then x * (1 - x) else 1 / (1 + System.Math.Exp(-x))
[<EntryPoint>]
let main argv =
    sigmoid 1.0 false |> printfn "%A"
    0
The compiler infers x as int because you used it in expressions like 1 - x. A plain 1 is always an integer, and you can only use it in arithmetic expressions together with other integers. Your code compiles if you change all your usages of 1 to 1.0, which makes them floats and causes x to be inferred as float as well.
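For reference, here's a sketch of the fixed function, with float literals used throughout:
// 1.0 instead of 1 makes the literals floats, so x is inferred as float too
let sigmoid x deriv =
    if deriv then x * (1.0 - x)
    else 1.0 / (1.0 + System.Math.Exp(-x))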
This is different from C#, for example, which will coerce the types if necessary and thus allow for mixing integers and floating point numbers in the same expressions. That can lead to an accidental loss of precision under certain circumstances, though, while F# always forces you to state any necessary conversions explicitly.
The function I have:
let increment n = n+1;;
My call to the function:
let x = increment -5;;
The error I get:
let x = increment -5;;
^^^^^^^^^
Error: This expression has type int -> int
but an expression was expected of type int
Why doesn't x = -4 after the function call?
I tried finding the answer myself and I ended up here:
http://caml.inria.fr/pub/docs/manual-ocaml-4.00/lex.html#prefix-symbol
The term "longest match rule" is used in the section Ambiguities, I assume that a clarification of that term would solve my issue?
The problem is simple: - is parsed here as the binary minus operator, so OCaml reads increment -5 as increment minus 5 (with increment and 5 as the two operands), whereas what you want is to apply the function increment to the number (-5).
Just write increment (-5) and the job's done.
An alternative to using parentheses, as suggested in the other answer, is to use the @@ application operator, e.g.:
let increment n = n + 1
let x = increment @@ -5
Briefly, f @@ x is the same as f x, but the @@ operator has lower precedence than the arithmetic operators, while ordinary function application has higher precedence. Thus, using the application operator lets you omit the parentheses.
Sometimes, the reverse application operator |> can be more idiomatic; it is typically used to "pipe" an expression through one or more function applications.
Example:
let increment n = n + 1
let x = -5 |> increment
let clamp x low high = x |> min high |> max low