Why is this unit of measure being restricted to 1? - F#

I need to be able to represent the same concept with a few different units in F#. For example, I want to represent "distance" using light-years, astronomical units, kilometers and meters. I'd like to use a generic function for doing calculations with these values. This is how I've grouped ly, AU, km, and m together:
[<Measure>] type ly
[<Measure>] type AU
[<Measure>] type km
[<Measure>] type m
[<Measure>] type distance
type UnitValue<[<Measure>] 'u, [<Measure>] 't> =
    val conversionFactor : float<'t / 'u>
    val value : float<'u>
    new (v, cf) = { value = FloatWithMeasure<'u> v; conversionFactor = FloatWithMeasure<'t / 'u> cf }
    member this.toUnits = this.value * this.conversionFactor
    member this.fromUnits (x : float<'t>) = x / this.conversionFactor
    static member (+) (a : UnitValue<'u, 't>, b : UnitValue<_, 't>) =
        a.newValue (a.toUnits + b.toUnits)
    static member (-) (a : UnitValue<'u, 't>, b : UnitValue<_, 't>) =
        a.newValue (a.toUnits - b.toUnits)
    static member (*) (a : UnitValue<'u, 't>, b : float) =
        a.newValue (a.toUnits * b)
    member this.newValue (x : float<'t>) =
        new UnitValue<'u, 't>(float (this.fromUnits x), float this.conversionFactor)

// Distance units
type LightYearValue(value) =
    inherit UnitValue<ly, distance>(value, 6324.0)
type AstronomicalUnitValue(value) =
    inherit UnitValue<AU, distance>(value, 15.0)
type KilometerValue(value) =
    inherit UnitValue<km, distance>(value, 0.00001)
type MeterValue(value) =
    inherit UnitValue<m, distance>(value, 0.0000000)
This code is called from unit-unaware C#: the caller just writes new LightYearValue(4.2), which becomes a UnitValue<ly, distance> in F# and can be passed to a function expecting a UnitValue<_, distance>. That way, the appropriate units go into the function, and the appropriate units come out. For example, if I passed the function a UnitValue<AU, distance>, I might get back a float<AU / s ^ 2> depending on the calculation--and it would be an appropriate number for the scale.
Feeling pretty happy with this, I start writing an Orbit type:
and Orbit(PeR : UnitValue<_, distance>, ApR : UnitValue<_, distance>, AgP : float, focus : SphericalMass) =
    let PeR = PeR
    let ApR = ApR
    let AgP = AgP
    let focus = focus
    let Maj = PeR + ApR
    let Ecc = (Maj.value - (2.0 * PeR.value)) / Maj.value
    let DistanceAt theta =
        (Maj.value / 2.0) * (1.0 - Ecc ** 2.0) / (1.0 + Ecc * Math.Cos(theta))
but when I hover my mouse over PeR, it says its type is UnitValue<1, distance>. So what gives? Why doesn't this work? I can write a function taking UnitValue<_, distance> and it works fine! Could it have to do with C# interacting with this code? (the type is extended by a C# class) Is there any way to make this work :(

When declaring a type, you need to declare generic type parameters (including unit-of-measure parameters) explicitly. The following declaration infers the types correctly:
type Orbit<[<Measure>] 'u, [<Measure>] 'v>
        ( PeR : UnitValue<'u, distance>, ApR : UnitValue<'v, distance>,
          AgP : float, focus : SphericalMass) =
    let Maj = PeR + ApR
    let Ecc = (Maj.value - (2.0 * PeR.value)) / Maj.value
    let DistanceAt theta =
        (Maj.value / 2.0) * (1.0 - Ecc ** 2.0) / (1.0 + Ecc * Math.Cos(theta))
(BTW: You don't need to re-assign parameters to local let bindings - they will be accessible automatically, so I removed lines like let ApR = ApR)
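To see what this buys you, here is a small check that should work with the UnitValue definitions from the question in scope; it exercises the same (+) overload that PeR + ApR relies on (the literal values are just the conversion factors from the question):

let d1 = UnitValue<ly, distance>(1.0, 6324.0)
let d2 = UnitValue<AU, distance>(1000.0, 15.0)
let sum = d1 + d2                     // inferred as UnitValue<ly, distance>
printfn "%f ly" (float sum.value)     // b was converted through the shared 'distance' unit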

Related

how to define a measure as a multiple of another measure

I am attempting to define Isaron to be 43.2 times an Egg, but I am getting an error:
[<Measure>] type Egg
[<Measure>] type Isaron = (43.2)*(Egg)
error FS0618: Invalid literal in type
How do I correctly define this measure?
You don't define the relationship in the measure type, you define it in functions that convert to and from the types:
[<Measure>] type Egg
[<Measure>] type Isaron
let toEgg (i : float<Isaron>) = i / 43.2<Isaron> * 1.0<Egg>
let toIsaron (e : float<Egg>) = e / 1.0<Egg> * 43.2<Isaron>
Scott Wlaschin's site has another example of how to do it in feet and inches, by creating a constant conversion factor:
[<Measure>] type Egg
[<Measure>] type Isaron
let isaronsPerEgg = 43.2<Isaron/Egg>
let toEgg (i : float<Isaron>) = i / isaronsPerEgg
let toIsaron (e : float<Egg>) = e * isaronsPerEgg
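For completeness, a quick usage sketch of the second version, just to show the round trip:

let eggs = 2.0<Egg>
let isarons = toIsaron eggs      // 86.4<Isaron>
let back = toEgg isarons         // 2.0<Egg> again (up to float rounding)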

Assignment to mutable variable interferes with type inference of otherwise valid calculation

I'm attempting some Newton-Raphson updates. Here is a piece of code that compiles and runs (warning: infinite loop).
let thetam = [|beta; sigSq|] |> DenseVector
let mutable gm = grad yt xt betah sigSqh   // returns DenseVector
let hm = hess yt xt betah sigSqh           // returns Matrix<float>
while gm*gm > 0.0001 do
    gm <- grad yt xt betah sigSqh
    thetam - (hess yt xt betah sigSqh).Inverse() * gm // unassigned, compiles
However, as soon as I assign the last value to the mutable variable thetam as follows...
while gm*gm > 0.0001 do
    gm <- grad yt xt betah sigSqh
    thetam <- thetam - (hess yt xt betah sigSqh).Inverse() * gm // gm here has problems
a squiggly red line appears under gm and the compiler complains: The type 'Vector<float>' is not compatible with the type 'DenseVector'
However, the function grad is explicitly told to return a DenseVector and ordinarily works as expected.
let grad (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let gradBeta = (yt - beta * xt)*xt / sigSq
    let gradSigSq = -0.5*T/sigSq + 0.5/sigSq**2.*(yt - beta * xt)*(yt - beta * xt)
    [|gradBeta; gradSigSq|] |> DenseVector
Why is the assignment to thetam causing problems? Is there a magic way to perform updates without mutability?
Here is the complete script:
open System
open System.IO
open System.Windows.Forms
open System.Windows.Forms.DataVisualization
open FSharp.Data
open FSharp.Charting
open FSharp.Core.Operators
open MathNet.Numerics
open MathNet.Numerics.LinearAlgebra
open MathNet.Numerics.LinearAlgebra.Double
open MathNet.Numerics.Random
open MathNet.Numerics.Distributions
open MathNet.Numerics.Statistics
let beta, sigSq = 3., 9.
let xt = DenseVector [|23.; 78.; 43.; 32.; 90.; 66.; 89.; 34.; 72.; 99.|]
let T = xt.Count
let genProc () =
    beta * xt + DenseVector [|for i in 1 .. T do yield Normal.Sample(0., Math.Sqrt(sigSq))|]

let llNormal (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let z = (yt - beta * xt) / Math.Sqrt(sigSq)
    -0.5 * log (2. * Math.PI) - 0.5 * log (sigSq) - z*z/2./T/sigSq

let grad (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let gradBeta = (yt - beta * xt)*xt / sigSq
    let gradSigSq = -0.5*T/sigSq + 0.5/sigSq**2.*(yt - beta * xt)*(yt - beta * xt)
    [|gradBeta; gradSigSq|] |> DenseVector

let hess (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let z = yt - beta * xt
    let h11 = -xt*xt/sigSq
    let h22 = T*0.5/sigSq/sigSq - z*z/sigSq/sigSq/sigSq
    let h12 = -1./sigSq**2.*((yt - beta * xt)*xt)
    array2D [[h11;h12];[h12;h22]] |> DenseMatrix.ofArray2

let yt = genProc()

// until convergence
let mutable thetam = [|beta; sigSq|] |> DenseVector
let mutable gm = grad yt xt beta sigSq
while gm*gm > 0.0001 do
    gm <- grad yt xt beta sigSq
    // 'gm' here is complaining upon equation being assigned to thetam
    thetam <- thetam - (hess yt xt beta sigSq).Inverse() * gm
You should change at least let mutable thetam = [|beta; sigSq|] |> DenseVector to
let mutable thetam = [|beta; sigSq|] |> DenseVector.ofArray (and possibly other DenseVector references). MathNet, for performance reasons, does in-place changes, so it might trip you up if you use mutable references:
DenseVector(Double[] storage)
Create a new dense vector directly binding to a raw array. The array
is used directly without copying. Very efficient, but changes to the
array and the vector will affect each other.
Versus:
DenseVector OfArray(Double[] array)
Create a new dense vector as a copy of the given array. This new
vector will be independent from the array. A new memory block will be
allocated for storing the vector.
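To illustrate the difference those two docs describe (assuming open MathNet.Numerics.LinearAlgebra.Double, as in the script above):

let raw = [| 1.0; 2.0; 3.0 |]
let aliased = DenseVector raw            // binds to 'raw' directly, no copy
let copied  = DenseVector.OfArray raw    // independent copy of 'raw'
raw.[0] <- 99.0
// aliased.[0] is now 99.0, while copied.[0] is still 1.0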
In fact we've seen this behavior in your previous question when Exponential.Samples behaved in a similar fashion.
The API docs (while not super user-friendly) are here.
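As for the "without mutability" part of the question, one option (just a sketch, not tied to MathNet specifics) is to replace the while loop and the mutable bindings with a small recursive helper; annotating the accumulator as Vector<float> also sidesteps the DenseVector / Vector<float> mismatch:

// step and converged are placeholders for whatever update and stopping
// rule the algorithm actually needs.
let rec iterate (step : Vector<float> -> Vector<float>)
                (converged : Vector<float> -> bool)
                (theta : Vector<float>) =
    if converged theta then theta
    else iterate step converged (step theta)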

How to use f# constraints for generic calculation functions?

type Point<'t> =
    val X : 't
    val Y : 't
    new(x : 't, y : 't) = { X = x; Y = y }

let clampedSubtract (p1:Point<_>) (p2:Point<_>) =
    Point( max (p2.X - p1.X) 0, max (p2.Y - p1.Y) 0 )
If you look at the code above, you will notice that the function is not implemented as generically as it should be.
First, using 0 in the max expressions clamps the type to int, but it should be whatever type Point<'t> carries, not int.
But even more important, this function can only work as expected if signed types are used for 't.
This raises a few questions of mine:
Is there a way to obtain the neutral element (zero) from a generic (number) type?
How can I express a restriction such as "only signed number"?
Is there a way to extend type constraint system in F#?
Thanks in advance.
The solution to the first question, as already answered, is to use an inline function together with GenericZero, and that's all.
Regarding the signed restriction, there's actually an easy way to restrict it to signed types: use the generic unary negation somewhere, since it is defined only for signed types:
let inline clampedSubtract (p1:Point<_>) (p2:Point<_>) =
    let zero = LanguagePrimitives.GenericZero
    Point( max (p2.X + -p1.X) zero, max (p2.Y + -p1.Y) zero )

let result1 = clampedSubtract (Point(4  , 5  )) (Point(4  , 5  ))
let result2 = clampedSubtract (Point(4y , 5y )) (Point(4y , 5y ))
let result3 = clampedSubtract (Point(4uy, 5uy)) (Point(4uy, 5uy)) // doesn't compile
In general, if you want to restrict any generic function to signed types you can define this function:
let inline whenSigned x = ignore (-x)

let inline clampedSubtract (p1:Point<_>) (p2:Point<_>) =
    whenSigned p1.X
    let zero = LanguagePrimitives.GenericZero
    Point( max (p2.X - p1.X) zero, max (p2.Y - p1.Y) zero )
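A quick check of the whenSigned version, in the same spirit as result1..result3 above (the commented-out line is the one that should be rejected):

let ok = clampedSubtract (Point(4, 9)) (Point(6, 5))            // Point(2, 0)
// let bad = clampedSubtract (Point(4u, 9u)) (Point(6u, 5u))    // doesn't compile: uint32 has no unary '-'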
Finally, regarding your third question, it's not very clear to me what you mean by extending the constraint system. You can create static constraints yourself, so in that sense the system is already extensible.
Some time ago I did a project to emulate some Haskell types; part of the code of that project is still in a module in FsControl, where you can get an idea of the level to which you can play with those constraints.
This makes it generic:
let inline clampedSubtract (p1:Point<_>) (p2:Point<_>) =
    let zero = LanguagePrimitives.GenericZero
    Point( max (p2.X - p1.X) zero, max (p2.Y - p1.Y) zero )
But there's no way to constrain it to signed primitive types.
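To make that concrete: with this generic version both of the following should compile (assuming the Point type from the question), because nothing in it requires negation:

let r1 = clampedSubtract (Point(4, 5)) (Point(6, 9))          // Point(2, 4)
let r2 = clampedSubtract (Point(4uy, 5uy)) (Point(6uy, 9uy))  // bytes are accepted too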

List comprehensions with float iterator in F#

Consider the following code:
let dl = 9.5 / 11.
let min = 21.5 + dl
let max = 40.5 - dl
let a = [ for z in min .. dl .. max -> z ] // should have 21 elements
let b = a.Length
"a" should have 21 elements but has got only 20 elements. The "max - dl" value is missing. I understand that float numbers are not precise, but I hoped that F# could work with that. If not then why F# supports List comprehensions with float iterator? To me, it is a source of bugs.
Online trial: http://tryfs.net/snippets/snippet-3H
Converting to decimals and looking at the numbers, it seems the 21st item would 'overshoot' max:
let dl = 9.5m / 11.m
let min = 21.5m + dl
let max = 40.5m - dl
let a = [ for z in min .. dl .. max -> z ] // should have 21 elements
let b = a.Length
let lastelement = List.nth a 19
let onemore = lastelement + dl
let overshoot = onemore - max
That is probably due to lack of precision in let dl = 9.5m / 11.m?
To get rid of this compounding error, you'll have to use another number system, e.g. rationals. The F# PowerPack comes with a BigRational type that can be used like so:
let dl = 95N / 110N
let min = 215N / 10N + dl
let max = 405N / 10N - dl
let a = [ for z in min .. dl .. max -> z ] // Has 21 elements
let b = a.Length
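Note that if you try this snippet, BigRational and the N literal are not in the core library; as far as I remember they ship in the F# PowerPack, so you need a reference to FSharp.PowerPack.dll plus:

open Microsoft.FSharp.Math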
Properly handling float precision issues can be tricky. You should not rely on float equality (that's what the list comprehension implicitly does for the last element). List comprehensions on floats are useful when you generate an infinite stream. In other cases, you should pay attention to the last comparison.
If you want a fixed number of elements, and include both lower and upper endpoints, I suggest you write this kind of function:
let range from to_ count =
    assert (count > 1)
    let count = count - 1
    [ for i = 0 to count do yield from + float i * (to_ - from) / float count ]

range 21.5 40.5 21
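A quick sanity check of that helper (run it to confirm; I'd expect both endpoints to come out exact here, since 40.5 - 21.5 is exactly representable):

let xs = range 21.5 40.5 21
printfn "%d elements, first %f, last %f" xs.Length xs.Head xs.[xs.Length - 1]
// expected: 21 elements, first 21.500000, last 40.500000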
When I know the last element should be included, I sometimes do:
let a = [ for z in min .. dl .. max + dl*0.5 -> z ]
I suspect the problem is with the precision of floating point values. F# adds dl to the current value each time and checks if current <= max. Because of precision problems, it might jump over max and then check if max+ε <= max (which will yield false). And so the result will have only 20 items, and not 21.
After running your code, if you do:
> compare a.[19] max;;
val it : int = -1
It means max is greater than a.[19]
If we do calculations the same way the range operator does but grouping in two different ways and then compare them:
> compare (21.5+dl+dl+dl+dl+dl+dl+dl+dl) ((21.5+dl)+(dl+dl+dl+dl+dl+dl+dl));;
val it : int = 0
> compare (21.5+dl+dl+dl+dl+dl+dl+dl+dl+dl) ((21.5+dl)+(dl+dl+dl+dl+dl+dl+dl+dl));;
val it : int = -1
In this sample you can see how adding the same value 7 times in a different order gives exactly the same result, but if we try it 8 times the result changes depending on the grouping.
You're doing it 20 times.
So if you use the range operator with floats you should be aware of the precision problem.
But the same applies to any other calculation with floats.

Unwrapping nested loops in F#

I've been struggling with the following code. It's an F# implementation of the Forward-Euler algorithm used for modelling stars moving in a gravitational field.
let force (b1:Body) (b2:Body) =
    let r = (b2.Position - b1.Position)
    let rm = (float32)r.MagnitudeSquared + softeningLengthSquared
    if (b1 = b2) then
        VectorFloat.Zero
    else
        r * (b1.Mass * b2.Mass) / (Math.Sqrt((float)rm) * (float)rm)

member this.Integrate(dT, (bodies:Body[])) =
    for i = 0 to bodies.Length - 1 do
        for j = (i + 1) to bodies.Length - 1 do
            let f = force bodies.[i] bodies.[j]
            bodies.[i].Acceleration <- bodies.[i].Acceleration + (f / bodies.[i].Mass)
            bodies.[j].Acceleration <- bodies.[j].Acceleration - (f / bodies.[j].Mass)
        bodies.[i].Position <- bodies.[i].Position + bodies.[i].Velocity * dT
        bodies.[i].Velocity <- bodies.[i].Velocity + bodies.[i].Acceleration * dT
While this works, it isn't exactly "functional". It also suffers from horrible performance: it's 2.5 times slower than the equivalent C# code. bodies is an array of structs of type Body.
The thing I'm struggling with is that force() is an expensive function so usually you calculate it once for each pair and rely on the fact that Fij = -Fji. But this really messes up any loop unfolding etc.
Suggestions gratefully received! No this isn't homework...
Thanks,
Ade
UPDATED: To clarify, Body and VectorFloat are defined as C# structs. This is because the program interops between F#/C# and C++/CLI. Eventually I'm going to get the code up on BitBucket, but it's a work in progress and I have some issues to sort out before I can put it up.
[StructLayout(LayoutKind.Sequential)]
public struct Body
{
    public VectorFloat Position;
    public float Size;
    public uint Color;
    public VectorFloat Velocity;
    public VectorFloat Acceleration;
    // ...
}

[StructLayout(LayoutKind.Sequential)]
public partial struct VectorFloat
{
    public System.Single X { get; set; }
    public System.Single Y { get; set; }
    public System.Single Z { get; set; }
}
The vector defines the sort of operators you'd expect for a standard Vector class. You could probably use the Vector3D class from the .NET framework for this case (I'm actually investigating cutting over to it).
UPDATE 2: Improved code based on the first two replies below:
for i = 0 to bodies.Length - 1 do
    for j = (i + 1) to bodies.Length - 1 do
        let r = (bodies.[j].Position - bodies.[i].Position)
        let rm = (float32)r.MagnitudeSquared + softeningLengthSquared
        let f = r / (Math.Sqrt((float)rm) * (float)rm)
        bodies.[i].Acceleration <- bodies.[i].Acceleration + (f * bodies.[j].Mass)
        bodies.[j].Acceleration <- bodies.[j].Acceleration - (f * bodies.[i].Mass)
    bodies.[i].Position <- bodies.[i].Position + bodies.[i].Velocity * dT
    bodies.[i].Velocity <- bodies.[i].Velocity + bodies.[i].Acceleration * dT
The branch in the force function to cover the b1 == b2 case is the worst offender. You don't need it if softeningLength is always non-zero, even if it's very small (Epsilon). This optimization was in the C# code but not the F# version (doh!).
Math.Pow(x, -1.5) seems to be a lot slower than 1 / (Math.Sqrt(x) * x). Essentially this algorithm is slightly odd in that its performance is dictated by the cost of this one step.
Moving the force calculation inline and getting rid of some divides also gives some improvement, but the performance was really being killed by the branching and is dominated by the cost of Sqrt.
WRT using classes over structs: There are cases (CUDA and native C++ implementations of this code and a DX9 renderer) where I need to get the array of bodies into unmanaged code or onto a GPU. In these scenarios being able to memcpy a contiguous block of memory seems like the way to go. Not something I'd get from an array of class Body.
I'm not sure if it's wise to rewrite this code in a functional style. I've seen some attempts to write pair interaction calculations in a functional manner and each one of them was harder to follow than two nested loops.
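A rough way to double-check the Math.Pow vs Sqrt observation above on your own machine (just a Stopwatch sketch, timings will vary):

open System
open System.Diagnostics

let time label (f : float -> float) =
    let sw = Stopwatch.StartNew()
    let mutable acc = 0.0
    for i in 1 .. 10000000 do
        acc <- acc + f (float i)
    sw.Stop()
    printfn "%s: %d ms (checksum %f)" label sw.ElapsedMilliseconds acc

time "Math.Pow(x, -1.5)"       (fun x -> Math.Pow(x, -1.5))
time "1.0 / (Math.Sqrt x * x)" (fun x -> 1.0 / (Math.Sqrt x * x))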
Before looking at structs vs. classes (I'm sure someone else has something smart to say about this), maybe you can try optimizing the calculation itself?
You're calculating two acceleration deltas, let's call them dAi and dAj:
dAi = r*m1*m2/(rm*sqrt(rm)) / m1
dAj = r*m1*m2/(rm*sqrt(rm)) / m2
[note: m1 = bodies.[i].Mass, m2 = bodies.[j].Mass]
The division by mass cancels out like this:
dAi = r * m2 / (rm * sqrt(rm))
dAj = r * m1 / (rm * sqrt(rm))
Now you only have to calculate r / (rm * sqrt(rm)) for each pair (i, j).
This can be optimized further, because 1/(rm * sqrt(rm)) = 1/(rm^1.5) = rm^-1.5, so if you let r' = r * (rm ** -1.5), then... Edit: no it can't, that's premature optimization talking right there (see comment). Calculating 1.0 / (rm * sqrt rm) directly is fastest.
dAi = m2 * r'
dAj = m1 * r'
Your code would then become something like
member this.Integrate(dT, (bodies:Body[])) =
    for i = 0 to bodies.Length - 1 do
        for j = (i + 1) to bodies.Length - 1 do
            let r = (bodies.[j].Position - bodies.[i].Position)
            let rm = (float32)r.MagnitudeSquared + softeningLengthSquared
            let r' = r * (rm ** -1.5)
            bodies.[i].Acceleration <- bodies.[i].Acceleration + r' * bodies.[j].Mass
            bodies.[j].Acceleration <- bodies.[j].Acceleration - r' * bodies.[i].Mass
        bodies.[i].Position <- bodies.[i].Position + bodies.[i].Velocity * dT
        bodies.[i].Velocity <- bodies.[i].Velocity + bodies.[i].Acceleration * dT
Look, ma, no more divisions!
Warning: untested code. Try at your own risk.
I'd like to play around with your code, but it's difficult since the definitions of Body and VectorFloat are missing, and they also seem to be missing from the original blog post you point to.
I'd hazard a guess that you could improve your performance and rewrite in a more functional style using F#'s lazy computations:
http://msdn.microsoft.com/en-us/library/dd233247(VS.100).aspx
The idea is fairly simple: you wrap any expensive computation that could be repeatedly calculated in a lazy ( ... ) expression; you can then force the computation as many times as you like and it will only ever be calculated once.
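For example, a tiny self-contained illustration of lazy (not the n-body code itself):

let expensive = lazy (printfn "computed"; 42)
let a = expensive.Force()   // prints "computed" once, returns 42
let b = expensive.Force()   // returns the cached 42, nothing is recomputed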
