Does F# have a built-in cube root function? I know I can use exponentiation to compute cube roots, but it won't type-check in my case, since I want to take the cube root of a quantity of type float<m^3> and get a float<m>.
I don't think there is a built-in function to calculate the cube root with units of measure (I assume it would be in the primitive operators module where sqrt and others are), so I think the only option is to use exponentiation.
However, you can perform the exponentiation without units and wrap the unit-unsafe operation in a function that adds the units back, giving you a function with the correct units:
let cuberoot (f: float<'m^3>) : float<'m> =
    // Strip the units, compute the cube root, then re-attach the units
    System.Math.Pow(float f, 1.0/3.0) |> LanguagePrimitives.FloatWithMeasure
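For example, assuming a measure m declared as [<Measure>] type m:

let side = cuberoot 27.0<m^3>   // side : float<m>, approximately 3.0<m>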
Note that F# does not support fractional units, so you can write cuberoot (10.0<m^3>) or cuberoot (10.0<m^9>), but cuberoot (10.0<m>) will not type-check, because the result would be metres to the power 1/3 (and that's a fractional unit).
This sample only implements cuberoot for float. If you wanted to write an overloaded function that works with other numeric types (I guess you might need float32), then it gets a bit uglier (so I would not recommend it unless necessary), but you can use a trick with an intermediate type with multiple overloads, as in this answer, for example.
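For illustration only, here is a rough, unitless sketch of that intermediate-type trick; the CubeRoot type and the choice of the ($) operator are made up, and re-attaching units would still require FloatWithMeasure in each overload:

type CubeRoot = CubeRoot with
    static member ($) (CubeRoot, f: float) : float = System.Math.Pow(f, 1.0/3.0)
    static member ($) (CubeRoot, f: float32) : float32 = float32 (System.Math.Pow(float f, 1.0/3.0))

// 'cuberootAny' picks the overload from the argument's static type
let inline cuberootAny x = CubeRoot $ x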
This question relates to F# units of measure.
Should I enforce a type for a unit I am using?
For example, should I enforce that seconds are always a float?
let asSeconds time = float(time)*1.0<second>
This seems a tad restrictive, since NASA may want to use decimal. On the other hand, I don't know how to convert a generic 'a into a unit 'a<seconds>. The issue I am facing is that some application-level functions do NOT use units, but some library code DOES.
P.S. This is loosely related to my previous question.
should I enforce that seconds are always a float?
I don't think you could really enforce this, since units of measure are not inherently tied to a specific numeric type. For example, both of the following would be legal:
let a = 1<second> // a : int<second>
let b = 1.0<second> // b : float<second>
I don't know how to convert a generic 'a into a unit 'a<seconds>.
This is an interesting question, but I don't think it's possible to write a generic version of asSeconds, because units of measure can only be applied to literals, and literals are never generic in F#. So you can't write time<second> or time * GenericOne<second>.
Bottom line: when converting from a dimensionless value to a dimensioned value, you have to pick a specific numeric type, like float. But this doesn't mean that you can force all values of that UOM to have that numeric type.
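Since each converter is tied to one numeric type, supporting another representation means writing another small wrapper. A minimal sketch (the decimal variant is hypothetical, prompted by the NASA/decimal remark above):

[<Measure>] type second

let asSeconds (time: float) = time * 1.0<second>            // float<second>
let asSecondsDecimal (time: decimal) = time * 1.0M<second>  // decimal<second>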
F# has multiple ways to declare the same types. This is likely because of the dual lineage of F# as both a member of the ML family and a .NET language. I haven't been able to find any guidance on which style is more idiomatic.
Specifically, I want to know:
Which is more idiomatic for 64-bit IEEE 754 floating-point numbers in F#, float or double?
Which is a more idiomatic way in F# to declare an array type:
int[]
int array
array<int>
Sources:
https://learn.microsoft.com/dotnet/fsharp/language-reference/basic-types
https://learn.microsoft.com/dotnet/fsharp/language-reference/fsharp-types#syntax-for-types
Context: I'm working on some API documentation that is explaining how data in a data store maps to .NET types, along with how those types are typically declared in both C# and F#.
For doubles, it's pretty much always float, unless you deal with both singles and doubles and need to ensure clarity, I suppose.
For generic types, the usual syntax I use and see people use is:
int option
int list
int[]
For all other types, including F#-specific ones like Async, Set, and Map, angle bracket syntax is used.
The only type that I feel has a significant split is seq (an alias for IEnumerable): I'd say the majority of people use seq<int> but a significant number of people write int seq. Either way, you should definitely use seq and not IEnumerable. Similarly, you should use the alias ResizeArray for System.Collections.Generic.List.
The F# Core Library reference, which seems like a good example to follow, prefers float, int[], and seq<int>.
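To make those conventions concrete, here is a small illustrative sketch (the names are made up):

let x: float = 1.0                         // 64-bit IEEE 754: 'float', not 'double'
let xs: int[] = [| 1; 2; 3 |]              // array: bracket style
let o: int option = Some 1                 // option/list: postfix style
let s: seq<int> = Seq.ofList [ 1; 2; 3 ]   // seq: angle-bracket style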
I am trying to make a simple use of typeclasses in Nim. Please keep in mind that I have only been using Nim since this morning, so I may be doing something stupid.
Anyway, I would like to define a pseudorandom generator that produces a stream of values of type T. Sometimes T is numeric, so it makes sense to know something about the minimum and maximum attainable values, say, to rescale the values. Here are my types:
type
  Generator*[T] = generic x
    next(var x) is T

  BoundedGenerator*[T] = generic x
    x is Generator[T]
    min(x) is T
    max(x) is T
I also have such an instance, say LinearCongruentialGenerator.
Say I want to use this to define a Uniform generator that produces float values in an interval. I have tried
type Uniform* = object
  gen: BoundedGenerator[int]
  min_p: float
  max_p: float
proc create*(gen: BoundedGenerator[int], min: float, max: float): Uniform =
  return Uniform(gen: gen, min_p: min, max_p: max)
I omit the obvious definitions of next, min and max.
The above, however, does not compile, failing with Error: 'BoundedGenerator' is not a concrete type.
If I explicitly put LinearCongruentialGenerator in place of BoundedGenerator[int], everything compiles, but of course I want to be able to swap in more sophisticated generators.
Can anyone help me understand the compiler error?
The type classes in Nim are not used to create abstract polymorphic types, as is the case with Haskell's type classes and C++'s interfaces. Instead, they are much more similar to the concepts proposal for C++. They define a set of arbitrary type requirements that can be used as overload-resolution criteria for generic functions.
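In practice, that means a concept can constrain a generic parameter, but it cannot serve as the concrete type of a field. One workaround is to make Uniform generic over the concrete generator type. A rough sketch (untested against your LinearCongruentialGenerator, and assuming the rest of your definitions are unchanged):

type Uniform*[G] = object
  gen: G
  min_p: float
  max_p: float

proc create*[G: BoundedGenerator[int]](gen: G, min: float, max: float): Uniform[G] =
  Uniform[G](gen: gen, min_p: min, max_p: max)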
If you want to work with abstract types, you can either define a type hierarchy with a common base type and use methods (which use multiple dispatch), or you can roll your own vtable-based solution.

In the future, user-defined type classes will gain the ability to automatically convert the matched values to a different type (during overload resolution). This will make the vtable approach very easy to use, as values of types with compatible interfaces will be convertible to a "fat pointer" carrying the vtable externally to the object (with the benefit that many pointers with different abstract types can be created for the same object). I'll be implementing these mechanisms in the next few months, hopefully before the 1.0 release.
Araq (the primary author of Nim) also has some plans for optimising a certain kind of closure group, bundling the closures together into a cheaper representation where the closure environment is shared between them; the end result is quite close to a traditional C++-like vtable-carrying object.
If we define a unit of measure like:
[<Measure>] type s
and then an integer with a measure
let t = 1<s>
and then convert it to a float
let r = float t
we see that r = 1.0 without a measure type. This seems very odd, as all the measure information has been lost.
You can use LanguagePrimitives.FloatWithMeasure to convert back to a float with something like
let inline floatMeasure (arg: int<'t>) : float<'t> =
    LanguagePrimitives.FloatWithMeasure (float arg)
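For example, with the measure s and the value t from above:

let r2 = floatMeasure t   // r2 : float<s> = 1.0<s>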
This enforces the right types, but it doesn't feel like the right solution, as the docs for units of measure (http://msdn.microsoft.com/en-us/library/dd233243.aspx) say
However, for writing interoperability layers, there are also some explicit functions that you can use to convert unitless values to values with units. These are in the Microsoft.FSharp.Core.LanguagePrimitives module. For example, to convert from a unitless float to a float<cm>, use FloatWithMeasure, as shown in the following code.
Which seems to suggest that the function should be avoided in F# code.
Is there a more idiomatic way to do this?
Here's a working snippet that does exactly what you need, although it gives a warning
(stdin(9,48): warning FS0042: This construct is deprecated: it is only for use in the F# library):
[<NoDynamicInvocation>]
let inline convert (t: int<'u>) : float<'u> = (# "" t : 'U #)
[<Measure>] type s
let t = 1<s>
let t1 = convert t // t1: float<s>
However, I wouldn't suggest this approach.
First of all, UoM exist only at compile time, while the type conversion let r = float t happens at runtime. At the moment of invocation, int -> float has no idea whether its argument was int<s> or int<something_else>, so it simply cannot produce a proper float<'u> at runtime.
Another thought is that the philosophy behind UoM is wider than described there. It is like saying to the compiler, "well, it is an int, but please treat it as int<s>". The goal is avoiding accidental improper use (e.g., adding int<s> to int<hours>).
Sometimes an int -> float conversion makes no sense at all: think of int<ticks>; there is no meaning to float<ticks>.
Further reading; credit to @kvb for pointing out this article.
(Caveat: I've not used units much in anger.)
I think the only negative to using e.g. FloatWithMeasure is the unit-casting aspect (unitless to unitful). I think this is conceptually orthogonal to the numeric-representation-casting aspect (e.g. int to float). However, there is (I think) no library function to do numeric-representation casting on unitful values. Perhaps this reflects the fact that most unitful values model real-world continuous quantities, and so discrete representations like int are typically not used for them (e.g. 1<s> feels wrong; surely you mean 1.0<s>).
So I think it's fine to 'cast representations' and then 'readjust units', but I wonder how you got values with different representations in the first place, as representations are typically fixed for a domain (e.g. use float everywhere).
(In any case, I do like your floatMeasure function, which un-confounds the unit-aspect from the representation-aspect, so that if you do need to only change representation, you have a way to express it directly.)
Why is it that functions in F# and OCaml (and possibly other languages) are not by default recursive?
In other words, why did the language designers decide it was a good idea to explicitly make you type rec in a declaration like:
let rec foo ... = ...
and not give the function recursive capability by default? Why the need for an explicit rec construct?
The French and British descendants of the original ML made different choices and their choices have been inherited through the decades to the modern variants. So this is just legacy but it does affect idioms in these languages.
Functions are not recursive by default in the French CAML family of languages (including OCaml). This choice makes it easy to supersede function (and variable) definitions using let in those languages, because you can refer to the previous definition inside the body of a new definition. F# inherited this syntax from OCaml.
For example, superseding the function p when computing the Shannon entropy of a sequence in OCaml:
let shannon fold p =
  let p x = p x *. log(p x) /. log 2.0 in  (* shadows the argument p *)
  let p t x = t +. p x in                  (* shadows the p defined just above *)
  -. fold p 0.0
Note how the argument p to the higher-order shannon function is superseded by another p in the first line of the body, and then by yet another p in the second line of the body.
Conversely, the British SML branch of the ML family of languages took the other choice: SML's fun-bound functions are recursive by default. When most function definitions do not need access to previous bindings of their function name, this results in simpler code. However, superseded functions must be given different names (f1, f2, etc.), which pollutes the scope and makes it possible to accidentally invoke the wrong "version" of a function. And there is now a discrepancy between implicitly recursive fun-bound functions and non-recursive val-bound functions.
Haskell makes it possible to infer the dependencies between definitions by restricting them to be pure. This makes toy samples look simpler but comes at a grave cost elsewhere.
Note that the answers given by Ganesh and Eddie are red herrings. They explained why groups of functions cannot be placed inside a giant let rec ... and ... because it affects when type variables get generalized. This has nothing to do with rec being default in SML but not OCaml.
One crucial reason for the explicit use of rec has to do with Hindley-Milner type inference, which underlies all statically typed functional programming languages (albeit changed and extended in various ways).
If you have a definition let f x = x, you'd expect it to have type 'a -> 'a and to be applicable on different 'a types at different points. But equally, if you write let g x = (x + 1) + ..., you'd expect x to be treated as an int in the rest of the body of g.
The way that Hindley-Milner inference deals with this distinction is through an explicit generalisation step. At certain points when processing your program, the type system stops and says "ok, the types of these definitions will be generalised at this point, so that when someone uses them, any free type variables in their type will be freshly instantiated, and thus won't interfere with any other uses of this definition."
It turns out that the sensible place to do this generalisation is after checking a mutually recursive set of functions. Any earlier, and you'll generalise too much, leading to situations where types could actually collide. Any later, and you'll generalise too little, making definitions that can't be used with multiple type instantiations.
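To make the generalisation point concrete, here is a small F# sketch (the identifiers are made up; the same behaviour holds in OCaml and SML):

let id2 x = x
let pair = (id2 1, id2 "a")   // fine: id2 was generalised to 'a -> 'a before these uses

// Inside a single mutually recursive group, the names are not yet generalised,
// so the analogous definition is rejected:
// let rec id3 x = x
// and pair2 () = (id3 1, id3 "a")   // type error: id3 is used at two different types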
So, given that the type checker needs to know about which sets of definitions are mutually recursive, what can it do? One possibility is to simply do a dependency analysis on all the definitions in a scope, and reorder them into the smallest possible groups. Haskell actually does this, but in languages like F# (and OCaml and SML) which have unrestricted side-effects, this is a bad idea because it might reorder the side-effects too. So instead it asks the user to explicitly mark which definitions are mutually recursive, and thus by extension where generalisation should occur.
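In F#, that explicit mark is the rec keyword, with and joining the members of a mutually recursive group; the whole group is type-checked together and generalised afterwards. A minimal sketch:

let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)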
There are two key reasons this is a good idea:
First, if definitions were recursive by default, then you couldn't refer to a previous binding of a value of the same name. This is often a useful idiom when you are doing something like extending an existing module (see the sketch below).
Second, recursive values, and especially sets of mutually recursive values, are much harder to reason about than definitions that proceed in order, each new definition building on top of what has already been defined. When reading such code, it is nice to have the guarantee that, except for definitions explicitly marked as recursive, new definitions can only refer to previous definitions.
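As an illustration of the first point, a made-up example of shadowing in a local scope:

let report () =
    let greet name = "Hello, " + name
    // Without 'rec', the 'greet' on the right-hand side refers to the
    // previous binding, so this wraps it rather than calling itself:
    let greet name = greet (name + "!")
    greet "world"   // "Hello, world!"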
Some guesses:
let is not only used to bind functions, but also other regular values. Most forms of values are not allowed to be recursive. Certain forms of recursive values are allowed (e.g. functions, lazy expressions, etc.), so it needs an explicit syntax to indicate this.
It might be easier to optimize non-recursive functions
The closure created when you create a recursive function needs to include an entry that points to the function itself (so the function can recursively call itself), which makes recursive closures more complicated than non-recursive closures. So it might be nice to be able to create simpler non-recursive closures when you don't need recursion
It allows you to define a function in terms of a previously-defined function or value of the same name; although I think this is bad practice
Extra safety? Makes sure that you are doing what you intended. e.g. If you don't intend it to be recursive but you accidentally used a name inside the function with the same name as the function itself, it will most likely complain (unless the name has been defined before)
The let construct is similar to the let construct in Lisp and Scheme, which are non-recursive. There is a separate letrec construct in Scheme for recursive lets.
Given this:
let f x = ... and g y = ...;;
Compare:
let f a = f (g a)
With this:
let rec f a = f (g a)
The former redefines f to apply the previously defined f to the result of applying g to a. The latter redefines f to loop forever applying g to a, which is usually not what you want in ML variants.
That said, it's a language designer style thing. Just go with it.
A big part of it is that it gives the programmer more control over the complexity of their local scopes. The spectrum of let, let* and let rec offers an increasing level of both power and cost. let* and let rec are in essence nested versions of the simple let, so using either one is more expensive. This grading allows you to micromanage the optimisation of your program, as you can choose the level of let you need for the task at hand. If you don't need recursion or the ability to refer to previous bindings, you can fall back on a simple let to save a bit of performance.
It's similar to the graded equality predicates in Scheme. (i.e. eq?, eqv? and equal?)