Why aren't there implicit conversions in F#?

F# does not support implicit conversions. I understand that this is a feature, but I don't understand why implicit conversions are forbidden even when no information would be lost. For example:
sqrt 4 // Won't compile.
I don't see a problem implicitly converting the int 4 to a float, which is what sqrt requires.
Can anyone shed light on this?

Because its type checker depends on classical strong type reconstruction. Your example requires a coercion between types, which is possible with implicit casts or a weak type system, but neither is allowed in this kind of type inference.
Since F# descends from OCaml, it has a type reconstruction that tries to guarantee the correctness of your program by being extremely pedantic: the algorithm tries to unify all the types in your program into a consistent typing, and this cannot be done if a weak typing rule allows an integer to be treated as a float.
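Concretely, the fix is an explicit conversion at the call site, which leaves the unification algorithm nothing to guess (a minimal sketch):

```fsharp
// sqrt is float -> float, so the argument must already be a float.
let a = sqrt 4.0        // use a float literal
let b = sqrt (float 4)  // or convert the int explicitly

printfn "%f %f" a b     // both print 2.0
```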

See also
http://lorgonblog.wordpress.com/2009/10/25/overview-of-type-inference-in-f/
which describes how overloading interacts badly with type inference. Implicit conversions cause many of the same problems as overloading, and also interact poorly with error diagnostics.
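A small F# sketch of that interaction: overloaded arithmetic operators force inference to commit to a single concrete type, defaulting to int when nothing else constrains it.

```fsharp
// '+' is overloaded, but inference must pick one concrete type for x.
// With no other constraint, F# defaults the operator to int.
let addOne x = x + 1      // inferred as int -> int

let r = addOne 41         // fine: r = 42
// let s = addOne 2.5     // error: float is not compatible with int
```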

In addition to the above answers, type conversions between integers and floats are not actually free in computational terms; the compiler is not doing you a favour by "hiding" them from you if your goal is to write high-performance code. This is one of the things I like about OCaml and F#: even though they are very high level, you still need to be aware of exactly what computation you are doing.

Related

No F# generics with constant "template arguments"?

It just occurred to me that F# generics do not seem to accept constant values as "template parameters".
Suppose one wanted to create a type RangedInt such that it behaves like an int but is guaranteed to contain only a sub-range of integer values.
A possible approach could be a discriminated union, similar to:
type RangedInt = | Valid of int | Invalid
But this does not work either, as there is no "type-specific storage of the range information". And two RangedInt instances should be of different types if their ranges differ, too.
Being still a bit C++-infested, it would look similar to:
template<int low,int high>
class RangedInteger { ... };
Now the question arising is twofold:
Did I miss something and constant values for F# generics exist?
If I did not miss that, what would be the idiomatic way to accomplish such a RangedInt<int,int> in F#?
Having found Tomas Petricek's blog about custom numeric types, the equivalent of my question for that blog article would be: what if he had built not an IntegerZ5 but an IntegerZn<int> custom type family?
The language feature you're requesting is called Dependent Types, and F# doesn't have that feature.
It's not a particularly common language feature, and even Haskell (which most other functional programming languages 'look up to') doesn't really have it.
There are languages with Dependent Types out there, but none of them I would consider mainstream. Probably the one I hear about the most is Idris.
Did I miss something and constant values for F# generics exist?
While F# has much stronger type inference than other .NET languages, at its heart it is built on .NET.
And .NET generics only support a small subset of what is possible with C++ templates. All type arguments to generic types must be types, and there is no defaulting of type arguments either.
If I did not miss that, what would be the idiomatic way to accomplish such a RangedInt in F#?
It would depend on the details. Setting the limits at runtime is one possibility – this would be the usual approach in .NET. Another would be units of measure (this seems less likely to be a fit).
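As a sketch of the runtime-limits approach (the names RangedInt, tryCreate, and value are illustrative, not from any library), the bounds become ordinary values checked at construction time:

```fsharp
// Bounds are ordinary runtime values rather than type parameters.
type RangedInt =
    private { Value: int; Low: int; High: int }

module RangedInt =
    /// Returns Some only when the value lies within [low, high].
    let tryCreate low high value =
        if low <= value && value <= high then
            Some { Value = value; Low = low; High = high }
        else
            None

    /// Extracts the underlying int.
    let value (r: RangedInt) = r.Value
```

The cost relative to the C++ template version is that a value ranged over 1..10 and one ranged over 0..100 share the same .NET type, so mixing them is caught at run time rather than compile time.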
What if he had built not an IntegerZ5 but an IntegerZn<int> custom type family?
I see two reasons:
It is an example, and avoiding generics keeps things simpler allowing focus on the point of the example.
What other underlying type would one use anyway? On contemporary systems, smaller types (byte, Int16, etc.) are less efficient (unless space at runtime is the overwhelming concern); long would add size without benefit (it is only ever going to hold 5 possible values).

Pros and cons of using type annotations in Swift

I was wondering about the difference between using and not using type annotations (var a: Int = 1 vs var a = 1) in Swift, so I read Apple's The Swift Programming Language.
However, it only says:
You can provide a type annotation when you declare a constant or variable, to be clear about the kind of values the constant or variable can store.
and
It is rare that you need to write type annotations in practice. If you provide an initial value for a constant or variable at the point that it is defined, Swift can almost always infer the type to be used for that constant or variable
It doesn't mention the pros and cons.
It's obvious that using type annotations makes code clear and self-explanatory, whereas omitting them makes code easier to write.
Nonetheless, I'd like to know if there are any other reasons (for example, from the perspective of performance or the compiler) why I should or should not use type annotations in general.
Type annotations are entirely syntactic, so as long as you give the compiler enough information to infer the correct type, the effect and performance at run time are exactly the same.
Edit: I missed your reference to the compiler. I cannot see annotations having any significant impact on compile times either, as the compiler needs to evaluate your assignment expression and check type compatibility anyway.

Type classes in Nim

I am trying to make a simple use of typeclasses in Nim. Please, keep in mind that I only have been using Nim since this morning, so I may have been doing something stupid.
Anyway, I would like to define a pseudorandom generator that produces a stream of values of type T. Sometimes T is numeric, hence it makes sense to know something about the minimum and maximum values attainable - say to rescale the values. Here are my types
type
  Generator*[T] = generic x
    next(var x) is T

  BoundedGenerator*[T] = generic x
    x is Generator[T]
    min(x) is T
    max(x) is T
I also have such an instance, say LinearCongruentialGenerator.
Say I want to use this to define Uniform generator that produces float values in an interval. I have tried
type Uniform* = object
  gen: BoundedGenerator[int]
  min_p: float
  max_p: float

proc create*(gen: BoundedGenerator[int], min: float, max: float): Uniform =
  return Uniform(gen: gen, min_p: min, max_p: max)
I omit the obvious definitions of next, min and max.
The above, however, does not compile, due to Error: 'BoundedGenerator' is not a concrete type
If I explicitly put LinearCongruentialGenerator in place of BoundedGenerator[int], everything compiles, but of course I want to be able to switch to more sophisticated generators.
Can anyone help me understand the compiler error?
Type classes in Nim are not used to create abstract polymorphic types, as is the case with Haskell's type classes and C++'s interfaces. Instead, they are much more similar to the concepts proposal for C++: they define a set of arbitrary type requirements that can be used as overload-resolution criteria for generic functions.
If you want to work with abstract types, you can either define a type hierarchy with a common base type and use methods (which use multiple dispatch) or you can roll your own vtable-based solution. In the future, the user defined type classes will gain the ability to automatically convert the matched values to a different type (during overload resolution). This will make the vtable approach very easy to use as values of types with compatible interfaces will be convertible to a "fat pointer" carrying the vtable externally to the object (with the benefit that many pointers with different abstract types can be created for the same object). I'll be implementing these mechanisms in the next few months, hopefully before the 1.0 release.
Araq (the primary author of Nim) also has some plans for optimizing certain groups of closures bundled together into a cheaper representation, where the closure environment is shared between them and the end result is quite close to the traditional C++-like vtable-carrying object.

F# Type Inference

I am kind of new to F#, so maybe my question is dumb. I wrote a program in F# that used generic types. The compiler determined the types in a way I did not intend, because a bug in the deepest function call instantiated the generic type to a wrong type. This obviously resulted in type mismatches elsewhere, where I used the type as I had expected. I should mention that, to find the root of the problem, I tried to explicitly force the higher-level functions to use the types I desired for the generic type. However, the type mismatch was then reported in those high-level functions, not in the low-level function where the type was instantiated. I don't think this is a very convenient way of determining types, because it is usually much easier for programmers to pin down the type in higher-level functions, and such an explicit type assignment should produce the type error in the lower-level function where the type was actually fixed. In my experience, the compiler's automatic type determination seems to override the explicit type declaration. Am I misunderstanding something here?
Something like the code below:
type test<'a,'b> = { Func: 'a -> 'b; Arg: 'a }

let C(a) =
    a.Func(2) |> ignore

let B(a) =
    C(a)

let A(a: test<int64,int64>) =
    B(a) // <-- the type mismatch is detected here
Having the error detected on such high level function call makes it difficult to find the root of the problem because now not only I have to look for the bugs regarding the value of variables but also the type determination bugs.
F#'s type inference is (fairly) strict top-to-bottom, left-to-right. See here for a discussion: Why is F#'s type inference so fickle?
You can check this by rearranging your code like this (just for demonstration):
type test<'a,'b> = { Func: 'a -> 'b; Arg: 'a }

let rec B(a) =
    C(a)

and A(a: test<int64,int64>) =
    B(a)

and C(a) =
    a.Func(2) |> ignore // type mismatch now here
The convenience level depends mostly on the concrete code, in my experience. But I have to admit that I too have sometimes been surprised by type mismatch error messages. It takes some time getting used to.

Are there well known algorithms for deducing the "return types" of parser rules?

Given a grammar and the attached action code, are there any standard solutions for deducing what type each production needs to result in (and, consequently, what type the invoking production should expect to get from it)?
I'm thinking of an OO program and action code that employs something like C#'s var syntax (but I'm not looking for something that is C#-specific).
This would be fairly simple if it were not for function overloading and recursive grammars.
The issue arises with cases like this:
Foo ::=
    Bar Baz { return Fig(Bar, Baz); }
  | Foo Pit { return Pop(Foo, Pit); }  // typeof(Foo) = fn(typeof(Foo))
If you were writing code in a functional language it would be easy; standard Hindley-Milner type inference works great. Do not do this. In my EBNF parser generator (never released, but source code available on request), which supports Icon, C, and Standard ML, I actually implemented the idea you are asking about for the Standard ML back end: all the types were inferred. The resulting grammars were nearly impossible to debug.
If you throw overloading into the mix, the results are only going to be harder to debug. (That's right! This just in! Harder than impossible! Greater than infinity! Past my bedtime!) If you really want to try it yourself you're welcome to my code. (You don't; there's a reason I never released it.)
The return value of a grammar action is really no different from a local variable, so you should be able to use C# type inference to do the job. See this paper for some insight into how C# type inference is implemented.
The standard way of doing type inference is the Hindley-Milner algorithm, but that will not handle overloading out-of-the-box.
Note that even parser generators for type-inferencing languages don't typically infer the types of grammar actions. For example, ocamlyacc requires type annotations. The Happy parser generator for Haskell can infer types, but seems to discourage the practice. This might indicate that inferring types in grammars is difficult, a bad idea, or both.
[UPDATE] Very much pwned by Norman Ramsey, who has the benefit of bitter experience.
