I am fairly new to F#, so maybe my question is dumb. I wrote a program in F# that uses generic types, and the compiler inferred the types differently from what I intended: a bug in the deepest function call instantiated a type parameter to the wrong type. This naturally produced type mismatches elsewhere, where I used the type as I had expected it to be. To find the root of the problem, I tried to explicitly annotate the higher-level functions with the types I intended for the generic parameter. However, the type mismatch was still reported in those high-level functions, not in the low-level function where the type was actually instantiated. This doesn't seem like a convenient way of determining types: it is usually much easier for programmers to pin down types in higher-level functions, and such an explicit annotation should, I'd expect, produce the type error in the lower-level function where the type was fixed. In my experience, the compiler's automatic type inference seems to override the explicit type declaration. Am I misunderstanding something here?
Something like the code below:
type test<'a,'b> = { Func : 'a -> 'b; Arg : 'a }

let C(a) =
    a.Func(2) |> ignore   // the int literal 2 fixes 'a to int here

let B(a) =
    C(a)

let A(a : test<int64,int64>) =
    B(a) // <----------------------- the type mismatch is detected here
Having the error detected in such a high-level function call makes it difficult to find the root of the problem, because now I have to look not only for bugs in the values of variables but also for type-inference bugs.
F#'s type inference is (fairly) strict top-to-bottom, left-to-right. See here for a discussion: Why is F#'s type inference so fickle?
You can check this by rearranging your code like this (just for demonstration):
type test<'a,'b> = { Func : 'a -> 'b; Arg : 'a }

let rec B(a) =
    C(a)

and A(a : test<int64,int64>) =
    B(a)

and C(a) =
    a.Func(2) |> ignore // type mismatch now here
The convenience level depends mostly on the concrete code, in my experience. But I have to admit that I too have sometimes been surprised by type mismatch error messages. It takes some time getting used to.
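One practical workaround, as a minimal sketch of my own (not from the answer above): annotate the lowest-level function instead. That pins the type parameter where it is actually used, so the error is reported at the real bug site (at least with the compiler behavior discussed in this thread; later F# versions added some implicit widening of integer literals):

let C(a : test<int64,int64>) =
    a.Func(2) |> ignore   // error now reported here: 2 is an int, not an int64

// and the actual fix is an int64 literal:
let CFixed(a : test<int64,int64>) =
    a.Func(2L) |> ignore  // compiles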
Related
It just occurred to me that F# generics do not seem to accept constant values as "template parameters".
Suppose one wanted to create a type RangedInt such that it behaves like an int but is guaranteed to contain only a sub-range of integer values.
A possible approach could be a discriminated union, similar to:
type RangedInt = | Valid of int | Invalid
But this does not work either, as there is no type-specific storage for the range information. And two RangedInt instances should be of different types if their ranges differ, too.
Still being a bit C++-infected, I imagine it would look similar to:
template<int low,int high>
class RangedInteger { ... };
Now the question arising is twofold:
Did I miss something and constant values for F# generics exist?
If I did not miss that, what would be the idiomatic way to accomplish such a RangedInt<int,int> in F#?
Having found Tomas Petricek's blog about custom numeric types, the equivalent of my question for that blog article would be: what if he had built not an IntegerZ5 but an IntegerZn<int> custom type family?
The language feature you're requesting is called Dependent Types, and F# doesn't have that feature.
It's not a particularly common language feature, and even Haskell (which most other Functional programming languages 'look up to') doesn't really have it.
There are languages with Dependent Types out there, but I wouldn't consider any of them mainstream. Probably the one I hear about the most is Idris.
Did I miss something and constant values for F# generics exist?
While F# has much stronger type inference than other .NET languages, at its heart it is built on .NET.
And .NET generics only support a small subset of what is possible with C++ templates. All type arguments to generic types must be types, and there is no defaulting of type arguments either.
If I did not miss that, what would be the idiomatic way to accomplish such a RangedInt in F#?
It would depend on the details. Setting the limits at runtime is one possibility – this would be the usual approach in .NET. Another would be units of measure (this seems less likely to be a fit).
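As a minimal sketch of the runtime approach (illustrative names, not a library API), the limits become fields checked on construction rather than part of the type:

// Range checked at run time; two different ranges are not distinct types.
type RangedInt private (value : int, low : int, high : int) =
    member _.Value = value
    member _.Low = low
    member _.High = high
    static member Create(value, low, high) =
        if value < low || value > high then
            invalidArg "value" (sprintf "%d is outside [%d, %d]" value low high)
        RangedInt(value, low, high)

let r = RangedInt.Create(3, 0, 5)       // ok
// let bad = RangedInt.Create(9, 0, 5)  // raises ArgumentException

The price, compared to the C++ template version, is that violations surface at run time instead of compile time.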
What if he had built not an IntegerZ5 but an IntegerZn<int> custom type family?
I see two reasons:
It is an example, and avoiding generics keeps things simpler, allowing focus on the point of the example.
What other underlying type would one use anyway? On contemporary systems smaller types (byte, Int16 etc.) are less efficient (unless space at runtime is the overwhelming concern); long would add size without benefit (it is only going to hold 5 possible values).
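For what it's worth, here is a minimal sketch (my own, not from the blog; names are illustrative) of the nearest F# equivalent: carrying the modulus n as a runtime value rather than a type parameter, with the same run-time-versus-compile-time trade-off as above:

// Zn arithmetic with the modulus stored in the value itself.
type IntegerZn = { Value : int; N : int }

let createZn n v = { Value = ((v % n) + n) % n; N = n }

let addZn a b =
    if a.N <> b.N then invalidArg "b" "moduli differ"
    createZn a.N (a.Value + b.Value)

let x = createZn 5 3
let y = createZn 5 4
let z = addZn x y   // { Value = 2; N = 5 }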
I was wondering about the difference between using and not using type annotations (var a: Int = 1 vs. var a = 1) in Swift, so I read Apple's The Swift Programming Language.
However, it only says:
You can provide a type annotation when you declare a constant or variable, to be clear about the kind of values the constant or variable can store.
and
It is rare that you need to write type annotations in practice. If you provide an initial value for a constant or variable at the point that it is defined, Swift can almost always infer the type to be used for that constant or variable
It doesn't mention the pros and cons.
It's obvious that using type annotations makes code clearer and self-explanatory, whereas omitting them makes the code easier to write.
Nonetheless, I'd like to know if there are any other reasons (for example, from the perspective of performance or the compiler) why I should or should not use type annotations in general.
It is entirely syntactic, so as long as you give the compiler enough information to infer the correct type, the effect and performance at run time are exactly the same.
Edit: I missed your reference to the compiler - I cannot see it having any significant impact on compile times either, as the compiler needs to evaluate your assignment expression and check type compatibility anyway.
F# does not support implicit conversions. I understand that this is a feature, but I don't understand why implicit conversions are forbidden even when no information would be lost. For example:
sqrt 4 // Won't compile.
I don't see a problem implicitly converting the int 4 to a float, which is what sqrt requires.
Can anyone shed light on this?
Because its type checker depends heavily on classical strong type reconstruction. Your example requires a coercion of types, which is possible through implicit casts or a weak type system, but neither is allowed in this kind of type inference.
Since F# comes from OCaml, its type reconstruction tries to guarantee the correctness of your program by being extremely pedantic: the algorithm tries to unify all the types in your program into one consistent typing, and this cannot be done if a weak typing rule allows an integer to be treated like a float.
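In practice the fix is simply to write the conversion explicitly, which keeps the typing unambiguous:

sqrt 4.0        // a float literal compiles
sqrt (float 4)  // an explicit int-to-float conversion also compiles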
See also
http://lorgonblog.wordpress.com/2009/10/25/overview-of-type-inference-in-f/
which describes how overloading interacts badly with type inference. Implicit conversions cause many of the same problems as overloading, and also interact poorly with error diagnostics.
In addition to the above answers, type conversions between integer and float are not actually free in computational terms; the compiler is not doing you a favour by "hiding" them from you if your goal is to write high-performance code. This is one of the things I like about OCaml and F#: even tho' they are very high level, you still need to be aware of exactly what computation you are doing.
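To make that concrete, a small sketch (the exact IL may vary by compiler version and optimization settings): the conversion compiles to a real instruction, not to nothing.

let toFloat (x : int) : float = float x
// compiles to IL roughly like:
//   ldarg.0
//   conv.r8
//   ret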
In real terms, what is the difference between a "string" and a "string option"?
Aside from minor syntax issues, the only difference I have seen is that you can pass a null to a string while a string option expects a None.
I don't particularly like the answer I've typed up below, because I think the reader will either see it as 'preaching to the choir' or as 'some complex nonsense', but I've decided to post it anyway, in case it invites fruitful comment-discussion.
First off, it may be noteworthy to understand that
let x : string option = Some(null)
is a valid value, that indicates the presence (rather than absence) of a value (Some rather than None), but the value itself is null. (The meaning of such a value would depend on context.)
If you're looking for what I see as the 'right' mental model, it goes something like this...
The whole notion that "all reference types admit a 'null' value" is one of the biggest and most costly mistakes of .NET and the CLR. If the platform were redesigned from scratch today, I think most folks agree that references would be non-nullable by default, and you would need an explicit mechanism to opt in to null. As it stands today, there are hundreds, if not thousands, of APIs that take e.g. "string foo" and do not want a null (e.g. they would throw ArgumentNullException if you passed null). Clearly this is something better handled by a type system. Ideally, 'string' would mean 'non-null', and for the minority of APIs that do want null, you would spell that out, e.g. "Nullable<string> foo" or "Option<string> foo" or whatever. So it's the existing .NET platform that's the 'oddball' here.
Many functional languages (such as ML, one of the main influences on F#) have known this forever, and so designed their type systems 'right': if you want to admit a 'null' value, you use a generic type constructor to explicitly mark data that can legitimately have 'absence of a value' as a legal value. In ML, this is done with the "'t option" type - 'option' is a fine, general-purpose solution to this issue. F#'s core is compatible (cross-compiles) with OCaml, an ML dialect, and thus F# inherits this type from its ML ancestry.
But F# also needs to integrate with the CLR, and in the CLR, all references can be null. F# attempts to walk a somewhat fine line, in that you can define new class types in F#, and for those types, F# will behave ML-like (and not easily admit null as a value):
type MyClass() = class end
let mc : MyClass = null // does not compile
however the same type defined in a C# assembly will admit null as a proper value in F#. (And F# still allows a back-door:
let mc : MyClass = Unchecked.defaultof<_> // mc is null
to effectively get around the normal F# type system and access the CLR directly.)
This is all starting to sound complicated, but basically the F# system lets you pretty much program lots of F# in the 'ML' style, where you never need to worry about null/NullReferenceExceptions, because the type system prevents you from doing the wrong things here. But F# has to integrate nicely with .Net, so all types that originate from non-F# code (like 'string') still admit null values, and so when programming with those types you still have to program as defensively as you normally do on the CLR. With regards to null, effectively F# provides a haven where it is easy to do 'programming without null, the way God intended', but at the same time interoperate with the rest of .Net.
I haven't really answered your question, but if you follow my logic, then you would not ask the question (or would unask it, à la Joshu's MU from "Gödel, Escher, Bach").
I think you could reclassify this as a more general question:
What is the difference between an option-wrapped reference type and the plain reference type?
The difference is that with an option, you are providing an explicit empty-value case. It's a declarative way of saying "I might not provide a value". An option with the value None unambiguously represents the lack of a value.
Often, people use null to represent the lack of a value. Unfortunately this is ambiguous to the casual reader, because it's unknown whether null represents a valid value or the lack of one. Option removes this ambiguity.
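A minimal sketch of why this matters in practice (my own illustration): pattern matching on an option forces the caller to handle the empty case, whereas a null check on a plain string is easy to forget:

let describe (name : string option) =
    match name with
    | Some n -> sprintf "Hello, %s" n
    | None   -> "No name given"

describe (Some "world")   // "Hello, world"
describe None             // "No name given"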
In the process of transforming a given efficient pointer-based hash map implementation into a generic hash map implementation, I stumbled across the following problem:
I have a class representing a hash node (the hash map implementation uses a binary tree)
THashNode<KEY_TYPE, VALUE_TYPE> = class
public
  Key   : KEY_TYPE;
  Value : VALUE_TYPE;
  Left  : THashNode<KEY_TYPE, VALUE_TYPE>;
  Right : THashNode<KEY_TYPE, VALUE_TYPE>;
end;
In addition to that there is a function that should return a pointer to a hash node. I wanted to write
PHashNode = ^THashNode <KEY_TYPE, VALUE_TYPE>
but that doesn't compile (';' expected but '<' found).
How can I have a pointer to a generic type?
And addressed to Barry Kelly, if you read this: yes, this is based on your hash map implementation. You haven't written such a generic version of your implementation yourself, have you? That would save me some time :)
Sorry, Smasher. Pointers to open generic types are not supported because generic pointer types are not supported, although it is possible (compiler bug) to create them in certain circumstances (particularly pointers to nested types inside a generic type); this "feature" can't be removed in an update in case we break someone's code. The limitation on generic pointer types ought to be removed in the future, but I can't make promises when.
If the type in question is the one in JclStrHashMap I wrote (or the ancient HashList unit), well, the easiest way to reproduce it would be to change the node type to be a class and pass around any double-pointers as Pointer with appropriate casting. However, if I were writing that unit again today, I would not implement buckets as binary trees. I got the opportunity to write the dictionary in the Generics.Collections unit, though with all the other Delphi compiler work time was too tight before shipping for solid QA, and generic feature support itself was in flux until fairly late.
I would prefer to implement the hash map buckets as one of double-hashing, per-bucket dynamic arrays or linked lists of cells from a contiguous array, whichever came out best from tests using representative data. The logic is that cache miss cost of following links in tree/list ought to dominate any difference in bucket search between tree and list with a good hash function. The current dictionary is implemented as straight linear probing primarily because it was relatively easy to implement and worked with the available set of primitive generic operations.
That said, the binary tree buckets should have been an effective hedge against poor hash functions; if they were balanced binary trees (=> even more modification cost), they would give O(1) average and O(log n) worst-case performance.
To actually answer your question, you can't make a pointer to a generic type, because "generic types" don't exist. You have to make a pointer to a specific type, with the type parameters filled in.
Unfortunately, the compiler doesn't like finding angle brackets after a ^. But it will accept the following:
TGeneric<T> = record
  value: T;
end;

TSpecific = TGeneric<string>;
PGeneric = ^TSpecific;
But "PGeneric = ^TGeneric<string>;" gives a compiler error. Sounds like a glitch to me. I'd report that over at QC if I was you.
Why are you trying to make a pointer to an object, anyway? Delphi objects are a reference type, so they're pointers already. You can just cast your object reference to Pointer and you're good.
If Delphi supported generic pointer types at all, it would have to look like this:
type
  PHashNode<K, V> = ^THashNode<K, V>;
That is, mention the generic parameters on the left side where you declare the name of the type, and then use those parameters in constructing the type on the right.
However, Delphi does not support that. See QC 66584.
On the other hand, I'd also question the necessity of having a pointer to a class type at all, generic or not. They are needed only very rarely.
There's a generic hash map called TDictionary in the Generics.Collections unit. Unfortunately, it's badly broken at the moment, but it's apparently going to be fixed in update #3, which is due out within a matter of days, according to Nick Hodges.