Each union case in a discriminated union type gets a tag number:
open Microsoft.FSharp.Reflection

type Result<'TSuccess,'TFailure> =
    | Success of 'TSuccess
    | Failure of 'TFailure

let cases = FSharpType.GetUnionCases typedefof<Result<_,_>>
for case in cases do
    printfn "%s has tag %d" case.Name case.Tag
Looking at the compiled code, the tag is generated by the compiler and is a constant that depends on the order of the cases. So Success is 0 and Failure is 1.
Is the tag number always generated based on the order of the cases? Is this in the F# spec?
Is it possible to provide custom tag number, so that if the order changes or I put another case in the middle, between Success and Failure, their tag numbers don't change?
I'm trying to set up protobuf-net to serialize a discriminated union by creating a custom type model and adding Success and Failure as sub-types of Result. But for that to work I need to specify a field number for each class, and that number must remain constant. I was hoping to automate the setup, but I would need a number associated with each type, and that relationship must never change. Tag seems perfect, if it can be hardcoded in the discriminated union definition.
So we can just read the spec:
If U has more than one case, it has one CLI nested type U.Tags. The U.Tags type contains one integer literal for each case, in increasing order starting from zero.
(section 8.5.4)
So it seems like you can rely on the order of the elements, but inserting new elements will cause new numbers to be created.
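Since the Tag itself can't be pinned, one workaround for the protobuf-net scenario is to assign the field numbers by hand when registering the sub-types, so that reordering or inserting cases can't silently renumber anything. A minimal sketch, assuming protobuf-net's RuntimeTypeModel API and using a non-generic union for simplicity (the nested case types of a generic union need extra reflection work, and whether they are visible via GetNestedType can depend on compilation details):

open ProtoBuf.Meta

type Outcome =
    | Success of string
    | Failure of string

let model = RuntimeTypeModel.Create()
let meta = model.Add(typeof<Outcome>, applyDefaultBehaviour = false)
// 101 and 102 are arbitrary constants chosen by us, not by the compiler,
// so they survive reordering or inserting new cases.
meta.AddSubType(101, typeof<Outcome>.GetNestedType("Success")) |> ignore
meta.AddSubType(102, typeof<Outcome>.GetNestedType("Failure")) |> ignore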
let f x = System.Object.ReferenceEquals(x,x)
f id // false
I thought at first that it might be because a function could be converted to a closure multiple times, but the above disproves that. Why does that last line return false?
You likely have optimizations turned on. This time it's the opposite problem.
What happens if inlining is turned on?
id will be rewritten to an instance of idFuncClass <: FSharpFunc.
The whole expression will be rewritten to:
Object.ReferenceEquals(new fsharpfun1(), new fsharpfun1())
You can turn off inlining with:
open System.Runtime.CompilerServices

[<MethodImpl(MethodImplOptions.NoInlining)>]
let f x = System.Object.ReferenceEquals(x,x)
You'll find that the comparison works again.
But the bigger take-away is this - comparing two functions in F# is undefined behavior. In fact a function type doesn't even implement equality.
let illegal = id = id //this won't compile
Here's the relevant section in the F# Spec:
6.9.24 Values with Underspecified Object Identity and Type Identity
The CLI and F# support operations that detect object identity — that is, whether two object references refer to the same “physical” object.
For example, System.Object.ReferenceEquals(obj1, obj2) returns true if the two object references refer to the same object. Similarly, GetHashCode() returns a hash code that is partly based on physical object identity ...
The results of these operations are underspecified when used with values of the following F# types:
Function types
Tuple types
Immutable record types
Union types
Boxed immutable value types
For two values of such types, the results of System.Object.ReferenceEquals and System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode are underspecified; however, the operations terminate and do not raise exceptions. An implementation of F# is not required to define the results of these operations for values of these types.
What the spec advises is to treat the actual function type and its CLR implementation as a black box.
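If you genuinely need identity semantics for a function value, one option (my suggestion, not something the spec prescribes) is to wrap the function in an ordinary reference type, whose identity is well defined:

// A wrapper with well-defined reference identity; the function inside
// is only ever stored and invoked, never compared.
type FuncBox<'a,'b>(f: 'a -> 'b) =
    member _.Invoke x = f x

let b = FuncBox(fun (x: int) -> x + 1)
let same = System.Object.ReferenceEquals(b, b)   // true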
I have made a wrapper type called Skippable<'a> (an F# discriminated union, not unlike Option) specifically meant for indicating which members should be excluded when serializing types:
type Skippable<'a> =
| Skip
| Serialize of 'a
I have functioning converters, but during deserialization, I want missing JSON values to be deserialized to the Skip case of the DU (instead of null, as is currently happening).
I know of DefaultValueAttribute, but that only works with constant values, and besides I don't want to use an attribute on each and every Skippable-wrapped property in my DTOs.
Is it possible in some way to tell Newtonsoft.Json to populate missing values of a certain type (Skippable<'a>) with a certain value of that type (Skip)? Using converters, contract resolvers, or other methods?
Making Skippable a struct union is one way to do it, since then the default value (e.g. using Unchecked.defaultof) seems to be the first case, with any fields (none, in this case) at their default values.
[<Struct>]
type Skippable<'a> =
| Skip
| Serialize of 'a
// Unchecked.defaultof<Skippable<obj>> = Skip
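As a quick illustration of why the struct trick works (a sketch; the Dto type is hypothetical, and I'm assuming your existing converters handle the present-value case):

open Newtonsoft.Json

type Dto = { Name: Skippable<string> }

// Json.NET fills missing constructor parameters with the parameter type's
// default value; for a struct Skippable that default is Skip.
let dto = JsonConvert.DeserializeObject<Dto>("{}")
// dto.Name should now be Skip rather than null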
This is part of the FSharp.JsonSkippable library, which allows you to control in a simple and strongly typed manner whether to include a given property when serializing (and determine whether a property was included when deserializing), and moreover, to control/determine exclusion separately of nullability.
I am looking to encode (in some .NET language; F# seems most likely to support it) a class of types that form a current context.
The rules would be that we start with an initial context of type 'a. As we progress through the computations, the context is added to, but in a way that always lets me get at previous values. So assume an operation that adds information to the context, 'a -> 'b, which implies that all elements of 'a are also in 'b.
The idea is similar to an immutable map, but I would like it to be statically typed. Is that feasible? How, or why not? TIA.
Update: The answer appears to be that you cannot quite do this at this time, although I have some good suggestions for modeling what I am looking for in a different way. Thanks to all who tried to help with my poorly worded question.
Separate record types in F# are distinct, even if superficially they have similar structure. Even if the fields of record 'a form a subset of fields of record 'c, there's no way of enforcing that relationship statically. If you have a valid reason to use distinct record types there, the best you could do would be to use reflection to get the fields using FSharpType.GetRecordFields and check if one forms the subset of the other.
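A sketch of that runtime check (comparing fields by name and type name is my own choice here):

open Microsoft.FSharp.Reflection

// True if every field of the first record type appears, with the same
// name and type, among the fields of the second.
let isFieldSubset (smaller: System.Type) (larger: System.Type) =
    let fieldsOf t =
        FSharpType.GetRecordFields t
        |> Array.map (fun p -> p.Name, p.PropertyType.FullName)
        |> Set.ofArray
    Set.isSubset (fieldsOf smaller) (fieldsOf larger)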
Furthermore, introducing a new record type for each piece of data added would result in horrendous amounts of boilerplate.
I see two ways to model it that would feel more at place in F# and still allow you some way of enforcing some form of your 'a :> 'c constraint at runtime.
1) If you foresee a small number of records, all of which are useful in other parts of your program, you can use a discriminated union to enumerate the steps of your process:
// Name, NameAndAmount and NameAndAmountAndFoo are the records from the question.
type NameAndAmountAndFooDU =
    | Initial of Name
    | Intermediate of NameAndAmount
    | Final of NameAndAmountAndFoo
With that, records that previously were unrelated types 'a and 'c, become part of a single type. That means you can store them in a list inside Context and easily go back in time to see if the changes are going in the right direction (Initial -> Intermediate -> Final).
2) If you foresee a lot of changes like 'adding' a single field, and you care more about the final product than the intermediate ones, you can define a record of option fields based on the final record:
type NameAndAmountAndFooOption =
    {
        Name: string option
        Amount: decimal option
        Foo: bool option
    }
and have a way to convert it to a non-option NameAndAmountAndFoo (or the intermediate ones like NameAndAmount if you need them for some reason). Then in Context you can set the values of individual fields one at a time, and again, collect the previous records to keep track of how changes are applied.
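For instance, a conversion along these lines (a sketch; the final NameAndAmountAndFoo record is defined here just so the example is self-contained):

type NameAndAmountAndFoo =
    { Name: string; Amount: decimal; Foo: bool }

// Succeeds only once every field has been filled in.
let tryFinalize (r: NameAndAmountAndFooOption) =
    match r.Name, r.Amount, r.Foo with
    | Some name, Some amount, Some foo ->
        Some { Name = name; Amount = amount; Foo = foo }
    | _ -> None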
Something like this?
type Property =
    | Name of string
    | Amount of float

let context = Map.empty<string, Property>
// Parse or whatever
let context = Map.add "Name" (Name "bob") context
let context = Map.add "Amount" (Amount 3.14) context
I have a feeling that if you could show us a bit more of your problem space, there may be a more idiomatic overall solution.
I've just noticed that there's only a little difference in declaring a non-member discriminated union:
type Color =
| Red
| Green
| Blue
and declaring an enum:
type Color =
| Red = 0
| Green = 1
| Blue = 2
What are their main differences in terms of performance, usage, etc? Do you have suggestions when to use what?
Enums are structs and are therefore allocated on the stack, while discriminated unions are reference types and so are heap-allocated. So you would expect DUs to be slightly less performant than enums, though in reality you'll probably never notice the difference.
More importantly, a discriminated union can only ever be one of the declared cases, whereas an enum is really just an integer, so you could cast an integer that isn't a member of the enum to the enum type. This means that when pattern matching, the compiler can verify the match is complete once you've covered all the cases of a DU, but for an enum you must always add a default catch-all case, i.e. for an enum you'll always need pattern matching like the following (note that enum cases must be qualified with the type name in a match):
match enumColor with
| Color.Red -> 1
| Color.Green -> 2
| Color.Blue -> 3
| _ -> failwith "not an enum member"
whereas the last case would not be necessary with a DU.
One final point: as enums are natively supported in both C# and VB.NET, whereas DUs are not, enums are often a better choice when creating a public API for consumption by other languages.
In addition to what Robert has said, pattern matching on unions is done in one of two ways. For unions with only nullary cases, i.e., cases without an associated value (this corresponds closely to enums), the compiler-generated Tag property is checked, which is an int. In this case you can expect performance to be the same as with enums. For unions having non-nullary cases, a type test is used, which I assume is also pretty fast. As Robert said, if there is a performance discrepancy it's negligible. But in the former case it should be exactly the same.
Regarding the inherent "incompleteness" of enums: when a pattern match fails, what you really want to know is whether a valid case wasn't covered by the match. You don't generally care if an invalid integer value was cast to the enum; in that case you want the match to fail. I almost always prefer unions, but when I have to use enums (usually for interoperability), inside the obligatory wildcard case I pass the unmatched value to a function that distinguishes between valid and invalid values and raises the appropriate error.
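A sketch of such a wildcard handler (the generic helper is my own formulation):

open System

// Distinguishes an unhandled-but-valid enum case from an invalid
// integer that was cast into the enum type.
let failEnum<'e when 'e : enum<int>> (value: 'e) =
    if Enum.IsDefined(typeof<'e>, value) then
        failwithf "Unhandled enum case: %A" value
    else
        failwithf "%A is not a valid %s value" value typeof<'e>.Name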
As of F# 4.1 there are struct discriminated unions.
These have the performance benefits of stack allocation, like enums.
They have the superior matching of discriminated unions.
They are F# specific so if you need to be understood by other .Net languages you should still use enums.
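For example, the Color type from the question as a struct union (a sketch):

// Stack-allocated like an enum, but still matched exhaustively.
[<Struct>]
type Color =
    | Red
    | Green
    | Blue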
I am having a brain freeze over F#'s option types. I have 3 books and have read all I can, but I am not getting them.
Does someone have a clear and concise explanation and maybe a real world example?
TIA
Gary
Brian's answer has been rated as the best explanation of option types, so you should probably read it :-). I'll try to write a more concise explanation using a simple F# example...
Let's say you have a database of products and you want a function that searches the database and returns the product with a specified name. What should the function do when there is no such product? When using null, the code could look like this:
Product p = GetProduct(name);
if (p != null)
Console.WriteLine(p.Description);
A problem with this approach is that you are not forced to perform the check, so you can easily write code that will throw an unexpected exception when the product is not found:
Product p = GetProduct(name);
Console.WriteLine(p.Description);
When using the option type, you're making the possibility of a missing value explicit. Types defined in F# cannot have a null value, and when you want to write a function that may or may not return a value, you cannot return Product - instead you need to return option<Product>, so the above code would look like this (I added type annotations so that you can see the types):
let (p:option<Product>) = GetProduct(name)
match p with
| Some prod -> Console.WriteLine(prod.Description)
| None -> () // No product found
You cannot directly access the Description property, because the result of the search is not a Product. To get the actual Product value, you need to use pattern matching, which forces you to handle the case when the value is missing.
Summary. To summarize, the purpose of option type is to make the aspect of "missing value" explicit in the type and to force you to check whether a value is available each time you work with values that may possibly be missing.
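Once the possibility of absence is explicit, the Option module also gives you combinators, so you don't always have to write the match by hand (same hypothetical GetProduct as above):

// Runs the action only when a product was found; does nothing on None.
GetProduct name
|> Option.iter (fun prod -> Console.WriteLine(prod.Description))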
See http://msdn.microsoft.com/en-us/library/dd233245.aspx
The intuition behind the option type is that it "implements" a null value. But in contrast to null, you have to explicitly state that a value can be missing, whereas in most other languages references can be null by default. There is a similarity to SQL's NULL/NOT NULL, if you are familiar with those.
Why is this clever? It is clever because the language can assume that no output of any expression can ever be null. Hence, it can eliminate all null-pointer checks from the code, yielding a lot of extra speed. Furthermore, it frees the programmer from having to check for the null case everywhere in order to produce safe code.
For the few cases where a program does require a null value, the option type exists. As an example, consider a function which asks for a key inside an .ini file. The key returned is an integer, but the .ini file might not contain the key. In this case, it makes sense to return 'null' if the key is not found. None of the integer values are useful as a sentinel - the user might have entered exactly that integer value in the file. Hence, we need to 'lift' the domain of integers and give it a new value representing "no information", i.e. the null. So we wrap the 'int' into an 'int option'. Now, if there is no integer value we will get 'None', and if there is an integer value, we will get 'Some(N)' where N is the integer value in question.
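A sketch of that lookup, assuming the .ini contents have already been parsed into a Map:

// Some n when the key exists and parses as an integer, None otherwise.
let tryGetIntKey (ini: Map<string, string>) (key: string) : int option =
    match Map.tryFind key ini with
    | Some text ->
        match System.Int32.TryParse text with
        | true, n -> Some n
        | _ -> None
    | None -> None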
There are two beautiful consequences of the choice. One, we can use the general pattern-match features of F# to discriminate the values in a match expression. Two, the framework of algebraic datatypes used to define the option type is exposed to the programmer. That is, if there were no option type in F#, we could have created it ourselves!
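Indeed, it is just a two-case union; a sketch of a home-grown version:

type MyOption<'a> =
    | MySome of 'a
    | MyNone

let describe v =
    match v with
    | MySome x -> sprintf "Some %A" x
    | MyNone -> "None"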