I don't understand how to use the provided Map from Stdlib. I tried creating a Map from String to String, and it worked, but whenever I try using the lookup function I get weird errors:
This is a simplification of my code:
open import Data.Maybe
open import Data.String
open import Data.String.Properties
open import Data.Tree.AVL.Map (<-strictTotalOrder-≈) using (Map ; singleton ; lookup)
singletonMap : Map String
singletonMap = singleton "test" "a"
test : Maybe String
test = lookup singletonMap "test"
I get the following error:
(Data.Tree.AVL.Tree <-strictTotalOrder-≈
(Data.Tree.AVL.Value.const
(Relation.Binary.Bundles.StrictPartialOrder.Eq.setoid
(Relation.Binary.Bundles.StrictTotalOrder.strictPartialOrder
<-strictTotalOrder-≈))
String))
!=< String
when checking that the expression singletonMap has type
Relation.Binary.Bundles.StrictTotalOrder.Carrier
<-strictTotalOrder-≈
I'm having trouble even understanding what's happening. It looks like lookup is not expecting an argument like singletonMap, because the types do not match, but how can that be? I created it using singleton, and the functions have the following types (they can also be seen in Data.Tree.AVL.Map):
singleton : Key → V → Map V
lookup : Map V → Key → Maybe V
They are literally the same.
You're linking to the documentation of the current dev version.
In older, released versions, lookup takes the key first, and the Map second.
You should write
test : Maybe String
test = lookup "test" singletonMap
The dev version is gearing up to release a version 2.0 with breaking changes in the name of uniformity (all lookups will now have a type of the form Container -> Key -> Val because the type of keys may depend on the type of the container, but not the other way around).
I have written several helper functions in F# that enable me to deal with the dynamic nature of Excel over the COM/PIA interface. However when I go to use these functions in an Excel-DNA UDF they do not work as expected as Excel-DNA is pre-processing the values in the array from excel.
e.g. null is turned into ExcelDna.Integration.ExcelEmpty
This interferes with my own validation code that was anticipating a null. I am able to work around this by adding an additional case to my pattern matching:
let (|XlEmpty|_|) (x: obj) =
    match x with
    | null -> Some XlEmpty
    | :? ExcelDna.Integration.ExcelEmpty -> Some XlEmpty
    | _ -> None
However it feels like a waste to convert and then convert again. Is there a way to tell Excel-DNA not to do additional processing of the range values in a UDF and supply them equivalent to the COM/PIA interface? i.e. Range.Value XlRangeValueDataType.xlRangeValueDefault
EDIT:
I declare my arguments as obj like this:
[<ExcelFunction(Description = "Validates a Test Table Row")>]
let isTestRow (headings: obj) (row: obj) =
    let validator = TestTable.validator
    let headingsList = TestTable.testHeadings
    validateRow validator headingsList headings row
I have done some more digging, and the question @Jim Foye suggested also confirms this. For UDFs, Excel-DNA works over the C API rather than COM and therefore has to do its own marshaling. The possible values are shown in this file:
https://github.com/Excel-DNA/ExcelDna/blob/2aa1bd9afaf76084c1d59e2330584edddb888eb1/Distribution/Reference.txt
The reason for using ExcelEmpty (the user supplied an empty cell) is that, for a UDF, an argument can also be ExcelMissing (the user supplied no argument). Both might reasonably map to null, so there is a need to disambiguate them.
I will adjust my pattern matching to be compatible with both the COM marshaling and the Excel-DNA marshaling.
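For reference, a minimal sketch of what that combined pattern might look like (the name XlBlank is just for illustration), assuming the ExcelDna.Integration.ExcelEmpty and ExcelMissing types:

// Illustrative only: treat COM null, Excel-DNA's ExcelEmpty and ExcelMissing
// all as "no value supplied".
let (|XlBlank|_|) (x: obj) =
    match x with
    | null -> Some ()
    | :? ExcelDna.Integration.ExcelEmpty -> Some ()
    | :? ExcelDna.Integration.ExcelMissing -> Some ()
    | _ -> None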
Firstly, obtain a schema and parse:
type desc = JsonProvider< """[{"name": "", "age": 1}]""", InferTypesFromValues=true >
let json = """[{"name": "Kitten", "age": 322}]"""
let typedJson = desc.Parse(json)
Now we can access the typedJson.[0].Age and .Name properties; however, I'd like to pattern match on them at compile time to get an error if the schema changes.
Since those properties are erased and we cannot obtain them at run-time:
open System.Reflection

let ``returns false`` () =
    typedJson.[0].GetType()
        .FindMembers(MemberTypes.All,
                     BindingFlags.Public ||| BindingFlags.Instance,
                     MemberFilter(fun _ _ -> true), null)
    |> Array.exists (fun m -> m.ToString().Contains("Age"))
...I've made a runtime-check version using active patterns:
let (|Name|Age|) k =
    let toID = NameUtils.uniqueGenerator NameUtils.nicePascalName
    let idk = toID k
    match idk with
    | _ when idk.Equals("Age") -> Age
    | _ when idk.Equals("Name") -> Name
    | ex_val -> failwith (sprintf "\"%s\" shouldn't even compile!" ex_val)

typedJson.[0].JsonValue.Properties()
|> Array.map (fun (k, v) ->
    match k with
    | Age -> v.AsInteger().ToString() // ...
    | Name -> v.AsString()) // ...
|> Array.iter (printfn "%A")
In theory, if FSharp.Data weren't open source I wouldn't be able to implement toID. Generally, the whole approach seems wrong, like it is redoing the library's work.
I know that discriminated unions can't be generated using type providers, but maybe there's a better way to do all this checking at compile-time?
As far as I know, it is not possible to find out whether the "JSON schema has changed" at compile time using the given TP.
That's why:
JsonProvider<sample> is exactly what kicks in at compile time, providing a type for manipulating JSON contents at run time. This provided, erased type has a couple of run-time static methods common to any sample, plus a type Root extending IJsonDocument with a few instance properties, including ones based on the compile-time sample (in your case, the properties Name and Age). There is exactly one very relaxed, implicit JSON "schema" behind the JsonProvider-provided type, and no other such entity to compare it against for changes at compile time.
At run time, only the provided type desc with its static methods, and its Root type with the corresponding instance methods, are at your service for manipulating arbitrary JSON contents. All of this is pretty much agnostic with regard to the "JSON schema"; in your case, as long as the run-time JSON contents represent an array, its elements may be pretty much anything.
For example,
type desc = JsonProvider<"""[{"name": "", "age": 1}]"""> // compile-time
let ``kinda "typed" json`` = desc.Parse("""[]""") // run-time
let ``another kinda "typed" json`` =
    desc.Parse("""[{"contents":"whatever", "text":"blah-blah-blah"},[{"extra":42}]]""")
Both will be happily parsed at run time as "typed JSON" conforming to the "schema" derived by the TP from the given sample, although Name and Age are clearly missing and exceptions will be raised if they are accessed.
It would be doable to build another JSON TP that relies upon a formal JSON schema. It could consume a reference to the given schema from a schema repository upon type creation and allow manipulating elements of the parsed JSON payload only via provided accessors derived from the schema at compile time. In that case, a change to the referenced schema may break compilation if the provided accessors used in the code are incompatible with the change. Such an arrangement, accompanied by a run-time JSON payload validator or a validating parser, could provide reliable, enterprise-quality management of JSON schema changes.
The JsonProvider TP from FSharp.Data lacks such JSON schema handling abilities, so payload validation can only be done at run time.
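For illustration, here is a minimal sketch of what such a run-time check might look like (the expectedFields set and the validateElement helper are made up for this sketch and are not part of FSharp.Data):

// Illustrative run-time "schema" check: verify each element of the parsed
// array carries exactly the fields we expect before using the typed accessors.
let expectedFields = set [ "name"; "age" ]

let validateElement (item : desc.Root) =
    let actual = item.JsonValue.Properties() |> Array.map fst |> Set.ofArray
    actual = expectedFields

let parsed = desc.Parse("""[{"name": "Kitten", "age": 322}]""")
parsed |> Array.forall validateElement // true for this payload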
Quoting your comments which explain a little better what you are trying to achieve:
Thank you! But what I'm trying to achieve is to get a compiler error
if I add a new field e.g. Color to the json schema and then ignore it
while later processing. In case of unions it would be instant FS0025.
and:
yes, I must process all fields, so I can't rely on _. I want it so
when the schema changes, my F# program won't compile without adding
necessary handling functionality(and not just ignoring new field or
crashing at runtime).
The simplest solution for your purpose is to construct a "test" object.
The provided type comes with two constructors: one takes a JsonValue and parses it (effectively the same as JsonValue.Parse), while the other requires every field to be filled in.
That's the one that interests us.
We're also going to invoke it using named parameters, so that we'll be safe not only if fields are added or removed, but also if they are renamed or changed.
type desc = JsonProvider< """[{"name": "SomeName", "age": 1}]""", InferTypesFromValues=true >
let TestObjectPleaseIgnore = new desc.Root (name = "Kitten", age = 322)
// compiles
(Note that I changed the value of name in the sample to "SomeName", because "" was being inferred as a generic JsonValue.)
Now if more fields suddenly appear in the sample used by the type provider, the constructor will become incomplete and fail to compile.
type desc = JsonProvider< """[{"name": "SomeName", "age": 1, "color" : "Red"}]""", InferTypesFromValues=true >
let TestObjectPleaseIgnore = new desc.Root (name = "Kitten", age = 322)
// compilation error: The member or object constructor 'Root' taking 1 arguments are not accessible from this code location. All accessible versions of method 'Root' take 1 arguments.
Obviously, the error refers to the 1-argument constructor because that's the one it tried to fit, but you'll see that now the provided type has a 3-parameter constructor replacing the 2-parameter one.
If you use InferTypesFromValues=false, you get a strong type back:
type desc = JsonProvider< """[{"name": "", "age": 1}]""", InferTypesFromValues=false >
You can use the desc type to define active patterns over the properties you care about:
let (|Name|_|) target (candidate : desc.Root) =
    if candidate.Name = target then Some target else None

let (|Age|_|) target (candidate : desc.Root) =
    if candidate.Age = target then Some target else None
These active patterns can be used like this:
let json = """[{"name": "Kitten", "age": 322}]"""
let typedJson = desc.Parse(json)
match typedJson.[0] with
| Name "Kitten" n -> printfn "Name is %s" n
| Age 322m a -> printfn "Age is %M" a
| _ -> printfn "Nothing matched"
Given the typedJson value here, that match is going to print out "Name is Kitten".
TL;DR: Build some parser to handle your issues. http://www.quanttec.com/fparsec/
So...
You want something that can read something and do something with it, without knowing a priori what any of those somethings is.
Good luck with that one.
You do not want a type provider to do this for you. Type providers are made with the express purpose of being "at compile time this is what I saw, and that's what I will use".
With that said:
You want some other type of parser, one where you are able to check the "schema" (some definition of what you know is going to come, or what you saw last time, vs. what actually came). In effect, a dynamic parser reading the data into some dynamic structure, with dynamic types.
And mind you, dynamic is not static. F# has static types and a lot is based on that, and type providers even more so. Fighting that will make your head ache. It is of course possible, and it might even be possible, by fighting the type providers, to make such an approach work, but then again it's not really a type provider nor its purpose.
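The answer above points at FParsec; as a lighter-weight illustration of the "dynamic structure, dynamic types" idea, FSharp.Data's untyped JsonValue can be used roughly like this (a sketch assuming the payload is a JSON array of records; the inspect function is made up for illustration):

open FSharp.Data

// No compile-time schema: decide at run time what to do with every
// property that actually shows up in the payload.
let inspect (payload : string) =
    match JsonValue.Parse payload with
    | JsonValue.Array items ->
        items
        |> Array.iter (fun item ->
            match item with
            | JsonValue.Record props ->
                props |> Array.iter (fun (k, v) -> printfn "%s = %A" k v)
            | other -> printfn "unexpected element: %A" other)
    | other -> printfn "unexpected payload: %A" other

inspect """[{"name": "Kitten", "age": 322}]"""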
In C#, in order to access a property/method I might downcast to a type and then access the property:
Fruit f = new Banana();
((Banana)f).Peel();
What is the equivalent in F#? I tried the following but the intellisense after typing the period does not give any members of the Banana type so I'm guessing this isn't valid.
let f = new Banana()
(f :?> Banana).Peel()
I know I can do it by introducing an intermediate let statement but was wondering if there was a one liner.
UPDATE
Ok I was trying to give a general example but I guess something got lost in my translation. I'm working with Excel interop. In any case here is the more specific code.
(worksheet.Cells.[rowNumber, dateColumn] :?> Excel.Range).<member here>
But it looks like it was just a temporary glitch in IntelliSense, because today (after rebooting my computer) it seems to work fine. Yesterday it was only showing me the basic object members Equals, GetHashCode, GetType, and ToString. Now today this same expression is giving me all the members of the Excel.Range class. So it looks like everything works as expected.
Your initial guess is correct, this code works:
type Fruit() =
    member me.Test() = printfn "Test"

type Banana() =
    inherit Fruit()
    member me.Peel() = printfn "Peeling !"

let f = new Banana() :> Fruit
(f :?> Banana).Peel()
You are not showing how you defined those types; probably you have something wrong there.
Also keep in mind that in F# you have to be explicit when upcasting. In your example your f object is already a Banana; nothing in your code upcasts it to a Fruit.
F# has a feature called "type extensions" that gives a developer the ability to extend existing types.
There are two kinds of extensions: intrinsic extensions and optional extensions. The first is similar to partial types in C# and the second is somewhat similar to extension methods (but more powerful).
To use an intrinsic extension we put the two declarations into the same file. In this case the compiler merges the two definitions into one final type (i.e. these are two "parts" of one type).
The issue is that the two parts have different access rules for members and values:
// SampleType.fs
// "Main" declaration
type SampleType(a: int) =
    let f1 = 42
    let func() = 42

    [<DefaultValue>]
    val mutable f2: int

    member private x.f3 = 42
    static member private f4 = 42

    member private this.someMethod() =
        // The "main" declaration has access to all values (a, f1 and func())
        // as well as to all members (f2, f3, f4)
        printf "a: %d, f1: %d, f2: %d, f3: %d, f4: %d, func(): %d"
            a f1 this.f2 this.f3 SampleType.f4 (func())

// "Partial" declaration
type SampleType with
    member private this.anotherMethod() =
        // The "partial" declaration has no access to the values (a, f1 and func()),
        // so the following lines won't compile:
        //printf "a: %d" a
        //printf "f1: %d" f1
        //printf "func(): %d" (func())
        // But it does have access to the private members (f2, f3 and f4)
        printf "f2: %d, f3: %d, f4: %d"
            this.f2 this.f3 SampleType.f4
I read the F# specification but didn't find any explanation of why the F# compiler differentiates between value and member declarations.
Section 8.6.1.3 of the F# spec says that "The functions and values defined by instance definitions are lexically scoped (and thus implicitly private) to the object being defined." The partial declaration has access to all private members (static and instance). My guess is that by "lexical scope" the specification authors mean only the "main" declaration, but this behavior seems weird to me.
The question is: is this behavior intentional and what rationale behind it?
This is a great question! As you pointed out, the specification says that "local values are lexically scoped to the object being defined", but looking at the F# specification, it does not actually define what lexical scoping means in this case.
As your sample shows, the current behavior is that the lexical scope of an object definition is just the primary type definition (excluding intrinsic extensions). I'm not too surprised by that, but I can see that the other interpretation would make sense too...
I think a good reason for this is that the two kinds of extensions should behave the same (as much as possible) and you should be able to refactor your code from using one to the other as you need. The two kinds only differ in how they are compiled under the covers. This property would be broken if one kind allowed access to the lexical scope while the other did not (because extension members technically cannot do that).
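To make the refactoring point concrete, here is a small sketch (building on the SampleType from the question; the module and member names are illustrative) of the optional-extension form, which is subject to the same restriction on let-bound values:

// Optional extension: may live in another file or even another assembly.
module SampleTypeExtensions =
    type SampleType with
        member this.reportF2() =
            // The member f2 is reachable here, but the let-bound values
            // (a, f1 and func) are not, just as in the intrinsic extension.
            printf "f2: %d" this.f2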
That said, I think this could be (at least) clarified in the specification. The best way to report this is to send email to fsbugs at microsoft dot com.
I sent this question to fsbugs at microsoft dot com and got the following answer from Don Syme:
Hi Sergey,
Yes, the behaviour is intentional. When you use “let” in the class scope the identifier has lexical scope over the type definition. The value may not even be placed in a field – for example if a value is not captured by any methods then it becomes local to the constructor. This analysis is done locally to the class.
I understand that you expect the feature to work like partial classes in C#. However it just doesn’t work that way.
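A small sketch of the behaviour Don describes (the type name is illustrative): a let binding that no member captures never becomes a field at all, while a captured one is stored by the compiler for the member's use:

type Demo(a: int) =
    // Not captured by any member: exists only while the constructor runs
    // and is not compiled into a field.
    let onlyUsedHere = a * 2
    do printfn "constructing with %d" onlyUsedHere

    // Captured by a member: the compiler stores it so the member can use it later.
    let kept = a + 1
    member this.Kept = kept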
I think the term "lexical scope" should be defined more clearly in the spec, because otherwise the current behavior will be surprising to other developers as well.
Many thanks to Don for his response!
So, by a hilarious series of events, I downloaded the FParsec source and tried to build it. Unfortunately, it's not compatible with the new 1.9.9.9. I fixed the easy problems, but there are a couple of discriminated unions that still don't work.
Specifically, Don Syme's post explains that discriminated unions containing items of type obj or of function type (->) no longer automatically get equality or comparison constraints, since objects don't support comparison and functions don't support equality. (It's not clear whether the automatically generated equality/comparison was buggy before, but the code won't even compile now that they're no longer generated.)
Here are some examples of the problematic DUs:
type PrecedenceParserOp<'a,'u> =
    | PrefixOp of string * Parser<unit,'u> * int * bool * ('a -> 'a)
    | others ...

type ErrorMessage =
    | ...
    | OtherError of obj
    | ...
Here are the offending uses:
member t.RemoveOperator (op: PrecedenceParserOp<'a, 'u>) =
    // some code ...
    if top.OriginalOp <> op then false // requires equality constraint
    // etc etc ...
or, for the comparison constraint
let rec printMessages (pos: Pos) (msgs: ErrorMessage list) ind =
    // other code ...
    for msg in Set.ofList msgs do // iterate over ordered unique messages
        // etc etc ...
As far as I can tell, Don's solution of tagging each instance with a unique int is the Right Way to implement a custom equality/comparison constraint (or maybe a unique int tuple, so that individual branches of the DU can be ordered). But this is inconvenient for the user of the DU: now construction of the DU requires calling a function to get the next stamp.
Is there some way to hide the tag-getting and present the same constructors to users of the library? That is, to change the implementation without changing the interface? This is especially important because it appears (from what I understand of the code) that PrecedenceParserOp is a public type.
What source did you download for FParsec? I grabbed the latest from the FParsec BitBucket repository, and I didn't have to make any changes at all to the FParsec source to get it to compile in VS 2010 RC.
Edit: I take that back. I did get build errors from the InterpLexYacc and InterpFParsec sample projects, but the core FParsec and FParsecCS projects build just fine.
One thing you could do is add [<CustomEquality>] and [<CustomComparison>] attributes and define your own .Equals override and IComparable implementation. Of course, this would require you to handle the obj and _ -> _ components yourself in an appropriate way, which may or may not be possible. If you can control what's being passed into the OtherError constructor, you ought to be able to make this work for the ErrorMessage type by downcasting the obj to a type which is itself structurally comparable. However, the PrecedenceParserOp case is a bit trickier - you might be able to get by with using reference equality on the function components as long as you don't need comparison as well.
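A minimal sketch of that approach for the ErrorMessage type, showing only the equality half (the Expected case name and the equality policy for the obj payload are assumptions for illustration, not FParsec's actual code; [<CustomComparison>] with an IComparable implementation would follow the same pattern):

[<CustomEquality; NoComparison>]
type ErrorMessage =
    | Expected of string
    | OtherError of obj

    override this.Equals(other) =
        match other with
        | :? ErrorMessage as that ->
            match this, that with
            | Expected a, Expected b -> a = b
            // Assumes OtherError payloads can be compared with Object.Equals;
            // downcast to a structurally comparable type if you need more.
            | OtherError a, OtherError b -> System.Object.Equals(a, b)
            | _ -> false
        | _ -> false

    override this.GetHashCode() =
        match this with
        | Expected s -> hash s
        | OtherError o -> if isNull o then 0 else o.GetHashCode()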