Instantiate type classes in locale contexts

Suppose I have some locale where a type-class can be inferred from the assumptions.
locale some_locale =
  fixes xs :: "'x list"
  assumes xs_contains_UNIV: "set xs = UNIV"
begin

lemma finite_type: "OFCLASS('x, finite_class)"
proof (intro class.finite.of_class.intro class.finite.intro)
  have "finite (set xs)" ..
  then show "finite (UNIV :: 'x set)" unfolding xs_contains_UNIV .
qed
Can I then instantiate the type-class in some way?
Direct instantiation (instance, instantiation) does not work in locale contexts. I also had no luck with interpretation, as finite has no fixed constants, so I cannot specify the type I want to interpret.
interpretation finite (* subgoal: "class.finite TYPE('a)" for an arbitrary type 'a *)
proof
  show "finite (UNIV :: 'x set)" (* "Failed to refine any pending goal", because of the type mismatch *) sorry
  show "finite (UNIV :: 'a set)" (* would work, but impossible to prove for arbitrary type 'a *) oops

instance 'x :: finite (* not possible in locale context (Bad context for command "instance") *)
I know that I could simply write fixes xs :: "'x::finite list", which is probably the solution I will have to accept, but that just seems redundant.

It's currently (Isabelle2021-1) impossible to instantiate a type class inside a locale. However, you can interpret the local context of the type class inside a locale. This makes available all the theorems and definitions that were made inside the type-class context, but not those proven outside of it.
interpretation finite
proof
  show "finite (UNIV :: 'a set)"
    unfolding xs_contains_UNIV [symmetric] ..  (* discharged the same way as in the question's lemma *)
qed
If that is not enough for your use case, then adding the sort constraint to the locale declaration is the way to go.
Note that this only works if the type variable 'x in the locale is renamed to 'a, as the interpretation command for unknown reasons insists on using the type variable 'a.
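For reference, the sort-constraint alternative already sketched in the question just moves finite into the locale's fixes:

locale some_locale =
  fixes xs :: "'x::finite list"
  assumes xs_contains_UNIV: "set xs = UNIV"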

Related

Understanding F# value restriction

I'm learning F#. I'm here because I had something hard to understand about value restriction.
Here are the examples from the book I'm studying with.
let mapFirst = List.map fst
Since I had learned FP with Haskell, I was pretty sure that this code would compile, but it was not the case. It resulted in error FS0030 (sorry that I can't copy-paste the fsi error message, since it was written in Korean). Instead, I had to provide an explicit argument like:
let mapFirst inp = List.map fst inp // or inp |> List.map fst
But why? I thought that with the above example, the compiler could surely infer the type of the given value:
val mapFirst : ('a * 'b) list -> 'a list
If I remember correctly, this is what Haskell calls eta-conversion, and the two examples above should be entirely identical (maybe not entirely, though). Why should I provide parameters explicitly to a function that can be curried without any loss of information?
I've understood that something like
let empties = Array.create 100 []
will not compile, and why, but I don't think it has anything to do with my question.
※ I took a look at this question, but it did not help.
This has to do with mutability.
Consider this snippet:
type T<'a> = { mutable x : 'a option }
let t = { x = None }
The type of t is T<'a> - that is, t is generic, it has a generic parameter 'a, meaning t.x can be of any type - whatever the consumer chooses.
Then, suppose in one part of the program you do:
t.x <- Some 42
Perfectly legitimate: when accessing t you choose 'a = int and then t.x : int option, so you can push Some 42 into it.
Then, suppose in another part of your program you do:
t.x <- Some "foo"
Oh noes, what happens now? Is t.x : int option or is it string option? If the compiler faithfully compiled your code, it would result in data corruption. So the compiler refuses, just in case.
Since in general the compiler can't really check if there is something mutable deep inside your type, it takes the safe route and rejects values (meaning "not functions") that are inferred to be generic.
Note that this applies to syntactic values, not logical ones. Even if your value is really a function, but isn't syntactically defined as such (i.e. lacks parameters), the value restriction still applies. As an illustration, consider this:
type T<'a> = { mutable x : 'a option }

let f t x =
    t.x <- Some x

let g = f { x = None }
Here, even though g is really a function, the restriction works in exactly the same way as with my first example above: every call to g tries to operate on the same generic value of type T<'a>.
In some simpler cases the compiler can take a shortcut though. Thus, for example this line alone doesn't compile:
let f = List.map id
But these two lines do:
let f = List.map id
let x = f [1;2;3]
This is because the second line allows the compiler to infer that f : int list -> int list, so the generic parameter disappears, and everybody is happy.
In practice it turns out that this shortcut covers the vast majority of cases. The only time you really bump against the value restriction is when you try to export such a generic value from a module.
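If you do need to export such a value, the usual escape hatches are to eta-expand it (as in the question) or to declare explicit type parameters. A rough sketch for the question's mapFirst (the names mapFirst1/mapFirst2 are just for illustration; treat both as sketches rather than guaranteed-to-compile code):

// 1. Eta-expand, so the binding is a syntactic function:
let mapFirst1 inp = List.map fst inp
// 2. Declare explicit type parameters, so the binding is compiled as a generic value:
let mapFirst2<'a, 'b> : ('a * 'b) list -> 'a list = List.map fst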
In Haskell this whole situation doesn't happen, because Haskell doesn't admit mutation. Simple as that.
But then again, even though Haskell doesn't admit mutation, it kinda sorta does - via unsafePerformIO. And guess what - in that scenario you do risk bumping into the same problem. It's even mentioned in the documentation.
Except GHC doesn't refuse to compile it - after all, if you're using unsafePerformIO, you must know what you're doing. Right? :-)
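The hazard referenced there looks roughly like the example in the System.IO.Unsafe documentation (a sketch of the documented pitfall, not something to imitate):

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A "generic" mutable cell: morally the same thing the F# value restriction forbids.
test :: IORef [a]
test = unsafePerformIO (newIORef [])

main :: IO ()
main = do
  writeIORef test [42]     -- write the cell at one element type ...
  bang <- readIORef test
  print (bang :: [Char])   -- ... read it back at another: undefined behaviour at runtime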

How do I extract useful information from the payload of a GADT / existential type?

I'm trying to use Menhir's incremental parsing API and introspection APIs in a generated parser. I want to, say, determine the semantic value associated with a particular LR(1) stack entry; i.e. a token that's been previously consumed by the parser.
Given an abstract parsing checkpoint, encapsulated in Menhir's type 'a env, I can extract a “stack element” from the LR automaton; it looks like this:
type element =
| Element: 'a lr1state * 'a * position * position -> element
The type element describes one entry in the stack of the LR(1) automaton. In a stack element of the form Element (s, v, startp, endp), s is a (non-initial) state and v is a semantic value. The value v is associated with the incoming symbol A of the state s. In other words, the value v was pushed onto the stack just before the state s was entered. Thus, for some type 'a, the state s has type 'a lr1state and the value v has type 'a ...
In order to do anything useful with the value v, one must gain information about the type 'a, by inspection of the state s. So far, the type 'a lr1state is abstract, so there is no way of inspecting s. The inspection API (§9.3) offers further tools for this purpose.
Okay, cool! So I go and dive into the inspection API:
The type 'a terminal is a generalized algebraic data type (GADT). A value of type 'a terminal represents a terminal symbol (without a semantic value). The index 'a is the type of the semantic values associated with this symbol ...
type _ terminal =
| T_A : unit terminal
| T_B : int terminal
The type 'a nonterminal is also a GADT. A value of type 'a nonterminal represents a nonterminal symbol (without a semantic value). The index 'a is the type of the semantic values associated with this symbol ...
type _ nonterminal =
| N_main : thing nonterminal
Piecing these together, I get something like the following (where "command" is one of my grammar's nonterminals, and thus N_command is a string nonterminal):
let current_command (env : 'a env) =
  let rec f i =
    match Interpreter.get i env with
    | None -> None
    | Some (Interpreter.Element (lr1state, v, _startp, _endp)) ->
      match Interpreter.incoming_symbol lr1state with
      | Interpreter.N Interpreter.N_command -> Some v
      | _ -> f (i + 1)
  in
  f 0
Unfortunately, this is puking up very confusing type-errors for me:
File "src/incremental.ml", line 110, characters 52-53:
Error: This expression has type string but an expression was expected of type
string
This instance of string is ambiguous:
it would escape the scope of its equation
This is a bit above my level! I'm pretty sure I understand why I can't do what I tried to do above; but I don't understand what my alternatives are. In fact, the Menhir manual specifically mentions this complexity:
This function can be used to gain access to the semantic value v in a stack element Element (s, v, _, _). Indeed, by case analysis on the symbol incoming_symbol s, one gains information about the type 'a, hence one obtains the ability to do something useful with the value v.
Okay, but that's what I thought I did, above: case-analysis by match'ing on incoming_symbol s, pulling out the case where v is of a single, specific type: string.
tl;dr: how do I extract the string payload from this GADT, and do something useful with it?
If your error sounds like
This instance of string is ambiguous:
it would escape the scope of its equation
it means that the type checker is not really sure whether, outside of the pattern-matching branch, the type of v should be string, or some other type that is only equal to string inside the branch. You just need to add a type annotation when leaving the branch to remove this ambiguity:
| Interpreter.(N N_command) -> Some (v:string)
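Applied to the function from the question, the fix is just that one annotation. A sketch, assuming (as the question says) that N_command carries a string semantic value:

let current_command (env : 'a env) =
  let rec f i =
    match Interpreter.get i env with
    | None -> None
    | Some (Interpreter.Element (lr1state, v, _startp, _endp)) ->
      match Interpreter.incoming_symbol lr1state with
      | Interpreter.N Interpreter.N_command -> Some (v : string)
      | _ -> f (i + 1)
  in
  f 0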

Coq: typeclasses vs dependent records

I can't understand the difference between typeclasses and dependent records in Coq. The reference manual gives the syntax of typeclasses, but says nothing about what they really are and how you should use them. A bit of thinking and searching reveals that typeclasses essentially are dependent records with a bit of syntactic sugar that allows Coq to automatically infer some implicit instances and parameters. It seems that the algorithm for typeclasses works better when there is more or less only one possible instance in any given context, but that's not a big issue since we can always move all fields of a typeclass to its parameters, removing ambiguity. Also, the Instance declaration is automatically added to the hint database, which can often ease the proofs but will also sometimes break them, if the instances were too general and caused proof-search loops or explosions.
Are there any other issues I should be aware of? What is the heuristic for choosing between the two? E.g. would I lose anything if I use only records and set their instances as implicit parameters whenever possible?
You are right: type classes in Coq are just records with special plumbing and inference (there's also the special case of single-method type classes, but it doesn't really affect this answer in any way). Therefore, the only reason you would choose type classes over "pure" dependent records is to benefit from the special inference that you get with them: inference with plain dependent records is not very powerful and doesn't allow you to omit much information.
As an example, consider the following code, which defines a monoid type class, instantiating it with natural numbers:
Class monoid A := Monoid {
  op : A -> A -> A;
  id : A;
  opA : forall x y z, op x (op y z) = op (op x y) z;
  idL : forall x, op id x = x;
  idR : forall x, op x id = x
}.
Require Import Arith.
Instance nat_plus_monoid : monoid nat := {|
  op := plus;
  id := 0;
  opA := plus_assoc;
  idL := plus_O_n;
  idR := fun n => eq_sym (plus_n_O n)
|}.
Using type class inference, we can use any definitions that work for any monoid directly with nat, without supplying the type class argument, e.g.
Definition times_3 (n : nat) := op n (op n n).
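For instance, assuming the instance above is in scope, the monoid nat argument is found by instance resolution and the definition computes as expected:

Compute times_3 2.
(* = 6 : nat *)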
However, if you make the above definition into a regular record by replacing Class and Instance by Record and Definition, the same definition fails:
Toplevel input, characters 38-39:
Error: In environment
n : nat
The term "n" has type "nat" while it is expected to have type "monoid ?11".
The only caveat with type classes is that the instance inference engine gets a bit lost sometimes, causing hard-to-understand error messages to appear. That being said, it's not really a disadvantage over dependent records, given that this possibility isn't even available there.

Delegate/Func conversion and misleading compiler error message

I thought that conversions between F# functions and System.Func had to be done manually, but there appears to be a case where the compiler (sometimes) does it for you. And when it goes wrong the error message isn't accurate:
module Foo =
    let dict = new System.Collections.Generic.Dictionary<string, System.Func<obj,obj>>()
    let f (x:obj) = x
    do
        // Question 1: why does this compile without explicit type conversion?
        dict.["foo"] <- fun (x:obj) -> x
        // Question 2: given that the above line compiles, why does this fail?
        dict.["bar"] <- f
The last line fails to compile, and the error is:
This expression was expected to have type
System.Func<obj,obj>
but here has type
'a -> obj
Clearly the function f doesn't have a signature of 'a -> obj. If the F# 3.1 compiler is happy with the first dictionary assignment, then why not the second?
The part of the spec that should explain this is 8.13.7 Type Directed Conversions at Member Invocations. In short, when invoking a member, an automatic conversion from an F# function to a delegate will be applied. Unfortunately, the spec is a bit unclear; from the wording it seems that this conversion might apply to any function expression, but in practice it only appears to apply to anonymous function expressions.
The spec is also a bit out of date; in F# 3.0 type directed conversions also enable a conversion to a System.Linq.Expressions.Expression<SomeDelegateType>.
EDIT
In looking at some past correspondence with the F# team, I think I've tracked down how a conversion could get applied to a non-syntactic function expression. I'll include it here for completeness, but it's a bit of a strange corner case, so for most purposes you should probably consider the rule to be that only syntactic functions will have the type directed conversion applied.
The exception is that overload resolution can result in converting an arbitrary expression of function type; this is partly explained by section 14.4 Method Application Resolution, although it's pretty dense and still not entirely clear. Basically, the argument expressions are only elaborated when there are multiple overloads; when there's just a single candidate method, the argument types are asserted against the unelaborated arguments (note: it's not obvious that this should actually matter in terms of whether the conversion is applicable, but it does matter empirically). Here's an example demonstrating this exception:
type T =
    static member M(i:int) = "first overload"
    static member M(f:System.Func<int,int>) = "second overload"

let f i = i + 1
T.M f |> printfn "%s"
EDIT: This answer explains only the mysterious promotion to 'a -> obj. #kvb points out that replacing obj with int in the OP's example still doesn't work, so that promotion is in itself an insufficient explanation for the observed behaviour.
To increase flexibility, the F# type elaborator may under certain conditions promote a named function from f : SomeType -> OtherType to f<'a where 'a :> SomeType> : 'a -> OtherType. This is to reduce the need for upcasts. (See spec. 14.4.2.)
Question 2 first:
dict["bar"] <- f (* Why does this fail? *)
Because f is a "named function", its type is promoted from f : obj -> obj following sec. 14.4.2 to the seemingly less restrictive f<'a where 'a :> obj> : 'a -> obj. But this type is incompatible with System.Func<obj, obj>.
Question 1:
dict["foo"] <- fun (x:obj) -> x (* Why doesn't this, then? *)
This is fine because the anonymous function is not named, and so sec. 14.4.2 does not apply. The type is never promoted from obj -> obj and so fits.
We can observe the interpreter exhibiting behaviour that follows 14.4.2:
> let f = id : obj -> obj
val f : (obj -> obj) (* Ok, f has type obj -> obj *)
> f
val it : ('a -> obj) = <fun:it#135-31> (* f promoted when used. *)
(The interpreter doesn't output constraints to obj.)
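If you just need the assignment to compile, the fixes follow from the two answers above: either eta-expand at the assignment (so the right-hand side is once again an anonymous function and gets the type-directed conversion), or construct the delegate explicitly. A sketch, continuing the question's do block:

// Sketches of workarounds for the failing assignment:
dict.["bar"] <- fun x -> f x                // eta-expand: the lambda gets the conversion
dict.["bar"] <- System.Func<obj, obj>(f)    // or build the delegate explicitly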

Underlying Parsec Monad

Many of the Parsec combinators I use are of a type such as:
foo :: CharParser st Foo
CharParser is defined here as:
type CharParser st = GenParser Char st
CharParser is thus a type synonym involving GenParser, itself defined here as:
type GenParser tok st = Parsec [tok] st
GenParser is then another type synonym, assigned using Parsec, defined here as:
type Parsec s u = ParsecT s u Identity
So Parsec is a partial application of ParsecT, itself listed here with type:
data ParsecT s u m a
along with the words:
"ParsecT s u m a is a parser with stream type s, user state type u,
underlying monad m and return type a."
What is the underlying monad? In particular, what is it when I use the CharParser parsers? I can't see where it's inserted in the stack. Is there a relationship to the use of the list monad in Monadic Parsing in Haskell to return multiple successful parses from an ambiguous parser?
In your case the underlying monad is Identity. However ParsecT is different from most monad transformers in that it is an instance of the Monad class even if the type parameter m is not. If you look at the source code you will note the lack of "(Monad m) =>" in the instance declaration.
So then you ask yourself, "If I were to have a non-trivial monad stack, where would it be used?"
There are three answers to that question:
It is used to uncons the next token out of the stream:
class (Monad m) => Stream s m t | s -> t where
  uncons :: s -> m (Maybe (t,s))
Notice that uncons takes an s (the stream of tokens t) and returns its result wrapped in your monad. This allows one to do interesting things during the process of getting the next token.
It is used in the resulting output of each parser. This means you can create parsers that don't touch the input but take action in the underlying monad, and use the combinators to bind them to regular parsers. In other words, lift (x :: m a) :: ParsecT s u m a (a small sketch follows this list).
Finally, runParsecT and friends (until you build up to the point where m is replaced by Identity) return their results wrapped in this monad.
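To make the second point concrete, here is a small sketch (not from the original answer) of a parser whose underlying monad is IO, using lift to run an IO action in the middle of parsing:

import Control.Monad.Trans.Class (lift)
import Text.Parsec

-- A parser over String input whose underlying monad is IO.
number :: ParsecT String () IO Int
number = do
  ds <- many1 digit
  lift (putStrLn ("saw " ++ ds))   -- an action in the underlying monad
  return (read ds)

main :: IO ()
main = do
  r <- runParserT (number `sepBy` char ',') () "<demo>" "1,2,3"
  print r                          -- Right [1,2,3], after printing "saw 1" etc.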
There is no relationship between this monad and the one from Monadic Parsing in Haskell. The monad Hutton and Meijer describe corresponds to the monad instance of ParsecT itself. The fact that in Parsec-3.0.0 and beyond ParsecT has become a monad transformer with an underlying monad is not relevant to the paper.
What I think you are looking for however is where the list of possible results went. In Hutton and Meijer the parser returns a list of all possible results while Parsec stubbornly returns only one. I think you are looking at the m in the result and thinking to yourself that the list of results must be hiding in there somewhere. It is not.
Parsec, for reasons of efficiency, made a choice to prefer the first matching result in Hutton and Meijer's list of results. This lets it toss away both the unused results in the tail of Hutton and Meijer's list and the front of the stream of tokens, because we never backtrack. In Parsec, given the combined parser a <|> b, if a consumes any input then b will never be evaluated. The way around this is try, which will reset the state back to where it was if a fails, and then evaluate b.
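A tiny illustration of that last point (a sketch, not from the original answer):

import Text.Parsec
import Text.Parsec.String (Parser)

p1, p2 :: Parser String
p1 = string "flute" <|> string "flank"
p2 = try (string "flute") <|> string "flank"

-- parse p1 "" "flank"  fails: string "flute" consumes "fl" before failing,
--                      so the right-hand alternative is never tried.
-- parse p2 "" "flank"  succeeds with Right "flank", because try rewinds the input.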
You asked in the comments if this was done using Maybe or Either. The answer is "almost but not quite." If you look at the low-level run* functions you see that they return an algebraic type which tells whether input was consumed, and then a second one which gives either the result or an error message. These types work kind of like Either, but even they are not used directly. Rather than stretch this out further, I'll refer you to the post by Antoine Latter that explains how this works and why it is done this way.
GenParser is defined in terms of Parsec, not ParsecT. Parsec in turn is defined as
type Parsec s u = ParsecT s u Identity
So the answer is that when using CharParser the underlying monad is the Identity monad.

Resources