I have a list type
data List (X : Set) : Set where
  <> : List X
  _,_ : X -> List X -> List X
a definition for equality
data _==_ {l}{X : Set l}(x : X) : X -> Set l where
  refl : x == x
and congruence
cong : forall {k l}{X : Set k}{Y : Set l}(f : X -> Y){x y} -> x == y -> f x == f y
cong f refl = refl
I am trying to prove
propFlatten2 : {X : Set } ( xs0 : List X ) (x : X) (xs1 : List X) (xs2 : List X)
  -> ( xs0 ++ x , xs1 ) ++ xs2 == xs0 ++ (x , xs1 ++ xs2 )
propFlatten2 <> x xs1 xs2 = refl
propFlatten2 (x , xs0) x₁ xs1 xs2 = cong (λ l -> x , l) {!!}
Is there a better way to use the constructor _,_ directly, rather than through a lambda, in the last line?
Agda doesn't have any special syntax for partial application of operators. You can, however, use the operators in their usual prefix version:
x + y = _+_ x y
This is convenient when you need to partially apply the leftmost argument(s):
_+_ 1 = λ x → 1 + x
When you need to partially apply arguments going from the right, your options are more limited. As mentioned in the comments, you could use one of the convenience functions such as flip (found in Function):
flip f x y = f y x -- Type omitted for brevity.
And then simply flip the arguments of _+_:
flip _+_ 1 = λ x → x + 1
Sometimes you find operators whose only purpose is to make the code a bit nicer. The best example I can think of is probably Data.Product.,_. When you write a dependent pair (Data.Product.Σ), sometimes the first part of the pair can be filled in automatically. Instead of writing:
_ , x
You can just write:
, x
It's hard to say when writing a specialized operator such as the one above is actually worth it; if your only use case is using it with congruence, I'd just stick with the lambda since it makes it very clear what's going on.
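Concretely, the hole in the question can be filled either with the lambda or with the prefix form _,_ x, since the partially applied argument happens to be the leftmost one. Here is a sketch of the finished proof, assuming _++_ is defined in the usual way by recursion on its left argument (the question doesn't show that definition):
_++_ : {X : Set} -> List X -> List X -> List X
<>       ++ ys = ys
(x , xs) ++ ys = x , (xs ++ ys)

propFlatten2 : {X : Set} (xs0 : List X) (x : X) (xs1 xs2 : List X)
  -> ((xs0 ++ (x , xs1)) ++ xs2) == (xs0 ++ (x , (xs1 ++ xs2)))
propFlatten2 <>        x xs1 xs2 = refl
propFlatten2 (y , xs0) x xs1 xs2 = cong (_,_ y) (propFlatten2 xs0 x xs1 xs2)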
While studying well-foundedness, I wanted to see how different designs behave. For example, for a type:
data _<_ (x : Nat) : Nat -> Set where
  <-b : x < (suc x)
  <-s : (y : Nat) -> x < y -> x < (suc y)
well-foundedness is easy to demonstrate. But if a similar type is defined differently:
data _<_ : Nat -> Nat -> Set where
  z-< : (m : Nat) -> zero < (suc m)
  s<s : (m n : Nat) -> m < n -> (suc m) < (suc n)
In both cases it is obvious that there is no infinite descending chain, but in the second case well-foundedness is not easy to demonstrate: it is not easy to construct the function (y -> y < x -> Acc y) for a given x.
Are there principles that help one choose designs like the first in preference to designs like the second?
It's not impossibly hard to prove well-foundedness of the second definition; it just requires extra lemmas. Here, relying on decidability of _==_ for Nat, we can construct a new proof of _<_ for the case (suc y) != x, and we can rewrite the goal type so that the solution for the problem known to decrease in size also serves as the solution for suc y.
-- trying to express well-foundedness is tricky, because of how x < y is defined:
-- since both x and y decrease in the inductive step case, special effort is needed
-- to prove that the induction stops, i.e. when no more constructors are available
<-Well-founded : Well-founded Nat _<_
<-Well-founded x = acc (aux x) where
  aux : (x y : Nat) -> y < x -> Acc _<_ y
  aux zero    y       ()
  aux _       zero    (z-< _)       = acc (λ _ ())
  aux (suc x) (suc y) (s<s _ _ y<x) with is-eq? (suc y) x
  ... | no  sy!=x = aux x (suc y) (neq y<x sy!=x)
  ... | yes sy==x rewrite sy==x = <-Well-founded x
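For reference, here is a sketch of what the proof above assumes: the usual accessibility-based Acc and Well-founded, plus a decidable-equality type and the two helpers used in aux. The helpers are only postulated here; both are routine inductions, and the standard library provides Acc, WellFounded, Dec and decidable equality on naturals under its own names.
data Acc {X : Set} (_<_ : X -> X -> Set) (x : X) : Set where
  acc : ((y : X) -> y < x -> Acc _<_ y) -> Acc _<_ x

Well-founded : (X : Set) -> (X -> X -> Set) -> Set
Well-founded X _<_ = (x : X) -> Acc _<_ x

data ⊥ : Set where

data Dec (P : Set) : Set where
  yes : P -> Dec P
  no  : (P -> ⊥) -> Dec P

postulate
  is-eq? : (m n : Nat) -> Dec (m == n)                             -- decidable equality on Nat
  neq    : {m n : Nat} -> m < n -> (suc m == n -> ⊥) -> suc m < n  -- provable by induction on m < n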
The first definition is "canonical" in a sense, while the second one is not. In Agda, every inductive type has a subterm relation which is well-founded and transitive, although not necessarily total, decidable or proof-irrelevant. For W-types, it's the following:
open import Data.Product
open import Data.Sum
open import Relation.Binary.PropositionalEquality
data W (S : Set)(P : S → Set) : Set where
  lim : ∀ s → (P s → W S P) → W S P
_<_ : ∀ {S P} → W S P → W S P → Set
a < lim s f = ∃ λ p → a ≡ f p ⊎ a < f p
If we define Nat as a W-type, then the generic _<_ is the same as the first definition. The first definition establishes a subterm relation even if we have no idea about the constructors of Nat. The second definition is only a subterm relation because we know that zero is reachable from every suc n. If we added an extra zero' : Nat constructor, then this would not be the case anymore.
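To make that concrete, here is a small sketch of Nat encoded as a W-type (the names NatW, zeroW, sucW and n<sucW are mine). Unfolding the generic _<_ on this encoding gives exactly the shape of the first definition: a number is below its successor either directly (the inj₁ case, like <-b) or by being below one of its subterms (the inj₂ case, like <-s).
open import Data.Bool
open import Data.Unit
open import Data.Empty

-- true tags suc (one subterm), false tags zero (no subterms)
NatW : Set
NatW = W Bool (λ b → if b then ⊤ else ⊥)

zeroW : NatW
zeroW = lim false (λ ())

sucW : NatW → NatW
sucW n = lim true (λ _ → n)

n<sucW : ∀ n → n < sucW n
n<sucW n = tt , inj₁ refl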
I would like to test some definitions in System F, using Agda as my typechecker and evaluator.
My first attempt to introduce Church natural numbers was by writing
Num = forall {x} -> (x -> x) -> (x -> x)
Which would be used just like a regular type alias:
zero : Num
zero f x = x
However, the definition of Num does not type(kind?)check. What is the proper way to make it work while staying as close as possible to System F notation?
The following would typecheck
Num : Set₁
Num = forall {x : Set} -> (x -> x) -> (x -> x)
zero : Num
zero f x = x
but as you can see, Num : Set₁. This might become a problem, in which case you'll need --type-in-type.
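As a quick sanity check that the alias behaves like Church numerals, here are a couple of further definitions that typecheck against it (csuc and cplus are names I made up for this sketch):
csuc : Num -> Num
csuc n f x = f (n f x)

cplus : Num -> Num -> Num
cplus m n f x = m f (n f x)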
I was looking at the definition of cong:
cong : ∀ {a b} {A : Set a} {B : Set b} (f : A → B) {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
And I couldn't understand why it is well-typed. In particular, it seems like the implicit argument of refl must be both f x and f y. To make things more clear, I wrote a non-implicit version of equality, and attempted to replicate the proof:
data Eq : (A : Set) -> A -> A -> Set where
  refl : (A : Set) -> (x : A) -> Eq A x x
cong : (A : Set) -> (B : Set) -> (f : A -> B) ->
       (x : A) -> (y : A) -> (e : Eq A x y) -> Eq B (f x) (f y)
cong A B f x y e = refl B (f x)
This results in a type error:
x != y of type A when checking that the expression refl B (f x) has type Eq B (f x) (f y)
As one would expect. What could I possibly have instead of (f x)? Am I missing something?
Dependent pattern matching at your service.
If we make a hole in your cong
cong : (A : Set) -> (B : Set) -> (f : A -> B) ->
       (x : A) -> (y : A) -> (e : Eq A x y) -> Eq B (f x) (f y)
cong A B f x y e = {!refl B (f x)!}
and look into it, we'll see
Goal: Eq B (f x) (f y)
Have: Eq B (f x) (f x)
so the goal and what we have are indeed different. But once you pattern match on e:
cong : (A : Set) -> (B : Set) -> (f : A -> B) ->
       (x : A) -> (y : A) -> (e : Eq A x y) -> Eq B (f x) (f y)
cong A B f x y (refl .A .x) = {!refl B (f x)!}
the fact that x is the same thing as y is revealed and the context is silently rewritten: each occurrence of y is replaced by x, so looking into the hole we now see
Goal: Eq B (f x) (f x)
Have: Eq B (f x) (f x)
Note that we can write
cong A B f x .x (refl .A .x) = refl B (f x)
i.e. not bind y at all and just say that it is the same as x via a dot pattern. We gained this information by pattern matching on e : Eq A x y: once the match is performed, we know that e actually has type Eq A x x, because that is what the type signature of refl says. Unification of Eq A x y with Eq A x x yields the conclusion that y equals x, and the whole context is adjusted accordingly.
That's the same logic as with Haskell GADTs:
{-# LANGUAGE GADTs #-}

data Value a where
  ValueInt  :: Int  -> Value Int
  ValueBool :: Bool -> Value Bool

eval :: Value a -> a
eval (ValueInt i)  = i
eval (ValueBool b) = b
When you match on ValueInt and get i of type Int, you also reveal that a equals Int, and that knowledge is added to the context (via an equality constraint), which makes a and Int unifiable later. That is how we're able to return i as a result: a from the type signature and Int unify perfectly, as we know from the context.
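The same example transliterated into Agda (a sketch; Value, valueNat and valueBool are made-up names mirroring the Haskell above) shows the refinement happening through dependent pattern matching:
open import Data.Nat using (ℕ)
open import Data.Bool using (Bool)

data Value : Set -> Set₁ where
  valueNat  : ℕ    -> Value ℕ
  valueBool : Bool -> Value Bool

eval : ∀ {A} -> Value A -> A
eval (valueNat  n) = n   -- matching refines A to ℕ
eval (valueBool b) = b   -- matching refines A to Bool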
Using ℕ and _≟_ from the standard library, I have
open import Data.Nat
open import Data.Empty
open import Relation.Binary.PropositionalEquality
open import Relation.Nullary
foo : ℕ -> ℕ -> ℕ
foo x y with x ≟ y
foo x .x | yes refl = x
foo x y | no contra = y
data Bar : ℕ -> Set where
  bar : (x : ℕ) -> Bar (foo x x)
I want to implement
mkBar : (x : ℕ) -> Bar x
mkBar x = bar x
Agda complains,
Type mismatch:
expected: x
actual: foo x x | x ≟ x
when checking that the expression bar x
has type Bar x
This makes sense to me: Agda doesn't know a priori that x ≟ x always evaluates to yes refl, so it's not going to evaluate foo x x until it knows a bit more about x.
So I tried rewriting the goal to force x ≟ x to resolve to yes refl,
eq-refl : forall x -> (x ≟ x) ≡ yes refl
eq-refl x with x ≟ x
eq-refl x | yes refl = refl
eq-refl x | no contra = ⊥-elim (contra refl)
mkBar : (x : ℕ) -> Bar x
mkBar x rewrite eq-refl x = bar x
but to no avail. Same error message. I also tried rewriting by foo x x ≡ x:
foo-eq : forall x -> foo x x ≡ x
foo-eq x rewrite eq-refl x = refl
mkBar : (x : ℕ) -> Bar x
mkBar x rewrite foo-eq x = bar x
This answer suggests pattern matching on x ≟ x on the left-hand side of mkBar, but that also seems to have no effect:
mkBar : (x : ℕ) -> Bar x
mkBar x with x ≟ x
mkBar x | yes refl = bar x
mkBar x | no contra = ⊥-elim (contra refl)
I must be missing a trick here. How do I get rid of the | in the goal type and make foo x x reduce to x? (I'd prefer not to examine x directly in the LHS of mkBar.)
You were almost there: the important thing to notice is that rewrite takes an x ≡ y and replaces x by y in the goal. foo-eq x has type foo x x ≡ x but there is no foo x x to replace in the goal!
What you need to do is rewrite by sym (foo-eq x) like so:
mkBar : (x : ℕ) → Bar x
mkBar x rewrite sym (foo-eq x) = bar x
Bar x then becomes Bar (foo x x) meaning you can apply your constructor.
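For what it's worth, an equivalent spelling avoids rewrite and uses subst from Relation.Binary.PropositionalEquality directly (mkBar′ is just a name for this variant):
mkBar′ : (x : ℕ) → Bar x
mkBar′ x = subst Bar (foo-eq x) (bar x)
Here subst Bar (foo-eq x) turns the Bar (foo x x) produced by the constructor into the Bar x the signature asks for.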
I am writing a basic monadic parser in Idris, to get used to the syntax and differences from Haskell. I have the basics of that working just fine, but I am stuck on trying to create VerifiedSemigroup and VerifiedMonoid instances for the parser.
Without further ado, here are the parser type, the Semigroup and Monoid instances, and the start of a VerifiedSemigroup instance.
data ParserM a = Parser (String -> List (a, String))

parse : ParserM a -> String -> List (a, String)
parse (Parser p) = p

instance Semigroup (ParserM a) where
  p <+> q = Parser (\s => parse p s ++ parse q s)

instance Monoid (ParserM a) where
  neutral = Parser (const [])

instance VerifiedSemigroup (ParserM a) where
  semigroupOpIsAssociative (Parser p) (Parser q) (Parser r) = ?whatGoesHere
I'm basically stuck after intros, with the following prover state:
-Parser.whatGoesHere> intros
---------- Other goals: ----------
{hole3},{hole2},{hole1},{hole0}
---------- Assumptions: ----------
a : Type
p : String -> List (a, String)
q : String -> List (a, String)
r : String -> List (a, String)
---------- Goal: ----------
{hole4} : Parser (\s => p s ++ q s ++ r s) =
Parser (\s => (p s ++ q s) ++ r s)
-Parser.whatGoesHere>
It looks like I should be able to use rewrite together with appendAssociative somehow,
but I don't know how to "get inside" the lambda \s.
Anyway, I'm stuck on the theorem-proving part of the exercise, and I can't seem to find much Idris-centric theorem-proving documentation. I guess maybe I need to start looking at Agda tutorials (though Idris is the dependently-typed language I'm convinced I want to learn!).
The simple answer is that you can't. Reasoning about functions is fairly awkward in intensional type theories. For example, Martin-Löf's type theory is unable to prove:
0   + y = y
S x + y = S (x + y)

x +′ 0   = x
x +′ S y = S (x +′ y)

_+_ ≡ _+′_ -- ???
(as far as I know, this is an actual theorem and not just "proof by lack of imagination"; however, I couldn't find the source where I read it). This also means that there is no proof for the more general:
ext : ∀ {A : Set} {B : A → Set}
      {f g : (x : A) → B x} →
      (∀ x → f x ≡ g x) → f ≡ g
This is called function extensionality: if you can prove that the results are equal for all arguments (that is, the functions are equal extensionally), then the functions are equal as well.
This would work perfectly for the problem you have:
<+>-assoc : {A : Set} (p q r : ParserM A) →
            (p <+> q) <+> r ≡ p <+> (q <+> r)
<+>-assoc (Parser p) (Parser q) (Parser r) =
  cong Parser (ext λ s → ++-assoc (p s) (q s) (r s))
where ++-assoc is your proof of the associativity of _++_. I'm not sure how it would look in tactics, but it should be fairly similar: apply congruence for Parser and the goal should be:
(\s => p s ++ q s ++ r s) = (\s => (p s ++ q s) ++ r s)
You can then apply extensionality to get assumption s : String and a goal:
p s ++ q s ++ r s = (p s ++ q s) ++ r s
However, as I said before, we don't have function extensionality (note that this is not true for type theories in general: extensional type theories, homotopy type theory and others are able to prove this statement). The easy option is to assume it as an axiom. As with any other axiom, you risk:
Losing consistency (i.e. being able to prove falsehood; though I think function extensionality is OK)
Breaking reduction (what does a function that does case analysis only for refl do when given this axiom?)
I'm not sure how Idris handles axioms, so I won't go into details. Just beware that axioms can mess up some stuff if you are not careful.
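In Agda notation, taking the easy option is literally a one-line postulate (a sketch; Idris has its own way of asserting axioms, which I won't go into here):
postulate
  ext : ∀ {A : Set} {B : A → Set} {f g : (x : A) → B x} →
        (∀ x → f x ≡ g x) → f ≡ g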
The hard option is to work with setoids. A setoid is basically a type equipped with a custom equality. The idea is that instead of having a Monoid (or VerifiedSemigroup in your case) that works on the built-in equality (= in Idris, ≡ in Agda), you have a special monoid (or semigroup) with a different underlying equality. This is usually done by packing the monoid (semigroup) operations together with the equality and a bunch of proofs, namely (in pseudocode; a record-style sketch follows the list):
= : A → A → Set -- equality
_*_ : A → A → A -- associative binary operation
1 : A -- neutral element
=-refl : x = x
=-trans : x = y → y = z → x = z
=-sym : x = y → y = x
*-cong : x = y → u = v → x * u = y * v -- the operation respects
-- our equality
*-assoc : x * (y * z) = (x * y) * z
1-left : 1 * x = x
1-right : x * 1 = x
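Packed into an Agda record, a minimal sketch of the above might look like this (the field names are mine; the standard library's Algebra modules package this up in far more generality):
record SetoidSemigroup (A : Set) : Set₁ where
  field
    _≈_     : A → A → Set                                -- custom equality
    _∙_     : A → A → A                                  -- the binary operation
    ≈-refl  : ∀ {x} → x ≈ x
    ≈-sym   : ∀ {x y} → x ≈ y → y ≈ x
    ≈-trans : ∀ {x y z} → x ≈ y → y ≈ z → x ≈ z
    ∙-cong  : ∀ {x y u v} → x ≈ y → u ≈ v → (x ∙ u) ≈ (y ∙ v)
    ∙-assoc : ∀ x y z → ((x ∙ y) ∙ z) ≈ (x ∙ (y ∙ z))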
The choice of equality for parsers is clear: two parsers are equal if their outputs agree for all possible inputs.
-- Parser equality
_≡p_ : {A : Set} (p q : ParserM A) → Set
Parser p ≡p Parser q = ∀ x → p x ≡ q x
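With this equality, the associativity proof from earlier no longer needs extensionality at all; a sketch, still in the Agda-style notation used above and assuming the transliterated ParserM and _<+>_ together with ++-assoc : ∀ xs ys zs → (xs ++ ys) ++ zs ≡ xs ++ (ys ++ zs):
<+>-assoc′ : {A : Set} (p q r : ParserM A) → ((p <+> q) <+> r) ≡p (p <+> (q <+> r))
<+>-assoc′ (Parser p) (Parser q) (Parser r) s = ++-assoc (p s) (q s) (r s)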
This solution comes with different tradeoffs, namely that the new equality cannot fully substitute for the built-in one (this tends to show up when you need to rewrite some terms). But it's great if you just want to show that your code does what it's supposed to do (up to some custom equality).