Is the image of a `hcomp` the `hcomp` of images? - agda

I am trying to fill the one remaining hole in the following program:
{-# OPTIONS --cubical #-}
module _ where

open import Cubical.Core.Everything
open import Cubical.Foundations.Everything

data S1 : Type where
  base : S1
  loop : base ≡ base

data NS : Type where
  N : NS
  S : NS
  W : S ≡ N
  E : N ≡ S

module _ where
  open Iso

  NS-Iso : Iso NS S1
  NS-Iso .fun N = base
  NS-Iso .fun S = base
  NS-Iso .fun (W i) = base
  NS-Iso .fun (E i) = loop i
  NS-Iso .inv base = N
  NS-Iso .inv (loop i) = (E ∙ W) i
  NS-Iso .leftInv N = refl
  NS-Iso .leftInv S = sym W
  NS-Iso .leftInv (W i) = λ j → W (i ∨ ~ j)
  NS-Iso .leftInv (E i) = λ j → compPath-filler E W (~ j) i
  NS-Iso .rightInv base = refl
  NS-Iso .rightInv (loop i) = ?
The type of the hole is:
fun NS-Iso (inv NS-Iso (loop i)) ≡ loop i
inv NS-Iso (loop i) is, of course, definitionally equal to (E ∙ W) i, but what, then, is fun NS-Iso ((E ∙ W) i)? Is there some kind of homomorphism / continuity / similar property with which I can use the definitions of fun NS-Iso (E i) and fun NS-Iso (W i) to figure out what it is?
Since fun NS-Iso (E i) = loop i and fun NS-Iso (W i) = base, I thought this might be a valid filling (pun intended) of the hole:
NS-Iso .rightInv (loop i) = λ j → compPath-filler loop (refl {x = base}) (~ j) i
But that gives a type error:
hcomp (doubleComp-faces (λ _ → base) (λ _ → base) i) (loop i)
!=
fun NS-Iso (hcomp (doubleComp-faces (λ _ → N) W i) (E i))

I have found that the answer is, basically, yes!
Let's add a local binding of our goal in NS-Iso .rightInv (loop i) just to keep an eye on the type:
NS-Iso .rightInv (loop i) = goal
  where
    goal : (fun NS-Iso ∘ inv NS-Iso) (loop i) ≡ loop i
    goal = ?
Since we have
NS-Iso .inv (loop i) = (E ∙ W) i
the type of goal reduces to:
step1 : fun NS-Iso ((E ∙ W) i) ≡ loop i
And now comes the crucial step, the actual answer to my question: can we push fun NS-Iso into E ∙ W?
Let's draw wavy lines between the points and paths where fun NS-Iso is directly defined, or where we know its value by cong _ refl = refl:
          base                           base
            ^    ~                    ~    ^
            |      ~                ~      |
            |        N            N        |
            |        ^            ^        |
     refl   |  ~~~   | refl  W    |  ~~~   |  refl
            |        |            |        |
            |        N ---------> S        |
            |      ~       E        ~      |
            |    ~         ~          ~    |
          base ------------------------> base
                          loop
E ∙ W is the lid of the inner box, and it turns out that yes, its image under fun NS-Iso is indeed the lid of the outer box:
step2 : (cong (fun NS-Iso) E ∙ (cong (fun NS-Iso) W)) i ≡ loop i
In graphical form:
                            ?
          base - - - - - - - - - - - - > base
            ^    ~                    ~    ^
            |      ~     E ∙ W      ~      |
            |        N - - - - -> N        |
            |        ^            ^        |
     refl   |  ~~~   | refl  W    |  ~~~   |  refl
            |        |            |        |
            |        N ---------> S        |
            |      ~       E        ~      |
            |    ~         ~          ~    |
          base ------------------------> base
                          loop
So we can reduce the fun NS-Iso applications now:
step3 : (loop ∙ (λ _ → base)) i ≡ loop i
which we can finally solve with the library function compPath-filler.
The complete code:
NS-Iso .rightInv (loop i) = goal
  where
    step3 : (loop ∙ (λ _ → base)) i ≡ loop i
    step3 = cong (λ p → p i) (symP (compPath-filler loop (λ _ → base)))

    step2 : (cong (fun NS-Iso) E ∙ (cong (fun NS-Iso) W)) i ≡ loop i
    step2 = step3

    step1 : fun NS-Iso ((E ∙ W) i) ≡ loop i
    step1 j = step2 j

    goal : (fun NS-Iso ∘ inv NS-Iso) (loop i) ≡ loop i
    goal j = step1 j
I don't know why I needed to eta-expand goal and step1, but otherwise Agda doesn't recognize that they meet the boundary conditions.
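As an aside, the propositional version of this "image of a composite is the composite of the images" fact should also be available as a ready-made lemma; if I remember the cubical library correctly, it is congFunct in Cubical.Foundations.GroupoidLaws (re-exported by Foundations.Everything, which is already open above). A hedged restatement:

-- Hedged sketch, not part of the original answer: the general lemma,
-- assuming congFunct has its usual statement in the cubical library.
pushCong : ∀ {ℓ ℓ'} {A : Type ℓ} {B : Type ℓ'} {x y z : A}
           (f : A → B) (p : x ≡ y) (q : y ≡ z)
         → cong f (p ∙ q) ≡ cong f p ∙ cong f q
pushCong f p q = congFunct f p q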

Related

Proof about a function that uses rewrite: a "vertical bars in goals" question

I have a function that uses rewrite to satisfy the Agda type checker. I thought that I had a reasonably good grasp of how to deal with the resulting "vertical bars" in proofs about such functions. And yet, I fail completely at dealing with these bars in my seemingly simple case.
Here are the imports and my function, step. The rewrites make Agda see that n is equal to n + 0 and that suc (acc + n) is equal to acc + suc n, respectively.
module Repro where
open import Relation.Binary.PropositionalEquality as P using (_≡_)
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.DivMod.Core
open import Data.Nat.Properties
open import Agda.Builtin.Nat using () renaming (mod-helper to modₕ)
step : (acc d n : ℕ) → modₕ acc (acc + n) d n ≤ acc + n
step zero d n rewrite P.sym (+-identityʳ n) = a[modₕ]n<n n (suc d) 0
step (suc acc) d n rewrite P.sym (+-suc acc n) = a[modₕ]n<n acc (suc d) (suc n)
Now for the proof, which pattern matches on acc, just like the function. Here's the zero case:
step-ok : ∀ (acc d n : ℕ) → step acc d n ≡ a[modₕ]n<n acc d n
step-ok zero d n with n | P.sym (+-identityʳ n)
step-ok zero d n | .(n + 0) | P.refl = ?
At this point, Agda tells me: "I'm not sure if there should be a case for the constructor P.refl, because I get stuck when trying to solve the following unification problems (inferred index ≟ expected index): w ≟ w + 0 [...]"
I am also stuck in the second case, the suc acc case, albeit in a different way:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n)
step-ok (suc acc) d n | .(acc + suc n) | P.refl = ?
Here, Agda says: "suc (acc + n) != w of type ℕ when checking that the type [...] of the generated with function is well-formed"
Update after Sassa NF's response
I followed Sassa NF's advice and reformulated my function with P.subst instead of rewrite. I.e., I changed my right-hand side from being about n + 0 to being about n, instead of conversely changing the goal from being about n to being about n + 0:
step′ : (acc d n : ℕ) → modₕ acc (acc + n) d n ≤ acc + n
step′ zero d n = P.subst (λ # → modₕ 0 # d # ≤ #) (+-identityʳ n) (a[modₕ]n<n n (suc d) 0)
step′ (suc acc) d n = P.subst (λ # → modₕ (suc acc) # d n ≤ #) (+-suc acc n) (a[modₕ]n<n acc (suc d) (suc n))
During the proof, the P.subst in the function definition needs to be eliminated, which can be done with a with construct:
step-ok′ : ∀ (acc d n : ℕ) → step′ acc d n ≡ a[modₕ]n<n acc d n
step-ok′ zero d n with n + 0 | +-identityʳ n
... | .n | P.refl = P.refl
step-ok′ (suc acc) d n with acc + suc n | +-suc acc n
... | .(suc (acc + n)) | P.refl = P.refl
So, yay! I just finished my very first Agda proof involving a with.
Some progress on the original problem
My guess would be that my first issue is a unification issue during dependent pattern matching: there isn't any substitution that makes n identical to n + 0. More generally, in situations where one thing is a strict subterm of the other thing, I suppose that we may run into unification trouble. So, maybe using with to match n with n + 0 was asking for problems.
My second issue seems to be what the Agda language reference calls an ill-typed with-abstraction. According to the reference, this "happens when you abstract over a term that appears in the type of a subterm of the goal or argument types." The culprit seems to be the type of the goal's subterm a[modₕ]n<n (suc acc) d n, which is modₕ [...] ≤ (suc acc) + n, which contains the subterm I abstract over, (suc acc) + n.
It looks like this is usually resolved by additionally abstracting over the part of the goal that has the offending type. And, indeed, the following makes the error message go away:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n) | a[modₕ]n<n (suc acc) d n
... | .(acc + suc n) | P.refl | rhs = {!!}
So far so good. Let's now introduce P.inspect to capture the rhs substitution:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n) | a[modₕ]n<n (suc acc) d n | P.inspect (a[modₕ]n<n (suc acc) d) n
... | .(acc + suc n) | P.refl | rhs | P.[ rhs-eq ] = {!!}
Unfortunately, this leads to something like the original error: w != suc (acc + n) of type ℕ when checking that the type [...] of the generated with function is well-formed
One day later
Of course I'd run into the same ill-typed with-abstraction again! After all, the whole point of P.inspect is to preserve a[modₕ]n<n (suc acc) d n, so that it can construct the term a[modₕ]n<n (suc acc) d n ≡ rhs. However, the preserved a[modₕ]n<n (suc acc) d n of course still has its original type, modₕ [...] ≤ (suc acc) + n, whereas rhs has the modified type modₕ [...] ≤ acc + suc n. That's what's causing trouble now.
I guess one solution would be to use P.subst to change the type of the term we inspect. And, indeed, the following works, even though it is hilariously convoluted:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n) | a[modₕ]n<n (suc acc) d n | P.inspect (λ n → P.subst (λ # → modₕ (suc acc) # d n ≤ #) (P.sym (+-suc acc n)) (a[modₕ]n<n (suc acc) d n)) n
... | .(acc + suc n) | P.refl | rhs | P.[ rhs-eq ] rewrite +-suc acc n = rhs-eq
So, yay again! I managed to fix my original second issue - basically by using P.subst in the proof instead of in the function definition. It seems, though, that using P.subst in the function definition as per Sassa NF's guidance is preferable, as it leads to much more concise code.
The unification issue is still a little mysterious to me, but on the positive side, I unexpectedly learned about the benefits of irrelevance on top of everything.
I'm accepting Sassa NF's response, as it put me on the right track towards a solution.
Your use of P.refl indicates some misunderstanding about the role of _≡_.
There is no magic in that type. It is just a dependent type with a single constructor. Proving that some x ≡ y resolves to P.refl does not tell Agda anything new about x and y: it only tells Agda that you managed to produce a witness of the type _≡_. This is the reason it cannot tell n and .(n + 0) are the same thing, or that suc (acc + n) is the same as .(acc + suc n). So both of the errors you see are really the same.
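For concreteness, here is what that single-constructor type looks like (a simplified, non-universe-polymorphic sketch; the standard library's _≡_ is essentially this):

-- One ordinary datatype, one constructor, no built-in magic.
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x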
Now, what rewrite is for.
You cannot even state an equality between an element of C x and an element of C y for a dependent type C _: C x and C y are different types, and equality is defined only for elements of the same type, so there is no way to express the idea that an element of type C x is comparable to an element of type C y.
There is, however, an axiom of induction, which allows you to produce elements of type C y if you have an element of type C x and an element of type x ≡ y. Note there is no magic in the type _≡_ - that is, you can define your own type, construct such a function, and Agda will be satisfied:
induction : {A : Set} {C : (x y : A) -> (x ≡ y) -> Set} (x y : A) (p : x ≡ y) -> ((x : A) -> C x x refl) -> C x y p
induction x .x refl f = f x
Or a simplified version that follows from the induction axiom:
transport : {A : Set} {C : A -> Set} (x y : A) -> x ≡ y -> C x -> C y
transport x .x refl cx = cx
What this means in practice is that you get a proof of something, for example A x ≡ A x, and then transport this proof along the equality x ≡ y to get a proof of A x ≡ A y. This usually requires specifying the type explicitly, in this case {C = λ y -> A x ≡ A y}, and providing the x, the y and the C x. As such, it is a very cumbersome procedure, although learners will benefit from doing these steps.
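For instance, in the setting of the question, that transport step can be done with the standard library's subst (a minimal sketch; the name ex and the exact statement are mine, not from the question):

-- Minimal sketch: transporting a statement about n + 0 to a statement
-- about n along +-identityʳ n : n + 0 ≡ n.
open import Relation.Binary.PropositionalEquality using (_≡_; subst)
open import Data.Nat
open import Data.Nat.Properties using (+-identityʳ)

ex : ∀ n → n + 0 ≡ n + 0 → n + 0 ≡ n
ex n pr = subst (λ # → n + 0 ≡ #) (+-identityʳ n) pr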
rewrite then is a syntactic mechanism that rewrites the types of the terms known before the rewrite, so that such transport is not needed after that. Because it is syntactic, it does interpret the type _≡_ in a special way (so if you define your own type, you need to tell Agda you are using a different type as equality). Rewriting types is not "telling" Agda that some types are equal. It just literally replaces occurrences of x in type signatures with y, so now you only need to construct things with y and refl.
Having said all that, you can see why it works for step. There rewrite P.sym ... literally replaced all occurrences of n with n + 0, including the return type of the function, so now it is modₕ acc (acc + (n + 0)) d (n + 0) ≤ acc + (n + 0). Then constructing a value of that type just works.
Then step-ok didn't work, because you only pattern-matched values. There is nothing telling Agda that n and (n + 0) are the same thing. But rewrite will. Or you could use a function like the transport above.

Using Agda "rewrite" to prove "composition of maps is map of compositions"

I'm very new to Agda, and I'm trying to do a simple proof of "composition of maps is the map of compositions".
(An exercise taken from this course)
Relevant definition:
_=$=_ : {X Y : Set}{f f' : X -> Y}{x x' : X} ->
        f == f' -> x == x' -> f x == f' x'
refl f =$= refl x = refl (f x)
and
data Vec (X : Set) : Nat -> Set where
  []   : Vec X zero
  _,-_ : {n : Nat} -> X -> Vec X n -> Vec X (suc n)

infixr 4 _,-_
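For completeness, the remaining definitions the exercise assumes look roughly like this (reconstructed from how they are used above, not quoted from the course):

-- Reconstructed assumptions: Nat, the course's equality type _==_
-- (whose refl constructor takes its argument explicitly), and vMap.
-- In an actual file these would of course appear before Vec uses them.
data Nat : Set where
  zero : Nat
  suc  : Nat -> Nat

data _==_ {X : Set} : X -> X -> Set where
  refl : (x : X) -> x == x

vMap : {n : Nat}{X Y : Set} -> (X -> Y) -> Vec X n -> Vec Y n
vMap f []        = []
vMap f (x ,- xs) = f x ,- vMap f xs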
I want to prove:
vMapCpFact : {X Y Z : Set}{f : Y -> Z}{g : X -> Y}{h : X -> Z} ->
             (heq : (x : X) -> f (g x) == h x) ->
             {n : Nat} (xs : Vec X n) ->
             vMap f (vMap g xs) == vMap h xs
I already figured out the proof using =$=
vMapCpFact heq [] = refl []
vMapCpFact heq (x ,- xs) = refl _,-_ =$= heq x =$= vMapCpFact heq xs
But when I tried to do the proof using rewrite, I stuck at this step:
vMapCpFact heq [] = refl []
vMapCpFact heq (x ,- xs) rewrite heq x | vMapCpFact heq xs = {!!}
Agda says the goal is still
(h x ,- vMap f (vMap g xs)) == (h x ,- vMap h xs)
I wonder why the rewrite of vMapCpFact heq xs failed?
Simply because vMapCpFact heq xs didn't fire at all. The type of this expression, as reported by Agda, is
vMap _f_73 (vMap _g_74 xs) == vMap (λ z → h z) xs
i.e. Agda can't infer f and g (those _f_73 and _g_74 are unresolved metavariables) and so it can't realize what exactly to rewrite.
You can fix this by explicitly specifying f:
vMapCpFact {f = f} heq (x ,- xs) rewrite heq x | vMapCpFact {f = f} heq xs = {!!}
Now the type of the goal is
(h x ,- vMap h xs) == (h x ,- vMap h xs)
as expected.
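With that, the hole can presumably be closed with refl _ (a hedged completion of the snippet above, relying on the same setup that already lets rewrite work with _==_):

vMapCpFact {f = f} heq (x ,- xs) rewrite heq x | vMapCpFact {f = f} heq xs = refl _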
Or you can rewrite from right to left, since the rhs of the type of vMapCpFact heq xs is fully inferred:
vMap (λ z → h z) xs
For rewriting from right to left you only need to use sym. Then the whole thing type checks:
vMapCpFact heq (x ,- xs) rewrite heq x | sym (vMapCpFact heq xs) = refl _
because the _f_73 and _g_74 metavariables are forced to unify with the actual f and g variables by the refl.

Mutually recursive proofs

I have a standard untyped lambda calculus definition and some operations and I'm trying to show a property related to the associativity of substitutions. Unfortunately, I have to show a lot of code to make things clear.
open import Data.Nat renaming (ℕ to Nat) using (zero ; suc ; _+_)
open import Data.Vec.Properties
open import Data.Vec
  using (Vec ; [] ; _∷_ ; map ; lookup ; allFin ; tabulate ; tail ; head)
open import Data.Fin using (Fin ; zero ; suc)
open import Function using (_∘_ ; _$_)
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
data WellScopedTm : Nat → Set where
  var : (n : Nat) → Fin n → WellScopedTm n
  lam : (n : Nat) → WellScopedTm (suc n) → WellScopedTm n
  app : (n : Nat) → WellScopedTm n → WellScopedTm n → WellScopedTm n
↑_ : ∀ n → Vec (Fin (suc n)) n
↑ _ = tabulate suc
rename : ∀ {n m} (t : WellScopedTm n) (is : Vec (Fin m) n) → WellScopedTm m
rename {_} {m} (var _ i) is = var m (lookup i is)
rename {n} {m} (lam _ t) is = lam m (rename t (zero ∷ map suc is))
rename {n} {m} (app _ t u) is = app m (rename t is) (rename u is)
-- q
q : (n : Nat) → WellScopedTm (suc n)
q n = var (suc n) zero
-- id
idSub : (n : Nat) → Vec (WellScopedTm n) n
idSub n = tabulate (var n)
-- weakening (derived)
lift : {n : Nat} → WellScopedTm n → WellScopedTm (suc n)
lift t = rename t (↑ _)
-- p
projSub : (n : Nat) → Vec (WellScopedTm (suc n)) n
projSub = map lift ∘ idSub -- or tabulate (lift ∘ (var n))
-- sub
sub : ∀ {n m} → WellScopedTm n → Vec (WellScopedTm m) n → WellScopedTm m
sub (var _ i) ts = lookup i ts
sub (lam _ t) ts = lam _ (sub t (var _ zero ∷ map lift ts))
sub (app _ t u) ts = app _ (sub t ts) (sub u ts)
-- composition of homs
comp : ∀ {m n k} → Vec (WellScopedTm n) k → Vec (WellScopedTm m) n → Vec (WellScopedTm m) k
comp [] _ = []
comp (t ∷ ts) us = sub t us ∷ comp ts us
Specifically, I want to show that
compAssoc : ∀ {m n k p} (ts : Vec (WellScopedTm n) k) (us : Vec (WellScopedTm m) n)
            (vs : Vec (WellScopedTm p) m) → comp (comp ts us) vs ≡ comp ts (comp us vs)

compInSub : ∀ {m n k} (t : WellScopedTm n) (ts : Vec (WellScopedTm k) n)
            (us : Vec (WellScopedTm m) k) → sub t (comp ts us) ≡ sub (sub t ts) us
The proofs I came up with rely on each other; the proof of associativity is this:
compAssoc [] us vs = refl
compAssoc (x ∷ ts) us vs = sym $
  trans (cong (λ d → d ∷ comp ts (comp us vs)) (compInSub x us vs))
        (sym (cong (_∷_ (sub (sub x us) vs)) (compAssoc ts us vs)))
However, in the lambda case of the second property, I have to use associativity in the two open goals and the termination checker complains.
compInSub (var _ zero)    (v ∷ ts) us = refl
compInSub (var _ (suc x)) (v ∷ ts) us = compInSub (var _ x) ts us
compInSub (app n t u) ts us =
  trans (cong (λ z → app _ z (sub u (comp ts us))) (compInSub t ts us))
        (cong (app _ (sub (sub t ts) us)) (compInSub u ts us))
compInSub (lam n t) ts us = sym $
  begin
    lam _ (sub (sub t (q _ ∷ map lift ts)) (q _ ∷ map lift us))
  ≡⟨ cong (lam _) (sym $ compInSub t (q _ ∷ map lift ts) (q _ ∷ map lift us)) ⟩
    lam _ (sub t $ q _ ∷ comp (map lift ts) (q _ ∷ map lift us))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp x _)) (mlift=xs∘p ts) ⟩
    lam _ (sub t $ q _ ∷ comp (comp ts (projSub _)) (q _ ∷ map lift us))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp _ (q _ ∷ x))) (mlift=xs∘p us) ⟩
    lam _ (sub t $ q _ ∷ comp (comp ts (projSub _)) (q _ ∷ comp us (projSub _)))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) {!!} ⟩  -- compAssoc ts (projSub _) (q _ ∷ comp us (projSub _))
    lam _ (sub t $ q _ ∷ comp ts (comp (projSub _) (q _ ∷ comp us (projSub _))))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp ts x)) (p∘x∷ts (q _) (comp us (projSub _))) ⟩
    lam _ (sub t $ q _ ∷ comp ts (comp us (projSub _)))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) (sym {!!}) ⟩  -- compAssoc ts us (projSub _)
    lam _ (sub t $ q _ ∷ comp (comp ts us) (projSub _))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) (sym (mlift=xs∘p (comp ts us))) ⟩
    lam _ (sub t $ q _ ∷ map lift (comp ts us))
  ∎
Is the termination checker right to disallow the calls on associativity I commented? If not, any remedies?
Lastly, some postulates so that the code typechecks
postulate p∘x∷ts : ∀ {n k : Nat} (t : WellScopedTm n) (ts : Vec (WellScopedTm n) k) → comp (projSub k) (t ∷ ts) ≡ ts
postulate mlift=xs∘p : ∀ {n m : Nat} (xs : Vec (WellScopedTm n) m) → map lift xs ≡ comp xs (projSub n)

Problems on data type indices that uses list concatenation

I'm having a nasty problem with a formalisation of a theorem that uses a data type that has some constructors whose indices involve list concatenation. When I try to use emacs mode to case split, Agda returns the following error message:
I'm not sure if there should be a case for the constructor
o-success, because I get stuck when trying to solve the following
unification problems (inferred index ≟ expected index):
e₁ o e'' , x₁ ++ x'' ++ y₁ ≟ e o e' , x ++ x' ++ y
suc (n₂ + n'') , x₁ ++ x'' ≟ m' , p''
when checking that the expression ? has type
suc (.n₁ + .n') == .m' × .x ++ .x' == p'
Since the code has more than a small number of lines, I put it in the following gist:
https://gist.github.com/rodrigogribeiro/976b3d5cc82c970314c2
Any tip is appreciated.
Best,
There was a similar question.
However, here you want to unify xs1 ++ xs2 ++ xs3 with ys1 ++ ys2 ++ ys3, and _++_ is not a constructor; it's a function, and it's not injective. Consider this simplified example:
data Bar {A : Set} : List A -> Set where
  bar : ∀ xs {ys} -> Bar (xs ++ ys)

ex : ∀ {A} {zs : List A} -> Bar zs -> Bar zs -> List A
ex (bar xs) b = {!!}
b is of type Bar (xs ++ .ys), but b is not necessarily equal to bar .xs, so you can't pattern-match like this. Here are two Bars, which have equal types but different values:
ok : ∃₂ λ (b1 b2 : Bar (tt ∷ [])) -> b1 ≢ b2
ok = bar [] , bar (tt ∷ []) , λ ()
This is because xs1 ++ xs2 ≡ ys1 ++ ys2 doesn't imply xs1 ≡ ys1 × xs2 ≡ ys2 in general.
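A concrete toy instance of this (my own example, using the standard library's lists): two different splits produce the same concatenation, so the splits cannot be recovered from it:

-- (1 ∷ []) ++ (2 ∷ []) and [] ++ (1 ∷ 2 ∷ []) are definitionally the
-- same list, even though the pieces on the two sides differ.
open import Data.Nat
open import Data.List using (List; []; _∷_; _++_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

same-concat : (1 ∷ []) ++ (2 ∷ []) ≡ [] ++ (1 ∷ 2 ∷ [])
same-concat = refl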
But it's possible to generalize an index. You can use the technique described by Vitus at the link above, or you can use this simple combinator, which forgets the index:
generalize : ∀ {α β} {A : Set α} (B : A -> Set β) {x : A} -> B x -> ∃ B
generalize B y = , y
E.g.
ex : ∀ {A} {zs : List A} -> Bar zs -> Bar zs -> List A
ex {A} (bar xs) b with generalize Bar b
... | ._ , bar ys = xs ++ ys
After all, are you sure your lemma is true?
UPDATE
Some remarks first.
Your empty case states
empty : forall x -> G :: (emp , x) => (1 , x)
that the empty parser parses the whole string. It should be
empty : forall x -> G :: (emp , x) => (1 , [])
as in the paper.
Your definition of o-fail1 contains this part:
(n , fail ∷ o)
but fail fails everything, so it should be (n , fail ∷ []). With this representation you would probably need decidable equality on A to finish the lemma, and the proofs would be dirty. A clean and idiomatic way to represent something that can fail is to wrap it in the Maybe monad, so here is my definition of _::_=>_:
data _::_=>_ {n} (G : Con n) : Foo n × List A -> Nat × Maybe (List A) -> Set where
  empty       : ∀ {x} -> G :: emp , x => 1 , just []
  sym-success : ∀ {a x} -> G :: sym a , (a ∷ x) => 1 , just (a ∷ [])
  sym-failure : ∀ {a b x} -> ¬ (a == b) -> G :: sym a , b ∷ x => 1 , nothing
  var : ∀ {x m o} {v : Fin (suc n)}
      -> G :: lookup v G , x => m , o -> G :: var v , x => suc m , o
  o-success : ∀ {e e' x x' y n n'}
            -> G :: e , x ++ x' ++ y => n , just x
            -> G :: e' , x' ++ y => n' , just x'
            -> G :: e o e' , x ++ x' ++ y => suc (n + n') , just (x ++ x')
  o-fail1 : ∀ {e e' x x' y n}
          -> G :: e , x ++ x' ++ y => n , nothing
          -> G :: e o e' , x ++ x' ++ y => suc n , nothing
  o-fail2 : ∀ {e e' x x' y n n'}
          -> G :: e , x ++ x' ++ y => n , just x
          -> G :: e' , x' ++ y => n' , nothing
          -> G :: e o e' , x ++ x' ++ y => suc (n + n') , nothing
Here is the lemma:
postulate
  cut : ∀ {α} {A : Set α} -> ∀ xs {ys zs : List A} -> xs ++ ys == xs ++ zs -> ys == zs

mutual
  aux : ∀ {n} {G : Con n} {e e' z x x' y n n' m' p'}
      -> z == x ++ x' ++ y
      -> G :: e , z => n , just x
      -> G :: e' , x' ++ y => n' , just x'
      -> G :: e o e' , z => m' , p'
      -> suc (n + n') == m' × just (x ++ x') == p'
  aux {x = x} {x'} {n = n} {n'} r pr1 pr2 (o-success {x = x''} pr3 pr4) with x | n | lemma pr1 pr3
  ... | ._ | ._ | refl , refl rewrite cut x'' r with x' | n' | lemma pr2 pr4
  ... | ._ | ._ | refl , refl = refl , refl
  aux ...

  lemma : ∀ {n m m'} {G : Con n} {f x p p'}
        -> G :: f , x => m , p -> G :: f , x => m' , p' -> m == m' × p == p'
  lemma (o-success pr1 pr2) pr3 = aux refl pr1 pr2 pr3
  lemma ...
The proof proceeds as follows:
1. We generalize the type of lemma's pr3 in an auxiliary function, as in Vitus' answer. Now it's possible to pattern-match on pr3.
2. We prove that the first parser in lemma's pr3 (also called pr3 in aux) produces the same output as pr1.
3. After some rewriting, we prove that the second parser in lemma's pr3 (called pr4 in aux) produces the same output as pr2.
4. And since pr1 and pr3 produce the same output, and pr2 and pr4 produce the same output, o-success pr1 pr2 and o-success pr3 pr4 produce the same output, so we put refl , refl.
The code. I didn't prove the o-fail1 and o-fail2 cases, but they should be similar.
UPDATE
The amount of boilerplate can be reduced by:
1. Fixing the definitions of the fail cases, which contain redundant information.
2. Returning Maybe (List A) instead of Nat × Maybe (List A). You can compute this Nat recursively, if needed.
3. Using the inspect idiom instead of auxiliary functions.
I don't think there is a simpler solution. The code.

How does one prove a type of the form (a | b) in agda?

In thinking about:
In Agda is it possible to define a datatype that has equations?
I was playing with the following datatype:
data Int : Set where
  Z : Int
  S : Int -> Int
  P : Int -> Int
The above is a poor definition of the integers, and the answers to the question above give a way around this. However, one can define a reduction on the above Int type that might be useful.
normalize : Int -> Int
normalize Z = Z
normalize (S n) with normalize n
... | P m = m
... | m = S m
normalize (P n) with normalize n
... | S m = m
... | m = P m
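As a quick sanity check (my own example, assuming the standard library's propositional equality is in scope), S (P Z) indeed normalizes to Z by computation:

open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- normalize (S (P Z)) reduces to Z, so refl suffices.
normalize-SPZ : normalize (S (P Z)) ≡ Z
normalize-SPZ = refl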
The thing that needs to be proved is:
idempotent : (n : Int) -> normalize n ≡ normalize (normalize n)
When you expand the cases out, you get for example
idempotent (P n) = ?
The goal for the hole has type
(normalize (P n) | normalize n) ≡ normalize (normalize (P n) | normalize n)
And I haven't seen this "|" before, nor do I know how to produce a proof of a type involving it. The proof needs to pattern match, for example,
idempotent (P n) with inspect (normalize n)
... (S m) with-≡ = ?
... m with-≡ = ?
But here the hole for the second case still has a "|" in it. So I am a bit confused.
-------- EDIT ---------------
It would be helpful to prove a simpler statement:
normLemma : (n m : NZ) -> normalize n ≡ P m -> normalize (S n) ≡ m
The "on paper" proof is rather straightforward. Assuming normalize n = P m, consider
normalize (S n) = case normalize n of
  P k -> k
  x   -> S x
But normalize n is assumed to be P m, hence normalize (S n) = k. Then k = m, since normalize n = P m = P k which implies m = k. Thus normalize (S n) = m.
User Vitus proposed to use normal forms.
If we have these two functions:
normalForm : ∀ n -> NormalForm (normalize n)
idempotent' : ∀ {n} -> NormalForm n -> normalize n ≡ n
then we can easily compose them to obtain the result we need:
idempotent : ∀ n -> normalize (normalize n) ≡ normalize n
idempotent = idempotent' ∘ normalForm
Here is the definition of normal forms:
data NormalForm : Int -> Set where
  NZ  : NormalForm Z
  NSZ : NormalForm (S Z)
  NPZ : NormalForm (P Z)
  NSS : ∀ {n} -> NormalForm (S n) -> NormalForm (S (S n))
  NPP : ∀ {n} -> NormalForm (P n) -> NormalForm (P (P n))
I.e. only terms like S (S ... (S Z) ...) and P (P ... (P Z) ...) are in normal form.
And proofs are rather straightforward:
normalForm : ∀ n -> NormalForm (normalize n)
normalForm Z = NZ
normalForm (S n) with normalize n | normalForm n
... | Z | nf = NSZ
... | S _ | nf = NSS nf
... | P ._ | NPZ = NZ
... | P ._ | NPP nf = nf
normalForm (P n) with normalize n | normalForm n
... | Z | nf = NPZ
... | S ._ | NSZ = NZ
... | S ._ | NSS nf = nf
... | P _ | nf = NPP nf
idempotent' : ∀ {n} -> NormalForm n -> normalize n ≡ n
idempotent' NZ = refl
idempotent' NSZ = refl
idempotent' NPZ = refl
idempotent' (NSS p) rewrite idempotent' p = refl
idempotent' (NPP p) rewrite idempotent' p = refl
The whole code: https://gist.github.com/flickyfrans/f2c7d5413b3657a94950#file-another-one
idempotent : (n : Int) -> normalize (normalize n) ≡ normalize n
idempotent Z = refl
idempotent (S n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = {!!}
... | P m | [ p ] = {!!}
Context in the first hole is
Goal: (normalize (S (S m)) | (normalize (S m) | normalize m)) ≡
S (S m)
————————————————————————————————————————————————————————————
p : normalize n ≡ S m
m : Int
n : Int
(normalize (S (S m)) | (normalize (S m) | normalize m)) ≡ S (S m) is just an expanded version of normalize (S (S m)). So we can rewrite the context a bit:
Goal: normalize (S (S m)) ≡ S (S m)
————————————————————————————————————————————————————————————
p : normalize n ≡ S m
m : Int
n : Int
Due to the definition of the normalize function
normalize (S n) with normalize n
... | P m = m
... | m = S m
normalize (S n) ≡ S (normalize n), if normalize n doesn't contain Ps.
If we have an equation like normalize n ≡ S m, then m is already normalized and doesn't contain Ps. But if m doesn't contain Ps, then neither does normalize m. So we have normalize (S m) ≡ S (normalize m).
Let's prove a little more general lemma:
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
where ‵add‵ is
_‵add‵_ : Int -> ℕ -> Int
n ‵add‵ 0 = n
n ‵add‵ (suc i) = S (n ‵add‵ i)
normalize-S states, that if m doesn't contain Ps, than this holds:
normalize (S (S ... (S m)...)) ≡ S (S ... (S (normalize m))...)
Here is a proof:
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
normalize-S Z () i
normalize-S (S n) p i with normalize n | inspect normalize n
normalize-S (S n) refl i | Z | _ = {!!}
normalize-S (S n) refl i | S m | [ q ] = {!!}
normalize-S (S n) refl i | P (S m) | [ q ] = {!!}
normalize-S (P n) p i with normalize n | inspect normalize n
normalize-S (P n) () i | Z | _
normalize-S (P n) refl i | S (S m) | [ q ] = {!!}
normalize-S (P n) () i | P _ | _
Context in the first hole is
Goal: normalize (Z ‵add‵ i) ≡ Z ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
.w : Reveal .Data.Unit.Core.hide normalize n is Z
n : Int
I.e. normalize (S (S ... (S Z)...)) ≡ S (S ... (S Z)...). We can easily prove it:
normalize-add : ∀ i -> normalize (Z ‵add‵ i) ≡ Z ‵add‵ i
normalize-add 0 = refl
normalize-add (suc i) rewrite normalize-add i with i
... | 0 = refl
... | suc _ = refl
So we can fill the first hole with normalize-add i.
Context in the second hole is
Goal: normalize (S m ‵add‵ i) ≡ S m ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
q : .Data.Unit.Core.reveal (.Data.Unit.Core.hide normalize n) ≡ S m
m : Int
n : Int
While normalize-S n q (suc i) has this type:
(normalize (S (m ‵add‵ i)) | normalize (m ‵add‵ i)) ≡ S (m ‵add‵ i)
Or, shortly, normalize (S (m ‵add‵ i)) ≡ S (m ‵add‵ i). So we need to replace S m ‵add‵ i with S (m ‵add‵ i):
inj-add : ∀ n i -> S n ‵add‵ i ≡ S (n ‵add‵ i)
inj-add n 0 = refl
inj-add n (suc i) = cong S (inj-add n i)
And now we can write
normalize-S (S n) refl i | S m | [ q ] rewrite inj-add m i = normalize-S n q (suc i)
Context in the third hole is
Goal: normalize (m ‵add‵ i) ≡ m ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
q : .Data.Unit.Core.reveal (.Data.Unit.Core.hide normalize n) ≡
P (S m)
m : Int
n : Int
normalize-P n q 0 gives us normalize (S m) ≡ S m, where normalize-P is the dual of normalize-S and has this type:
normalize-P : ∀ n {m} -> normalize n ≡ P m -> ∀ i -> normalize (m ‵sub‵ i) ≡ m ‵sub‵ i
We can apply normalize-S to something, that has type normalize (S m) ≡ S m: normalize-S (S m) (normalize-P n q 0) i. This expression has precisely the type we want. So we can write
normalize-S (S n) refl i | P (S m) | [ q ] = normalize-S (S m) (normalize-P n q 0) i
The fourth hole is similar to the third:
normalize-S (P n) refl i | S (S m) | [ q ] = normalize-S (S m) (normalize-S n q 0) i
There is a problem with these holes: Agda doesn't see that normalize-S (S m) _ _ terminates, since S m is not syntactically smaller than S n. It's possible, however, to convince Agda by using well-founded recursion.
Having all this stuff, we can easily prove the idempotent theorem:
idempotent : (n : Int) -> normalize (normalize n) ≡ normalize n
idempotent Z = refl
idempotent (S n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = normalize-S n p 2
... | P m | [ p ] = normalize-P n p 0
idempotent (P n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = normalize-S n p 0
... | P m | [ p ] = normalize-P n p 2
Here is the code: https://gist.github.com/flickyfrans/f2c7d5413b3657a94950
There are both versions: with the {-# TERMINATING #-} pragma and without.
EDIT
idempotent is simply
idempotent : ∀ n -> normalize (normalize n) ≡ normalize n
idempotent n with normalize n | inspect normalize n
... | Z | _ = refl
... | S _ | [ p ] = normalize-S n p 1
... | P _ | [ p ] = normalize-P n p 1
