How to improve this Agda function with standard libraries? - agda

The deq function below returns a Dec instance for equality on Name; a Name either holds a String or prefixes another Name with one of two constructors.
data Name : Set where
  Str : String -> Name
  Cp0 : Name -> Name
  Cp1 : Name -> Name
deq : (a : Name) -> (b : Name) -> Dec (a ≡ b)
deq (Str a) (Str b) with iseq a b
deq (Str a) (Str b) | yes p = yes (cong Str p)
deq (Str a) (Str b) | no ¬p = no f
  where f : (Str a ≡ Str b) -> ⊥
        f refl = ¬p refl
deq (Str a) (Cp0 b) = no (λ ())
deq (Str a) (Cp1 b) = no (λ ())
deq (Cp0 a) (Str b) = no (λ ())
deq (Cp0 a) (Cp0 b) with deq a b
deq (Cp0 a) (Cp0 b) | yes p = yes (cong Cp0 p)
deq (Cp0 a) (Cp0 b) | no ¬p = no f
  where f : (Cp0 a ≡ Cp0 b) -> ⊥
        f refl = ¬p refl
deq (Cp0 a) (Cp1 b) = no (λ ())
deq (Cp1 a) (Str b) = no (λ ())
deq (Cp1 a) (Cp0 b) = no (λ ())
deq (Cp1 a) (Cp1 b) with deq a b
deq (Cp1 a) (Cp1 b) | yes p = yes (cong Cp1 p)
deq (Cp1 a) (Cp1 b) | no ¬p = no f
  where f : (Cp1 a ≡ Cp1 b) -> ⊥
        f refl = ¬p refl
But here, a few clauses are awkward because they need a where. If the standard library had a cong equivalent that allowed us to turn a ≡ b -> ⊥ into F a ≡ F b -> ⊥ for a constructor F, then we could make this definition shorter. Also, if we could do the pattern matching inline with lambdas, we could improve it too, but for that we'd need to annotate the lambda input with a type. Is there any way to improve this definition?
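For illustration, the kind of combinator the question asks for can be put together from a per-constructor injectivity lemma plus map′ from Relation.Nullary.Decidable in the standard library. A minimal sketch of one clause, assuming the definitions above and a reasonably recent standard library (the exact module names vary between versions):
open import Relation.Nullary.Decidable using (map′)

-- Injectivity of the constructor, recovered by matching on refl:
Cp0-injective : ∀ {a b} → Cp0 a ≡ Cp0 b → a ≡ b
Cp0-injective refl = refl

-- The Cp0/Cp0 clause of deq then needs no where block:
--   deq (Cp0 a) (Cp0 b) = map′ (cong Cp0) Cp0-injective (deq a b)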

Related

Is the image of a `hcomp` the `hcomp` of images?

I am trying to fill the one remaining hole in the following program:
{-# OPTIONS --cubical #-}
module _ where

open import Cubical.Core.Everything
open import Cubical.Foundations.Everything

data S1 : Type where
  base : S1
  loop : base ≡ base

data NS : Type where
  N : NS
  S : NS
  W : S ≡ N
  E : N ≡ S

module _ where
  open Iso

  NS-Iso : Iso NS S1
  NS-Iso .fun N = base
  NS-Iso .fun S = base
  NS-Iso .fun (W i) = base
  NS-Iso .fun (E i) = loop i
  NS-Iso .inv base = N
  NS-Iso .inv (loop i) = (E ∙ W) i
  NS-Iso .leftInv N = refl
  NS-Iso .leftInv S = sym W
  NS-Iso .leftInv (W i) = λ j → W (i ∨ ~ j)
  NS-Iso .leftInv (E i) = λ j → compPath-filler E W (~ j) i
  NS-Iso .rightInv base = refl
  NS-Iso .rightInv (loop i) = ?
The type of the hole is:
fun NS-Iso (inv NS-Iso (loop i)) ≡ loop i
inv NS-Iso (loop i) is of course definitionally equal to (E ∙ W) i, but what, then, is fun NS-Iso ((E ∙ W) i)? Is there some kind of homomorphism / continuity / similar property with which I can use the definitions of fun NS-Iso (E i) and fun NS-Iso (W i) to figure out what it is?
Since fun NS-Iso (E i) = loop i and fun NS-Iso (W i) = base, I thought this might be a valid filling (pun intended) of the hole:
NS-Iso .rightInv (loop i) = λ j → compPath-filler loop (refl {x = base}) (~ j) i
But that gives a type error:
hcomp (doubleComp-faces (λ _ → base) (λ _ → base) i) (loop i)
!=
fun NS-Iso (hcomp (doubleComp-faces (λ _ → N) W i) (E i))
I have found that the answer is, basically, yes!
Let's add a local binding of our goal in NS-Iso .rightInv (loop i) just to keep an eye on the type:
NS-Iso .rightInv (loop i) = goal
  where
    goal : (fun NS-Iso ∘ inv NS-Iso) (loop i) ≡ loop i
    goal = ?
Since we have
NS-Iso .inv (loop i) = (E ∙ W) i
the type of goal reduces to:
step1 : fun NS-Iso ((E ∙ W) i) ≡ loop i
And now comes the crucial step, the actual answer to my question: can we push in fun NS-Iso into E ∙ W?
Let's draw wavy lines between the points and paths where fun NS-Iso is directly defined, or where we know its value by cong _ refl = refl:
   base                                   base
     ^   ~                              ~   ^
     |      ~                        ~      |
     |         ~                  ~         |
     |            N           N             |
     |            ^           ^             |
refl |   ~~~ refl |           | W  ~~~~~    | refl
     |            |           |             |
     |            N --------> S             |
     |          ~       E       ~           |
     |      ~           ~           ~       |
     |   ~              ~               ~   |
   base --------------------------------> base
                      loop
E ∙ W is the lid of the inner box, and it turns out yes, its image by fun NS-Iso is indeed the lid of the outer box:
step2 : (cong (fun NS-Iso) E ∙ (cong (fun NS-Iso) W)) i ≡ loop i
In graphical form:
                        ?
   base - - - - - - - - - - - - - - - - > base
     ^   ~                              ~   ^
     |      ~                        ~      |
     |        ~      E ∙ W        ~         |
     |            N - - - - > N             |
     |            ^           ^             |
refl |   ~~~ refl |           | W  ~~~~~    | refl
     |            |           |             |
     |            N --------> S             |
     |          ~       E       ~           |
     |      ~           ~           ~       |
     |   ~              ~               ~   |
   base --------------------------------> base
                      loop
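For the record, the cubical library has a lemma of exactly this shape; in the versions I have looked at it is called congFunct and lives in Cubical.Foundations.GroupoidLaws (the name and module are an assumption here, they have moved around between releases). With it, the push-in step can be stated directly, e.g. inside the where block of rightInv:
-- Assumed library signature:
--   congFunct : (f : A → B) (p : x ≡ y) (q : y ≡ z)
--             → cong f (p ∙ q) ≡ cong f p ∙ cong f q
push : cong (fun NS-Iso) (E ∙ W) ≡ cong (fun NS-Iso) E ∙ cong (fun NS-Iso) W
push = congFunct (fun NS-Iso) E W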
So we can reduce the fun NS-Iso applications now:
step3 : (loop ∙ (λ _ → base)) i ≡ loop i
which we can finally solve with the library function compPath-filler.
The complete code:
NS-Iso .rightInv (loop i) = goal
  where
    step3 : (loop ∙ (λ _ → base)) i ≡ loop i
    step3 = cong (λ p → p i) (symP (compPath-filler loop (λ _ → base)))

    step2 : (cong (fun NS-Iso) E ∙ (cong (fun NS-Iso) W)) i ≡ loop i
    step2 = step3

    step1 : fun NS-Iso ((E ∙ W) i) ≡ loop i
    step1 j = step2 j

    goal : (fun NS-Iso ∘ inv NS-Iso) (loop i) ≡ loop i
    goal j = step1 j
I don't know why I needed to eta-expand goal and step1, but otherwise Agda doesn't recognize that they meet the boundary conditions.

How is Agda inferring the implicit argument to `Vec.foldl`?

foldl : ∀ {a b} {A : Set a} (B : ℕ → Set b) {m} →
        (∀ {n} → B n → A → B (suc n)) →
        B zero →
        Vec A m → B m
foldl b _⊕_ n []       = n
foldl b _⊕_ n (x ∷ xs) = foldl (λ n → b (suc n)) _⊕_ (n ⊕ x) xs
When translating the above function to Lean, I was shocked to find out that its true form is actually like...
def foldl : ∀ (P : ℕ → Type a) {n : nat}
    (f : ∀ {n}, P n → α → P (n+1)) (s : P 0)
    (l : Vec α n), P n
| P 0     f s (nil _)     := s
| P (n+1) f s (cons x xs) := foldl (fun n, P (n+1)) (λ n, #f (n+1)) (#f 0 s x) xs
I find it really impressive that Agda is able to infer the implicit argument to f correctly. How is it doing that?
foldl : ∀ {a b} {A : Set a} (B : ℕ → Set b) {m} →
        (∀ {n} → B n → A → B (suc n)) →
        B zero →
        Vec A m → B m
foldl b _⊕_ n []       = n
foldl b _⊕_ n (x ∷ xs) = foldl (λ n → b (suc n)) _⊕_ (_⊕_ {0} n x) xs
If I pass it 0 explicitly as in the Lean version, I get a hint as to the answer. What is going on is that Agda is doing the same thing as in the Lean version, namely wrapping the implicit arg so it is suc'd.
This is surprising, as I thought that implicit arguments just meant that Agda should provide them on its own. I did not think it would change the function when it is passed as an argument.
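To see the wrapping concretely, here is a sketch of what the recursive call elaborates to once the hidden arguments are spelled out by hand (foldl′ is a monomorphic copy made just for this illustration; the elaborated call is my reconstruction, not output from the type checker):
open import Data.Nat using (ℕ; zero; suc)
open import Data.Vec using (Vec; []; _∷_)

foldl′ : {A : Set} (B : ℕ → Set) {m : ℕ} →
         (∀ {n} → B n → A → B (suc n)) →
         B zero →
         Vec A m → B m
foldl′ b _⊕_ n []       = n
foldl′ b _⊕_ n (x ∷ xs) =
  -- The function argument is expected at type
  --   ∀ {n} → b (suc n) → A → b (suc (suc n)),
  -- so Agda eta-expands _⊕_ and instantiates its implicit argument at suc n:
  foldl′ (λ n → b (suc n)) (λ {n} → _⊕_ {suc n}) (_⊕_ {0} n x) xs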

How exactly can cong be well-typed?

I was looking at the definition of cong:
cong : ∀ {a b} {A : Set a} {B : Set b} (f : A → B) {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
And I couldn't understand why it is well-typed. In particular, it seems like the implicit argument of refl must be both f x and f y. To make things more clear, I wrote a non-implicit version of equality, and attempted to replicate the proof:
data Eq : (A : Set) -> A -> A -> Set where
  refl : (A : Set) -> (x : A) -> Eq A x x

cong : (A : Set) -> (B : Set) -> (f : A -> B) ->
       (x : A) -> (y : A) -> (e : Eq A x y) -> Eq B (f x) (f y)
cong A B f x y e = refl B (f x)
This results in a type error:
x != y of type A when checking that the expression refl B (f x) has type Eq B (f x) (f y)
As one would expect. What could I possibly have instead of (f x)? Am I missing something?
Dependent pattern matching at your service.
If we make a hole in your cong
cong : (A : Set) -> (B : Set) -> (f : A -> B) ->
       (x : A) -> (y : A) -> (e : Eq A x y) -> Eq B (f x) (f y)
cong A B f x y e = {!refl B (f x)!}
and look into it, we'll see
Goal: Eq B (f x) (f y)
Have: Eq B (f x) (f x)
so the values are indeed different. But once you pattern match on e:
cong : (A : Set) -> (B : Set) -> (f : A -> B) ->
       (x : A) -> (y : A) -> (e : Eq A x y) -> Eq B (f x) (f y)
cong A B f x y (refl .A .x) = {!refl B (f x)!}
the fact that x is the same thing as y is revealed and the context is silently rewritten: each occurrence of y is replaced by x, so looking into the hole we now see
Goal: Eq B (f x) (f x)
Have: Eq B (f x) (f x)
Note that we can write
cong A B f x .x (refl .A .x) = refl B (f x)
i.e. do not bind y at all and just say that it's the same as x via a dot-pattern. We gained this information by pattern matching on e : Eq A x y, because once the match is performed we know that it's e : Eq A x x actually, because that's what the type signature of refl says. Unification of Eq A x y and Eq A x x results in a trivial conclusion: y equals x and the whole context is adjusted accordingly.
That's the same logic as with Haskell GADTs:
data Value a where
  ValueInt  :: Int  -> Value Int
  ValueBool :: Bool -> Value Bool

eval :: Value a -> a
eval (ValueInt i)  = i
eval (ValueBool b) = b
when you match on ValueInt and get i of type Int, you also reveal that a equals Int and add this knowledge to the context (via an equality constraint) which makes a and Int unifiable later. That is how we're able to return i as a result: because a from the type signature and Int unify perfectly as we know from the context.
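The same example can be written directly in Agda as an indexed family; a sketch (Value and eval here are illustrations, not part of the question's code):
open import Data.Nat  using (ℕ)
open import Data.Bool using (Bool)

-- An Agda counterpart of the Haskell GADT: a family indexed by a Set.
data Value : Set → Set₁ where
  ValueInt  : ℕ    → Value ℕ
  ValueBool : Bool → Value Bool

-- Matching on the constructor reveals the index, just like the GADT match
-- brings the equality constraint a ~ Int into scope in Haskell.
eval : ∀ {A} → Value A → A
eval (ValueInt  i) = i
eval (ValueBool b) = b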

Mutually recursive proofs

I have a standard untyped lambda calculus definition and some operations and I'm trying to show a property related to the associativity of substitutions. Unfortunately, I have to show a lot of code to make things clear.
open import Data.Nat renaming (ℕ to Nat) using (zero ; suc ; _+_)
open import Data.Vec.Properties
open import Data.Vec
  using (Vec ; [] ; _∷_ ; map ; lookup ; allFin ; tabulate ; tail ; head)
open import Data.Fin using (Fin ; zero ; suc)
open import Function using (_∘_ ; _$_)
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
data WellScopedTm : Nat → Set where
  var : (n : Nat) → Fin n → WellScopedTm n
  lam : (n : Nat) → WellScopedTm (suc n) → WellScopedTm n
  app : (n : Nat) → WellScopedTm n → WellScopedTm n → WellScopedTm n
↑_ : ∀ n → Vec (Fin (suc n)) n
↑ _ = tabulate suc
rename : ∀ {n m} (t : WellScopedTm n) (is : Vec (Fin m) n) → WellScopedTm m
rename {_} {m} (var _ i) is = var m (lookup i is)
rename {n} {m} (lam _ t) is = lam m (rename t (zero ∷ map suc is))
rename {n} {m} (app _ t u) is = app m (rename t is) (rename u is)
-- q
q : (n : Nat) → WellScopedTm (suc n)
q n = var (suc n) zero
-- id
idSub : (n : Nat) → Vec (WellScopedTm n) n
idSub n = tabulate (var n)
-- weakening (derived)
lift : {n : Nat} → WellScopedTm n → WellScopedTm (suc n)
lift t = rename t (↑ _)
-- p
projSub : (n : Nat) → Vec (WellScopedTm (suc n)) n
projSub = map lift ∘ idSub -- or tabulate (lift ∘ (var n))
-- sub
sub : ∀ {n m} → WellScopedTm n → Vec (WellScopedTm m) n → WellScopedTm m
sub (var _ i) ts = lookup i ts
sub (lam _ t) ts = lam _ (sub t (var _ zero ∷ map lift ts))
sub (app _ t u) ts = app _ (sub t ts) (sub u ts)
-- composition of homs
comp : ∀ {m n k} → Vec (WellScopedTm n) k → Vec (WellScopedTm m) n → Vec (WellScopedTm m) k
comp [] _ = []
comp (t ∷ ts) us = sub t us ∷ comp ts us
Specifically, I want to show that
compAssoc : ∀ {m n k p} (ts : Vec (WellScopedTm n) k) (us : Vec (WellScopedTm m) n)
            (vs : Vec (WellScopedTm p) m) → comp (comp ts us) vs ≡ comp ts (comp us vs)

compInSub : ∀ {m n k} (t : WellScopedTm n) (ts : Vec (WellScopedTm k) n)
            (us : Vec (WellScopedTm m) k) → sub t (comp ts us) ≡ sub (sub t ts) us
The proofs I came up with rely on each other; the proof of associativity is this:
compAssoc [] us vs = refl
compAssoc (x ∷ ts) us vs = sym $
  trans (cong (λ d → d ∷ comp ts (comp us vs)) (compInSub x us vs))
        (sym (cong (_∷_ (sub (sub x us) vs)) (compAssoc ts us vs)))
However, in the lambda case of the second property, I have to use associativity in the two open goals and the termination checker complains.
compInSub (var _ zero)    (v ∷ ts) us = refl
compInSub (var _ (suc x)) (v ∷ ts) us = compInSub (var _ x) ts us
compInSub (app n t u) ts us =
  trans (cong (λ z → app _ z (sub u (comp ts us))) (compInSub t ts us))
        (cong (app _ (sub (sub t ts) us)) (compInSub u ts us))
compInSub (lam n t) ts us = sym $
  begin
    lam _ (sub (sub t (q _ ∷ map lift ts)) (q _ ∷ map lift us))
  ≡⟨ cong (lam _) (sym $ compInSub t (q _ ∷ map lift ts) (q _ ∷ map lift us)) ⟩
    lam _ (sub t $ q _ ∷ comp (map lift ts) (q _ ∷ map lift us))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp x _)) (mlift=xs∘p ts) ⟩
    lam _ (sub t $ q _ ∷ comp (comp ts (projSub _)) (q _ ∷ map lift us))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp _ (q _ ∷ x))) (mlift=xs∘p us) ⟩
    lam _ (sub t $ q _ ∷ comp (comp ts (projSub _)) (q _ ∷ comp us (projSub _)))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) {!!} ⟩ -- compAssoc ts (projSub _) (q _ ∷ comp us (projSub _))
    lam _ (sub t $ q _ ∷ comp ts (comp (projSub _) (q _ ∷ comp us (projSub _))))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp ts x)) (p∘x∷ts (q _) (comp us (projSub _))) ⟩
    lam _ (sub t $ q _ ∷ comp ts (comp us (projSub _)))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) (sym {!!}) ⟩ -- compAssoc ts us (projSub _)
    lam _ (sub t $ q _ ∷ comp (comp ts us) (projSub _))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) (sym (mlift=xs∘p (comp ts us))) ⟩
    lam _ (sub t $ q _ ∷ map lift (comp ts us))
  ∎
Is the termination checker right to disallow the calls on associativity I commented? If not, any remedies?
Lastly, some postulates so that the code typechecks
postulate p∘x∷ts : ∀ {n k : Nat} (t : WellScopedTm n) (ts : Vec (WellScopedTm n) k) → comp (projSub k) (t ∷ ts) ≡ ts
postulate mlift=xs∘p : ∀ {n m : Nat} (xs : Vec (WellScopedTm n) m) → map lift xs ≡ comp xs (projSub n)

How does one prove a type of the form (a | b) in agda?

In thinking about:
In Agda is it possible to define a datatype that has equations?
I was playing with the following datatype:
data Int : Set where
  Z : Int
  S : Int -> Int
  P : Int -> Int
The above is a poor definition of the integers, and the answers to the question linked above give a way around this. However, one can define a reduction on this Int type that might be useful.
normalize : Int -> Int
normalize Z = Z
normalize (S n) with normalize n
... | P m = m
... | m = S m
normalize (P n) with normalize n
... | S m = m
... | m = P m
The thing that needs to be proved is:
idempotent : (n : Int) -> normalize n ≡ normalize (normalize n)
When you expand the cases out, you get for example
idempotent (P n) = ?
The goal for the hole has type
(normalize (P n) | normalize n) ≡ normalize (normalize (P n) | normalize n)
And I haven't seen this "|" before, nor do I know how to produce a proof of a type involving them. The proof needs to pattern match, for example:
idempotent (P n) with inspect (normalize n)
... | (S m) with-≡ = ?
... | m with-≡ = ?
But here the hole for the second case still has a "|" in it. So I am a bit confused.
-------- EDIT ---------------
It would be helpful to prove a simpler statement:
normLemma : (n m : Int) -> normalize n ≡ P m -> normalize (S n) ≡ m
The "on paper" proof is rather straightforward. Assuming normalize n = P m, consider
normalize (S n) = case normalize n of
  P k -> k
  x   -> S x
But normalize n is assumed to be P m, hence normalize (S n) = k. Then k = m, since normalize n = P m = P k which implies m = k. Thus normalize (S n) = m.
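(For what it's worth, this simpler statement can be proved by abstracting over normalize n and the hypothesis at the same time, so that the hypothesis is rewritten along with the with-expanded goal; a sketch, using the definitions above:)
normLemma : ∀ n m -> normalize n ≡ P m -> normalize (S n) ≡ m
normLemma n m eq with normalize n | eq
... | w | refl = refl
-- Matching refl forces w to be P m, so the abstracted goal
-- (normalize (S n) | w) ≡ m reduces to m ≡ m.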
User Vitus proposed to use normal forms.
If we have these two functions:
normalForm : ∀ n -> NormalForm (normalize n)
idempotent' : ∀ {n} -> NormalForm n -> normalize n ≡ n
then we can easily compose them to obtain the result we need:
idempotent : ∀ n -> normalize (normalize n) ≡ normalize n
idempotent = idempotent' ∘ normalForm
Here is the definition of normal forms:
data NormalForm : Int -> Set where
  NZ  : NormalForm Z
  NSZ : NormalForm (S Z)
  NPZ : NormalForm (P Z)
  NSS : ∀ {n} -> NormalForm (S n) -> NormalForm (S (S n))
  NPP : ∀ {n} -> NormalForm (P n) -> NormalForm (P (P n))
I.e. only terms like S (S ... (S Z)...) and P (P ... (P Z)...) are in normal form.
And proofs are rather straightforward:
normalForm : ∀ n -> NormalForm (normalize n)
normalForm Z = NZ
normalForm (S n) with normalize n | normalForm n
... | Z | nf = NSZ
... | S _ | nf = NSS nf
... | P ._ | NPZ = NZ
... | P ._ | NPP nf = nf
normalForm (P n) with normalize n | normalForm n
... | Z | nf = NPZ
... | S ._ | NSZ = NZ
... | S ._ | NSS nf = nf
... | P _ | nf = NPP nf
idempotent' : ∀ {n} -> NormalForm n -> normalize n ≡ n
idempotent' NZ = refl
idempotent' NSZ = refl
idempotent' NPZ = refl
idempotent' (NSS p) rewrite idempotent' p = refl
idempotent' (NPP p) rewrite idempotent' p = refl
The whole code: https://gist.github.com/flickyfrans/f2c7d5413b3657a94950#file-another-one
idempotent : (n : Int) -> normalize (normalize n) ≡ normalize n
idempotent Z = refl
idempotent (S n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = {!!}
... | P m | [ p ] = {!!}
Context in the first hole is
Goal: (normalize (S (S m)) | (normalize (S m) | normalize m)) ≡
S (S m)
————————————————————————————————————————————————————————————
p : normalize n ≡ S m
m : Int
n : Int
(normalize (S (S m)) | (normalize (S m) | normalize m)) ≡ S (S m) is just an expanded version of normalize (S (S m)). So we can rewrite the context a bit:
Goal: normalize (S (S m)) ≡ S (S m)
————————————————————————————————————————————————————————————
p : normalize n ≡ S m
m : Int
n : Int
Due to the definition of the normalize function
normalize (S n) with normalize n
... | P m = m
... | m = S m
normalize (S n) ≡ S (normalize n), if normalize n doesn't contain Ps.
If we have an equation like normalize n ≡ S m, then m is already normalized and doesn't contain Ps. But if m doesn't contain Ps, then neither does normalize m. So we have normalize (S m) ≡ S (normalize m).
Let's prove a little more general lemma:
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
where ‵add‵ is
_‵add‵_ : Int -> ℕ -> Int
n ‵add‵ 0 = n
n ‵add‵ (suc i) = S (n ‵add‵ i)
normalize-S states that if m doesn't contain Ps, then this holds:
normalize (S (S ... (S m)...)) ≡ S (S ... (S (normalize m))...)
Here is a proof:
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
normalize-S Z () i
normalize-S (S n) p i with normalize n | inspect normalize n
normalize-S (S n) refl i | Z | _ = {!!}
normalize-S (S n) refl i | S m | [ q ] = {!!}
normalize-S (S n) refl i | P (S m) | [ q ] = {!!}
normalize-S (P n) p i with normalize n | inspect normalize n
normalize-S (P n) () i | Z | _
normalize-S (P n) refl i | S (S m) | [ q ] = {!!}
normalize-S (P n) () i | P _ | _
Context in the first hole is
Goal: normalize (Z ‵add‵ i) ≡ Z ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
.w : Reveal .Data.Unit.Core.hide normalize n is Z
n : Int
I.e. normalize (S (S ... (S Z)...)) ≡ S (S ... (S Z)...). We can easily prove it:
normalize-add : ∀ i -> normalize (Z ‵add‵ i) ≡ Z ‵add‵ i
normalize-add 0 = refl
normalize-add (suc i) rewrite normalize-add i with i
... | 0 = refl
... | suc _ = refl
So we can fill the first hole with normalize-add i.
Context in the second hole is
Goal: normalize (S m ‵add‵ i) ≡ S m ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
q : .Data.Unit.Core.reveal (.Data.Unit.Core.hide normalize n) ≡ S m
m : Int
n : Int
While normalize-S n q (suc i) has this type:
(normalize (S (m ‵add‵ i)) | normalize (m ‵add‵ i)) ≡ S (m ‵add‵ i)
Or, shortly, normalize (S (m ‵add‵ i)) ≡ S (m ‵add‵ i). So we need to replace S m ‵add‵ i with S (m ‵add‵ i):
inj-add : ∀ n i -> S n ‵add‵ i ≡ S (n ‵add‵ i)
inj-add n 0 = refl
inj-add n (suc i) = cong S (inj-add n i)
And now we can write
normalize-S (S n) refl i | S m | [ q ] rewrite inj-add m i = normalize-S n q (suc i)
Context in the third hole is
Goal: normalize (m ‵add‵ i) ≡ m ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
q : .Data.Unit.Core.reveal (.Data.Unit.Core.hide normalize n) ≡
P (S m)
m : Int
n : Int
normalize-P n q 0 gives us normalize (S m) ≡ S m, where normalize-P is the dual of normalize-S and has this type:
normalize-P : ∀ n {m} -> normalize n ≡ P m -> ∀ i -> normalize (m ‵sub‵ i) ≡ m ‵sub‵ i
We can apply normalize-S to something that has type normalize (S m) ≡ S m: normalize-S (S m) (normalize-P n q 0) i. This expression has precisely the type we want. So we can write
normalize-S (S n) refl i | P (S m) | [ q ] = normalize-S (S m) (normalize-P n q 0) i
The fourth hole is similar to the third:
normalize-S (P n) refl i | S (S m) | [ q ] = normalize-S (S m) (normalize-S n q 0) i
There is a problem with these holes: Agda doesn't see that normalize-S (S m) _ _ terminates, since S m is not syntactically smaller than S n. It is possible, however, to convince Agda by using well-founded recursion.
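For reference, the {-# TERMINATING #-} version mentioned at the end looks roughly like this: the pragma, placed directly above a definition, switches off the termination check for that definition only, so it is the unchecked shortcut next to the principled well-founded-recursion fix (the mutually used normalize-P needs the same treatment):
{-# TERMINATING #-}
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
normalize-S Z () i
-- ... remaining clauses exactly as above, now accepted despite the
-- non-structural recursive calls ...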
Having all this stuff, we can easily prove the idempotent theorem:
idempotent : (n : Int) -> normalize (normalize n) ≡ normalize n
idempotent Z = refl
idempotent (S n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = normalize-S n p 2
... | P m | [ p ] = normalize-P n p 0
idempotent (P n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = normalize-S n p 0
... | P m | [ p ] = normalize-P n p 2
Here is the code: https://gist.github.com/flickyfrans/f2c7d5413b3657a94950
There are both versions: with the {-# TERMINATING #-} pragma and without.
EDIT
idempotent is simply
idempotent : ∀ n -> normalize (normalize n) ≡ normalize n
idempotent n with normalize n | inspect normalize n
... | Z | _ = refl
... | S _ | [ p ] = normalize-S n p 1
... | P _ | [ p ] = normalize-P n p 1

Resources