I'm trying to prove that binary search tree insertion preserves sortedness. I'm in the middle of the proof and the environment looks like this:
-- new rx : ℕ
Goal: SortedTree (node leaf x (node (insertTree new rl) rx rr))
Have: SortedTree (insertTree new (node rl rx rr) | new ≤? rx)
————————————————————————————————————————————————————————————
new≤rx : new ≤ rx
Note the | new ≤? rx, which AFAIK means "I need to know this value to reduce further".
How do I apply new≤rx?
insertTree is defined like this:
insertTree : (a : ℕ) → (t : Tree ℕ) → Tree ℕ
insertTree a (node l x r) with a ≤? x
insertTree a (node l x r) | yes p = node (insertTree a l) x r
insertTree a (node l x r) | no ¬p = node l x (insertTree a r)
I know the value of new ≤? rx (it's yes new≤rx); in fact I have already done a with on new ≤? rx right here:
... | no ¬n≤x with new ≤? rx
... | yes new≤rx = {!proof⟦insertTree⟧ (node rl rx rr) (sorted-rhs st) new !}
... | no p = {!!}
So how can I tell Agda that the value of new ≤? rx is known, so that it reduces using the clause
insertTree a (node l x r) | yes p = node (insertTree a l) x r?
I tried to use rewrite:
  where prf : (new ≤? rx ≡ yes new≤rx) → SortedTree (node (insertTree new rl) rx rr)
        prf p1 rewrite p1 = {!proof⟦insertTree⟧ (node rl rx rr) (sorted-rhs st) new !}
But even though I have p1 : new ≤? rx ≡ yes new≤rx, Agda ignores it and still says that I have SortedTree (insertTree new (node rl rx rr) | new ≤? rx).
I managed to get it to work with inspect + rewrite.
proof⟦insertTree⟧ (node leaf x (node rl rx rr)) st new with new ≤? x
... | yes p = st-node (empty new) (sorted-rhs st) p (st-cmp-rhs st)
... | no ¬n≤x with new ≤? rx | inspect (_≤?_ new) rx
... | yes p | PropEq.[ eq ] = {!!}
  where prf : (new ≤? rx ≡ yes p) → SortedTree (insertTree new (node rl rx rr)) → SortedTree (node (insertTree new rl) rx rr)
        prf p0 p rewrite p0 = p
Note that the prf subproof accepts p of type SortedTree (insertTree new (node rl rx rr)) and returns p itself, but thanks to the rewrite the returned type changes.
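For the record, the same reduction can be had without inspect by abstracting the recursive call together with the decision, so that its type is generalized over new ≤? rx at the same time. A sketch, reusing the names from the snippets above (the remaining holes still have to be filled as before):
proof⟦insertTree⟧ (node leaf x (node rl rx rr)) st new with new ≤? x
... | yes p = st-node (empty new) (sorted-rhs st) p (st-cmp-rhs st)
... | no ¬n≤x with new ≤? rx | proof⟦insertTree⟧ (node rl rx rr) (sorted-rhs st) new
... | yes new≤rx | rec = {!!}   -- here rec : SortedTree (node (insertTree new rl) rx rr)
... | no ¬new≤rx | rec = {!!}   -- here rec : SortedTree (node rl rx (insertTree new rr))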
I am trying to fill the remaining one hole in the following program:
{-# OPTIONS --cubical #-}
module _ where
open import Cubical.Core.Everything
open import Cubical.Foundations.Everything
data S1 : Type where
  base : S1
  loop : base ≡ base

data NS : Type where
  N : NS
  S : NS
  W : S ≡ N
  E : N ≡ S

module _ where
  open Iso

  NS-Iso : Iso NS S1
  NS-Iso .fun N = base
  NS-Iso .fun S = base
  NS-Iso .fun (W i) = base
  NS-Iso .fun (E i) = loop i
  NS-Iso .inv base = N
  NS-Iso .inv (loop i) = (E ∙ W) i
  NS-Iso .leftInv N = refl
  NS-Iso .leftInv S = sym W
  NS-Iso .leftInv (W i) = λ j → W (i ∨ ~ j)
  NS-Iso .leftInv (E i) = λ j → compPath-filler E W (~ j) i
  NS-Iso .rightInv base = refl
  NS-Iso .rightInv (loop i) = ?
The type of the hole is:
fun NS-Iso (inv NS-Iso (loop i)) ≡ loop i
inv NS-Iso (loop i) is of course definitionally equal to (E ∙ W) i, but what, then, is fun NS-Iso ((E ∙ W) i)? Is there some kind of homomorphism / continuity / similar property with which I can use the definitions of fun NS-Iso (E i) and fun NS-Iso (W i) to figure out what it is?
Since fun NS-Iso (E i) = loop i and fun NS-Iso (W i) = base, I thought this might be a valid filling (pun intended) of the hole:
NS-Iso .rightInv (loop i) = λ j → compPath-filler loop (refl {x = base}) (~ j) i
But that gives a type error:
hcomp (doubleComp-faces (λ _ → base) (λ _ → base) i) (loop i)
!=
fun NS-Iso (hcomp (doubleComp-faces (λ _ → N) W i) (E i))
I have found that the answer is, basically, yes!
Let's add a local binding of our goal in NS-Iso .rightInv (loop i) just to keep an eye on the type:
NS-Iso .rightInv (loop i) = goal
where
goal : (fun NS-Iso ∘ inv NS-Iso) (loop i) ≡ loop i
goal = ?
Since we have
NS-Iso .inv (loop i) = (E ∙ W) i
the type of goal reduces to:
step1 : fun NS-Iso ((E ∙ W) i) ≡ loop i
And now comes the crucial step, the actual answer to my question: can we push fun NS-Iso into E ∙ W?
Let's draw wavy lines between the points and paths where fun NS-Iso is directly defined, or where we know its value by cong _ refl = refl:
base base
^ ~ ~ ^
| ~ ~ |
| ~ ~ |
| N N |
| ^ ^ |
refl | ~~~ refl | | W ~~~~~ | refl
| | | |
| N -------------> S |
| ~ E ~ |
| ~ ~ ~ |
| ~ ~ ~ |
base --------------------------------> base
loop
E ∙ W is the lid of the inner box, and it turns out yes, its image by fun NS-Iso is indeed the lid of the outer box:
step2 : (cong (fun NS-Iso) E ∙ (cong (fun NS-Iso) W)) i ≡ loop i
In graphical form:
?
base - - - - - - - - - - - - - - - - - > base
^ ~ ~ ^
| ~ ~ |
| ~ E ∙ W ~ |
| N - - - - - - -> N |
| ^ ^ |
refl | ~~~ refl | | W ~~~~~ | refl
| | | |
| N -------------> S |
| ~ E ~ |
| ~ ~ ~ |
| ~ ~ ~ |
base -------------------------------> base
loop
So we can reduce the fun NS-Iso applications now:
step3 : (loop ∙ (λ _ → base)) i ≡ loop i
which we can finally solve with the library function compPath-filler.
The complete code:
NS-Iso .rightInv (loop i) = goal
  where
    step3 : (loop ∙ (λ _ → base)) i ≡ loop i
    step3 = cong (λ p → p i) (symP (compPath-filler loop (λ _ → base)))

    step2 : (cong (fun NS-Iso) E ∙ (cong (fun NS-Iso) W)) i ≡ loop i
    step2 = step3

    step1 : fun NS-Iso ((E ∙ W) i) ≡ loop i
    step1 j = step2 j

    goal : (fun NS-Iso ∘ inv NS-Iso) (loop i) ≡ loop i
    goal j = step1 j
I don't know why I needed to eta-expand goal and step1, but otherwise Agda doesn't recognize that they meet the boundary conditions.
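As an aside, the "pushing in" step from step1 to step2 is an instance of cong distributing over path composition. If I recall the name correctly, the cubical library states this as cong-∙ in Cubical.Foundations.GroupoidLaws, so that step could also be justified explicitly; a sketch (the name push is mine):
push : cong (fun NS-Iso) (E ∙ W) ≡ cong (fun NS-Iso) E ∙ cong (fun NS-Iso) W
push = cong-∙ (fun NS-Iso) E W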
I'm having a nasty problem with a formalisation of a theorem that uses a data type with some constructors whose indices involve list concatenation. When I try to case split in the Emacs mode, Agda returns the following error message:
I'm not sure if there should be a case for the constructor
o-success, because I get stuck when trying to solve the following
unification problems (inferred index ≟ expected index):
e₁ o e'' , x₁ ++ x'' ++ y₁ ≟ e o e' , x ++ x' ++ y
suc (n₂ + n'') , x₁ ++ x'' ≟ m' , p''
when checking that the expression ? has type
suc (.n₁ + .n') == .m' × .x ++ .x' == p'
Since the code is more than a few lines long, I put it in the following gist:
https://gist.github.com/rodrigogribeiro/976b3d5cc82c970314c2
Any tip is appreciated.
Best,
There was a similar question.
However, you want to unify xs1 ++ xs2 ++ xs3 with ys1 ++ ys2 ++ ys3, but _++_ is not a constructor: it's a function, and it's not injective. Consider this simplified example:
data Bar {A : Set} : List A -> Set where
  bar : ∀ xs {ys} -> Bar (xs ++ ys)
ex : ∀ {A} {zs : List A} -> Bar zs -> Bar zs -> List A
ex (bar xs) b = {!!}
b is of type Bar (xs ++ .ys), but b is not necessarily equal to bar .xs, so you can't pattern-match like this. Here are two Bars, which have equal types but different values:
ok : ∃₂ λ (b1 b2 : Bar (tt ∷ [])) -> b1 ≢ b2
ok = bar [] , bar (tt ∷ []) , λ ()
This is because xs1 ++ xs2 ≡ ys1 ++ ys2 doesn't imply xs1 ≡ ys1 × xs2 ≡ ys2 in general.
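Concretely, a small sketch (reusing ⊤/tt and propositional equality from above; the name same-concat is mine):
same-concat : (tt ∷ []) ++ [] ≡ [] ++ (tt ∷ [])
same-concat = refl  -- both sides compute to tt ∷ [], yet the pieces differ pairwise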
But it's possible to generalize an index. You can use the technique described by Vitus at the link above, or you can use this simple combinator, which forgets the index:
generalize : ∀ {α β} {A : Set α} (B : A -> Set β) {x : A} -> B x -> ∃ B
generalize B y = , y
E.g.
ex : ∀ {A} {zs : List A} -> Bar zs -> Bar zs -> List A
ex {A} (bar xs) b with generalize Bar b
... | ._ , bar ys = xs ++ ys
After all, are you sure your lemma is true?
UPDATE
Some remarks first.
Your empty case states
empty : forall x -> G :: (emp , x) => (1 , x)
that the empty parser parses the whole string. It should be
empty : forall x -> G :: (emp , x) => (1 , [])
as in the paper.
Your definition of o-fail1 contains this part:
(n , fail ∷ o)
but fail fails everything, so it should be (n , fail ∷ []). With this representation you would probably need decidable equality on A to finish the lemma, and the proofs would be messy. A clean and idiomatic way to represent something that can fail is to wrap it in the Maybe monad, so here is my definition of _::_=>_:
data _::_=>_ {n} (G : Con n) : Foo n × List A -> Nat × Maybe (List A) -> Set where
  empty : ∀ {x} -> G :: emp , x => 1 , just []
  sym-success : ∀ {a x} -> G :: sym a , (a ∷ x) => 1 , just (a ∷ [])
  sym-failure : ∀ {a b x} -> ¬ (a == b) -> G :: sym a , b ∷ x => 1 , nothing
  var : ∀ {x m o} {v : Fin (suc n)}
      -> G :: lookup v G , x => m , o -> G :: var v , x => suc m , o
  o-success : ∀ {e e' x x' y n n'}
            -> G :: e , x ++ x' ++ y => n , just x
            -> G :: e' , x' ++ y => n' , just x'
            -> G :: e o e' , x ++ x' ++ y => suc (n + n') , just (x ++ x')
  o-fail1 : ∀ {e e' x x' y n}
          -> G :: e , x ++ x' ++ y => n , nothing
          -> G :: e o e' , x ++ x' ++ y => suc n , nothing
  o-fail2 : ∀ {e e' x x' y n n'}
          -> G :: e , x ++ x' ++ y => n , just x
          -> G :: e' , x' ++ y => n' , nothing
          -> G :: e o e' , x ++ x' ++ y => suc (n + n') , nothing
Here is the lemma:
postulate
  cut : ∀ {α} {A : Set α} -> ∀ xs {ys zs : List A} -> xs ++ ys == xs ++ zs -> ys == zs

mutual
  aux : ∀ {n} {G : Con n} {e e' z x x' y n n' m' p'}
      -> z == x ++ x' ++ y
      -> G :: e , z => n , just x
      -> G :: e' , x' ++ y => n' , just x'
      -> G :: e o e' , z => m' , p'
      -> suc (n + n') == m' × just (x ++ x') == p'
  aux {x = x} {x'} {n = n} {n'} r pr1 pr2 (o-success {x = x''} pr3 pr4) with x | n | lemma pr1 pr3
  ... | ._ | ._ | refl , refl rewrite cut x'' r with x' | n' | lemma pr2 pr4
  ... | ._ | ._ | refl , refl = refl , refl
  aux ...

  lemma : ∀ {n m m'} {G : Con n} {f x p p'}
        -> G :: f , x => m , p -> G :: f , x => m' , p' -> m == m' × p == p'
  lemma (o-success pr1 pr2) pr3 = aux refl pr1 pr2 pr3
  lemma ...
The proof proceeds as follows:
1. We generalize the type of lemma's pr3 in an auxiliary function, as in Vitus' answer. Now it's possible to pattern-match on pr3.
2. We prove that the first parser in lemma's pr3 (also called pr3 in aux) produces the same output as pr1.
3. After some rewriting, we prove that the second parser in lemma's pr3 (called pr4 in aux) produces the same output as pr2.
4. And since pr1 and pr3 produce the same output, and pr2 and pr4 produce the same output, o-success pr1 pr2 and o-success pr3 pr4 produce the same output, so we put refl , refl.
The code. I didn't prove the o-fail1 and o-fail2 cases, but they should be similar.
UPDATE
The amount of boilerplate can be reduced by:
1. Fixing the definitions of the fail cases, which contain redundant information.
2. Returning Maybe (List A) instead of Nat × Maybe (List A). You can compute this Nat recursively, if needed.
3. Using the inspect idiom instead of auxiliary functions.
I don't think there is a simpler solution. The code.
In thinking about:
In Agda is it possible to define a datatype that has equations?
I was playing with the following datatype:
data Int : Set where
  Z : Int
  S : Int -> Int
  P : Int -> Int
The above is a poor definition of the integers, and the answers there give a way around this. However, one can define a reduction on this Int type that might be useful.
normalize : Int -> Int
normalize Z = Z
normalize (S n) with normalize n
... | P m = m
... | m = S m
normalize (P n) with normalize n
... | S m = m
... | m = P m
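As a quick sanity check of the intended behaviour, both of the following hold by computation (a sketch; the test names are mine and it assumes Relation.Binary.PropositionalEquality is open):
test₁ : normalize (S (P Z)) ≡ Z
test₁ = refl

test₂ : normalize (P (P (S Z))) ≡ P Z
test₂ = refl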
The thing that needs to be proved is:
idempotent : (n : Int) -> normalize n ≡ normalize (normalize n)
When you expand the cases out, you get for example
idempotent (P n) = ?
The goal for the hole has type
(normalize (P n) | normalize n) ≡ normalize (normalize (P n) | normalize n)
And I haven't seen this "|" before, nor do I know how to produce a proof of a type involving it. The proof needs to pattern match, for example,
idempotent (P n) with inspect (normalize n)
... | (S m) with-≡ eq = ?
... | m with-≡ eq = ?
But here the hole for the second case still has a "|" in it. So I am a bit confused.
-------- EDIT ---------------
It would be helpful to prove a simpler statement:
normLemma : (n m : NZ) -> normalize n ≡ P m -> normalize (S n) ≡ m
The "on paper" proof is rather straightforward. Assuming normalize n = P m, consider
normalize (S n) = case normalize n of
  P k -> k
  x   -> S x
But normalize n is assumed to be P m, hence normalize (S n) = k. Then k = m, since normalize n = P m = P k which implies m = k. Thus normalize (S n) = m.
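For what it's worth, here is a hedged transcription of that paper proof into Agda (a sketch, writing Int for the NZ above; it relies on the with on normalize n also rewriting the type of the equality argument):
normLemma : (n m : Int) -> normalize n ≡ P m -> normalize (S n) ≡ m
normLemma n m eq with normalize n
normLemma n m refl | .(P m) = refl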
User Vitus proposed to use normal forms.
If we have these two functions:
normalForm : ∀ n -> NormalForm (normalize n)
idempotent' : ∀ {n} -> NormalForm n -> normalize n ≡ n
then we can easily compose them to obtain the result we need:
idempotent : ∀ n -> normalize (normalize n) ≡ normalize n
idempotent = idempotent' ∘ normalForm
Here is the definition of normal forms:
data NormalForm : Int -> Set where
  NZ  : NormalForm Z
  NSZ : NormalForm (S Z)
  NPZ : NormalForm (P Z)
  NSS : ∀ {n} -> NormalForm (S n) -> NormalForm (S (S n))
  NPP : ∀ {n} -> NormalForm (P n) -> NormalForm (P (P n))
I.e. only terms like S (S ... (S Z)...) and P (P ... (P Z)...) are in normal form.
And proofs are rather straightforward:
normalForm : ∀ n -> NormalForm (normalize n)
normalForm Z = NZ
normalForm (S n) with normalize n | normalForm n
... | Z | nf = NSZ
... | S _ | nf = NSS nf
... | P ._ | NPZ = NZ
... | P ._ | NPP nf = nf
normalForm (P n) with normalize n | normalForm n
... | Z | nf = NPZ
... | S ._ | NSZ = NZ
... | S ._ | NSS nf = nf
... | P _ | nf = NPP nf
idempotent' : ∀ {n} -> NormalForm n -> normalize n ≡ n
idempotent' NZ = refl
idempotent' NSZ = refl
idempotent' NPZ = refl
idempotent' (NSS p) rewrite idempotent' p = refl
idempotent' (NPP p) rewrite idempotent' p = refl
The whole code: https://gist.github.com/flickyfrans/f2c7d5413b3657a94950#file-another-one
idempotent : (n : Int) -> normalize (normalize n) ≡ normalize n
idempotent Z = refl
idempotent (S n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = {!!}
... | P m | [ p ] = {!!}
Context in the first hole is
Goal: (normalize (S (S m)) | (normalize (S m) | normalize m)) ≡
S (S m)
————————————————————————————————————————————————————————————
p : normalize n ≡ S m
m : Int
n : Int
(normalize (S (S m)) | (normalize (S m) | normalize m)) ≡ S (S m) is just an expanded version of normalize (S (S m)). So we can rewrite the context a bit:
Goal: normalize (S (S m)) ≡ S (S m)
————————————————————————————————————————————————————————————
p : normalize n ≡ S m
m : Int
n : Int
Due to the definition of the normalize function
normalize (S n) with normalize n
... | P m = m
... | m = S m
normalize (S n) ≡ S (normalize n), if normalize n doesn't contain Ps.
If we have an equation like normalize n ≡ S m, then m is already normalized and doesn't contain Ps. But if m doesn't contain Ps, neither does normalize m. So we have normalize (S m) ≡ S (normalize m).
Let's prove a little more general lemma:
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
where ‵add‵ is
_‵add‵_ : Int -> ℕ -> Int
n ‵add‵ 0 = n
n ‵add‵ (suc i) = S (n ‵add‵ i)
normalize-S states that if m doesn't contain Ps, then this holds:
normalize (S (S ... (S m)...)) ≡ S (S ... (S (normalize m))...)
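For instance, given p : normalize n ≡ S m, the instance normalize-S n p 2 has type normalize (S (S m)) ≡ S (S m), because m ‵add‵ 2 unfolds to S (S m); this is exactly the goal shape in the S case of idempotent below.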
Here is a proof:
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
normalize-S Z () i
normalize-S (S n) p i with normalize n | inspect normalize n
normalize-S (S n) refl i | Z | _ = {!!}
normalize-S (S n) refl i | S m | [ q ] = {!!}
normalize-S (S n) refl i | P (S m) | [ q ] = {!!}
normalize-S (P n) p i with normalize n | inspect normalize n
normalize-S (P n) () i | Z | _
normalize-S (P n) refl i | S (S m) | [ q ] = {!!}
normalize-S (P n) () i | P _ | _
Context in the first hole is
Goal: normalize (Z ‵add‵ i) ≡ Z ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
.w : Reveal .Data.Unit.Core.hide normalize n is Z
n : Int
I.e. normalize (S (S ... (S Z)...)) ≡ S (S ... (S Z)...). We can easily prove it:
normalize-add : ∀ i -> normalize (Z ‵add‵ i) ≡ Z ‵add‵ i
normalize-add 0 = refl
normalize-add (suc i) rewrite normalize-add i with i
... | 0 = refl
... | suc _ = refl
So we can fill the first hole with normalize-add i.
Context in the second hole is
Goal: normalize (S m ‵add‵ i) ≡ S m ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
q : .Data.Unit.Core.reveal (.Data.Unit.Core.hide normalize n) ≡ S m
m : Int
n : Int
While normalize-S n q (suc i) has this type:
(normalize (S (m ‵add‵ i)) | normalize (m ‵add‵ i)) ≡ S (m ‵add‵ i)
Or, shortly, normalize (S (m ‵add‵ i)) ≡ S (m ‵add‵ i). So we need to replace S m ‵add‵ i with S (m ‵add‵ i):
inj-add : ∀ n i -> S n ‵add‵ i ≡ S (n ‵add‵ i)
inj-add n 0 = refl
inj-add n (suc i) = cong S (inj-add n i)
And now we can write
normalize-S (S n) refl i | S m | [ q ] rewrite inj-add m i = normalize-S n q (suc i)
Context in the third hole is
Goal: normalize (m ‵add‵ i) ≡ m ‵add‵ i
————————————————————————————————————————————————————————————
i : ℕ
q : .Data.Unit.Core.reveal (.Data.Unit.Core.hide normalize n) ≡
P (S m)
m : Int
n : Int
normalize-P n q 0 gives us normalize (S m) ≡ S m where normalize-P is dual of normalize-S and has this type:
normalize-P : ∀ n {m} -> normalize n ≡ P m -> ∀ i -> normalize (m ‵sub‵ i) ≡ m ‵sub‵ i
We can apply normalize-S to something, that has type normalize (S m) ≡ S m: normalize-S (S m) (normalize-P n q 0) i. This expression has precisely the type we want. So we can write
normalize-S (S n) refl i | P (S m) | [ q ] = normalize-S (S m) (normalize-P n q 0) i
The fourth hole is similar to the third:
normalize-S (P n) refl i | S (S m) | [ q ] = normalize-S (S m) (normalize-S n q 0) i
There is a problem with these holes: Agda doesn't see that normalize-S (S m) _ _ terminates, since S m is not syntactically smaller than S n. It's possible, however, to convince Agda by using well-founded recursion.
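(A quick, non-verifying workaround in the meantime is the pragma used in one of the gist versions mentioned below; a sketch:)
{-# TERMINATING #-}
normalize-S : ∀ n {m} -> normalize n ≡ S m -> ∀ i -> normalize (m ‵add‵ i) ≡ m ‵add‵ i
-- (clauses exactly as above)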
Having all this stuff, we can easily prove the idempotent theorem:
idempotent : (n : Int) -> normalize (normalize n) ≡ normalize n
idempotent Z = refl
idempotent (S n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = normalize-S n p 2
... | P m | [ p ] = normalize-P n p 0
idempotent (P n) with normalize n | inspect normalize n
... | Z | _ = refl
... | S m | [ p ] = normalize-S n p 0
... | P m | [ p ] = normalize-P n p 2
Here is the code: https://gist.github.com/flickyfrans/f2c7d5413b3657a94950
There are both versions: with the {-# TERMINATING #-} pragma and without.
EDIT
idempotent is simply
idempotent : ∀ n -> normalize (normalize n) ≡ normalize n
idempotent n with normalize n | inspect normalize n
... | Z | _ = refl
... | S _ | [ p ] = normalize-S n p 1
... | P _ | [ p ] = normalize-P n p 1
Agda 2.3.2.1 can't see that the following function terminates:
open import Data.Nat
open import Data.List
open import Relation.Nullary
merge : List ℕ → List ℕ → List ℕ
merge (x ∷ xs) (y ∷ ys) with x ≤? y
... | yes p = x ∷ merge xs (y ∷ ys)
... | _ = y ∷ merge (x ∷ xs) ys
merge xs ys = xs ++ ys
The Agda wiki says that it's OK for the termination checker if the arguments of recursive calls decrease lexicographically. Based on that, it seems that this function should pass as well. So what am I missing here? Also, is it maybe OK in previous versions of Agda? I've seen similar code on the Internet and no one mentioned termination issues there.
I cannot give you the reason why exactly this happens, but I can show you how to cure the symptoms. Before I start: This is a known problem with the termination checker. If you are well-versed in Haskell, you could take a look at the source.
One possible solution is to split the function into two: first one for the case where the first argument gets smaller and second for the second one:
mutual
  merge : List ℕ → List ℕ → List ℕ
  merge (x ∷ xs) (y ∷ ys) with x ≤? y
  ... | yes _ = x ∷ merge xs (y ∷ ys)
  ... | no  _ = y ∷ merge′ x xs ys
  merge xs ys = xs ++ ys

  merge′ : ℕ → List ℕ → List ℕ → List ℕ
  merge′ x xs (y ∷ ys) with x ≤? y
  ... | yes _ = x ∷ merge xs (y ∷ ys)
  ... | no  _ = y ∷ merge′ x xs ys
  merge′ x xs [] = x ∷ xs
So, the first function chops down xs and once we have to chop down ys, we switch to the second function and vice versa.
Another (perhaps surprising) option, which is also mentioned in the issue report, is to introduce the result of recursion via with:
merge : List ℕ → List ℕ → List ℕ
merge (x ∷ xs) (y ∷ ys) with x ≤? y | merge xs (y ∷ ys) | merge (x ∷ xs) ys
... | yes _ | r | _ = x ∷ r
... | no _ | _ | r = y ∷ r
merge xs ys = xs ++ ys
And lastly, we can perform the recursion on Vectors and then convert back to List:
open import Data.Vec as V
  using (Vec; []; _∷_)

merge : List ℕ → List ℕ → List ℕ
merge xs ys = V.toList (go (V.fromList xs) (V.fromList ys))
  where
    go : ∀ {n m} → Vec ℕ n → Vec ℕ m → Vec ℕ (n + m)
    go {suc n} {suc m} (x ∷ xs) (y ∷ ys) with x ≤? y
    ... | yes _ = x ∷ go xs (y ∷ ys)
    ... | no  _ rewrite lem n m = y ∷ go (x ∷ xs) ys
    go xs ys = xs V.++ ys
However, here we need a simple lemma:
open import Relation.Binary.PropositionalEquality
lem : ∀ n m → n + suc m ≡ suc (n + m)
lem zero m = refl
lem (suc n) m rewrite lem n m = refl
We could also have go return List directly and avoid the lemma altogether:
merge : List ℕ → List ℕ → List ℕ
merge xs ys = go (V.fromList xs) (V.fromList ys)
  where
    go : ∀ {n m} → Vec ℕ n → Vec ℕ m → List ℕ
    go (x ∷ xs) (y ∷ ys) with x ≤? y
    ... | yes _ = x ∷ go xs (y ∷ ys)
    ... | no  _ = y ∷ go (x ∷ xs) ys
    go xs ys = V.toList xs ++ V.toList ys
The first trick (i.e. splitting the function into a few mutually recursive ones) is actually quite good to remember. Since the termination checker doesn't look inside the definitions of other functions you use, it rejects a great deal of perfectly fine programs; consider:
data Rose {a} (A : Set a) : Set a where
  []   : Rose A
  node : A → List (Rose A) → Rose A
And now, we'd like to implement mapRose:
mapRose : ∀ {a b} {A : Set a} {B : Set b} →
          (A → B) → Rose A → Rose B
mapRose f [] = []
mapRose f (node t ts) = node (f t) (map (mapRose f) ts)
The termination checker, however, doesn't look inside the map to see if it doesn't do anything funky with the elements and just rejects this definition. We must inline the definition of map and write a pair of mutually recursive functions:
mutual
  mapRose : ∀ {a b} {A : Set a} {B : Set b} →
            (A → B) → Rose A → Rose B
  mapRose f [] = []
  mapRose f (node t ts) = node (f t) (mapRose′ f ts)

  mapRose′ : ∀ {a b} {A : Set a} {B : Set b} →
             (A → B) → List (Rose A) → List (Rose B)
  mapRose′ f [] = []
  mapRose′ f (t ∷ ts) = mapRose f t ∷ mapRose′ f ts
Usually, you can hide most of the mess in a where declaration:
mapRose : ∀ {a b} {A : Set a} {B : Set b} →
          (A → B) → Rose A → Rose B
mapRose {A = A} {B = B} f = go
  where
    go      : Rose A → Rose B
    go-list : List (Rose A) → List (Rose B)

    go [] = []
    go (node t ts) = node (f t) (go-list ts)

    go-list [] = []
    go-list (t ∷ ts) = go t ∷ go-list ts
Note: Declaring signatures of both functions before they are defined can be used instead of mutual in newer versions of Agda.
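For example, the mutual block above could be written like this in such versions (a sketch):
mapRose  : ∀ {a b} {A : Set a} {B : Set b} → (A → B) → Rose A → Rose B
mapRose′ : ∀ {a b} {A : Set a} {B : Set b} → (A → B) → List (Rose A) → List (Rose B)

mapRose f []          = []
mapRose f (node t ts) = node (f t) (mapRose′ f ts)

mapRose′ f []       = []
mapRose′ f (t ∷ ts) = mapRose f t ∷ mapRose′ f ts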
Update: The development version of Agda got an update to the termination checker; I'll let the commit message and release notes speak for themselves:
A revision of call graph completion that can deal with arbitrary termination depth.
This algorithm has been sitting around in MiniAgda for some time,
waiting for its great day. It is now here!
Option --termination-depth can now be retired.
And from the release notes:
Termination checking of functions defined by 'with' has been improved.
Cases which previously required --termination-depth (now obsolete!)
to pass the termination checker (due to use of 'with') no longer
need the flag. For example
merge : List A → List A → List A
merge [] ys = ys
merge xs [] = xs
merge (x ∷ xs) (y ∷ ys) with x ≤ y
merge (x ∷ xs) (y ∷ ys) | false = y ∷ merge (x ∷ xs) ys
merge (x ∷ xs) (y ∷ ys) | true = x ∷ merge xs (y ∷ ys)
This failed to termination check previously, since the 'with'
expands to an auxiliary function merge-aux:
merge-aux x y xs ys false = y ∷ merge (x ∷ xs) ys
merge-aux x y xs ys true = x ∷ merge xs (y ∷ ys)
This function makes a call to merge in which the size of one of the
arguments is increasing. To make this pass the termination checker
now inlines the definition of merge-aux before checking, thus
effectively termination checking the original source program.
As a result of this transformation doing 'with' on a variable no
longer preserves termination. For instance, this does not
termination check:
bad : Nat → Nat
bad n with n
... | zero = zero
... | suc m = bad m
And indeed, your original function now passes the termination check!
I was proving some properties of filter and map, and everything went quite well until I stumbled on this property: filter p (map f xs) ≡ map f (filter (p ∘ f) xs). Here's the part of the code that's relevant:
open import Relation.Binary.PropositionalEquality
open import Data.Bool
open import Data.List hiding (filter)
import Level
filter : ∀ {a} {A : Set a} → (A → Bool) → List A → List A
filter _ [] = []
filter p (x ∷ xs) with p x
... | true = x ∷ filter p xs
... | false = filter p xs
Now, because I love writing proofs using the ≡-Reasoning module, the first thing I tried was:
open ≡-Reasoning
open import Function
filter-map : ∀ {a b} {A : Set a} {B : Set b}
             (xs : List A) (f : A → B) (p : B → Bool) →
             filter p (map f xs) ≡ map f (filter (p ∘ f) xs)
filter-map [] _ _ = refl
filter-map (x ∷ xs) f p with p (f x)
... | true = begin
  filter p (map f (x ∷ xs))
    ≡⟨ refl ⟩
  f x ∷ filter p (map f xs)
  -- ...
But alas, that didn't work. After trying for one hour, I finally gave up and proved it in this way:
filter-map (x ∷ xs) f p with p (f x)
... | true = cong (λ a → f x ∷ a) (filter-map xs f p)
... | false = filter-map xs f p
Still curious about why going through ≡-Reasoning didn't work, I tried something very trivial:
filter-map-def : ∀ {a b} {A : Set a} {B : Set b}
                 (x : A) xs (f : A → B) (p : B → Bool) → T (p (f x)) →
                 filter p (map f (x ∷ xs)) ≡ f x ∷ filter p (map f xs)
filter-map-def x xs f p _ with p (f x)
filter-map-def x xs f p () | false
filter-map-def x xs f p _ | true = -- not writing refl on purpose
  begin
    filter p (map f (x ∷ xs))
      ≡⟨ refl ⟩
    f x ∷ filter p (map f xs)
  ∎
But the typechecker doesn't agree with me. It would seem that the current goal remains filter p (f x ∷ map f xs) | p (f x), and even though I pattern matched on p (f x), filter just won't reduce to f x ∷ filter p (map f xs).
Is there a way to make this work with ≡-Reasoning?
Thanks!
The trouble with with-clauses is that Agda forgets the information it learned from the pattern match unless you arrange beforehand for this information to be preserved.
More precisely, when Agda sees a with expression clause, it replaces all the occurrences of expression in the current context and goal with a fresh variable w and then gives you that variable with an updated context and goal in the with-clause, forgetting everything about its origin.
In your case, you write filter p (map f (x ∷ xs)) inside the with-block, so it only comes into existence after Agda has performed that abstraction, so Agda has already forgotten the fact that p (f x) is true and does not reduce the term.
You can preserve the proof of equality by using one of the "Inspect"-patterns from the standard library, but I'm not sure how it can be useful in your case.
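For reference, the equation that such a pattern hands you can at least be turned back into a reduction with rewrite, in the same spirit as the inspect + rewrite trick in the first question above; a minimal sketch with a hypothetical helper:
filter-true : ∀ {a} {A : Set a} (p : A → Bool) x xs →
              p x ≡ true → filter p (x ∷ xs) ≡ x ∷ filter p xs
filter-true p x xs eq rewrite eq = refl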