Why does Agda reduce my function application for some arguments but not for others?

I am playing with joinˡ⁺ from the standard library's AVL tree implementation. This function is defined with six pattern matching clauses. When I apply the function to an argument, Agda either does or doesn't reduce the application, depending on which of the six clauses matches my argument. (Or so it seems to me.)
Here's code that applies the function to an argument that matches the function's first clause. It's the left-hand side of the equality in the goal. Agda reduces it to the right-hand side and I can finish the proof with refl. So this one works as expected.
(Note that the code uses version 1.3 of the standard library. It seems that more recent versions moved the AVL tree code from Data.AVL to Data.Tree.AVL.)
module Repro2 where
open import Data.Nat using (ℕ ; suc)
open import Data.Nat.Properties using (<-strictTotalOrder)
open import Data.Product using (_,_)
open import Relation.Binary.PropositionalEquality using (_≡_ ; refl)
open import Data.AVL.Indexed <-strictTotalOrder
okay :
  ∀ {l u h} k₆ k₂ (t₁ : Tree (const ℕ) _ _ _) k₄ t₃ t₅ t₇ b →
  joinˡ⁺ {l = l} {u} {suc (suc h)} {suc h} {suc (suc h)}
    k₆ (1# , node k₂ t₁ (node {hˡ = h} {suc h} {suc h} k₄ t₃ t₅ b) ∼+) t₇ ∼-
  ≡
  (0# , node k₄ (node k₂ t₁ t₃ (max∼ b)) (node k₆ t₅ t₇ (∼max b)) ∼0)
okay k₆ k₂ t₁ k₄ t₃ t₅ t₇ b = refl
The next example targets the second clause of the function definition. Unlike above, the goal does not reduce at all this time, i.e., the joinˡ⁺ doesn't go away.
not-okay : ∀ {l u h} k₄ k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ t₅ →
  joinˡ⁺ {l = l} {u} {suc h} {h} {suc h}
    k₄ (1# , node k₂ t₁ t₃ ∼-) t₅ ∼-
  ≡
  (0# , node k₂ t₁ (node k₄ t₃ t₅ ∼0) ∼0)
not-okay k₄ k₂ t₁ t₃ t₅ = {!!}
What am I missing?
Addition after MrO's answer
MrO nailed it. What I knew was that if a clause pattern-matches a subterm of an argument (or the whole argument), then I obviously need to pass a matching data constructor for that subterm to make the evaluator pick that clause. However, that's not enough. As MrO pointed out, in some cases I also need to pass data constructors for subterms that other clauses (i.e., not just the clause I'm going for) pattern-match, even though the clause at hand doesn't care about them.
To explore this (to me: major new) insight, I tried out the remaining four clauses of joinˡ⁺. The last clause, clause #6, led to another insight.
Here's clause #3. It works pretty much the same as not-okay.
clause₃ : ∀ {l u h} k₄ k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ t₅ →
  joinˡ⁺ {l = l} {u} {suc h} {h} {suc h}
    k₄ (1# , node k₂ t₁ t₃ ∼0) t₅ ∼-
  ≡
  (1# , node k₂ t₁ (node k₄ t₃ t₅ ∼-) ∼+)
-- This does not work:
-- clause₃ k₄ k₂ t₁ t₃ t₅ = {!!}
clause₃ k₄ k₂ t₁ (node k t₃ t₄ bal) t₅ = refl
Clause #4 is more involved.
clause₄ : ∀ {l u h} k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ →
  joinˡ⁺ {l = l} {u} {h} {h} {h}
    k₂ (1# , t₁) t₃ ∼0
  ≡
  (1# , node k₂ t₁ t₃ ∼-)
-- This does not work:
-- clause₄ k₂ t₁ t₃ = {!!}
-- This still doesn't, because of t' (or so I thought):
-- clause₄ k₂ (node k t t′ b) t₃ = {!!}
-- Surprise! This still doesn't, because of b:
-- clause₄ k₂ (node k t (leaf l<u) b) t₃ = {!!}
-- clause₄ k₂ (node k t (node k′ t′′ t′′′ b') b) t₃ = {!!}
clause₄ k₂ (node k t (leaf l<u) ∼0) t₃ = refl
clause₄ k₂ (node k t (leaf l<u) ∼-) t₃ = refl
clause₄ k₂ (node k t (node k′ t′′ t′′′ b') ∼+) t₃ = refl
clause₄ k₂ (node k t (node k′ t′′ t′′′ b') ∼0) t₃ = refl
clause₄ k₂ (node k t (node k′ t′′ t′′′ b') ∼-) t₃ = refl
Clause #5 is analogous to clause #4.
clause₅ : ∀ {l u h} k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ →
  joinˡ⁺ {l = l} {u} {h} {suc h} {suc h}
    k₂ (1# , t₁) t₃ ∼+
  ≡
  (0# , node k₂ t₁ t₃ ∼0)
clause₅ k₂ (node k t (leaf l<u) ∼0) t₃ = refl
clause₅ k₂ (node k t (leaf l<u) ∼-) t₃ = refl
clause₅ k₂ (node k t (node k′ t'′ t′′′ b′) ∼+) t₃ = refl
clause₅ k₂ (node k t (node k′ t'′ t′′′ b′) ∼0) t₃ = refl
clause₅ k₂ (node k t (node k′ t'′ t′′′ b′) ∼-) t₃ = refl
Clause #6 was a bit of a surprise to me. I thought that I needed to pass data constructors wherever any of the clauses required them. But that's not what MrO said. And it shows in this clause:
clause₆ : ∀ {l u h} k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ b →
  joinˡ⁺ {l = l} {u} {h} {h} {h}
    k₂ (0# , t₁) t₃ b
  ≡
  (0# , node k₂ t₁ t₃ b)
clause₆ k₂ t₁ t₃ b = refl
Easier than I thought: no additional data constructors required. Why? I went to read the pattern matching part of the Agda reference:
https://agda.readthedocs.io/en/v2.6.1/language/function-definitions.html#case-trees
I had read it before, but had completely failed to apply what it says. Agda finds the clause to be picked by way of a decision tree, a case tree. To me, it now looks like Agda needs data constructors as long as it hasn't reached a leaf of the case tree, i.e., as long as it hasn't figured out which clause to pick.
For the function at hand, the case tree seems to start with the question: 0# or 1#? At least that would explain clause #6:
If it's 0# then we know that it must be clause #6, no more data constructors required. Clause #6 is the only match for 0#. So, we're at a leaf, our case tree traversal is over.
If it's 1# then we need to do more matching, i.e., move down in the case tree to the next level. There, we need another data constructor to look at. In total, we thus need a data constructor for each visited level of the case tree.
At least this is my current mental model, which seems to be supported by the observations made about joinˡ⁺.
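To double-check this mental model outside of the AVL code, here is a tiny standalone sketch of my own (the names f, stuck and works are purely illustrative). The second clause of f ignores its first argument, but because the first clause matches on that argument, the case tree splits on it first, so an application with a variable in that position stays stuck:
open import Agda.Builtin.Bool
open import Agda.Builtin.Equality

f : Bool → Bool → Bool
f true  true  = true
f _     false = false
f false true  = true

-- Stuck: Agda cannot pick a clause while x is still a variable.
-- stuck : ∀ x → f x false ≡ false
-- stuck x = refl

-- Reduces once we split on x, i.e. once we reach a leaf of the case tree.
works : ∀ x → f x false ≡ false
works true  = refl
works false = refl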
Trying to validate this mental model a little more, I went and modified my copy of the standard library by reversing the order of the six clauses. As Agda builds the case tree by going through the clauses in order and going left to right within each clause, this should give us a much better case tree.
0# vs. 1# would still be the first level of the decision tree, but it would be followed by the outer balance, followed by the inner balance. We wouldn't need to split trees into nodes, except for the now last (previously first) clause, which actually matches on that.
And, indeed, things turn out as expected. Here's what the proofs look like with the reversed order of clauses in my modified standard library.
clause₁′ : ∀ {l u h} k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ b →
  joinˡ⁺ {l = l} {u} {h} {h} {h}
    k₂ (0# , t₁) t₃ b
  ≡
  (0# , node k₂ t₁ t₃ b)
clause₁′ k₂ t₁ t₃ b = refl

clause₂′ : ∀ {l u h} k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ →
  joinˡ⁺ {l = l} {u} {h} {suc h} {suc h}
    k₂ (1# , t₁) t₃ ∼+
  ≡
  (0# , node k₂ t₁ t₃ ∼0)
clause₂′ k₂ t₁ t₃ = refl

clause₃′ : ∀ {l u h} k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ →
  joinˡ⁺ {l = l} {u} {h} {h} {h}
    k₂ (1# , t₁) t₃ ∼0
  ≡
  (1# , node k₂ t₁ t₃ ∼-)
clause₃′ k₂ t₁ t₃ = refl

clause₄′ : ∀ {l u h} k₄ k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ t₅ →
  joinˡ⁺ {l = l} {u} {suc h} {h} {suc h}
    k₄ (1# , node k₂ t₁ t₃ ∼0) t₅ ∼-
  ≡
  (1# , node k₂ t₁ (node k₄ t₃ t₅ ∼-) ∼+)
clause₄′ k₄ k₂ t₁ t₃ t₅ = refl

not-okay′ : ∀ {l u h} k₄ k₂ (t₁ : Tree (const ℕ) _ _ _) t₃ t₅ →
  joinˡ⁺ {l = l} {u} {suc h} {h} {suc h}
    k₄ (1# , node k₂ t₁ t₃ ∼-) t₅ ∼-
  ≡
  (0# , node k₂ t₁ (node k₄ t₃ t₅ ∼0) ∼0)
not-okay′ k₄ k₂ t₁ t₃ t₅ = refl

okay′ :
  ∀ {l u h} k₆ k₂ (t₁ : Tree (const ℕ) _ _ _) k₄ t₃ t₅ t₇ b →
  joinˡ⁺ {l = l} {u} {suc (suc h)} {suc h} {suc (suc h)}
    k₆ (1# , node k₂ t₁ (node {hˡ = h} {suc h} {suc h} k₄ t₃ t₅ b) ∼+) t₇ ∼-
  ≡
  (0# , node k₄ (node k₂ t₁ t₃ (max∼ b)) (node k₆ t₅ t₇ (∼max b)) ∼0)
okay′ k₆ k₂ t₁ k₄ t₃ t₅ t₇ b = refl

In order for Agda to be able to reduce your expression, you need to pattern match on t₃:
not-okay _ _ _ (leaf _) _ = refl
not-okay _ _ _ (node _ _ _ _) _ = refl
My understanding as to why this is needed is the following: joinˡ⁺ is defined by pattern matching on five arguments. For Agda to reduce an application, it needs to know, for each of these five arguments, which constructor it is built from.
In your not-okay function, you consider the quantity joinˡ⁺ {l = l} {u} {suc h} {h} {suc h} k₄ (1# , node k₂ t₁ t₃ ∼-) t₅ ∼-, in which four of the five arguments were specified constructor-wise (1#, node k₂ t₁ t₃ ∼-, ∼- and ∼-), but not t₃, which was the missing piece.
By contrast, in your okay function, you consider the quantity joinˡ⁺ {l = l} {u} {suc (suc h)} {suc h} {suc (suc h)} k₆ (1# , node k₂ t₁ (node {hˡ = h} {suc h} {suc h} k₄ t₃ t₅ b) ∼+) t₇ ∼-, where all five of these elements were already specified.

Related

Abusing instance arguments to mimic tactics

I'm trying to prove some lemmas that are mutually recursive, but unfortunately not structurally recursive, so I have to use Data.Nat.Induction.Acc, resulting in half of my code being dedicated to explicitly mentioning proofs of facts like m ≤ m ⊔ n. Ideally, I'd like to hide these technicalities as much as possible, and at a quick glance instance arguments seem promising (and much more lightweight than going full metaprogramming/reflection). But, alas, I'm stuck on that route.
As a model example, consider some mutually recursive trees:
open import Data.Nat.Base
open import Data.Nat.Properties
open import Relation.Binary.PropositionalEquality using (_≡_; refl; subst; sym)
mutual
  data U : Set where
    U-only : U
    U-with-Vs : (v₁ v₂ : V) → U

  data V : Set where
    V-only : V
    V-with-Us : (u₁ u₂ : U) → V
along with some functions yielding something that's smaller (for the obvious definition of a size), but not structurally smaller:
mutual
  iso-U : U → V
  iso-U U-only = V-only
  iso-U (U-with-Vs v₁ v₂) = V-with-Us (iso-V v₁) (iso-V v₂)

  iso-V : V → U
  iso-V V-only = U-only
  iso-V (V-with-Us u₁ u₂) = U-with-Vs (iso-U u₁) (iso-U u₂)
Now let's define those obvious size measures and prove that iso doesn't change that size:
mutual
  size-U : U → ℕ
  size-U U-only = zero
  size-U (U-with-Vs v₁ v₂) = suc (size-V v₁ ⊔ size-V v₂)

  size-V : V → ℕ
  size-V V-only = zero
  size-V (V-with-Us u₁ u₂) = suc (size-U u₁ ⊔ size-U u₂)

mutual
  size-U-iso-V : ∀ v → size-U (iso-V v) ≡ size-V v
  size-U-iso-V V-only = refl
  size-U-iso-V (V-with-Us u₁ u₂) rewrite size-V-iso-U u₁ | size-V-iso-U u₂ = refl

  size-V-iso-U : ∀ u → size-V (iso-U u) ≡ size-U u
  size-V-iso-U U-only = refl
  size-V-iso-U (U-with-Vs v₁ v₂) rewrite size-U-iso-V v₁ | size-U-iso-V v₂ = refl
Finally we get to write a nonsensical and useless function that still models what I need to do in my real code:
open import Data.Nat.Induction
module Explicit where
  mutual
    count-U : (u : U) → Acc _<_ (size-U u) → ℕ
    count-U U-only _ = zero
    count-U (U-with-Vs v₁ v₂) (acc rec) =
      let ineq  = m≤m⊔n (size-V v₁) (size-V v₂)
          ineq' = subst (_≤ size-V v₁ ⊔ size-V v₂) (sym (size-U-iso-V v₁)) ineq
          r₁    = rec _ (s≤s ineq')
          r₂    = rec _ (s≤s (n≤m⊔n _ _))
      in suc (count-U (iso-V v₁) r₁ + count-V v₂ r₂)

    count-V : (v : V) → Acc _<_ (size-V v) → ℕ
    count-V V-only _ = zero
    count-V (V-with-Us u₁ u₂) (acc rec) =
      let r₁ = rec _ (s≤s (m≤m⊔n _ _))
          r₂ = rec _ (s≤s (n≤m⊔n _ _))
      in suc (count-U u₁ r₁ + count-U u₂ r₂)
This typechecks, but all those r₁s, r₂s and whatever they require in count-U are completely irrelevant to the logic of these functions, and I'd like to get rid of them.
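For reference, here is a small sanity check of my own (not part of the original question; the name example is arbitrary). Placed at the end of the Explicit module, it shows what a call looks like from the outside, using <-wellFounded from Data.Nat.Induction to build the Acc argument at the call site:
  -- the accessibility proof is produced once, at the call site, and the
  -- whole closed term normalises to 1
  example : count-U (U-with-Vs V-only V-only) (<-wellFounded _) ≡ 1
  example = refl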
Let's give it a shot with instance arguments. Here's my attempt:
module Instance where
  instance
    m≤m⊔n' : ∀ {m n} → m ≤ m ⊔ n
    m≤m⊔n' {m} {n} = m≤m⊔n m n

    n≤m⊔n' : ∀ {m n} → n ≤ m ⊔ n
    n≤m⊔n' {m} {n} = n≤m⊔n m n

    acc-rec : ∀ {a z} → ⦃ Acc _<_ z ⦄ → ⦃ a < z ⦄ → Acc _<_ a
    acc-rec ⦃ acc rec ⦄ ⦃ a<z ⦄ = rec _ a<z

  mutual
    count-U : (u : U) → ⦃ Acc _<_ (size-U u) ⦄ → ℕ
    count-U U-only = zero
    count-U (U-with-Vs v₁ v₂) = suc ({! !} + count-V v₂)

    count-V : (v : V) → ⦃ Acc _<_ (size-V v) ⦄ → ℕ
    count-V V-only = zero
    count-V (V-with-Us u₁ u₂) = {! !}
Agda doesn't like it, though: it apparently considers the instance argument to count-U as a candidate, and is not sure which of the two lemmas about ⊔ to use:
Failed to solve the following constraints:
Resolve instance argument
_124
: (v₃ v₄ : V) ⦃ z : Acc _<_ (size-U (U-with-Vs v₃ v₄)) ⦄ →
size-V v₄ < _z_122 (v₁ = v₃) (v₂ = v₄)
Candidates
λ {m} {n} → m≤m⊔n m n : ({m n : ℕ} → m ≤ m ⊔ n)
λ {m} {n} → n≤m⊔n m n : ({m n : ℕ} → n ≤ m ⊔ n)
Resolve instance argument
_123
: (v₃ v₄ : V) ⦃ z : Acc _<_ (size-U (U-with-Vs v₃ v₄)) ⦄ →
Acc _<_ (_z_122 (v₁ = v₃) (v₂ = v₄))
Candidates
_ : Acc _<_ (size-U (U-with-Vs v₁ v₂))
acc-rec : ({a z : ℕ} ⦃ _ : Acc _<_ z ⦄ ⦃ _ : a < z ⦄ → Acc _<_ a)
And even if I leave just a single top-level instance of presumably the right shape
acc-rec : ∀ {m n} → ⦃ Acc _<_ (suc (m ⊔ n)) ⦄ → Acc _<_ n
acc-rec ⦃ acc rec ⦄ = rec _ (s≤s (n≤m⊔n _ _))
Agda would still complain.
I've re-read the section on instance resolution in the Agda docs a few times, but I'm still not sure why it behaves this way.
What am I doing wrong? Can I achieve what I want with instance arguments? Or shall I just go and learn Agda metaprogramming?

Why won't the following Agda code typecheck?

I'm new to Agda and am puzzled by this one.
open import Data.Vec
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Fin hiding (_+_ ; splitAt)
open import Data.Product
open import Relation.Binary.PropositionalEquality
difference : ∀ m (n : Fin m) → ∃ λ o → m ≡ toℕ n + o
difference zero ()
difference (suc m) zero = suc m , refl
difference (suc m) (suc n) with difference m n
difference (suc m) (suc n) | o , p1 = o , cong suc p1
takeFin : ∀ {A : Set} {m : ℕ} (n : Fin m) → Vec A m → Vec A (toℕ n)
takeFin {A} {m = m} n vec with difference m n
... | o , p rewrite p with splitAt (toℕ n) vec
... | xs , _ , _ = xs
The takeFin function gives the error message:
m != lhs of type ℕ
when checking that the type
{m : ℕ} (n : Fin m) (o : ℕ) (p : m ≡ toℕ n + o) (lhs : ℕ) →
lhs ≡ toℕ n + o → {A : Set} (vec : Vec A lhs) → Vec A (toℕ n)
of the generated with function is well-formed
but the following functions do compile
takeFin' : ∀ {A : Set} {m : ℕ} (n : Fin m) → Vec A m → Vec A m
takeFin' {A} {m = m} n vec with difference m n
... | o , p rewrite p with splitAt (toℕ n) vec
... | xs , ys , _ = xs ++ ys
takeFin'' : ∀ {A : Set} {m : ℕ} (n : Fin m) → A → Vec A m → Vec A (toℕ n)
takeFin'' {A} {m = m} n a vec = replicate a
Can anyone help me out?
As new Agda users tend to do, you did complicate matters a lot more than you needed to. What you intend to prove can actually be done in a much simpler way, as follows:
open import Data.Vec
open import Data.Fin
takeFin : ∀ {a} {A : Set a} {m} {n : Fin m} → Vec A m → Vec A (toℕ n)
takeFin {n = zero} (x ∷ v) = []
takeFin {n = suc _} (x ∷ v) = x ∷ takeFin v
You should always try to write simple inductive proofs rather than using unnecessary intermediate lemmas.
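As a quick sanity check of my own (not in the original answer; the name check is arbitrary), this inductive definition computes as expected:
open import Relation.Binary.PropositionalEquality using (_≡_ ; refl)

-- taking "up to index 2" of a length-3 vector keeps the first two elements
check : takeFin {n = suc (suc zero)} (10 ∷ 20 ∷ 30 ∷ []) ≡ 10 ∷ 20 ∷ []
check = refl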
As to why your version does not typecheck (it's not compilation, it's type checking): the reason lies in your rewrite, which is applied to an element of m ≡ toℕ n + o, while your goal is of type Vec A (toℕ n) and does not contain any occurrence of m. What you want to do instead is to transform the type of vec in your context, whereas rewrite only acts on the goal. Here is how I would make it work:
takeFin : ∀ {A : Set} {m} {n : Fin m} → Vec A m → Vec A (toℕ n)
takeFin {m = m} {n} vec with difference m n
... | _ , p = proj₁ (splitAt (toℕ n) (subst (Vec _) p vec))
It works, but as you can see it is far less elegant (and it also requires the difference function that you defined) and, more importantly, it uses subst, which is often discouraged.
As a side note, and mostly for fun, it's possible to make the function a bit more concise and elegant (but less understandable) as follows:
open import Function
takeFin : ∀ {A : Set} {m} {n : Fin m} → Vec A m → Vec A (toℕ n)
takeFin {n = n} = proj₁ ∘ (splitAt (toℕ n)) ∘ (subst (Vec _) (proj₂ (difference _ n)))
This version, while a lot more complicated to read, shows how powerful Agda is at inferring the values of parameters, as only n is given explicitly.

How does one use identity elimination (in agda) to prove Eckmann Hilton for higher dimensional paths in HoTT?

I'm trying to replicate the main lemma in the HoTT book (page 70) for proving the Eckmann Hilton Theorem, only using J (no pattern matching).
It says "But, in general, the two ways of defining horizontal composition agree, α ⋆ β = α ⋆' β, as we can see by induction on α and β and then on the two remaining 1-paths, to reduce everything to reflexivity..."
I'm quite confused as to whether the E type signature is correct: should r' and s have different paths? d won't refine, so I assume there's something wrong with E? I also don't really understand which two paths I'm supposed to induct on to complete the proof. Are they r' and s? If so, I don't understand what the final motives should be. Doesn't reducing β down to r eliminate the need for further induction on 1-paths?
Any answers/solutions, and more importantly, ways of thinking about the problem are welcome.
_⋆≡⋆'_ : {A : Set} → {a b c : A} {p q : a ≡ b} {r' s : b ≡ c} (α : p ≡ q) (β : r' ≡ s) → (α ⋆ β) ≡ (α ⋆' β)
_⋆≡⋆'_ {A} {a} {b} {c} {p} {q} {r'} {s} α β = J D d p q α c r' s β
  where
    D : (p q : a ≡ b) → p ≡ q → Set
    D p q α = (c : A) (r' s : b ≡ c) (β : r' ≡ s) → (α ⋆ β) ≡ (α ⋆' β)

    E : (r' s : b ≡ c) → r' ≡ s → Set
    -- E p q β = (r ⋆ β) ≡ (r ⋆' β)
    E r' s β = (_⋆_ {A} {b = b} {c} {r} {r} {r' = r'} {s = s} r β) ≡ (r ⋆' β)

    e : ((s : b ≡ c) → E s s r)
    e r = r -- this is for testing purposes

    d : ((p : a ≡ b) → D p p r)
    d p c r' s β = {!J E e !}
Below is the rest of the code to get here.
module q where
data _≡_ {A : Set} (a : A) : A → Set where
r : a ≡ a
infix 20 _≡_
J : {A : Set}
→ (D : (x y : A) → (x ≡ y) → Set)
-- → (d : (a : A) → (D a a r ))
→ ((a : A) → (D a a r ))
→ (x y : A)
→ (p : x ≡ y)
------------------------------------
→ D x y p
J D d x .x r = d x
_∙_ : {A : Set} → {x y : A} → (p : x ≡ y) → {z : A} → (q : y ≡ z) → x ≡ z
_∙_ {A} {x} {y} p {z} q = J D d x y p z q
where
D : (x₁ y₁ : A) → x₁ ≡ y₁ → Set
D x y p = (z : A) → (q : y ≡ z) → x ≡ z
d : (z₁ : A) → D z₁ z₁ r
d = λ v z q → q
infixl 40 _∙_
_⁻¹ : {A : Set} {x y : A} → x ≡ y → y ≡ x
-- _⁻¹ {A = A} {x} {y} p = J2 D d x y p
_⁻¹ {A} {x} {y} p = J D d x y p
where
D : (x y : A) → x ≡ y → Set
D x y p = y ≡ x
d : (a : A) → D a a r
d a = r
infixr 50 _⁻¹
iₗ : {A : Set} {x y : A} (p : x ≡ y) → p ≡ r ∙ p
iₗ {A} {x} {y} p = J D d x y p
where
D : (x y : A) → x ≡ y → Set
D x y p = p ≡ r ∙ p
d : (a : A) → D a a r
d a = r
iᵣ : {A : Set} {x y : A} (p : x ≡ y) → p ≡ p ∙ r
iᵣ {A} {x} {y} p = J D d x y p
where
D : (x y : A) → x ≡ y → Set
D x y p = p ≡ p ∙ r
d : (a : A) → D a a r
d a = r
_∙ᵣ_ : {A : Set} → {b c : A} {a : A} {p q : a ≡ b} (α : p ≡ q) (r' : b ≡ c) → p ∙ r' ≡ q ∙ r'
_∙ᵣ_ {A} {b} {c} {a} {p} {q} α r' = J D d b c r' a α
where
D : (b c : A) → b ≡ c → Set
D b c r' = (a : A) {p q : a ≡ b} (α : p ≡ q) → p ∙ r' ≡ q ∙ r'
d : (a : A) → D a a r
d a a' {p} {q} α = iᵣ p ⁻¹ ∙ α ∙ iᵣ q
-- iᵣ == ruₚ in the book
_∙ₗ_ : {A : Set} → {a b : A} (q : a ≡ b) {c : A} {r' s : b ≡ c} (β : r' ≡ s) → q ∙ r' ≡ q ∙ s
_∙ₗ_ {A} {a} {b} q {c} {r'} {s} β = J D d a b q c β
where
D : (a b : A) → a ≡ b → Set
D a b q = (c : A) {r' s : b ≡ c} (β : r' ≡ s) → q ∙ r' ≡ q ∙ s
d : (a : A) → D a a r
d a a' {r'} {s} β = iₗ r' ⁻¹ ∙ β ∙ iₗ s
_⋆_ : {A : Set} → {a b c : A} {p q : a ≡ b} {r' s : b ≡ c} (α : p ≡ q) (β : r' ≡ s) → p ∙ r' ≡ q ∙ s
_⋆_ {A} {q = q} {r' = r'} α β = (α ∙ᵣ r') ∙ (q ∙ₗ β)
_⋆'_ : {A : Set} → {a b c : A} {p q : a ≡ b} {r' s : b ≡ c} (α : p ≡ q) (β : r' ≡ s) → p ∙ r' ≡ q ∙ s
_⋆'_ {A} {p = p} {s = s} α β = (p ∙ₗ β) ∙ (α ∙ᵣ s)
In formalization, based path induction is far more convenient than the two-sided version. With based J, we essentially rewrite in the goal type the right endpoint of a path to the left one and the path itself to reflexivity. With non-based J, we rewrite both endpoints to a "fresh" opaque variable, hence we lose the "connection" of the left endpoint to other constructions in scope (since the left endpoint may occur in other types in scope).
I haven't looked at the exact issue with your definition, but I note that with based J it's almost trivial.
data _≡_ {A : Set} (a : A) : A → Set where
  r : a ≡ a

infix 20 _≡_
J : {A : Set}{x : A}(P : ∀ y → x ≡ y → Set) → P x r → ∀ {y} p → P y p
J {A} {x} P pr r = pr
tr : {A : Set}(P : A → Set){x y : A} → x ≡ y → P x → P y
tr P p px = J (λ y _ → P y) px p
_∙_ : {A : Set} → {x y z : A} → (p : x ≡ y) → (q : y ≡ z) → x ≡ z
_∙_ {A} {x} {y} {z} p q = tr (x ≡_) q p
ap : {A B : Set}(f : A → B){x y : A} → x ≡ y → f x ≡ f y
ap f {x} {y} p = tr (λ y → f x ≡ f y) p r
infixl 40 _∙_
_∙ᵣ_ : {A : Set} → {b c : A} {a : A} {p q : a ≡ b} (α : p ≡ q) (r' : b ≡ c) → p ∙ r' ≡ q ∙ r'
α ∙ᵣ r' = ap (_∙ r') α
_∙ₗ_ : {A : Set} → {a b : A} (q : a ≡ b) {c : A} {r' s : b ≡ c} (β : r' ≡ s) → q ∙ r' ≡ q ∙ s
q ∙ₗ β = ap (q ∙_) β
_⋆_ : {A : Set} → {a b c : A} {p q : a ≡ b} {r' s : b ≡ c} (α : p ≡ q) (β : r' ≡ s) → p ∙ r' ≡ q ∙ s
_⋆_ {q = q} {r'} α β = (α ∙ᵣ r') ∙ (q ∙ₗ β)
_⋆'_ : {A : Set} → {a b c : A} {p q : a ≡ b} {r' s : b ≡ c} (α : p ≡ q) (β : r' ≡ s) → p ∙ r' ≡ q ∙ s
_⋆'_ {A} {p = p} {s = s} α β = (p ∙ₗ β) ∙ (α ∙ᵣ s)
_⋆≡⋆'_ : {A : Set} → {a b c : A} {p q : a ≡ b} {r' s : b ≡ c} (α : p ≡ q) (β : r' ≡ s) → (α ⋆ β) ≡ (α ⋆' β)
_⋆≡⋆'_ {A} {a} {b} {c} {p} {q} {r'} {s} α β =
  J (λ s β → (α ⋆ β) ≡ (α ⋆' β))
    (J (λ q α → (α ⋆ r) ≡ (α ⋆' r))
       r
       α)  -- induction on α
    β      -- induction on β
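For comparison, and not part of the original answer: if one is willing to use pattern matching instead of J, both horizontal compositions compute as soon as the two 2-paths are the constructor r, and the statement is proved by r itself (the name ⋆≡⋆'-by-matching is mine, and the sketch assumes the definitions just above):
⋆≡⋆'-by-matching : {A : Set} {a b c : A} {p q : a ≡ b} {r' s : b ≡ c}
                   (α : p ≡ q) (β : r' ≡ s) → (α ⋆ β) ≡ (α ⋆' β)
-- matching both 2-paths with r makes both sides reduce to r
⋆≡⋆'-by-matching r r = r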

How is Agda inferring the implicit argument to `Vec.foldl`?

foldl : ∀ {a b} {A : Set a} (B : ℕ → Set b) {m} →
        (∀ {n} → B n → A → B (suc n)) →
        B zero →
        Vec A m → B m
foldl b _⊕_ n []       = n
foldl b _⊕_ n (x ∷ xs) = foldl (λ n → b (suc n)) _⊕_ (n ⊕ x) xs
When translating the above function to Lean, I was shocked to find out that its true form is actually like...
def foldl : ∀ (P : ℕ → Type a) {n : nat}
    (f : ∀ {n}, P n → α → P (n+1)) (s : P 0)
    (l : Vec α n), P n
| P 0     f s (nil _)     := s
| P (n+1) f s (cons x xs) := foldl (fun n, P (n+1)) (λ n, #f (n+1)) (#f 0 s x) xs
I find it really impressive that Agda is able to infer the implicit argument to f correctly. How is it doing that?
foldl : ∀ {a b} {A : Set a} (B : ℕ → Set b) {m} →
        (∀ {n} → B n → A → B (suc n)) →
        B zero →
        Vec A m → B m
foldl b _⊕_ n []       = n
foldl b _⊕_ n (x ∷ xs) = foldl (λ n → b (suc n)) _⊕_ (_⊕_ {0} n x) xs
If I pass it 0 explicitly as in the Lean version, I get a hint as to the answer. What is going on is that Agda is doing the same thing as in the Lean version, namely wrapping the implicit arg so it is suc'd.
This is surprising, as I thought that implicit arguments just mean that Agda should provide them on its own. I did not think it would change a function when it is passed as an argument.
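To see that wrapping concretely, here is a simplified sketch of my own (universe levels dropped, the name foldl′ is arbitrary) with the hidden index written out by hand; as far as I can tell this is what Agda elaborates the recursive call to:
open import Data.Nat using (ℕ ; zero ; suc)
open import Data.Vec using (Vec ; [] ; _∷_)

foldl′ : {A : Set} (B : ℕ → Set) {m : ℕ} →
         (∀ {n} → B n → A → B (suc n)) →
         B zero → Vec A m → B m
foldl′ b _⊕_ z []       = z
foldl′ b _⊕_ z (x ∷ xs) =
  -- the operator is eta-expanded and its implicit index shifted by one
  foldl′ (λ n → b (suc n)) (λ {n} → _⊕_ {suc n}) (_⊕_ {zero} z x) xs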

Mutually recursive proofs

I have a standard untyped lambda calculus definition and some operations and I'm trying to show a property related to the associativity of substitutions. Unfortunately, I have to show a lot of code to make things clear.
open import Data.Nat renaming (ℕ to Nat) using (zero ; suc ; _+_)
open import Data.Vec.Properties
open import Data.Vec
  using (Vec ; [] ; _∷_ ; map ; lookup ; allFin ; tabulate ; tail ; head)
open import Data.Fin using (Fin ; zero ; suc)
open import Function using (_∘_ ; _$_)
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
data WellScopedTm : Nat → Set where
  var : (n : Nat) → Fin n → WellScopedTm n
  lam : (n : Nat) → WellScopedTm (suc n) → WellScopedTm n
  app : (n : Nat) → WellScopedTm n → WellScopedTm n → WellScopedTm n
↑_ : ∀ n → Vec (Fin (suc n)) n
↑ _ = tabulate suc
rename : ∀ {n m} (t : WellScopedTm n) (is : Vec (Fin m) n) → WellScopedTm m
rename {_} {m} (var _ i) is = var m (lookup i is)
rename {n} {m} (lam _ t) is = lam m (rename t (zero ∷ map suc is))
rename {n} {m} (app _ t u) is = app m (rename t is) (rename u is)
-- q
q : (n : Nat) → WellScopedTm (suc n)
q n = var (suc n) zero
-- id
idSub : (n : Nat) → Vec (WellScopedTm n) n
idSub n = tabulate (var n)
-- weakening (derived)
lift : {n : Nat} → WellScopedTm n → WellScopedTm (suc n)
lift t = rename t (↑ _)
-- p
projSub : (n : Nat) → Vec (WellScopedTm (suc n)) n
projSub = map lift ∘ idSub -- or tabulate (lift ∘ (var n))
-- sub
sub : ∀ {n m} → WellScopedTm n → Vec (WellScopedTm m) n → WellScopedTm m
sub (var _ i) ts = lookup i ts
sub (lam _ t) ts = lam _ (sub t (var _ zero ∷ map lift ts))
sub (app _ t u) ts = app _ (sub t ts) (sub u ts)
-- composition of homs
comp : ∀ {m n k} → Vec (WellScopedTm n) k → Vec (WellScopedTm m) n → Vec (WellScopedTm m) k
comp [] _ = []
comp (t ∷ ts) us = sub t us ∷ comp ts us
Specifically, I want to show that
compAssoc : ∀ {m n k p} (ts : Vec (WellScopedTm n) k) (us : Vec (WellScopedTm m) n)
            (vs : Vec (WellScopedTm p) m) → comp (comp ts us) vs ≡ comp ts (comp us vs)

compInSub : ∀ {m n k} (t : WellScopedTm n) (ts : Vec (WellScopedTm k) n)
            (us : Vec (WellScopedTm m) k) → sub t (comp ts us) ≡ sub (sub t ts) us
The proofs I came up with rely on each other; the proof of associativity is this:
compAssoc [] us vs = refl
compAssoc (x ∷ ts) us vs = sym $
  trans (cong (λ d → d ∷ comp ts (comp us vs)) (compInSub x us vs))
        (sym (cong (_∷_ (sub (sub x us) vs)) (compAssoc ts us vs)))
However, in the lambda case of the second property, I have to use associativity in the two open goals and the termination checker complains.
compInSub (var _ zero) (v ∷ ts) us = refl
compInSub (var _ (suc x)) (v ∷ ts) us = compInSub (var _ x) ts us
compInSub (app n t u) ts us =
  trans (cong (λ z → app _ z (sub u (comp ts us))) (compInSub t ts us))
        (cong (app _ (sub (sub t ts) us)) (compInSub u ts us))
compInSub (lam n t) ts us = sym $
  begin
    lam _ (sub (sub t (q _ ∷ map lift ts)) (q _ ∷ map lift us))
  ≡⟨ cong (lam _) (sym $ compInSub t (q _ ∷ map lift ts) (q _ ∷ map lift us)) ⟩
    lam _ (sub t $ q _ ∷ comp (map lift ts) (q _ ∷ map lift us))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp x _)) (mlift=xs∘p ts) ⟩
    lam _ (sub t $ q _ ∷ comp (comp ts (projSub _)) (q _ ∷ map lift us))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp _ (q _ ∷ x))) (mlift=xs∘p us) ⟩
    lam _ (sub t $ q _ ∷ comp (comp ts (projSub _)) (q _ ∷ comp us (projSub _)))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) {!!} ⟩ -- compAssoc ts (projSub _) (q _ ∷ comp us (projSub _))
    lam _ (sub t $ q _ ∷ comp ts (comp (projSub _) (q _ ∷ comp us (projSub _))))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ comp ts x)) (p∘x∷ts (q _) (comp us (projSub _))) ⟩
    lam _ (sub t $ q _ ∷ comp ts (comp us (projSub _)))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) (sym {!!}) ⟩ -- compAssoc ts us (projSub _)
    lam _ (sub t $ q _ ∷ comp (comp ts us) (projSub _))
  ≡⟨ cong (λ x → lam _ (sub t $ q _ ∷ x)) (sym (mlift=xs∘p (comp ts us))) ⟩
    lam _ (sub t $ q _ ∷ map lift (comp ts us))
  ∎
Is the termination checker right to disallow the calls to compAssoc that I left as comments? If not, are there any remedies?
Lastly, here are some postulates so that the code typechecks:
postulate p∘x∷ts : ∀ {n k : Nat} (t : WellScopedTm n) (ts : Vec (WellScopedTm n) k) → comp (projSub k) (t ∷ ts) ≡ ts
postulate mlift=xs∘p : ∀ {n m : Nat} (xs : Vec (WellScopedTm n) m) → map lift xs ≡ comp xs (projSub n)
