Why does Cubical Agda choose the particular two-component homogeneous path composition operator it does?

In the implementation of the prelude for Cubical Agda, there is a definition of three-component path composition:
_∙∙_∙∙_ : w ≡ x → x ≡ y → y ≡ z → w ≡ z
This definition feels reasonably natural and clean to me. But then, to get a two-component path composition operator, there are three choices: one for each argument that can be fixed to refl.
The standard ∙ is obtained by fixing the first argument, and another implementation (∙') is given that fixes the third argument. There is then a proof that they are equal. But the version fixing the second argument is not discussed.
It seems to me that the second-argument version (call it ∘) has a nice property:
sym (p1 ∘ p2) is equal to sym p2 ∘ sym p1 definitionally. This seems like it could reduce the bookkeeping in some proofs.
Are there reasons why this version is not the standard version? Are there other computational properties that are better with the standard version?
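For reference, a minimal sketch of the middle-argument version next to the library's operators (the module name is made up; only _∙∙_∙∙_ and refl from the cubical prelude are assumed):
{-# OPTIONS --cubical #-}
module PathCompSketch where

open import Cubical.Foundations.Prelude

private
  variable
    ℓ : Level
    A : Type ℓ
    x y z : A

-- the middle-argument specialisation the question calls ∘
_∘_ : x ≡ y → y ≡ z → x ≡ z
p ∘ q = p ∙∙ refl ∙∙ q

-- for comparison, the prelude's operators arise from the other two choices:
--   p ∙  q = refl ∙∙ p ∙∙ q   -- first argument fixed to refl
--   p ∙' q = p ∙∙ q ∙∙ refl   -- last argument fixed to refl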

Related

Pointwise Equality ≗ vs Propositional Equality ≡ in Agda

When trying to prove a property about functions on lists, I had to show that the property is preserved by map over a list. Luckily, I found this useful congruence proof in Agda's standard library (here):
map-cong : ∀ {f g : A → B} → f ≗ g → map f ≗ map g
map-cong f≗g [] = refl
map-cong f≗g (x ∷ xs) = cong₂ _∷_ (f≗g x) (map-cong f≗g xs)
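(For illustration, a use of map-cong with a pointwise argument could look like the sketch below; it assumes map-cong lives in Data.List.Properties and +-identityʳ in Data.Nat.Properties.)
open import Data.Nat using (ℕ; _+_)
open import Data.Nat.Properties using (+-identityʳ)
open import Data.List using (map)
open import Data.List.Properties using (map-cong)
open import Relation.Binary.PropositionalEquality using (_≗_)
open import Function using (id)

-- mapping _+ 0 over a list is pointwise the same as mapping id
map-+0 : map (λ n → n + 0) ≗ map id
map-+0 = map-cong +-identityʳ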
I am trying to do my proof with respect to propositional equality ≡ but map-cong proves pointwise equality ≗. I have several questions here:
I noticed that map-cong was previously implemented via ≡ but was generalized (I found this issue). This suggests that ≗ is a generalization of ≡. Is there a way to conclude propositional equality from pointwise equality? Something like a function f ≗ g → f ≡ g?
When looking at the implementation, point-wise equality seems to be defined as propositional equality for functions via the extensionality axiom: Two functions are equal if they yield the same results for all inputs. This is emphasized by the above definition of map-cong which indeed matches not only on the proof f≗g but also on possible input arguments. Is my understanding correct? Is there any documentation or explanation on the implementation of ≗? I found the implementation in the standard library to be undocumented and rather intricate, split across several files and using multiple levels of abstraction.
(Is there any way to conveniently browse the standard library other than browsing the source code (e.g., via grep or clicking through GitHub)?)
No, there isn't: pointwise equality does not imply propositional equality unless you postulate function extensionality.
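That said, if you are willing to postulate function extensionality, you can convert one into the other. A minimal sketch, assuming a recent standard-library layout:
open import Level using (0ℓ)
open import Axiom.Extensionality.Propositional using (Extensionality)
open import Relation.Binary.PropositionalEquality using (_≡_; _≗_)

module _ (ext : Extensionality 0ℓ 0ℓ) {A B : Set} {f g : A → B} where

  -- under the postulated axiom, a pointwise proof yields a propositional one
  ≗⇒≡ : f ≗ g → f ≡ g
  ≗⇒≡ f≗g = ext f≗g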
That's correct. The definition of ≗ is a bit involved because it reuses existing notions, but you can ask Agda to normalise it (C-c C-n in Emacs) and see that it's just:
λ {A} {B} f g → (x : A) → f x ≡ g x
I typically use http://agda.github.io/agda-stdlib/

How to recover intermediate computation results from a function using "with"?

I wrote a function on the natural numbers that uses the operator _<?_ with the with-abstraction.
open import Data.Maybe
open import Data.Nat
open import Data.Nat.Properties
open import Relation.Binary.PropositionalEquality
open import Relation.Nullary
fun : ℕ → ℕ → Maybe ℕ
fun x y with x <? y
... | yes _ = nothing
... | no _ = just y
I would like to prove that if the result of computing with fun is nothing then the original two values (x and y) fulfill x < y.
So far, all my attempts fall short of proving the property:
prop : ∀ (x y)
→ fun x y ≡ nothing
→ x < y
prop x y with fun x y
... | just _ = λ()
... | nothing = λ{refl → ?} -- from-yes (x <? y)}
-- This fails because the pattern matching is incomplete,
-- but it shouldn't. There are no other cases
prop' : ∀ (x y)
→ fun x y ≡ nothing
→ x < y
prop' x y with fun x y | x <? y
... | nothing | yes x<y = λ{refl → x<y}
... | just _ | no _ = λ()
--... | _ | _ = ?
In general, I've found that working with the with-abstraction is painful. This is probably because with and | hide some magic in the background. I would like to understand what with and | really do, but the "Technical details" currently escape my understanding. Do you know where to look to understand how to interpret them?
Concrete solution
You need to case-split on the same element on which you case-split in your function:
prop : ∀ x y → fun x y ≡ nothing → x < y
prop x y _ with x <? y
... | yes p = p
In older versions of Agda, you would have had to write the following:
prop-old : ∀ x y → fun x y ≡ nothing → x < y
prop-old x y _ with x <? y
prop-old _ _ refl | yes p = p
prop-old _ _ () | no _
But now you are able to completely omit a case when it leads to a direct contradiction, which in this case is that nothing and just something can never be equal.
Detailed explanation
To understand how with works, you first need to understand how definitional equality is used in Agda to reduce goals. A definitional equality binds a function call to its associated expression, depending on the structure of its input. In Agda, this is easily seen in the use of the equals sign in the definition of the different cases of a function (although, since Agda builds a tree of cases, some definitional equalities might not hold in some cases, but let's forget this for now).
Let us consider the following definition of the addition over naturals:
_+_ : ℕ → ℕ → ℕ
zero + b = b
(suc a) + b = suc (a + b)
This definition provides two definitional equalities that bind zero + b to b and (suc a) + b to suc (a + b). The good thing about definitional equalities (as opposed to propositional equalities) is that Agda automatically uses them to reduce goals whenever possible. This means that, for instance, if a later goal contains the element zero + p for some p, then Agda will automatically reduce it to p.
To allow Agda to do such a reduction, which is fundamental in most cases, it needs to know which of these two equalities can be exploited, which means that a case split on the first argument of this addition has to be made in any further proof about addition for a reduction to be possible (except for composite proofs based on other proofs which already use such case splits).
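As a tiny illustration (assuming ℕ and propositional equality from the standard library, and the _+_ just defined): the left identity holds by refl because the first clause fires definitionally, while the right identity does not, because p + zero is stuck until p is case-split on.
open import Data.Nat using (ℕ; zero; suc)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

+-identityˡ′ : ∀ p → zero + p ≡ p
+-identityˡ′ p = refl            -- the clause zero + b = b applies directly

-- rejected as-is: p + zero does not reduce without a case split on p
-- +-identityʳ′ : ∀ p → p + zero ≡ p
-- +-identityʳ′ p = refl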
When using with, you basically add extra definitional equalities that depend on the structure of the additional element. Given that, it only makes sense that you need to case-split on said element when proving things about such a function, so that Agda can once again make use of these definitional equalities.
Let us take your example and apply this reasoning to it, first without the recent ability to omit impossible cases. You need to prove the following statement:
prop-old : ∀ x y → fun x y ≡ nothing → x < y
Introducing parameters in the context, you write the following line:
prop-old x y p = ?
Having written that line, you need to provide a proof of x < y using the elements in the context. x and y are just naturals, so you expect p to hold enough information for this result to be provable. But, in this case, p is just of type fun x y ≡ nothing, which does not give you enough information. However, this type contains a call to the function fun, so there is hope! Looking at the definition of fun, we can see that it yields two definitional equalities, which depend on the structure of x <? y. This means that adding this element to the proof by using with once more will allow Agda to make use of these equalities. This leads to the following code:
prop-old : ∀ x y → fun x y ≡ nothing → x < y
prop-old x y p with x <? y
prop-old _ _ p | yes q = ?
prop-old _ _ p | no q = ?
At that point, not only did Agda case-split on x <? y, but it also reduced the goal because it is able, in both cases, to use a specific definitional equality of fun. Let us take a closer look at both cases:
In the yes q case, p is now of type nothing ≡ nothing and q is of type x < y which is exactly what you want to prove, which means the goal is simply solved by:
prop-old _ _ p | yes q = q
In the no q case, something more interesting happens, which is somewhat harder to understand. After reduction, p is now of type just y ≡ nothing because Agda could use the second definitional equality of fun. Since _≡_ is a data type, it is possible to case-split on p, which basically asks Agda: "Look at this data type and give me all the possible constructors for an element of type just y ≡ nothing". At first, Agda only finds one candidate constructor, refl, but this constructor only builds an element of a type where both sides of the equality are the same, which is not the case here by definition, because just and nothing are two distinct constructors of the same data type, Maybe. Agda then concludes that there is no constructor that could ever build an element of such a type, hence this case is actually impossible, which leads to Agda replacing p with the empty pattern () and dismissing this case. This line is thus simply:
prop-old _ _ () | no _
In more recent versions of Agda, as I explained earlier, some of these steps are done directly by Agda, which allows us to omit impossible cases outright when the emptiness of a pattern can be deduced behind the scenes, which leads to the prettier:
prop : ∀ x y → fun x y ≡ nothing → x < y
prop x y _ with x <? y
... | yes p = p
But it is the same process, just done a bit more automatically. Hopefully, these elements will be of some use in your journey towards understanding Agda.

Agda: How to apply +-right-identity to match types

Somewhere in my code, I have a hole which expects a natural number; let's call it n for our purposes. I have a function which returns me an n + 0.
Data.Nat.Properties.Simple contains a proof +-right-identity of the following type:
+-right-identity : ∀ n → n + 0 ≡ n
I'm not familiar enough with the Agda syntax and stdlib yet to know how to easily use this proof to convince the type checker that I can use my value.
More generally, how do I use an equality x ≡ y to transform a given x into y?
I found an answer in this thread: Agda Type-Checking and Commutativity / Associativity of +
For future readers, the keyword I was looking for was rewrite.
By appending rewrite +-right-identity n to the clause (before the = sign), Agda "learns" about this equality.
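A minimal sketch of the idiom (fix-length is a made-up name; +-identityʳ is the current standard-library name for +-right-identity):
open import Data.Nat using (ℕ; _+_)
open import Data.Nat.Properties using (+-identityʳ)
open import Data.Vec using (Vec)
open import Relation.Binary.PropositionalEquality using (subst)

-- rewrite turns the goal Vec A (n + 0) → Vec A n into Vec A n → Vec A n
fix-length : ∀ {A : Set} n → Vec A (n + 0) → Vec A n
fix-length n rewrite +-identityʳ n = λ xs → xs

-- the same coercion written with subst, answering the more general question
fix-length′ : ∀ {A : Set} n → Vec A (n + 0) → Vec A n
fix-length′ {A = A} n xs = subst (Vec A) (+-identityʳ n) xs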

Provable coherence in OTT

I'm playing with observational type theory.
Here is equality of π-types (π is the lowercase Π, i.e. π A B is the code for (x : A) -> B x) defined mutually with coercions:
π A₁ B₁ ≃ π A₂ B₂ = σ (A₂ ≃ A₁) λ P -> π _ λ x -> B₁ (coerce P x) ≃ B₂ x
and equality of functions defined accordingly (σ is the lowercase Σ):
_≅_ {A = π A₁ B₁} {π A₂ B₂} f₁ f₂ = σ (A₂ ≃ A₁) λ P -> π _ λ x -> f₁ (coerce P x) ≅ f₂ x
So instead of "equal functions map equal inputs to equal outputs" we have "equal functions map definitionally equal inputs to equal outputs".
In this setting coherence
coerce : ∀ {α β} {A : Univ α} {B : Univ β} -> ⟦ A ≃ B ⟧ᵀ -> ⟦ A ⟧ᵀ -> ⟦ B ⟧ᵀ
coherence : ∀ {α β} {A : Univ α} {B : Univ β}
-> (P : ⟦ A ≃ B ⟧ᵀ) -> (x : ⟦ A ⟧ᵀ) -> ⟦ x ≅ coerce P x ⟧ᵀ
(Univ 0 is Prop, Univ (suc α) is Type α)
is provable. The only thing I needed to postulate is
postulate ≃-refl : ∀ {α} -> (A : Univ α) -> ⟦ A ≃ A ⟧ᵀ
But we can tweak equality to handle A ≃ A as a special case (I think, trustMe needs a friend _≟_ : ∀ {α} {A : Set α} (x y : A) -> Maybe (x ≡ y)).
We still need to postulate something to define subst and other stuff.
Did I miss something? Do we lose any irrelevance? It seems suspicious to mention type equality in the definition of equality of functions. Do we lose much by restricting the inputs of equal functions to be definitionally equal? Is there anything good about having strongly normalizing coherence, or does it not matter, since it's computationally irrelevant anyway?
The code (I ignored positivity, termination and cumulativity issues altogether).
Firstly, thanks for asking about Observational Type Theory. Secondly, what you've done here does seem to hang together, even though it has things in different places from where Thorsten Altenkirch, Wouter Swierstra and I put them in our version of the story. Thirdly, it's no surprise (at least not to me) that coherence is derivable, leaving reflexivity the only postulate. That's true of our OTT as well, and Wouter did the proofs in Agda 1, back when we wrote that paper. Proof irrelevance and the shortness of life meant I didn't port his proofs to Agda 2.
If you've missed anything, it's lurking in your remark
We still need to postulate something to define subst and other stuff.
If you have some P : X -> Set, some a, b : X and some q : a = b, you expect to get a function in P a -> P b. The "equal functions take equal inputs to equal outputs" formulation gives you that, as refl P : P = P, so from q, we can deduce P a = P b. Your "equal functions take a given input to equal outputs" formulation does not allow you to let q bridge the gap from a to b.
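To see the first formulation at work, here is the same derivation spelled out in plain Agda with propositional equality (not the question's OTT universes): cong plays the role of "equal functions take equal inputs to equal outputs", and coercion along the resulting type equality gives subst. The names coerce′ and subst′ are made up.
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)

-- coercion along an equality of types
coerce′ : {S T : Set} → S ≡ T → S → T
coerce′ refl s = s

-- subst derived from cong plus coercion
subst′ : {X : Set} (P : X → Set) {a b : X} → a ≡ b → P a → P b
subst′ P q = coerce′ (cong P q)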
In the presence of refl and subst, "two equal inputs" amounts to the same thing as "one input used in two places". It seems to me that you've moved the work into whatever else you need to get subst. Depending on how lazy your definition of coerce is (and that's how you get proof irrelevance), you will need only a postulate.
With your particular formulation, you might even get away with a homogeneous value equality. If you're fixing type gaps with coercions rather than equations, you might save yourself some trouble (and maybe get rid of that equation on the domain type in function equality). Of course, in that case, you'd need to think about how to replace the statement of coherence.
We tried quite hard to keep coercion out of the definition of equality, to retain some sort of symmetry, and to keep type equations out of value equations, mostly to have less to think about at one go. It's interesting to see that at least some parts of the construction might get easier with "a thing and its coercion" replacing "two equal things".

Is it possible to get hold of free theorems as propositional equalities?

"Free theorems" in the sense of Wadler's paper "Theorems for Free!" are equations about certain values are derived based only on their type. So that, for example,
f : {A : Set} → List A → List A
automatically satisfies
f . map g = map g . f
Can I get my hands on an Agda term, then, of the following type:
(f : {A : Set} → List A → List A) {B C : Set} (g : B → C) (xs : List B)
→ f (map g xs) ≡ map g (f xs)
or if so/if not, can I do something more/less general?
I'm aware of the existence of the Lightweight Free Theorems library but I don't think it does what I want (or if it does, I don't understand it well enough to do it).
(An example use case is that I have a functor F : Set → Set and would like to prove that a polymorphic function F A × F B → F (A × B) is automatically a natural transformation.)
No, the type theory on which Agda is built is not strong enough to prove this. This would require a feature called "internalized parametricity"; see the work by Guilhem:
Jean-Philippe Bernardy and Guilhem Moulin: A Computational Interpretation of Parametricity (2012)
Guilhem Moulin: Pure Type Systems with an Internalized Parametricity Theorem (2013)
This would allow you for example to prove that all inhabitants of "(A : Set) → A → A" are equal to the (polymorphic) identity function. As far as I know, this has not been implemented in any language yet.
Chantal Keller and Marc Lasson developed a tactic for Coq that generates the parametricity relation corresponding to a (closed) type and proves that this type's inhabitants satisfy the generated relation. You can find more details about this work on Keller's website.
Now in Agda's case, it is in theory possible to do the same sort of work by implementing the tactic in pure Agda using a technique called reflection.
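So in current Agda, if you need that particular equation, the practical options are to prove it for the concrete f at hand or to assume it. A sketch of the latter (list-free-theorem is a made-up name, and it is an axiom, not a proof):
open import Data.List using (List; map)
open import Relation.Binary.PropositionalEquality using (_≡_)

postulate
  list-free-theorem :
    (f : {A : Set} → List A → List A) {B C : Set} (g : B → C) (xs : List B)
    → f (map g xs) ≡ map g (f xs)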
