Is this formulation of Modulo a Set?

The Cubical Agda library defines a Modulo type like this:
data Modulo (k : ℕ) : Type₀ where
  embed    : (n : ℕ) → Modulo k
  pre-step : NonZero k → (n : ℕ) → embed n ≡ embed (k + n)
Is this a Set?
Hand-wavingly, I can see that any path is a composition of refls and pre-steps, taking the form embed n ≡ embed (m * k + n); and
since _+_ is associative and 0 +_ ≡ id, the structure of how refls and pre-steps are combined doesn't matter; but how would that be formalized?

Based on András Kovács's comment, it turns out Modulo k is indeed an h-set and there is a proof in the cubical library, I just missed it :)
The proof basically goes like this:
Modulo 0 is isomorphic to ℕ since NonZero 0 is empty (so we only have embed values).
Modulo (suc k) is isomorphic to Fin (suc k), basically by applying enough pre-steps until we reach embed n with n < suc k. This is a long-winded technical proof taking up its own module.
And then of course both ℕ and Fin (suc k) are discrete, hence h-sets themselves.
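To sketch how these pieces combine (my own reconstruction, not the library's code: the two isomorphisms are taken as module parameters here, and names such as isSetℕ and isSetFin are the usual cubical-library ones):

open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Isomorphism
open import Cubical.Data.Nat
open import Cubical.Data.Fin
open import Cubical.HITs.Modulo.Base

-- Sketch only: given the two isomorphisms from the argument above,
-- transport the h-set proofs of ℕ and Fin (suc k) back along them.
module _ (iso₀ : Iso (Modulo 0) ℕ)
         (isoₛ : ∀ k → Iso (Modulo (suc k)) (Fin (suc k))) where

  isSetModulo : ∀ k → isSet (Modulo k)
  isSetModulo zero    = subst isSet (sym (isoToPath iso₀)) isSetℕ
  isSetModulo (suc k) = subst isSet (sym (isoToPath (isoₛ k))) isSetFin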

How to prove element addition is injective for a cubical finite multi set?

I'm playing around with the type of finite multisets as defined in the cubical standard library here:
https://github.com/agda/cubical/blob/0d272ccbf6f3b142d1b723cead28209444bc896f/Cubical/HITs/FiniteMultiset/Base.agda#L15
data FMSet (A : Type ℓ) : Type ℓ where
  []    : FMSet A
  _∷_   : (x : A) → (xs : FMSet A) → FMSet A
  comm  : ∀ x y xs → x ∷ y ∷ xs ≡ y ∷ x ∷ xs
  trunc : isSet (FMSet A)
I was able to reproduce the proofs for count extensionality, and in one of my lemmas I showed that you can remove an element from both sides of an equality and keep the equality.
It was similar to this one: https://github.com/agda/cubical/blob/0d272ccbf6f3b142d1b723cead28209444bc896f/Cubical/HITs/FiniteMultiset/Properties.agda#L183
remove1-≡-lemma : ∀ {a} {x} xs → a ≡ x → xs ≡ remove1 a (x ∷ xs)
remove1-≡-lemma {a} {x} xs a≡x with discA a x
... | yes _ = refl
... | no a≢x = ⊥.rec (a≢x a≡x)
My proofs weren't using the same syntax, but in the core library's syntax it was
cons-path-lemma : ∀ {x} xs ys → (x ∷ xs) ≡ (x ∷ ys) → xs ≡ ys
where the proof composes remove1-≡-lemma on both sides of the path obtained by mapping remove1 x over the argument path.
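Spelled out (roughly — my actual proof differs in syntax), the composition looks like this, with _∙_, cong and sym from the cubical prelude; it still lives in the decidable-equality setting since it uses remove1:

cons-path-lemma : ∀ {x} xs ys → (x ∷ xs) ≡ (x ∷ ys) → xs ≡ ys
cons-path-lemma {x} xs ys p =
  remove1-≡-lemma xs refl ∙ cong (remove1 x) p ∙ sym (remove1-≡-lemma ys refl)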
This requires the type of the values to have decidable equality, as remove1 doesn't make sense without it. But the lemma itself doesn't mention decidable equality, so I thought I would try to prove it without having that as a hypothesis. It's now a week later and I'm at my wits' end, because this seems so 'obvious' but is so stubborn to prove.
I'm thinking that my intuition about this being provable may be coming from my classical math background, and so it doesn't follow constructively/continuously.
So my question is: Is this provable with no assumptions on the element type? If so, what would the general structure of the proof look like? I have had trouble getting proofs that induct over the two FMSets simultaneously to work (I'm mostly guessing when trying to get the paths to line up as necessary). If it is not provable with no assumptions, is it possible to show that it is equivalent in some form to the necessary assumptions?
I can't offer a proof, but here is an argument for why it should be provable without assuming decidability. I think finite multisets can be represented as functions Fin n -> A, and equality between multisets f and g is given by a permutation phi : Fin n ~ Fin n (that is, an invertible function on Fin n) such that f o phi = g. Now
(a :: f) 0 = a
(a :: f) (suc i) = f i
If phi : Fin (suc n) ~ Fin (suc n) proves that a :: f = a :: g, you can construct a psi : Fin n ~ Fin n which proves that f = g: if phi 0 = 0 then psi i is determined by phi (suc i); otherwise you obtain psi by sending phi^-1 0 to phi 0. Crucially, this case analysis is on Fin n, not on A.
I think representing the permutation group by swapping adjacent elements is just an inconvenient representation for this problem.
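To make the case analysis concrete, here is a rough sketch of the restriction step in plain Agda with the standard library (the names are mine; it only builds the underlying function of psi, not the proof that it is invertible and satisfies f o psi = g, and it needs a recent Agda for the with ... in syntax):

open import Data.Nat using (ℕ; suc)
open import Data.Fin using (Fin; zero; suc)
open import Data.Empty using (⊥; ⊥-elim)
open import Relation.Binary.PropositionalEquality using (_≡_; sym; trans)

module _ {n : ℕ} (φ : Fin (suc n) → Fin (suc n))
         (φ-inj : ∀ {i j} → φ i ≡ φ j → i ≡ j) where

  zero≢suc : ∀ {m} {k : Fin m} → zero ≡ suc k → ⊥
  zero≢suc ()

  -- ψ j follows φ (suc j), unless that lands on zero; in that case j is
  -- sent to wherever zero went (φ zero), which cannot be zero again
  -- because φ is injective.
  ψ : Fin n → Fin n
  ψ j with φ (suc j) in eq₁
  ... | suc k = k
  ... | zero with φ zero in eq₂
  ...   | suc k = k
  ...   | zero  = ⊥-elim (zero≢suc (φ-inj (trans eq₂ (sym eq₁))))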

How are the equational reasoning operators used in practice?

The Agda Standard Library exports some operators that allow you to write proofs in a manner similar to what you would do on paper, or how it is taught in the Haskell community. While you can write "conventional" Agda proofs in a somewhat systematic manner by refining a Goal using with abstractions, rewrites or helper lemmas as needed, it isn't really clear to me how a proof using the equality reasoning primitives "comes to be".
That is, while you can find examples about what these proofs look like when they are finished and type-check here and there, these already worked examples don't show you how they are developed in a systematic step-by-step (maybe hole-driven) manner.
How is it done in practice? Do people "refactor" an already existing proof? Do you try to "burn the candle from both sides" by starting with the left-hand and right-hand sides of the initial Goal and a hole in the middle?
Furthermore, the Agda documentation states that if the equality reasoning primitives are in scope, "then Auto will do equality reasoning using these constructs". What does that mean?
I would appreciate it if someone could point me in the right direction, or even post an example of how they develop these kinds of proofs step-by-step, what questions they ask themselves as they go through it, where they put the holes and so on. Thanks!
I think it would be more helpful for you to look at the definitions behind equational reasoning for propositional equality (the ≡-Reasoning module). The main point is that it is just a nicer way to write chains of transitivity, letting the user see the actual expressions in the code rather than the proof evidence, which is not that easy to read.
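For instance, with propositional equality a finished chain is nothing more than nested applications of trans (a small standalone illustration of my own):

open import Relation.Binary.PropositionalEquality
open ≡-Reasoning

-- The chain below elaborates (up to the library's wrapper type) to
-- trans p (trans q refl).
chain : {A : Set} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
chain {x = x} {y = y} {z = z} p q = begin
  x ≡⟨ p ⟩
  y ≡⟨ q ⟩
  z ∎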
The way I go about building a proof using equational reasoning (for any setoid) is this, with natural numbers as the example.
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
data ℕ : Set where
  zero : ℕ
  succ : ℕ → ℕ
_+_ : ℕ → ℕ → ℕ
m + zero = m
m + succ n = succ (m + n)
Let's take commutativity as an example.
This is how I start with the goals.
comm+ : ∀ m n → m + n ≡ n + m
comm+ m zero = {!!}
comm+ m (succ n) =
  begin
    succ (m + n)
  ≡⟨ {!!} ⟩
    succ n + m
  ∎
Now I can see the original expression and the goal, and the proof I need sits between the brackets.
I work only on the expressions, leaving the proof objects untouched, and add the steps I think should work.
comm+ : ∀ m n → m + n ≡ n + m
comm+ m zero = {!!}
comm+ m (succ n) =
  begin
    succ (m + n)
  ≡⟨ {!!} ⟩
    succ (n + m)
  ≡⟨ {!!} ⟩
    succ n + m
  ∎
Once I think I have a proof, I work on the proof objects that justify my steps.
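For completeness, here is one way the finished proof could look; the helper lemmas 0+ and succ+ are mine, needed because _+_ above recurses on its second argument:

0+ : ∀ n → zero + n ≡ n
0+ zero     = refl
0+ (succ n) = cong succ (0+ n)

succ+ : ∀ m n → succ m + n ≡ succ (m + n)
succ+ m zero     = refl
succ+ m (succ n) = cong succ (succ+ m n)

comm+ : ∀ m n → m + n ≡ n + m
comm+ m zero = sym (0+ m)
comm+ m (succ n) =
  begin
    succ (m + n)
  ≡⟨ cong succ (comm+ m n) ⟩
    succ (n + m)
  ≡⟨ sym (succ+ n m) ⟩
    succ n + m
  ∎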
Regarding the auto-tactic, you should not bother with it in my opinion; it hasn't been worked on for a while.

Why is Left Identity over "Addition" a trivial proof but Right Identity is not?

I am just learning Agda, and I do not understand why, when I try to prove identity over addition, Left Identity is a trivial proof.
left+identity : ∀ n -> (zero + n) ≡ n
left+identity n = refl
But it is not true for Right Identity.
right+identity : ∀ n -> (n + zero) ≡ n
right+identity zero = refl
right+identity (suc n) = cong suc (right+identity n)
I can not understand the reason. Please explain. Thanks.
The problem is how dependently typed theories deal with equality. Usually, the definition of addition is:
_+_ : Nat -> Nat -> Nat
zero + m = m -- (1)
(suc n) + m = suc (n + m) -- (2)
Notice that equation one implies left identity. When you have:
forall n -> 0 + n = n
Agda's type checker can use equation (1) of addition to verify that the equality holds. Remember, the propositional equality constructor (refl) has the type
refl : x == x
So, when you use refl as a proof of the left identity, Agda will try to reduce both sides of the equality (normalize them) and check if they are indeed equal. Using the definition of addition, left identity is immediate, by equation (1).
But for the right identity this does not hold by definition. Note that when we have
n + 0 == n
Agda's type checker cannot use the addition equations to check that this equality indeed holds. The only way to prove this equality is using induction (or, if you prefer, recursion).
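Just to stress that this asymmetry comes from the defining equations and nothing else, here is a hypothetical addition _+'_ (my own illustration, not standard) that recurses on its second argument; with it, the situation flips and it is the right identity that holds by refl:

open import Data.Nat using (ℕ; zero; suc)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

_+'_ : ℕ → ℕ → ℕ
n +' zero  = n              -- (1')
n +' suc m = suc (n +' m)   -- (2')

right+'identity : ∀ n → (n +' zero) ≡ n
right+'identity n = refl    -- immediate by equation (1')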
Hope this helps.

Non-trivial negation in Agda

All negations, i.e. conclusions of the form A -> Bottom, that I've seen in Agda came from absurd pattern matches. Are there any other ways to obtain a negation in Agda? Are there any other ways in dependent type theory?
Type theories usually don't have a notion of pattern matching (and by extension absurd patterns) and yet they can prove negation of the sort you are describing.
First of all, we'll have to look at data types. Without pattern matching, you characterise them by introduction and elimination rules. Introduction rules are basically constructors, they tell you how to construct a value of that type. On the other hand, elimination rules tell you how to use a value of that type. There are also associated computation rules (β-reduction and sometimes η-reduction), but we needn't deal with those now.
Elimination rules look a bit like folds (at least for positive types). For example, here's what an elimination rule for natural numbers would look like in Agda:
ℕ-elim : ∀ {p} (P : ℕ → Set p)
         (s : ∀ {n} → P n → P (suc n))
         (z : P 0) →
         ∀ n → P n
ℕ-elim P s z zero = z
ℕ-elim P s z (suc n) = s (ℕ-elim P s z n)
While Agda does have introduction rules (constructors), it doesn't have elimination rules. Instead, it has pattern matching and as you can see above, we can recover the elimination rule with it. However, we can also do the converse: we can simulate pattern matching using elimination rules. Truth be told, it's usually much more inconvenient, but it can be done - the elimination rule mentioned above basically does pattern matching on the outermost constructor and if we need to go deeper, we can just apply the elimination rule again.
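As a tiny illustration (mine, not from the original answer), the predecessor function can be written with ℕ-elim instead of pattern matching:

pred : ℕ → ℕ
pred = ℕ-elim (λ _ → ℕ) (λ {n} _ → n) zero
-- pred zero    reduces to zero
-- pred (suc n) reduces to n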
So, we can sort of simulate pattern matching. What about absurd patterns? As an example, we'll take the fourth Peano axiom:
peano : ∀ n → suc n ≡ zero → ⊥
However, there's a trick involved (and in fact, it's quite crucial; in Martin-Löf's type theory without universes you cannot do it without the trick, see this paper). We need to construct a function that will return two different types based on its arguments:
Nope : (m n : ℕ) → Set
Nope (suc _) zero = ⊥
Nope _ _ = ⊤
If m ≡ n, we should be able to prove that Nope m n holds (is inhabited). And indeed, this is quite easy:
nope : ∀ m n → m ≡ n → Nope m n
nope zero ._ refl = _
nope (suc m) ._ refl = _
You can now sort of see where this is heading. If we apply nope to the "bad" proof that suc n ≡ zero, Nope (suc n) zero will reduce to ⊥ and we'll get the desired function:
peano : ∀ n → suc n ≡ zero → ⊥
peano _ p = nope _ _ p
Now, you might notice that I cheated a bit. I used pattern matching even though I said earlier that these type theories don't come with pattern matching. I'll remedy that for the next example, but I suggest you try to prove peano without pattern matching on the numbers (use ℕ-elim given above); if you really want a hardcore version, do it without pattern matching on equality as well and use this eliminator instead:
J : ∀ {a p} {A : Set a} (P : ∀ (x : A) y → x ≡ y → Set p)
    (f : ∀ x → P x x refl) → ∀ x y → (p : x ≡ y) → P x y p
J P f x .x refl = f x
Another popular absurd pattern is on something of type Fin 0 (and from this example, you'll get the idea of how other such absurd matches could be simulated). So, first of all, we'll need an eliminator for Fin.
Fin-elim : ∀ {p} (P : ∀ n → Fin n → Set p)
           (s : ∀ {n} {fn : Fin n} → P n fn → P (suc n) (fsuc fn))
           (z : ∀ {n} → P (suc n) fzero) →
           ∀ {n} (fn : Fin n) → P n fn
Fin-elim P s z fzero = z
Fin-elim P s z (fsuc x) = s (Fin-elim P s z x)
Yes, the type is really ugly. Anyways, we're going to use the same trick, but this time, we only need to depend on one number:
Nope : ℕ → Set
Nope = ℕ-elim (λ _ → Set) (λ _ → ⊤) ⊥
Note that this is equivalent to:
Nope zero = ⊥
Nope (suc _) = ⊤
Now, notice that both cases for the eliminator above (that is, the s and z cases) return something of type P (suc n) _. If we choose P = λ n _ → Nope n, we'll have to return something of type ⊤ for both cases - but that's easy! And indeed, it was easy:
bad : Fin 0 → ⊥
bad = Fin-elim (λ n _ → Nope n) (λ _ → _) _
The last thing you might be wondering about is how we get a value of any type from ⊥ (known as ex falso quodlibet in logic). In Agda, we obviously have:
⊥-elim : ∀ {w} {Whatever : Set w} → ⊥ → Whatever
⊥-elim ()
But it turns out that this is precisely the eliminator for ⊥, so it is given when defining this type in type theory.
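For instance, combining bad with ⊥-elim gives an inhabitant of any type from a Fin 0 (a small illustration of my own):

anything : ∀ {w} {Whatever : Set w} → Fin 0 → Whatever
anything i = ⊥-elim (bad i)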
Are you asking for something like
open import Relation.Nullary
→-transposition : {P Q : Set} → (P → Q) → ¬ Q → ¬ P
→-transposition p→q ¬q p = ¬q (p→q p)
?

Using multiple EqReasoning instantiations conveniently

Is there a way to conveniently use multiple instantiations of EqReasoning where the underlying Setoid is not necessarily semantic equality (i.e. ≡-Reasoning cannot be used)? The reason that ≡-Reasoning is convenient is that the type argument is implicit and it uniquely determines the Setoid in use by automatically selecting semantic equality. When using arbitrary Setoids there is no such unique selection. However a number of structures provide a canonical way to lift a Setoid:
Data.Maybe and Data.Covec have a setoid member.
Data.Vec.Equality.Equality provides enough definitions to write a canonical Setoid lifting for Vec as well. Interestingly, there is also a slightly different equality available at Relation.Binary.Vec.Pointwise, but it does not provide a direct lifting either, albeit implementing all the necessary bits.
Data.List ships a Setoid lifting in Relation.Binary.List.Pointwise.
Data.Container also knows how to lift a Setoid.
By using any of these structures, one automatically gets to work with multiple Setoids even if one started out with one. When proofs use these structures (multiple of them in a single proof), it becomes difficult to write down the proof, because EqReasoning must be instantiated for all of them even though each particular Setoid is kind of obvious. This can be done by renaming begin_, _≈⟨_⟩_ and _∎, but I don't consider this renaming convenient.
Consider, for example, a proof in the Setoid of Maybe where a sequence of arguments needs to be wrapped in Data.Maybe.Eq.just (think cong just), or a proof in an arbitrary Setoid that temporarily needs to wrap things in a just constructor, exploiting its injectivity.
Normally, the only way Agda can pick something for you is when it is uniquely determined by the context. In the case of EqReasoning, there isn't usually enough information to pin down the Setoid, even worse actually: you could have two different Setoids over the same Carrier and _≈_ (consider for example two definitionally unequal proofs of transitivity in the isEquivalence field).
However, Agda does allow a special form of implicit arguments, which can be filled in as long as there is only one value of the desired type. These are known as instance arguments (think instance as in Haskell type class instances).
To demonstrate roughly how this works:
postulate
  A : Set
  a : A
Now, instance arguments are wrapped in double curly braces {{}}:
elem : {{x : A}} → A
elem {{x}} = x
If we decide to later use elem somewhere, Agda will check for any values of type A in scope and if there's only one of them, it'll fill that one in for {{x : A}}. If we added:
postulate
  b : A
Agda will now complain that:
Resolve instance argument _x_7 : A. Candidates: [a : A, b : A]
Because Agda already allows you to perform computations on type level, instance arguments are deliberately limited in what they can do, namely Agda won't perform recursive search to fill them in. Consider for example:
eq : ... → IsEquivalence _≈_ → IsEquivalence (Eq _≈_)
where Eq is Data.Maybe.Eq mentioned in your question. When you then require Agda to fill in an instance argument of a type IsEquivalence (Eq _≈_), it won't try to find something of type IsEquivalence _≈_ and apply eq to it.
With that out of the way, let's take a look at what could work. However, bear in mind that all this stands on unification and as such you might need to push it in the right direction here and there (and if the types you are dealing with become complex, unification might require you to give it so many directions that it won't be worth it in the end).
Personally, I find instance arguments a bit fragile and I usually avoid them (and from a quick check, it would seem that so does the standard library), but your experience might vary.
Anyways, here we go. I constructed a (totally nonsensical) example to demonstrate how to do it. Some boilerplate first:
open import Data.Maybe
open import Data.Nat
open import Relation.Binary
import Relation.Binary.EqReasoning as EqR
To make this example self-contained, I wrote some kind of Setoid with natural numbers as its carrier:
data _≡ℕ_ : ℕ → ℕ → Set where
  z≡z : 0 ≡ℕ 0
  s≡s : ∀ {m n} → m ≡ℕ n → suc m ≡ℕ suc n

ℕ-setoid : Setoid _ _
ℕ-setoid = record
  { _≈_           = _≡ℕ_
  ; isEquivalence = record
    { refl  = refl
    ; sym   = sym
    ; trans = trans
    }
  }
  where
  refl : Reflexive _≡ℕ_
  refl {zero}  = z≡z
  refl {suc _} = s≡s refl

  sym : Symmetric _≡ℕ_
  sym z≡z     = z≡z
  sym (s≡s p) = s≡s (sym p)

  trans : Transitive _≡ℕ_
  trans z≡z     q       = q
  trans (s≡s p) (s≡s q) = s≡s (trans p q)
Now, the EqReasoning module is parametrized over a Setoid, so usually you do something like this:
open EqR ℕ-setoid
However, we'd like to have the Setoid parameter to be implicit (instance) rather than explicit, so we define and open a dummy module:
open module Dummy {c ℓ} {{s : Setoid c ℓ}} = EqR s
And we can write this simple proof:
idʳ : ∀ n → n ≡ℕ (n + 0)
idʳ 0 = z≡z
idʳ (suc n) = begin
  suc n       ≈⟨ s≡s (idʳ n) ⟩
  suc (n + 0) ∎
Notice that we never had to specify ℕ-setoid, the instance argument picked it up because it was the only type-correct value.
Now, let's spice it up a bit. We'll add Data.Maybe.setoid into the mix. Again, because instance arguments do not perform a recursive search, we'll have to define the setoid ourselves:
Maybeℕ-setoid = setoid ℕ-setoid
_≡M_ = Setoid._≈_ Maybeℕ-setoid
I'm going to postulate a few stupid things just to demonstrate that Agda indeed picks the correct setoids:
postulate
  comm : ∀ m n → (m + n) ≡ℕ (n + m)
  eq0  : ∀ n → n ≡ℕ 0
  eq∅  : just 0 ≡M nothing
lem : ∀ n → just (n + 0) ≡M nothing
lem n = begin
  just (n + 0) ≈⟨ just
    (begin
      n + 0 ≈⟨ comm n 0 ⟩
      n     ≈⟨ eq0 n ⟩
      0     ∎
    ) ⟩
  just 0       ≈⟨ eq∅ ⟩
  nothing      ∎
I figured out an alternative to the proposed solution using instance arguments that slightly bends the requirements, but fits my purpose. The major burden in the question was having to explicitly open EqReasoning multiple times and especially having to invent new names for the contained symbols. A slight improvement would be to pass the correct Setoid once per relation proof, in other words passing it to begin_ or _∎ somehow. Then we could make the Setoid implicit for all the other functions!
import Relation.Binary.EqReasoning as EqR
open import Relation.Binary using (Setoid)
open import Relation.Binary.PropositionalEquality using (_≡_)

module ExplicitEqR where
  infix  1 begin⟨_⟩_
  infixr 2 _≈⟨_⟩_ _≡⟨_⟩_
  infix  2 _∎

  begin⟨_⟩_ : ∀ {c l} (X : Setoid c l) → {x y : Setoid.Carrier X} → EqR._IsRelatedTo_ X x y → Setoid._≈_ X x y
  begin⟨_⟩_ X p = EqR.begin_ X p

  _∎ : ∀ {c l} {X : Setoid c l} → (x : Setoid.Carrier X) → EqR._IsRelatedTo_ X x x
  _∎ {X = X} = EqR._∎ X

  _≈⟨_⟩_ : ∀ {c l} {X : Setoid c l} → (x : Setoid.Carrier X) → {y z : Setoid.Carrier X} → Setoid._≈_ X x y → EqR._IsRelatedTo_ X y z → EqR._IsRelatedTo_ X x z
  _≈⟨_⟩_ {X = X} = EqR._≈⟨_⟩_ X

  _≡⟨_⟩_ : ∀ {c l} {X : Setoid c l} → (x : Setoid.Carrier X) → {y z : Setoid.Carrier X} → x ≡ y → EqR._IsRelatedTo_ X y z → EqR._IsRelatedTo_ X x z
  _≡⟨_⟩_ {X = X} = EqR._≡⟨_⟩_ X
Reusing the nice example from Vitus' answer, we can write it as:
lem : ∀ n → just (n + 0) ≡M nothing
lem n = begin⟨ Data.Maybe.setoid ℕ-setoid ⟩
  just (n + 0) ≈⟨ just
    (begin⟨ ℕ-setoid ⟩
      n + 0 ≈⟨ comm n 0 ⟩
      n     ≈⟨ eq0 n ⟩
      0     ∎
    ) ⟩
  just 0       ≈⟨ eq∅ ⟩
  nothing      ∎
  where open ExplicitEqR
One still has to mention the Setoids in use, which is the price of avoiding the instance arguments presented by Vitus, but the technique makes it significantly more convenient.

Resources