In an attempt to define skew heaps in Lean and prove some results, I have defined a type for trees together with a fusion operation:
inductive tree : Type
| lf : tree
| nd : tree -> nat -> tree -> tree
def fusion : tree -> tree -> tree
| lf t2 := t2
| t1 lf := t1
| (nd l1 x1 r1) (nd l2 x2 r2) :=
if x1 <= x2
then nd (fusion r1 (nd l2 x2 r2)) x1 l1
else nd (fusion (nd l1 x1 r1) r2) x2 l2
Then, even for an extremely simple result such as
theorem fusion_lf : ∀ (t : tree), fusion lf t = t := sorry
I'm stuck. I really have no clue how to start writing this proof. If I start like this:
begin
intro t,
induction t with g x d,
refl,
end
I can use refl for the case where t is lf, but not if it is a nd.
I'm a bit at a loss, since in Agda it is really easy. If I define this:
data tree : Set where
  lf : tree
  nd : tree -> ℕ -> tree -> tree
fusion : tree -> tree -> tree
fusion lf t2 = t2
fusion t1 lf = t1
fusion (nd l1 x1 r1) (nd l2 x2 r2) with x1 ≤? x2
... | yes _ = nd (fusion r1 (nd l2 x2 r2)) x1 l1
... | no _ = nd (fusion (nd l1 x1 r1) r2) x2 l2
then the previous result is obtained directly with a refl:
fusion_lf : ∀ t -> fusion lf t ≡ t
fusion_lf t = refl
What have I missed?
This proof works.
theorem fusion_lf : ∀ (t : tree), fusion lf t = t :=
λ t, by cases t; simp [fusion]
If you try #print fusion.equations._eqn_1 or #print fusion.equations._eqn_2 and so on, you can see the lemmas that simp [fusion] will use. The case splits are not exactly the same as the case splits in the pattern matching, because the case splits in the pattern matching actually duplicate the case lf lf. This is why I needed to do cases t. Usually the equation lemmas are definitional equalities, but this time they are not, and honestly I don't know why.
I have a function that uses rewrite to satisfy the Agda type checker. I thought that I had a reasonably good grasp of how to deal with the resulting "vertical bars" in proofs about such functions. And yet, I fail completely at dealing with these bars in my seemingly simple case.
Here are the imports and my function, step. The rewrites make Agda see that n is equal to n + 0 and that suc (acc + n) is equal to acc + suc n, respectively.
module Repro where
open import Relation.Binary.PropositionalEquality as P using (_≡_)
open import Data.Nat
open import Data.Nat.DivMod
open import Data.Nat.DivMod.Core
open import Data.Nat.Properties
open import Agda.Builtin.Nat using () renaming (mod-helper to modₕ)
step : (acc d n : ℕ) → modₕ acc (acc + n) d n ≤ acc + n
step zero d n rewrite P.sym (+-identityʳ n) = a[modₕ]n<n n (suc d) 0
step (suc acc) d n rewrite P.sym (+-suc acc n) = a[modₕ]n<n acc (suc d) (suc n)
Now for the proof, which pattern matches on acc, just like the function. Here's the zero case:
step-ok : ∀ (acc d n : ℕ) → step acc d n ≡ a[modₕ]n<n acc d n
step-ok zero d n with n | P.sym (+-identityʳ n)
step-ok zero d n | .(n + 0) | P.refl = ?
At this point, Agda tells me I'm not sure if there should be a case for the constructor P.refl, because I get stuck when trying to solve the following unification problems (inferred index ≟ expected index): w ≟ w + 0 [...]
I am also stuck in the second case, the suc acc case, albeit in a different way:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n)
step-ok (suc acc) d n | .(acc + suc n) | P.refl = ?
Here, Agda says suc (acc + n) != w of type ℕ when checking that the type [...] of the generated with function is well-formed
Update after Sassa NF's response
I followed Sassa NF's advice and reformulated my function with P.subst instead of rewrite. I.e., I changed my right-hand side from being about n + 0 to being about n, instead of conversely changing the goal from being about n to being about n + 0:
step′ : (acc d n : ℕ) → modₕ acc (acc + n) d n ≤ acc + n
step′ zero d n = P.subst (λ # → modₕ 0 # d # ≤ #) (+-identityʳ n) (a[modₕ]n<n n (suc d) 0)
step′ (suc acc) d n = P.subst (λ # → modₕ (suc acc) # d n ≤ #) (+-suc acc n) (a[modₕ]n<n acc (suc d) (suc n))
During the proof, the P.subst in the function definition needs to be eliminated, which can be done with a with construct:
step-ok′ : ∀ (acc d n : ℕ) → step′ acc d n ≡ a[modₕ]n<n acc d n
step-ok′ zero d n with n + 0 | +-identityʳ n
... | .n | P.refl = P.refl
step-ok′ (suc acc) d n with acc + suc n | +-suc acc n
... | .(suc (acc + n)) | P.refl = P.refl
So, yay! I just finished my very first Agda proof involving a with.
Some progress on the original problem
My guess would be that my first issue is a unification issue during dependent pattern matching: there is no substitution that makes n identical to n + 0. More generally, in situations where one thing is a strict subterm of the other, I suppose we may run into unification trouble. So maybe using with to match n against n + 0 was asking for trouble.
My second issue seems to be what the Agda language reference calls an ill-typed with-abstraction. According to the reference, this "happens when you abstract over a term that appears in the type of a subterm of the goal or argument types." The culprit seems to be the type of the goal's subterm a[modₕ]n<n (suc acc) d n, which is modₕ [...] ≤ (suc acc) + n, which contains the subterm I abstract over, (suc acc) + n.
It looks like this is usually resolved by additionally abstracting over the part of the goal that has the offending type. And, indeed, the following makes the error message go away:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n) | a[modₕ]n<n (suc acc) d n
... | .(acc + suc n) | P.refl | rhs = {!!}
So far so good. Let's now introduce P.inspect to capture the rhs substitution:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n) | a[modₕ]n<n (suc acc) d n | P.inspect (a[modₕ]n<n (suc acc) d) n
... | .(acc + suc n) | P.refl | rhs | P.[ rhs-eq ] = {!!}
Unfortunately, this leads to something like the original error: w != suc (acc + n) of type ℕ when checking that the type [...] of the generated with function is well-formed
One day later
Of course I'd run into the same ill-typed with-abstraction again! After all, the whole point of P.inspect is to preserve a[modₕ]n<n (suc acc) d n, so that it can construct the term a[modₕ]n<n (suc acc) d n ≡ rhs. However, the preserved a[modₕ]n<n (suc acc) d n of course still has its original type, modₕ [...] ≤ (suc acc) + n, whereas rhs has the modified type modₕ [...] ≤ acc + suc n. That's what's causing trouble now.
I guess one solution would be to use P.subst to change the type of the term we inspect. And, indeed, the following works, even though it is hilariously convoluted:
step-ok (suc acc) d n with suc (acc + n) | P.sym (+-suc acc n) | a[modₕ]n<n (suc acc) d n | P.inspect (λ n → P.subst (λ # → modₕ (suc acc) # d n ≤ #) (P.sym (+-suc acc n)) (a[modₕ]n<n (suc acc) d n)) n
... | .(acc + suc n) | P.refl | rhs | P.[ rhs-eq ] rewrite +-suc acc n = rhs-eq
So, yay again! I managed to fix my original second issue - basically by using P.subst in the proof instead of in the function definition. It seems, though, that using P.subst in the function definition as per Sassa NF's guidance is preferable, as it leads to much more concise code.
The unification issue is still a little mysterious to me, but on the positive side, I unexpectedly learned about the benefits of irrelevance on top of everything.
I'm accepting Sassa NF's response, as it put me on the right track towards a solution.
Your use of P.refl indicates some misunderstanding about the role of _≡_.
There is no magic in that type. It is just a dependent type with a single constructor. Proving that some x ≡ y resolves to P.refl does not tell Agda anything new about x and y: it only tells Agda that you managed to produce a witness of the type _≡_. This is the reason it cannot tell n and .(n + 0) are the same thing, or that suc (acc + n) is the same as .(acc + suc n). So both of the errors you see are really the same.
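For reference, this is essentially how the standard library defines it (ignoring universe polymorphism): a single-constructor data type whose only inhabitant lives at type x ≡ x.
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x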
Now, what rewrite is for.
You cannot state an equality between an element of C x and an element of C y for a dependent type C _: C x and C y are different types. Equality is defined only for elements of the same type, so there is no way to even express the idea that an element of type C x is comparable to an element of type C y.
There is, however, an axiom of induction, which allows you to produce elements of type C y if you have an element of type C x and an element of type x ≡ y. Note there is no magic in the type _≡_ - that is, you can define your own type, construct such a function, and Agda will be satisfied:
induction : {A : Set} {C : (x y : A) -> x ≡ y -> Set}
            (x y : A) (p : x ≡ y) ->
            ((x : A) -> C x x refl) -> C x y p
induction x .x refl f = f x
Or a simplified version that follows from the induction axiom:
transport : {A : Set} {C : A -> Set} (x y : A) -> x ≡ y -> C x -> C y
transport x .x refl cx = cx
What this means in practice is that you get a proof for something - for example, A x ≡ A x - and then transport this proof along the equality x ≡ y to get a proof A x ≡ A y. This usually requires specifying the type explicitly, in this case {C = λ y → A x ≡ A y}, and providing the x, the y and the C x. As such, it is a very cumbersome procedure, although learners will benefit from doing these steps.
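For example, here is a small self-contained sketch of such a transport along +-identityʳ n : n + 0 ≡ n (the helper shift and its toy statement are mine, purely for illustration):
open import Data.Nat
open import Data.Nat.Properties using (+-identityʳ)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

transport : {A : Set} {C : A -> Set} (x y : A) -> x ≡ y -> C x -> C y
transport x .x refl cx = cx

-- turn a statement about n + 0 into the same statement about n
shift : ∀ n -> n + 0 ≤ n + 0 -> n ≤ n
shift n p = transport {C = λ m -> m ≤ m} (n + 0) n (+-identityʳ n) p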
rewrite then is a syntactic mechanism that rewrites the types of the terms known before the rewrite, so that such transport is not needed after that. Because it is syntactic, it does interpret the type _≡_ in a special way (so if you define your own type, you need to tell Agda you are using a different type as equality). Rewriting types is not "telling" Agda that some types are equal. It just literally replaces occurrences of x in type signatures with y, so now you only need to construct things with y and refl.
Having said all that, you can see why it works for step. There rewrite P.sym ... literally replaced all occurrences of n with n + 0, including the return type of the function, so now it is modₕ acc (acc + (n + 0)) d (n + 0) ≤ acc + (n + 0). Then constructing a value of that type just works.
Then step-ok didn't work, because you only pattern-matched values. Nothing tells Agda that n and (n + 0) are the same thing. But rewrite will. Or you could use a function like the transport above.
While studying well-foundedness, I wanted to see how different designs behave. For example, for a type:
data _<_ (x : Nat) : Nat -> Set where
  <-b : x < (suc x)
  <-s : (y : Nat) -> x < y -> x < (suc y)
well-foundedness is easy to demonstrate. But if a similar type is defined differently:
data _<_ : Nat -> Nat -> Set where
  z-< : (m : Nat) -> zero < (suc m)
  s<s : (m n : Nat) -> m < n -> (suc m) < (suc n)
It is obvious that in both cases the descending chain is not infinite, but in the second case well-foundedness is not easy to demonstrate: it is not easy to show that (∀ y → y < x → Acc _<_ y) is inhabited for a given x.
Are there some principles that help choose the designs like the first in preference to the designs like the second?
It's not impossibly hard to prove well-foundedness of the second definition; it just requires extra theorems. Here, relying on decidability of _==_ for Nat, we can construct a new _<_ proof for the case (suc y) != x, and rewrite the target type so that the solution for the problem known to decrease in size serves as the solution for suc y.
-- trying to express well-foundedness is tricky, because of how x < y is defined:
-- since both x and y decrease in the inductive step case, need special effort to
-- prove when the induction stops - when no more constructors are available
-- is-eq? (decidable equality on Nat) and neq (from y < x and suc y != x,
-- derive suc y < x) are the auxiliary lemmas; their definitions are not shown
<-Well-founded : Well-founded Nat _<_
<-Well-founded x = acc (aux x) where
  aux : (x y : Nat) -> y < x -> Acc _<_ y
  aux zero y ()
  aux x zero z-< = acc λ { _ () }
  aux (suc x) (suc y) (s<s y<x) with is-eq? (suc y) x
  ... | no sy!=x = aux x (suc y) (neq y<x sy!=x)
  ... | yes sy==x rewrite sy==x = <-Well-founded x
The first definition is "canonical" in a sense, while the second one is not. In Agda, every inductive type has a subterm relation which is well-founded and transitive, although not necessarily total, decidable or proof-irrelevant. For W-types, it's the following:
open import Data.Product
open import Data.Sum
open import Relation.Binary.PropositionalEquality
data W (S : Set)(P : S → Set) : Set where
  lim : ∀ s → (P s → W S P) → W S P
_<_ : ∀ {S P} → W S P → W S P → Set
a < lim s f = ∃ λ p → a ≡ f p ⊎ a < f p
If we define Nat as a W-type, then the generic _<_ is the same as the first definition. The first definition establishes a subterm relation even if we have no idea about the constructors of Nat. The second definition is only a subterm relation because we know that zero is reachable from every suc n. If we added an extra zero' : Nat constructor, then this would not be the case anymore.
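To make that concrete, here is a minimal sketch of Nat as a W-type (the names NatW, zeroW and sucW are mine). Unfolding the generic _<_ at sucW n gives ∃ λ p → a ≡ n ⊎ a < n, which is exactly the <-b/<-s shape of the first definition:
open import Data.Bool using (Bool; true; false; if_then_else_)
open import Data.Empty using (⊥)
open import Data.Unit using (⊤)

-- true tags a node with one recursive position (suc), false with none (zero)
NatW : Set
NatW = W Bool (λ b → if b then ⊤ else ⊥)

zeroW : NatW
zeroW = lim false (λ ())

sucW : NatW → NatW
sucW n = lim true (λ _ → n)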
In the book Types and Programming Languages by B. Pierce, the author introduces a small language in order to illustrate different concepts used throughout the book.
The language is the following:
t::=
true
false
if t then t else t
v::=
true
false
There are three reduction rules:

if true then t2 else t3 → t2

if false then t2 else t3 → t3

              t1 → t1'
----------------------------------------------
if t1 then t2 else t3 → if t1' then t2 else t3
I would like to prove that every normal form is a value.
I use the following definition for the normal form:
Definition normal_form (t:term) :=
~(exists t', step t t').
I'm stuck because at one point, I have something that looks like:
~ ~(exists t : term, ...)
and I don't see how I can infer
exists t : term, ...
since we are in an intuitionistic logic.
Here is the whole proof:
Inductive term : Set :=
| true : term
| false : term
| ifthenelse : term -> term -> term -> term.
Definition normal_form (t:term) :=
~(exists t', step t t').
Inductive is_value : term -> Prop :=
| vtrue : is_value true
| vfalse : is_value false.
Lemma normal_form_implies_value : forall t, normal_form t -> is_value t.
Proof.
intro.
induction t.
intros.
apply vtrue.
intros.
apply vfalse.
intros.
unfold normal_form in H.
destruct t1.
unfold not in H.
assert (exists t' : term, step(ifthenelse true t2 t3) t').
exists t2.
apply eiftrue.
apply H in H0.
contradiction.
assert (exists t' : term, step(ifthenelse false t2 t3) t').
exists t3.
apply eiffalse.
apply H in H0.
contradiction.
assert(~(is_value (ifthenelse t1_1 t1_2 t1_3))).
intro.
inversion H0.
assert(~(normal_form(ifthenelse t1_1 t1_2 t1_3))).
intro.
apply IHt1 in H1.
contradiction.
unfold normal_form in H1.
unfold not in H1.
Should I use another definition of normal form? Is it possible to finish the proof without any classical axiom?
One interesting lemma to prove is the inversion lemma stating that if ifthenelse b l r is in normal form then so are b, l and r.
Lemma normal_form_ifthenelse (b l r : term) :
normal_form (ifthenelse b l r) ->
normal_form b /\ normal_form l /\ normal_form r.
It can be proven rather easily if you are willing to use a lot of help from the automation machinery.
Proof.
intros H (* assumption "normal_form (ifthenelse b l r)" *)
; repeat split (* split the big conjunction into 3 goals *)
; intros [t redt] (* introduce the "exists t', step t t'" proofs;
                     all the goals are now "False" *)
; apply H (* because we know that "step t t'", we are going to
be able to prove that "step (ifthenelse ...) ..."
which H says is impossible *)
; eexists (* we let Coq guess which term we are going to step to *)
; constructor (* we pick the appropriate constructor between the structural ones *)
; eapply redt. (* finally we lookup the proof we were given earlier *)
Qed.
If that's a bit too much automation for you, you can try to prove manually the following (simpler) lemma because it's the bit we are going to need in the final proof:
Lemma normal_form_ifthenelse (b l r : term) :
normal_form (ifthenelse b l r) -> normal_form b.
Your lemma can then be proven rather quickly: in the first two cases of the induction, using constructor will pick the right is_value constructor.
In the last case, we are provided with an induction hypothesis IHt1 saying that if t1 is a normal_form then it is_value. We can use our intermediate lemma to prove normal_form t1 from the fact that normal_form (ifthenelse t1 t2 t3) holds, and conclude that is_value t1.
But the fact that t1 is a value contradicts normal_form (ifthenelse t1 t2 t3): we can indeed step to either t2 or t3 depending on whether t1 is true or false. False_ind is a way for us to say "and now we have derived a contradiction".
Lemma normal_form_implies_value : forall t, normal_form t -> is_value t.
Proof.
intro t; induction t; intro ht.
- constructor.
- constructor.
- destruct (normal_form_ifthenelse _ _ _ ht) as [ht1 _].
apply False_ind, ht; destruct (IHt1 ht1); eexists; constructor.
Qed.
is_value is decidable,
Lemma is_value_dec : forall t, {is_value t} + {~is_value t}.
Proof.
induction t;
try (left; constructor);
destruct IHt1;
right; intro C; inversion C.
Qed.
so you can prove normal_form_implies_value by considering those two cases (with destruct), like so:
Lemma normal_form_implies_value : forall t, normal_form t -> is_value t.
Proof.
induction t;
try constructor;
intros;
destruct (is_value_dec t1), t1;
apply False_ind;
apply H;
try (eexists; constructor; fail);
try (inversion i; fail).
contradict n;
apply IHt1;
intros [tt C];
eauto using scomp.
Qed.
scomp is a constructor for step, using this definition:
Inductive step : term -> term -> Prop :=
| strue: forall t1 t2, step (ifthenelse true t1 t2) t1
| sfalse: forall t1 t2, step (ifthenelse false t1 t2) t2
| scomp: forall t1 t1' t2 t3, step t1 t1' ->
step (ifthenelse t1 t2 t3) (ifthenelse t1' t2 t3).
I am writing a basic monadic parser in Idris, to get used to the syntax and differences from Haskell. I have the basics of that working just fine, but I am stuck on trying to create VerifiedSemigroup and VerifiedMonoid instances for the parser.
Without further ado, here's the parser type, Semigroup, and Monoid instances, and the start of a VerifiedSemigroup instance.
data ParserM a = Parser (String -> List (a, String))
parse : ParserM a -> String -> List (a, String)
parse (Parser p) = p
instance Semigroup (ParserM a) where
  p <+> q = Parser (\s => parse p s ++ parse q s)

instance Monoid (ParserM a) where
  neutral = Parser (const [])

instance VerifiedSemigroup (ParserM a) where
  semigroupOpIsAssociative (Parser p) (Parser q) (Parser r) = ?whatGoesHere
I'm basically stuck after intros, with the following prover state:
-Parser.whatGoesHere> intros
---------- Other goals: ----------
{hole3},{hole2},{hole1},{hole0}
---------- Assumptions: ----------
a : Type
p : String -> List (a, String)
q : String -> List (a, String)
r : String -> List (a, String)
---------- Goal: ----------
{hole4} : Parser (\s => p s ++ q s ++ r s) =
Parser (\s => (p s ++ q s) ++ r s)
-Parser.whatGoesHere>
It looks like I should be able to use rewrite together with appendAssociative somehow,
but I don't know how to "get inside" the lambda \s.
Anyway, I'm stuck on the theorem-proving part of the exercise - and I can't seem to find much Idris-centric theorem proving documentation. I guess maybe I need to start looking at Agda tutorials (though Idris is the dependently-typed language I'm convinced I want to learn!).
The simple answer is that you can't. Reasoning about functions is fairly awkward in intensional type theories. For example, Martin-Löf's type theory is unable to prove:
0 + y = y
S x + y = S (x + y)

x +′ 0 = x
x +′ S y = S (x +′ y)

_+_ ≡ _+′_ -- ???
(as far as I know, this is an actual theorem and not just "proof by lack of imagination"; however, I couldn't find the source where I read it). This also means that there is no proof for the more general:
ext : ∀ {A : Set} {B : A → Set}
      {f g : (x : A) → B x} →
      (∀ x → f x ≡ g x) → f ≡ g
This is called function extensionality: if you can prove that the results are equal for all arguments (that is, the functions are equal extensionally), then the functions are equal as well.
This would work perfectly for the problem you have:
<+>-assoc : {A : Set} (p q r : ParserM A) →
            (p <+> q) <+> r ≡ p <+> (q <+> r)
<+>-assoc (Parser p) (Parser q) (Parser r) =
  cong Parser (ext λ s → ++-assoc (p s) (q s) (r s))
where ++-assoc is your proof of the associativity of _++_. I'm not sure how it would look in tactics, but it's going to be fairly similar: apply congruence for Parser and the goal should be:
(\s => p s ++ q s ++ r s) = (\s => (p s ++ q s) ++ r s)
You can then apply extensionality to get assumption s : String and a goal:
p s ++ q s ++ r s = (p s ++ q s) ++ r s
However, as I said before, we don't have function extensionality (note that this is not true for type theories in general: extensional type theories, homotopy type theory and others are able to prove this statement). The easy option is to assume it as an axiom. As with any other axiom, you risk:
Losing consistency (i.e. being able to prove falsehood; though I think function extensionality is OK)
Breaking reduction (what does a function that does case analysis only for refl do when given this axiom?)
I'm not sure how Idris handles axioms, so I won't go into details. Just beware that axioms can mess up some stuff if you are not careful.
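For the record, in Agda the "assume it as an axiom" route is a one-line postulate of exactly the ext type above; any term that matches on an equality proof built from it will simply be stuck, which is the reduction problem just mentioned:
postulate
  ext : ∀ {A : Set} {B : A → Set}
        {f g : (x : A) → B x} →
        (∀ x → f x ≡ g x) → f ≡ g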
The hard option is to work with setoids. A setoid is basically a type equipped with a custom equality. The idea is that instead of having a Monoid (or VerifiedSemigroup in your case) that works on the built-in equality (= in Idris, ≡ in Agda), you have a special monoid (or semigroup) with a different underlying equality. This is usually done by packing the monoid (semigroup) operations together with the equality and a bunch of proofs, namely (in pseudocode):
= : A → A → Set -- equality
_*_ : A → A → A -- associative binary operation
1 : A -- neutral element
=-refl : x = x
=-trans : x = y → y = z → x = z
=-sym : x = y → y = x
*-cong : x = y → u = v → x * u = y * v -- the operation respects
-- our equality
*-assoc : x * (y * z) = (x * y) * z
1-left : 1 * x = x
1-right : x * 1 = x
The choice of equality for parsers is clear: two parsers are equal if their outputs agree for all possible inputs.
-- Parser equality
_≡p_ : {A : Set} (p q : ParserM A) → Set
Parser p ≡p Parser q = ∀ x → p x ≡ q x
This solution comes with different tradeoffs, namely that the new equality cannot fully substitute for the built-in one (this tends to show up when you need to rewrite some terms). But it's great if you just want to show that your code does what it's supposed to do (up to some custom equality).
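As a sketch of the payoff, in Agda: with the parser equality above, associativity of <+> follows directly from ++-assoc (Data.List.Properties), no axiom needed. The rendering of the parser type and <+> below merely mirrors the Idris code, and the name <+>-assocp is mine:
open import Data.List using (List; _++_)
open import Data.List.Properties using (++-assoc)
open import Data.Product using (_×_)
open import Data.String using (String)
open import Relation.Binary.PropositionalEquality using (_≡_)

data ParserM (A : Set) : Set where
  Parser : (String → List (A × String)) → ParserM A

parse : {A : Set} → ParserM A → String → List (A × String)
parse (Parser p) = p

_<+>_ : {A : Set} → ParserM A → ParserM A → ParserM A
p <+> q = Parser (λ s → parse p s ++ parse q s)

_≡p_ : {A : Set} → ParserM A → ParserM A → Set
Parser p ≡p Parser q = ∀ x → p x ≡ q x

-- associativity up to _≡p_: pointwise it is just associativity of _++_
<+>-assocp : {A : Set} (p q r : ParserM A) →
             ((p <+> q) <+> r) ≡p (p <+> (q <+> r))
<+>-assocp (Parser p) (Parser q) (Parser r) s =
  ++-assoc (p s) (q s) (r s)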
Prp : Set₁
Prp = Set
data _∧_ (P Q : Prp) : Prp where
  ∧-intro : P -> Q -> P ∧ Q

infixr 2 _∧_

data _∨_ (P Q : Prp) : Prp where
  ∨-intro₁ : P -> P ∨ Q
  ∨-intro₂ : Q -> P ∨ Q

infixr 1 _∨_
This is part of some sample code. I am just wondering what the meaning of infixr is, and why it is used there.
Thanks
These declarations are what let you omit parentheses in expressions like

a ∧ b ∨ c
a * b + c

infixr and infixl declare an operator's associativity (r = right, l = left) and binding power (the number), used when the operator appears in infix (between-arguments) position.
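For example, with the fixities above, _∧_ (level 2) binds tighter than _∨_ (level 1), and both group to the right. The following identity functions typecheck because each pair of types is literally the same parse:
-- precedence: a ∧ b ∨ c parses as (a ∧ b) ∨ c
parse₁ : {a b c : Prp} → a ∧ b ∨ c → (a ∧ b) ∨ c
parse₁ x = x

-- right associativity: a ∨ b ∨ c parses as a ∨ (b ∨ c)
parse₂ : {a b c : Prp} → a ∨ b ∨ c → a ∨ (b ∨ c)
parse₂ x = x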