how to prove universal introduction in Dafny

I am trying to find strategies to prove universally quantified assertions in Dafny. I see Dafny proves universal elimination
quite easily:
predicate P<X>(k: X)
lemma unElim<X>(x: X)
  ensures (forall a: X :: P(a)) ==> P(x)
{ }
lemma elimHyp<H>()
  ensures forall k: H :: P(k)
lemma elimGoal<X>(x: X)
  ensures P(x)
{ elimHyp<X>(); }
but I cannot find how to prove the introduction rule:
// lemma unInto<X>(x: X)
//   ensures P(x) ==> (forall a: X :: P(a))
// this definition is wrong
lemma introHyp<X>(x: X)
  ensures P(x)
lemma introGoal<H>()
  ensures forall k: H :: P(k)
{ }
all ideas appreciated

Universal introduction is done using Dafny's forall statement.
lemma introHyp<X>(x: X)
  ensures P(x)
lemma introGoal<H>()
  ensures forall k: H :: P(k)
{
  forall k: H
    ensures P(k)
  {
    introHyp<H>(k);
  }
}
In general, it looks like this:
forall x: X | R(x)
  ensures P(x)
{
  // for x of type X satisfying R(x), prove P(x) here
  // ...
}
So, inside the curly braces, you prove P(x) for one x. After the forall statement, you get to assume the universal quantifier
forall x: X :: R(x) ==> P(x)
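Here is a small, self-contained illustration of this filtered form (the predicates R and Q and the lemma FilteredIntro below are made up for the example, not part of the question):
predicate R(x: int) { x >= 0 }
predicate Q(x: int) { x + 1 > 0 }
lemma FilteredIntro()
  ensures forall x: int :: R(x) ==> Q(x)
{
  forall x: int | R(x)
    ensures Q(x)
  {
    // for a fixed x with R(x), i.e. x >= 0, Dafny proves x + 1 > 0 directly
  }
}
After the inner forall statement, the quantified fact forall x: int :: R(x) ==> Q(x) is available, which discharges the lemma's postcondition.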
If, like in my introGoal above, the body of the forall statement is exactly one lemma call and the postcondition of that lemma is what you want in the ensures clause of the forall statement, then you can omit the ensures clause of the forall statement and Dafny will infer it for you. Lemma introGoal then looks like this:
lemma introGoal<H>()
  ensures forall k: H :: P(k)
{
  forall k: H {
    introHyp(k);
  }
}
There's a Dafny Power User note on Automatic induction that may be helpful, or at least gives some additional examples.
PS. A natural next question would be how to do existential elimination. You do it using Dafny's "assign such that" statement. Here is an example:
type X
predicate P(x: X)
lemma ExistentialElimination() returns (y: X)
  requires exists x :: P(x)
  ensures P(y)
{
  y :| P(y);
}
Some examples are found in this Dafny Power User note. Some advanced technical information about the :| operators is found in this paper.

Related

How to recover intermediate computation results from a function using "with"?

I wrote a function on the natural numbers that uses the operator _<?_ with the with-abstraction.
open import Data.Maybe
open import Data.Nat
open import Data.Nat.Properties
open import Relation.Binary.PropositionalEquality
open import Relation.Nullary
fun : ℕ → ℕ → Maybe ℕ
fun x y with x <? y
... | yes _ = nothing
... | no _ = just y
I would like to prove that if the result of computing with fun is nothing then the original two values (x and y) fulfill x < y.
So far, all my attempts to prove the property fall short:
prop : ∀ (x y)
  → fun x y ≡ nothing
  → x < y
prop x y with fun x y
... | just _ = λ()
... | nothing = λ{refl → ?} -- from-yes (x <? y)}
-- This fails because the pattern matching is incomplete,
-- but it shouldn't. There are no other cases
prop' : ∀ (x y)
  → fun x y ≡ nothing
  → x < y
prop' x y with fun x y | x <? y
... | nothing | yes x<y = λ{refl → x<y}
... | just _ | no _ = λ()
--... | _ | _ = ?
In general, I've found that working with the with-abstraction is painful. It is probably due to the fact that with and | hide some magic in the background. I would like to understand what with and | really do, but the "Technical details" currently escape my understanding. Do you know where to look to understand how to interpret them?
Concrete solution
You need to case-split on the same element on which you case-split in your function:
prop : ∀ x y → fun x y ≡ nothing → x < y
prop x y _ with x <? y
... | yes p = p
In the older versions of Agda, you would have had to write the following:
prop-old : ∀ x y → fun x y ≡ nothing → x < y
prop-old x y _ with x <? y
prop-old _ _ refl | yes p = p
prop-old _ _ () | no _
But now you are able to completely omit a case when it leads to a direct contradiction, which, in this case, is that nothing and just something can never be equal.
Detailed explanation
To understand how with works, you first need to understand how definitional equality is used in Agda to reduce goals. A definitional equality binds a function call to its associated expression, depending on the structure of its input. In Agda, this is easily seen in the use of the equals sign in the definition of the different cases of a function (although, since Agda builds a case tree, some of these equalities might not hold definitionally; let's set that aside for now).
Let us consider the following definition of the addition over naturals:
_+_ : ℕ → ℕ → ℕ
zero + b = b
(suc a) + b = suc (a + b)
This definition provides two definitional equalities that bind zero + b with b and (suc a) + b with suc (a + b). The good thing with definitional equalities (as opposed to propositional equalities) is that Agda automatically uses them to reduce goals whenever possible. This means that, for instance, if in a further goal you have the element zero + p for any p then Agda will automatically reduce it to p.
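For instance, a goal of the form zero + p ≡ p is closed by refl alone, because Agda rewrites the left-hand side using equation (1). A minimal check (the name zero-reduces is made up for illustration; it assumes ≡ from Relation.Binary.PropositionalEquality and either the _+_ defined above or the standard library's, whose first equation is the same):
zero-reduces : ∀ (p : ℕ) → zero + p ≡ p
zero-reduces p = refl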
To allow Agda to do such reduction, which is fundamental in most cases, Agda needs to know which of these two equalities can be exploited, which means a case-split on the first argument of this addition has to be made in any further proof about addition for a reduction to be possible. (Except for composite proofs based on other proofs which use such case-splits).
When using with, you basically add additional definitional equalities depending on the structure of the abstracted element. Understanding that, it only makes sense that you need to case-split on said element when doing proofs about such a function, in order for Agda, once again, to be able to make use of these definitional equalities.
Let us take your example and apply this reasoning to it, first without the recent ability to omit impossible cases. You need to prove the following statement:
prop-old : ∀ x y → fun x y ≡ nothing → x < y
Introducing parameters in the context, you write the following line:
prop-old x y p = ?
Having written that line, you need to provide a proof of x < y with the elements in the context. x and y are just naturals, so you expect p to hold enough information for this result to be provable. But, in this case, p is just of type fun x y ≡ nothing, which does not give you enough information. However, this type contains a call to the function fun, so there is hope! Looking at the definition of fun, we can see that it yields two definitional equalities, which depend on the structure of x <? y. This means that adding this element to the proof by using with once more will allow Agda to make use of these equalities. This leads to the following code:
prop-old : ∀ x y → fun x y ≡ nothing → x < y
prop-old x y p with x <? y
prop-old _ _ p | yes q = ?
prop-old _ _ p | no q = ?
At that point, not only did Agda case-split on x <? y, but it also reduced the goal because it is able, in both cases, to use a specific definitional equality of fun. Let us take a closer look at both cases:
In the yes q case, p is now of type nothing ≡ nothing and q is of type x < y which is exactly what you want to prove, which means the goal is simply solved by:
prop-old _ _ p | yes q = q
In the no q case, something more interesting happens, which is somewhat harder to understand. After reduction, p is now of type just y ≡ nothing, because Agda could use the second definitional equality of fun. Since _≡_ is a data type, it is possible to case-split on p, which basically asks Agda: "Look at this data type and give me all the possible constructors for an element of type just y ≡ nothing". At first, Agda only finds one possible constructor, refl, but this constructor only builds an element of a type where both sides of the equality are the same, which is not the case here by definition, because just and nothing are two distinct constructors of the same data type, Maybe. Agda then concludes that there is no possible constructor that could ever build an element of such a type, hence this case is actually not possible, which leads to Agda replacing p with the empty pattern () and dismissing this case. This line is thus simply:
prop-old _ _ () | no _
In the more recent versions of Agda, as I explained earlier, some of these steps are performed by Agda itself, which allows us to omit impossible cases when the emptiness of a pattern can be deduced behind the scenes, which leads to the prettier:
prop : ∀ x y → fun x y ≡ nothing → x < y
prop x y _ with x <? y
... | yes p = p
But it is the same process, just done a bit more automatically. Hopefully, these elements will be of some use in your journey towards understanding Agda.

Updating a map with another map in Dafny

I'd like to write the following function in Dafny, which updates a map m1 with all mappings from m2, such that m2 overrides m1:
function update_map<K, V>(m1: map<K, V>, m2: map<K, V>): map<K, V>
  ensures
    (forall k :: k in m2 ==> update_map(m1, m2)[k] == m2[k]) &&
    (forall k :: !(k in m2) && k in m1 ==> update_map(m1, m2)[k] == m1[k]) &&
    (forall k :: !(k in m2) && !(k in m1) ==> !(k in update_map(m1, m2)))
{
  map k | (k in m1 || k in m2) :: if k in m2 then m2[k] else m1[k]
}
I got the following errors:
Dafny 2.2.0.10923
stdin.dfy(7,2): Error: a map comprehension involved in a function definition is not allowed to depend on the set of allocated references; Dafny's heuristics can't figure out a bound for the values of 'k' (perhaps declare its type, 'K', as 'K(!new)')
stdin.dfy(7,2): Error: a map comprehension must produce a finite set, but Dafny's heuristics can't figure out how to produce a bounded set of values for 'k'
2 resolution/type errors detected in stdin.dfy
I don't understand the first error, and for the second, if m1 and m2 both have finite domains, then their union is certainly finite as well, but how can I explain that to Dafny?
UPDATE:
After applying James' fixes, it works:
function update_map<K(!new), V>(m1: map<K, V>, m2: map<K, V>): map<K, V>
  ensures
    (forall k :: k in m1 || k in m2 ==> k in update_map(m1, m2)) &&
    (forall k :: k in m2 ==> update_map(m1, m2)[k] == m2[k]) &&
    (forall k :: !(k in m2) && k in m1 ==> update_map(m1, m2)[k] == m1[k]) &&
    (forall k :: !(k in m2) && !(k in m1) ==> !(k in update_map(m1, m2)))
{
  map k | k in (m1.Keys + m2.Keys) :: if k in m2 then m2[k] else m1[k]
}
Good questions! You are running across some known sharp edges in Dafny that are under-documented.
In the first error, Dafny is basically saying that the type variable K needs to be constrained to not be a reference type. You can do that by changing the function signature to start with
function update_map<K(!new), V>...
Here, (!new) is Dafny syntax meaning exactly that K may only be instantiated with value types, not reference types. (Unfortunately, !new is not yet documented, but there is an open issue about this.)
In the second error, you are running afoul of Dafny's limited syntactic heuristics to prove finiteness, as described in this question and answer. The fix is to use Dafny's built-in set union operator instead of boolean disjunction, like this:
map k | k in m1.Keys + m2.Keys :: ...
(Here, I use .Keys to convert each map to the set of keys in its domain so that I can apply +, which works on sets but not maps.)
With those two type-checking-time errors fixed, you now get two new verification-time errors. Yay!
stdin.dfy(3,45): Error: element may not be in domain
stdin.dfy(4,59): Error: element may not be in domain
These are telling you that the statement of the postcondition itself is ill-formed, because you are indexing into maps using keys without properly hypothesizing that those keys are in the domain of the map. You can fix this by adding another postcondition (before the others), like this:
(forall k :: k in m1 || k in m2 ==> k in update_map(m1, m2)) && ...
After that, the whole function verifies.
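As a quick sanity check (this test method and its literal maps are my own, not part of the original question or answer), the fixed function lets callers reason about concrete maps; I would expect the following to verify:
method TestUpdateMap() {
  var m1 := map[1 := "a", 2 := "b"];
  var m2 := map[2 := "c", 3 := "d"];
  var m := update_map(m1, m2);
  assert m[1] == "a";  // a key only in m1 is kept
  assert m[2] == "c";  // m2 overrides m1
  assert m[3] == "d";  // a key only in m2 is added
  assert 4 !in m;      // keys absent from both stay absent
}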

Using :| in functional code -- recursion on sets

How might one recurse over a set, S, in Dafny when writing pure functional code? I can use :| in imperative code, having checked for non-emptiness, to select an element, s, then recurse on S - {s}. I'm not quite sure how to make :| deterministic and use it in functional code.
Good question! (I wish downvoters would have the courage to leave a comment...)
This is addressed in depth in Rustan's paper "Compiling Hilbert's Epsilon Operator".
In particular, see section 3.2, which describes how to write a deterministic function by recursion over sets. For reasons not entirely clear to me, the paper's Dafny code proving lemma ThereIsASmallest doesn't work for me in modern Dafny. Here is a version that works (but is ugly):
lemma ThereIsASmallest(S: set<int>)
  requires S != {}
  ensures exists x :: x in S && forall y | y in S :: x <= y
{
  var y :| y in S;
  if S != {y} {
    var S' := S - {y};
    assert forall z | z in S :: z in S' || z == y;
    ThereIsASmallest(S');
    var x' :| x' in S' && forall y | y in S' :: x' <= y;
    var x := min2(y, x');  // min2: the two-argument minimum, as in the paper
    assert x in S;
  }
}
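To make the idea of section 3.2 concrete, here is a hedged sketch of my own (not the paper's code): because the minimum of a set of ints is unique, choosing it with :| inside a function is deterministic.
function SetMin(S: set<int>): int
  requires exists x :: x in S && forall y | y in S :: x <= y
  ensures SetMin(S) in S && forall y | y in S :: SetMin(S) <= y
{
  // the constraint pins down a unique value (the minimum),
  // so this let-such-that choice is deterministic
  var x :| x in S && forall y | y in S :: x <= y;
  x
}
A recursive function over a set can then discharge SetMin's precondition by invoking ThereIsASmallest(S) (for example in an assert ... by block) and recurse on S - {SetMin(S)}, which is a strictly smaller set.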
Finally, as an aside, note that the technique of section 3.2 relies on having a total order on the type. If you are trying to do something fully polymorphic, then as far as I know it isn't possible.

Why Left Identity over "Addition" is trivial proof but Right Identity is not?

I am just learning Agda, and I do not understand why, when I try to prove identity over addition, the left identity is a trivial proof.
left+identity : ∀ n -> (zero + n) ≡ n
left+identity n = refl
But this is not true for the right identity.
right+identity : ∀ n -> (n + zero) ≡ n
right+identity zero = refl
right+identity (suc n) = cong suc (right+identity n)
I cannot understand the reason. Please explain. Thanks.
The problem is how dependently typed theories deal with equality. Usually, the definition of addition is:
_+_ : Nat -> Nat -> Nat
zero + m = m -- (1)
(suc n) + m = suc (n + m) -- (2)
Notice that equation (1) implies the left identity. When you have:
forall n -> 0 + n = n
Agda's type checker can use equation (1) of addition to verify that the equality holds. Remember, the propositional equality constructor (refl) has the type
refl : x == x
So, when you use refl as a proof of the left identity, Agda will try to reduce both sides of the equality (normalize them) and check whether they are indeed equal. Using the definition of addition, the left identity is immediate, by equation (1).
But for the right identity this does not hold by definition. Note that when we have
n + 0 == n
Agda's type checker cannot use the addition equations to check that this equality indeed holds. The only way to prove it is by induction (or, if you prefer, recursion).
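To see that the obstacle is really the open variable n rather than the statement itself, note that for any closed numeral the right identity is again definitional. A tiny check of my own (assuming the standard library's ℕ, _+_ and ≡, as imported in the earlier Agda question):
_ : 3 + 0 ≡ 3
_ = refl
Here 3 + 0 normalizes to 3 by repeatedly applying equation (2) and finally equation (1), so refl type-checks; with an open n, the reduction gets stuck and induction is required.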
Hope that this can help you.

eliminating forall using unsat

We know that we can prove the validity of a theorem by saying:
let Demorgan(x, y) = formula1(x,y) iff formula2(x,y)
assert ( forall (x,y) . Demorgan(x,y) )
Alternatively, we can eliminate the forall quantifier by saying:
let Demorgan(x, y) = formula1(x,y) iff formula2(x,y)
( assert (not Demorgan(x,y) ) )
So if it returns unsat, then we can say the above formula is valid.
Now I want to use this idea to eliminate the forall quantifier from the following assertion:
assert ( exists x1,x2,x3 st .( forall y . formula1(x1,y) iff
formula2(x2,y) iff
formula3(x3,y) ) )
So is there any way in Z3 (using the C++ API or SMT-LIB 2.0) that I can assert something like the following:
assert (exists x1,x2,x3 st. ( and ((not ( formula1(x1,y) iff formula2(x2,y) )) == unsat)
((not ( formula2(x2,y) iff formula3(x3,y) )) == unsat)))
Yes, we can prove the validity of a formula by showing its negation to be unsatisfiable.
For example, to show that Forall X. F(X) is valid, we just have to show that not (Forall X. F(X)) is unsatisfiable. The formula not (Forall X. F(X)) is equivalent to (Exists X. not F(X)). The formula (Exists X. not F(X)) is equisatisfiable to the formula not F(X) where the bound variable X is replaced by a fresh constant X. By equisatisfiable, I mean that the first one is satisfiable iff the second one is. This step that removes existential quantifiers is usually called skolemization.
Note that these last two formulas are not equivalent.
For example, consider the interpretation { X -> 2 } that maps X to 2. The formula Exists X. not (X = 2) still evaluates to true in this interpretation, because we can choose X to be 3. On the other hand, the formula not (X = 2) evaluates to false in this interpretation.
We usually use the term quantifier elimination procedure for a procedure that given a formula F produces an equivalent quantifier-free formula F'. So, skolemization is not considered a quantifier elimination procedure because the result is not an equivalent formula.
That being said, we don't have to apply the skolemization step by hand. Z3 can do it for us. Here is an example (also available online here).
(declare-sort S)
(declare-fun F (S) Bool)
(declare-fun G (S) Bool)
(define-fun Conjecture () Bool
(forall ((x S)) (= (and (F x) (G x)) (not (or (not (F x)) (not (G x)))))))
(assert (not Conjecture))
(check-sat)
Now, let us consider a formula of the form Exists X. Forall Y. F(X, Y). To prove the validity of this formula, we can show the negation not Exists X. Forall Y. F(X, Y) to be unsatisfiable. The negation is equivalent to Forall X. Exists Y. not F(X, Y). Now, if we apply skolemization to this formula, we obtain Forall X. not F(X, Y(X)). In this case, the bound variable Y was replaced with Y(X), where Y is a fresh function symbol in the resultant formula. The intuition is that the function Y is the "choice function": for each X, we can choose a different value to satisfy the formula F. Z3 performs all these steps automatically for us; we don't need to apply skolemization by hand. However, in this case, the resultant formula is usually harder to solve, because it contains a universal quantifier after the skolemization step.
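For completeness, here is a small SMT-LIB sketch of that exists-forall pattern in the same style as the example above (the concrete formula x + y = y is mine, chosen only because it is obviously valid with x = 0); asserting the negated conjecture should yield unsat:
(define-fun Conjecture () Bool
  (exists ((x Int)) (forall ((y Int)) (= (+ x y) y))))
(assert (not Conjecture))
(check-sat) ; expected: unsat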
