Differences between Agda and Idris

I'm starting to dive into dependently-typed programming and have found that the Agda and Idris languages are the closest to Haskell, so I started there.
My question is: what are the main differences between them? Are the type systems equally expressive in both? It would be great to have a comprehensive comparison and a discussion of the benefits of each.
I've been able to spot some:
Idris has type classes à la Haskell, whereas Agda goes with instance arguments
Idris includes monadic and applicative notation
Both of them seem to have some sort of rebindable syntax, although I'm not really sure whether the two mechanisms are the same.
Edit: there are some more answers in the Reddit page of this question: http://www.reddit.com/r/dependent_types/comments/q8n2q/agda_vs_idris/

I may not be the best person to answer this, as having implemented Idris I'm probably a bit biased! The FAQ - http://docs.idris-lang.org/en/latest/faq/faq.html - has something to say on it, but to expand on that a bit:
Idris has been designed from the ground up to support general purpose programming ahead of theorem proving, and as such has high level features such as type classes, do notation, idiom brackets, list comprehensions, overloading and so on. Idris puts high level programming ahead of interactive proof, although because Idris is built on a tactic-based elaborator, there is an interface to a tactic based interactive theorem prover (a bit like Coq, but not as advanced, at least not yet).
Another thing Idris aims to support well is Embedded DSL implementation. With Haskell you can get a long way with do notation, and you can with Idris too, but you can also rebind other constructs such as application and variable binding if you need to. You can find more details on this in the tutorial, or full details in this paper: http://eb.host.cs.st-andrews.ac.uk/drafts/dsl-idris.pdf
Another difference is in compilation. Agda goes primarily via Haskell, Idris via C. There is an experimental back end for Agda which uses the same back end as Idris, via C. I don't know how well maintained it is. A primary goal of Idris will always be to generate efficient code - we can do a lot better than we currently do, but we're working on it.
The type systems in Agda and Idris are pretty similar in many important respects. I think the main difference is in the handling of universes. Agda has universe polymorphism, Idris has cumulativity (and you can have Set : Set in both if you find this too restrictive and don't mind that your proofs might be unsound).

One other difference between Idris and Agda is that Idris's propositional equality is heterogeneous, while Agda's is homogeneous.
In other words, the putative definition of equality in Idris would be:
data (=) : {a, b : Type} -> a -> b -> Type where
  refl : x = x
while in Agda, it is
data _≡_ {l} {A : Set l} (x : A) : A → Set l where
  refl : x ≡ x
The l in the Agda definition can be ignored; it has to do with the universe polymorphism that Edwin mentions in his answer.
The important difference is that the equality type in Agda takes two elements of A as arguments, while in Idris it can take two values with potentially different types.
In other words, in Idris one can claim that two things with different types are equal (even if it ends up being an unprovable claim), while in Agda, the very statement is nonsense.
This has important and wide-reaching consequences for the type theory, especially regarding the feasibility of working with homotopy type theory. Heterogeneous equality just won't work there, because making it usable requires an axiom (essentially uniqueness of identity proofs) that is inconsistent with HoTT. On the other hand, it is possible to state useful theorems with heterogeneous equality that can't be straightforwardly stated with homogeneous equality.
Perhaps the easiest example is associativity of vector concatenation. Given length-indexed lists called vectors defined thusly:
data Vect : Nat -> Type -> Type where
  Nil  : Vect 0 a
  (::) : a -> Vect n a -> Vect (S n) a
and concatenation with the following type:
(++) : Vect n a -> Vect m a -> Vect (n + m) a
we might want to prove that:
concatAssoc : (xs : Vect n a) -> (ys : Vect m a) -> (zs : Vect o a) ->
              xs ++ (ys ++ zs) = (xs ++ ys) ++ zs
This statement is nonsense under homogeneous equality, because the left side of the equality has type Vect (n + (m + o)) a and the right side has type Vect ((n + m) + o) a, and those two indices are not definitionally equal. It's a perfectly sensible statement with heterogeneous equality. (With homogeneous equality, one must first rewrite one side along a proof that n + (m + o) = (n + m) + o, which is exactly the boilerplate the heterogeneous statement avoids.)
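(The same contrast can be seen in Lean, which, like Agda, has a homogeneous Eq but also ships a separate heterogeneous equality HEq. A minimal Lean 4 sketch of the statement only, with names of my choosing; the index is written m + n rather than n + m because Lean's Nat.add recurses on its second argument:)

inductive Vect (α : Type) : Nat → Type where
  | nil  : Vect α 0
  | cons : α → Vect α n → Vect α (n + 1)

def append : Vect α n → Vect α m → Vect α (m + n)
  | .nil,       ys => ys
  | .cons a xs, ys => .cons a (append xs ys)

-- The two sides live in Vect α ((o + m) + n) and Vect α (o + (m + n)),
-- so the homogeneous Eq cannot even state the theorem, but HEq can.
theorem appendAssoc (xs : Vect α n) (ys : Vect α m) (zs : Vect α o) :
    HEq (append xs (append ys zs)) (append (append xs ys) zs) := by
  sorry -- statement only; the proof is beside the point here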

Related

Complexity of finding a solution to an SMT system with quantifiers

I need to find a solution to a problem by generating formulas using z3py. The formulas are generated depending on user input. During generation, temporary SMT variables are created that can take on only a limited set of values; e.g., if one is an integer, only even values might be allowed. For this case, let the temporary variables be a and b, and let their relation to the global variables x and y be defined by the predicate P(a, b, x, y).
An example, generated using SMT-LIB-like syntax:
(set-info :status unknown)
(declare-fun y () Int)
(declare-fun x () Int)
(declare-fun P (Int Int Int Int) Bool)
(assert
  (forall ((a Int) (b Int) (z Int))
    (let (($x22 (exists ((z Int)) (and (< x z) (> z y)))))
      (=> (P a b x y) $x22))))
(check-sat)
where
z is a variable all of whose possible values must be considered,
a and b represent variables whose allowed values are restricted by the predicate P, and
x and y are the variables that need to be computed so that the formula is satisfied.
Questions:
Does the predicate P reduce the time needed by z3 to find a solution?
Alternatively: given that z3 performs a search over all possible values for z and a, will the predicate P reduce the size of the search space?
Note: The question was updated after remarks from Levent Erkok.
The SMTLib example you gave (generated or hand-written) doesn't make much sense to me. You universally quantify over z, and inside of that you existentially quantify z again, so the whole formula seems meaningless. But perhaps that's not your point and this is just a toy. So, I'll simply ignore that.
Typically, "redundant equations" (as you put it) should not impact performance. (By redundant, I assume you mean things that are derivable from other facts you presented?) Aside: a = z in your original formula is not redundant at all.
This should be true as long as you remain in the decidable subset of the logics; which typically means linear and quantifier-free.
The issue here is that you have quantifiers, and in particular nested quantifiers. SMT solvers do not deal well with them. (Search Stack Overflow for the many questions regarding quantifiers and z3.) So, if you have performance issues, the best strategy is to see if you really need them. Just by looking at the example you posted it is impossible to tell, as it doesn't seem to state a legitimate fact. So, see if you can express your property without quantifiers.
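For instance, when the "limited set of values" for a temporary variable really is a small finite set known at generation time, you can unroll the quantifier yourself and keep the query quantifier-free. A minimal z3py sketch under that assumption (the constraint inside the comprehension is purely illustrative, standing in for whatever P expands to):

from z3 import Ints, And, Or, Solver, sat

x, y = Ints('x y')
allowed = [0, 2, 4]   # hypothetical: the only values the temporary may take

s = Solver()
# one ground instance per allowed value, instead of a forall over a
s.add(And([Or(x > v, y < v) for v in allowed]))
if s.check() == sat:
    print(s.model())

Each instance is plain linear arithmetic, so the query stays inside the decidable quantifier-free fragment mentioned above.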
If you have to have quantifiers, then you are at the mercy of the e-matcher and the heuristics, and all bets are off. I've seen wild performance characteristics in that case. And if reasoning with quantifiers is your goal, then I'd argue that SMT solvers are just not the right tool for you, and you should instead use theorem provers like HOL/Isabelle/Coq etc., that have built-in support for quantifiers and higher-order logic.
If you were to post an actual example of what you're trying to have z3 prove for you, we might be able to see if there's another way to formulate it that might make it easier for z3 to handle. Without a specific goal and an example, it's impossible to opine any further on performance.

Finding an equivalent LR grammar for the same number of "a" and "b" grammar?

I can't seem to find an equivalent LR grammar for:
S → aSbS | bSaS | ε
which I think recognizes the strings with the same number of 'a's as 'b's.
What would be a workaround for this? Is it possible to find an LR grammar for this language?
Thanks in advance!
EDIT:
I have found what I think is an equivalent grammar, but I haven't been able to prove it.
I think I need to prove that the original grammar generates the language above, and then prove that the same language is generated by the following grammar, but I am not sure how to do it. How should I proceed?
S → aBS | bAS | ε
B → b | aBB
A → a | bAA
Thanks in advance...
PS: I have already proven that this new grammar is LL(1), SLR(1), LR(1) and LALR(1).
Unless a grammar is directly related to another grammar -- for example through standard transformations such as normalization, null-production elimination, and so on -- proving that two grammars derive the same language is very difficult without knowing what the language is. It is usually easier to prove (independently) that each grammar derives the language.
The first grammar you provide:
S → aSbS | bSaS | ε
does in fact derive the language of all strings over the alphabet {a, b} in which the number of as is the same as the number of bs. We can prove that in two parts: first, that every sentence derived by the grammar has that property, and second, that every sentence which has that property can be derived by that grammar. Both proofs proceed by induction.
For the forward proof, we proceed by induction on the length of the derivation. Suppose we have some derivation S → α → β → … → ω, where the Greek letters represent sentential forms (sequences of terminals and non-terminals).
If the length of the derivation is exactly zero, so that it starts and ends with S, then the derived form contains no terminals, so it is clear that it has the same number of as and bs. (Base step.)
Now for the induction step. Suppose that every derivation of length i is known to end with a sentential form which has the same number of as and bs. We want to prove from that premise that every derivation of length i+1 ends with a form which has the same number of as and bs. But that is also clear: each of the three possible production steps preserves the balance, introducing either one a and one b or no terminals at all.
Now, let's look at the opposite direction: every sentence with the same number of as and bs can be derived from that grammar. We'll do this by induction on the length of the string. Our induction premise will be: if, for every j ≤ i, every sentence with exactly j as and j bs has a derivation from S, then every sentence with exactly i+1 as and i+1 bs also has a derivation from S. (Here we are only considering sentences consisting only of terminals.)
Consider such a sentence. It either starts with an a or a b. Suppose that it starts with an a: then there is at least one b in the sentence such that the prefix ending with that b has the same number of each terminal. (Think of the string as a walk along a grid: every a moves diagonally up and right one unit, and every b moves diagonally down and right. Since the endpoint is at exactly the same height as the starting point and there are no wormholes in the grid, once we ascend we must sooner or later descend back to the starting height, and the first point where we do so gives a prefix ending in b.) So the interior of that prefix (everything except the a at the beginning and the b at the end) is balanced, as is the remainder of the string. Both of those are shorter, so by the induction hypothesis they can be derived from S. Making those substitutions, we get aSbS, which can be derived from S. An identical argument applies to strings starting with b. Again, the base step is trivial.
So that's basically the proof procedure you'll need to adapt for your grammar.
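Before attempting the proof, you can also compare the two grammars mechanically on all short strings; this is evidence rather than proof, but it catches mistakes early. A brute-force Python sketch that rewrites the leftmost non-terminal and collects every terminal string up to a length bound:

def generate(rules, start, maxlen):
    # rules maps each non-terminal to its alternatives; terminals are
    # lowercase, non-terminals uppercase, '' is the empty production.
    seen, words, frontier = set(), set(), [start]
    while frontier:
        form = frontier.pop()
        # terminals never disappear, so prune forms with too many of them
        if form in seen or sum(c.islower() for c in form) > maxlen:
            continue
        seen.add(form)
        if not any(c.isupper() for c in form):
            words.add(form)
            continue
        i = next(j for j, c in enumerate(form) if c.isupper())
        for rhs in rules[form[i]]:
            frontier.append(form[:i] + rhs + form[i + 1:])
    return words

g1 = {'S': ['aSbS', 'bSaS', '']}
g2 = {'S': ['aBS', 'bAS', ''], 'B': ['b', 'aBB'], 'A': ['a', 'bAA']}
assert generate(g1, 'S', 8) == generate(g2, 'S', 8)

If the assertion holds for a few bounds, the grammars at least agree on all short strings, and you can invest in the inductive proof with more confidence.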
Good luck.
By the way, this sort of question can also be posed on cs.stackexchange.com or math.stackexchange.com, where MathJax is available. MathJax makes writing out mathematical proofs much less tedious, so you may well find that you get more readable answers there.

how to parse Context-sensitive grammar?

A CSG is similar to a CFG, except that the left-hand side of a production can consist of more than one symbol.
So, can I just use a CFG parser to parse a CSG, reducing a matched right-hand side to the multiple terminals or non-terminals on its left-hand side?
Like
1. S → a b c
2. S → a S B c
3. c B → W B
4. W B → W X
5. W X → B X
6. B X → B c
7. b B → b b
When we meet W X, can we just reduce W X to W B?
When we meet W B, can we just reduce W B to c B?
So if a CSG parser can be based on a CFG parser, it's not hard to write. Is that true?
But when I checked the wiki, it said that to parse a CSG we should use a linear bounded automaton.
What is a linear bounded automaton?
Context-sensitive grammars are non-deterministic, so you cannot assume that a reduction will take place just because the RHS of some production happens to be visible at some point in a derivation.
LBAs (linear-bounded automata) are also non-deterministic, so they are not really a practical algorithm. (You can simulate one with backtracking, but there is no convenient bound on the amount of time it might take to perform a parse.) The fact that they are acceptors for CSGs is interesting for parsing theory but not really for parsing practice.
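To make the backtracking simulation concrete, here is a toy Python recognizer for the grammar in the question. It searches forward from S over all sentential forms, pruning any form longer than the input; the pruning is sound because context-sensitive productions never shrink the string, which is exactly the "linear bounded" property. Worst-case behaviour is still exponential:

RULES = [('S', 'abc'), ('S', 'aSBc'), ('cB', 'WB'), ('WB', 'WX'),
         ('WX', 'BX'), ('BX', 'Bc'), ('bB', 'bb')]

def generates(word, rules=RULES, start='S'):
    seen, frontier = set(), [start]
    while frontier:
        form = frontier.pop()
        if form == word:
            return True
        if form in seen or len(form) > len(word):
            continue
        seen.add(form)
        for lhs, rhs in rules:        # try every rule at every position
            i = form.find(lhs)
            while i != -1:
                frontier.append(form[:i] + rhs + form[i + len(lhs):])
                i = form.find(lhs, i + 1)
    return False

print(generates('aabbcc'))   # True: the grammar derives a^n b^n c^n
print(generates('aabbc'))    # False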
Just as with CFGs, there are different classes of CSGs. Some restricted subclasses of CSGs are easier to parse (CFGs are one subclass, for example), but I don't believe there has been much investigation into practical uses; in practice, CSGs are hard to write, and there is no obvious analog of a parse tree which can be constructed from a derivation.
For more reading, you could start with the wikipedia entry on LBAs and continue by following its references. Good luck.

Defining custom quantifiers

I'm trying to get Z3 to verify some formal proofs that use an iterated maximum in their notation. For example, for a function f, (↑i: 0 ≤ i < N: f(i)) designates the highest value of f when it is applied to a value between 0 and N. It can be nicely axiomatized with:
(↑i: p(i): f(i)) ≤ x <=> (∀i: p(i): f(i) ≤ x)
with p a predicate over the type of i. Is there a way to define such a quantifier in Z3?
It is quite convenient for formulating my proofs so I'd like to keep it as close to this definition as possible.
Thanks!
There is no direct way to define such binders in Z3. Z3 is based on classical simply-sorted first-order logic, where the only binders are universal and existential quantification. In particular, Z3 does not let you write lambda expressions directly. One approach to proving theorems that include nested binders using Z3 is to apply lambda-lifting first and then attempt to prove the resulting first-order formulation.
In your example, you want to define a constant max_p_f with the following properties:
forall i: p(i) => max_p_f >= f(i)
(exists i: p(i) & max_p_f = f(i)) or (forall i: not p(i))
say (assuming the supremum is defined on the domain, etc.). You would have to create such a constant for each p, f combination where you want to apply the max function.
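To get a concrete feel for the encoding, here is a small z3py sketch of that lifting for one hypothetical p, f pair, with p(i) defined as 0 <= i < 10 and f left uninterpreted (all names here are mine, not part of any Z3 API):

from z3 import (Int, Function, IntSort, ForAll, Exists,
                Implies, And, Solver)

i = Int('i')
f = Function('f', IntSort(), IntSort())
max_p_f = Int('max_p_f')               # stands for (↑i: p(i): f(i))
p = lambda i: And(0 <= i, i < 10)

s = Solver()
s.add(ForAll([i], Implies(p(i), max_p_f >= f(i))))  # an upper bound ...
s.add(Exists([i], And(p(i), max_p_f == f(i))))      # ... that is attained
s.add(max_p_f <= 100)     # a sample fact stated via the lifted constant
print(s.check())          # expect sat; quantified queries can also
                          # come back unknown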
Defining such functions is standard in proof assistants for higher-order logic. The Isabelle theorem prover applies transformations similar to the above when mapping proof obligations to first-order backends (E, Vampire, Z3, etc.).

Model-based Quantifier Instantiation and the St1 fragment of many-sorted logic

This is a follow-up to my previous question on Z3's Model-based Quantifier Instantiation (MBQI) and the stratified sorts fragment (thanks again to Leonardo de Moura for the quick answer).
In their paper on decidable fragments of many-sorted logic [Abadi et al., Decidable fragments of many-sorted logic, LPAR 2007], the authors describe a fragment St1 of many-sorted logic that is decidable and has the finite model property.
This fragment requires the sorts to be stratified and the formula F to be in (skolemized) prenex normal form as described in the Z3 documentation, but allows an additional atomic formula
y in Im[f]
to occur in F, which is a "shorthand" for
exists x1 : A1, ..., xn : An . y = f(x1,...,xn)
where f is a function with signature f : A1 x ... x An -> B, and f must be the only function with range B. Thus, the St1 fragment allows one (in a very restricted way) to violate the stratification, e.g., in order to assert that f is surjective.
I am not sure whether this is an open research question:
Does someone know whether the MBQI decision procedure of Z3 is complete for the St1 fragment? Will Z3 (theoretically) produce either SAT or UNSAT for F after a finite time?
First, one clarification: in principle, MBQI can decide the stratified multi-sorted fragment. The justification is given in Section 4.1 of http://research.microsoft.com/en-us/um/people/leonardo/ci.pdf (*). However, Z3 4.0 does not implement the additional rules suggested in Section 4.1, so Z3 4.0 may fail (return unknown) on formulas that are in this fragment. I just want to make clear the distinction between the algorithm and the actual implementation in the current Z3.
Regarding your question: yes, the MBQI framework can decide stratified formulas containing the expanded predicate y in Im[f]. I'm assuming this predicate occurs only positively.
That is, we do not have not (y in Im[f]), which is equivalent to
forall x1 : A1, ..., xn : An . y != f(x1, ..., xn)
If y in Im[f] occurs only positively, then it can be expanded, and after skolemization we have a ground formula of the form y = f(k1, ..., kn).
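A tiny z3py illustration of that expansion, with hypothetical sorts and names:

from z3 import DeclareSort, Function, Const, Solver

A = DeclareSort('A')
B = DeclareSort('B')
f = Function('f', A, B)   # the only function with range B
y = Const('y', B)
k = Const('k', A)         # Skolem constant witnessing y in Im[f]

s = Solver()
s.add(y == f(k))          # ground equation: no quantifier left behind
print(s.check())          # sat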
MBQI is still a decision procedure because the set F* defined in (*) will still be finite. F* may become infinite only if the stratification is broken inside a universal formula.
