I've found primitives for 64-bit floating-point and 32-bit signed integer operations, but none for 64-bit unsigned integer operations. Without those, what is the correct way to do 64-bit unsigned arithmetic in Agda?
It seems to be present in 2.5.4 (exactly one version above the one I was using). From the release notes:
Added support for built-in 64-bit machine words.
These are defined in Agda.Builtin.Word and come with two primitive operations to convert to and from natural numbers.
Word64 : Set
primWord64ToNat : Word64 → Nat
primWord64FromNat : Nat → Word64
Converting to a natural number is the trivial embedding, and converting from a natural number gives you the remainder modulo 2^64. The proofs of these theorems are not primitive, but can be defined in a library using primTrustMe.
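For example, one direction of the round-trip property can be stated and "proved" like this (a sketch; the name from-to is my own, not from any library):

open import Agda.Builtin.Word
open import Agda.Builtin.Equality
open import Agda.Builtin.TrustMe

-- Assert that converting a word to Nat and back is the identity.
-- primTrustMe produces a proof of the equation without checking it,
-- so the burden of correctness is on us.
from-to : ∀ (w : Word64) → primWord64FromNat (primWord64ToNat w) ≡ w
from-to w = primTrustMe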
Basic arithmetic operations can be defined on Word64 by converting to natural numbers, performing the corresponding operation, and then converting back. The compiler will optimise these to use 64-bit arithmetic. For instance,
addWord : Word64 → Word64 → Word64
addWord a b = primWord64FromNat (primWord64ToNat a + primWord64ToNat b)
subWord : Word64 → Word64 → Word64
subWord a b = primWord64FromNat (primWord64ToNat a + 18446744073709551616 - primWord64ToNat b)
These compile (in the GHC backend) to addition and subtraction on Data.Word.Word64.
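The same pattern extends to other operations, for instance multiplication and equality (a sketch; mulWord and eqWord are my own names):

open import Agda.Builtin.Bool
open import Agda.Builtin.Nat
open import Agda.Builtin.Word

-- Multiplication modulo 2^64, via the round-trip through Nat.
mulWord : Word64 → Word64 → Word64
mulWord a b = primWord64FromNat (primWord64ToNat a * primWord64ToNat b)

-- Equality test, reusing the built-in Nat equality.
eqWord : Word64 → Word64 → Bool
eqWord a b = primWord64ToNat a == primWord64ToNat b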
Hi, I need some help: I have the following languages and need to determine whether they are regular or not.
I think that Y is not regular, and I applied the pumping lemma to show that.
For X I am not sure whether it is regular or not; I was thinking that X, the set of strings with an odd number of a's, can easily be represented with an NFA.
Can anyone help me with that?
The first one (X) is regular, because you can construct a finite automaton for it:
(start) --- a --> (final) -- a --> (state)
                     ^                |
                      \------ a -----/
The second one (Y) is not regular, because you cannot construct a finite automaton for it: it would require memory to store the number of a's in order to later match one more b. That language is context-free, with a grammar:
S = T b
T = a T b
T = ε
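To spell out the pumping-lemma argument for Y (assuming, as the grammar suggests, Y = { a^n b^(n+1) : n ≥ 0 }): suppose Y were regular with pumping length p, and take w = a^p b^(p+1) ∈ Y. In any split w = xyz with |xy| ≤ p and |y| ≥ 1, y consists only of a's, say y = a^k with k ≥ 1. Then xy²z = a^(p+k) b^(p+1), which is not in Y because p + k + 1 ≠ p + 1. This contradicts the pumping lemma, so Y is not regular.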
Representing, for example, the simply typed lambda calculus (STLC) in Agda can be done as:
data Type : Set where
  *   : Type
  _⇒_ : (S T : Type) → Type

data Context : Set where
  ε   : Context
  _,_ : (Γ : Context) (S : Type) → Context

data _∋_ : Context → Type → Set where
  here  : ∀ {Γ S} → (Γ , S) ∋ S
  there : ∀ {Γ S T} (i : Γ ∋ S) → (Γ , T) ∋ S

data Term : Context → Type → Set where
  var : ∀ {Γ S} (v : Γ ∋ S) → Term Γ S
  lam : ∀ {Γ S T} (t : Term (Γ , S) T) → Term Γ (S ⇒ T)
  app : ∀ {Γ S T} (f : Term Γ (S ⇒ T)) (x : Term Γ S) → Term Γ T
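(From here.) As a quick usage example (my own addition, not from that source): the identity function λx. x at an arbitrary type S is a closed term of this syntax:

idTerm : ∀ {S} → Term ε (S ⇒ S)
idTerm = lam (var here)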
Trying to adapt this to the Calculus of Constructions, though, is problematic, because there Type and Term form a single syntactic category. This means not only that Context and Term must be mutually recursive, but also that Term must be indexed on itself. Here is an initial attempt:
data Γ : Set
data Term : Γ → Term → Set

data Γ where
  ε   : Γ
  _,_ : (ty : Term) (ctx : Γ) → Γ

infixr 5 _,_

data Term where
  -- ...
Agda, though, complains that Term isn't in scope in its own forward declaration. Is it possible to represent it that way, or do we really need to have different types for Term and Type? I'd very much like to see a minimal/reference implementation of CoC in Agda.
This is known to be a very hard problem. As far as I'm aware there is no "minimal" way to encode CoC in Agda. You have to either prove a lot of stuff, or use a shallow encoding, or use heavy (but perfectly sensible) techniques like quotient induction, or define untyped terms first and then reify them into typed ones.
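Note that Agda does support inductive-inductive definitions through forward declarations, as long as each type is indexed only by things declared before it. Contexts and types-over-a-context, for instance, can be defined mutually; what you cannot write directly is Term indexed by Term itself. A minimal fragment (my own sketch, not taken from any of the references below):

data Con : Set
data Ty : Con → Set

data Con where
  ε   : Con
  _,_ : (Γ : Con) → Ty Γ → Con

data Ty where
  u : ∀ {Γ} → Ty Γ                          -- a base universe
  π : ∀ {Γ} (S : Ty Γ) → Ty (Γ , S) → Ty Γ  -- dependent function space

The real difficulty begins when terms must appear inside types, as in CoC, because then substitution and conversion become part of the definition itself. Here is some related literature: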
Functional Program Correctness Through Types, Nils Anders Danielsson -- the last chapter of this thesis is a formalization of a dependently typed language. This is a ton-of-lemmas-style formalization and also contains some untyped terms.
Type checking and normalisation, James Chapman -- the fifth chapter of this thesis is a formalization of a dependently typed language. It is also a ton-of-lemmas-style formalization, except that many lemmas are just constructors of the corresponding data types. For example, you have explicit substitutions as constructors rather than as computing functions (the previous thesis didn't have those for types, only for terms, while this thesis has explicit substitutions even for types).
Outrageous but Meaningful Coincidences. Dependent type-safe syntax and evaluation, Conor McBride -- this paper presents a deep encoding of a dependent type theory that reifies a shallow encoding of the theory. This means that instead of defining substitution and proving properties about it, the author just uses Agda's evaluation model, but also gives a full syntax for the target language.
Typed Syntactic Meta-programming, Dominique Devriese, Frank Piessens -- untyped terms reified into typed ones. IIRC there were a lot of postulates in the code when I looked into it, as this is a framework for meta-programming rather than a formalization.
Type theory eating itself?, Chuangjie Xu & Martin Escardo -- a single-file formalization. As always, several data types are defined mutually. Explicit substitutions with explicit transports that "mimic" the behavior of the substitution operations.
EatEval.agda -- we get this by combining the ideas from the previous two formalizations. In this file, instead of defining multiple explicit transports, we have just a single transport, which allows one to change the type of a term to a denotationally equal one. I.e. instead of explicitly specifying the behavior of substitution via constructors, we have a single constructor that says "if evaluating two types in Agda gives the same results, then you can convert a term of one type to the other via a constructor".
Type Theory in Type Theory using Quotient Inductive Types, Thorsten Altenkirch, Ambrus Kaposi -- this is the most promising approach, I'd say. It "legalizes" computation at the type level via the device of quotient types. But we do not yet have quotient types in Agda; they are essentially postulated in the paper. People work a lot on quotient types (there is an entire thesis: Quotient inductive-inductive definitions, Gabe Dijkstra), though, so we'll probably have them at some point.
Decidability of Conversion for Type Theory in Type Theory, Andreas Abel, Joakim Öhman, Andrea Vezzosi -- untyped terms reified as typed ones. Lots of properties. Also has a lot of metatheoretical proofs and a particularly interesting device that allows one to prove soundness and completeness using the same logical relation. The formalization is huge and well-commented.
A setoid model of extensional Martin-Löf type theory in Agda (zip file with the development), Erik Palmgren -- abstract:
Abstract. We present details of an Agda formalization of a setoid model of Martin-Löf type theory with Pi, Sigma, extensional identity types, natural numbers and an infinite hierarchy of universes à la Russell. A crucial ingredient is the use of Aczel's type V of iterative sets as an extensional universe of setoids, which allows for a well-behaved interpretation of type equality.
Coq in Coq, Bruno Barras and Benjamin Werner -- a formalization of CC in Coq (the code). Untyped terms reified as typed ones + lots of lemmas + metatheoretical proofs.
Thanks to András Kovács and James Chapman for suggestions.
Apparently, there is a feature in Agda called positivity checking which can supposedly keep the system sound even when type-in-type is enabled.
I am curious to know what this is about, but the Agda Manual fails to answer the question, and only explains how to turn it off.
At a lunch table I overheard that this is about polarity in type theory, but that is about all I know. I am failing to find anything online which explains this concept and why it is useful in maintaining soundness. Any intelligible explanation would be appreciated.
First, I have to clear up a misconception: positivity checking does not guarantee soundness when type-in-type is enabled. Data types must satisfy both the positivity check and the universe check to preserve soundness.
Now, to explain positivity checking, let's first look at a counterexample when we wouldn't have positivity checking:
-- the empty type
data ⊥ : Set where

-- a non-positive type
data Bad : Set where
  bad : (Bad → ⊥) → Bad
Suppose this datatype was allowed, then you could easily prove ⊥:
bad-is-false : Bad → ⊥
bad-is-false (bad f) = f (bad f)
bad-is-true : Bad
bad-is-true = bad bad-is-false
boom : ⊥
boom = bad-is-false bad-is-true
Under the Curry-Howard correspondence, the definition of Bad says: Bad is true if and only if Bad is false. So it is not surprising that it leads to inconsistencies.
Positivity checking rules out datatypes such as Bad. In general, the (strict) positivity criterion says that each constructor c of a datatype D should have a type of the form
c : (x1 : A1)(x2 : A2) ... (xn : An) → D xs
where the type Ai of each argument is either non-recursive (i.e. it doesn't refer to D) or of the form (y1 : B1)(y2 : B2) ... (ym : Bm) → D ys where each Bj doesn't refer to D.
Bad doesn't satisfy this criterion because the argument of the constructor bad has type Bad → ⊥, which is neither of the two allowed forms.
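For contrast, here is a datatype that does pass the check (my own example): the recursive occurrence Tree A appears only as the codomain of the argument Nat → Tree A, which is exactly the second allowed form of the criterion.

open import Agda.Builtin.Nat

-- An infinitely branching tree: strictly positive, hence accepted.
data Tree (A : Set) : Set where
  node : A → (Nat → Tree A) → Tree A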
The name 'positivity checking' comes (as many things in type theory do) from category theory, specifically the notion of a positive endofunctor. Each definition of a datatype that satisfies the positivity criterion is such a positive endofunctor on the category of types. This means we can construct the initial algebra of that endofunctor, which can be used to model the datatype when constructing a model of type theory (which is used to prove soundness).
I want to implement Dedekind cuts in Agda. I tried to represent the real numbers first, but I am not able to define them in Agda. How can I define them?
Real numbers can be constructed in a few different ways:
Following Errett Bishop's construction of the real numbers in Constructive Analysis, a real number can be formalized in Agda as a sequence of rational numbers together with a proof that the sequence is regular (a Cauchy-style condition with a fixed modulus):
-- Constructible real numbers as described by Bishop:
-- a real number is a sequence of rationals along
-- with a proof that the sequence is regular.
record ℝ : Set where
  constructor Real
  field
    f   : ℕ -> ℚ
    reg : {n m : ℕ} -> ∣ f n - f m ∣ ≤ (suc n)⁻¹ + (suc m)⁻¹
Check out this repository for the formalized construction of an equivalence relation using this definition.
Another way to define real numbers is with Dedekind cuts, which, as @vitrus mentioned, are discussed in chapter 11 of the Homotopy Type Theory book.
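For the Dedekind approach specifically, a one-sided cut can be written down directly as a predicate on ℚ. A rough sketch (my own, using Data.Rational from the standard library; the field names are mine, and the laws follow the usual textbook definition):

open import Data.Product using (∃; _×_)
open import Data.Rational using (ℚ; _<_)
open import Relation.Nullary using (¬_)

record Cut : Set₁ where
  field
    L         : ℚ → Set                              -- the lower set of the cut
    inhabited : ∃ λ q → L q                          -- nonempty
    bounded   : ∃ λ q → ¬ (L q)                      -- not all of ℚ
    down      : ∀ {p q} → p < q → L q → L p          -- downward closed
    rounded   : ∀ {q} → L q → ∃ λ r → (q < r) × L r  -- no greatest element

The hard part, as with the Bishop-style definition above, is then defining arithmetic and order on cuts and proving their properties.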
Goal: find a way to formally define a grammar that recognizes each element of a set 0 or 1 times, in any order. Subsequently, I want to parse it and generate an AST as well.
For example: say the set of elements in my language is {A, B, C}. I want to define a grammar that recognizes all valid permutations of any number of those elements.
Syntactically valid strings would include:
(the empty string)
A,
B A, and
C A B
Syntactically invalid strings would include:
A A, and
B A C B
To be clear, defining all possible permutations explicitly in a CFG is unacceptable for my purposes, since larger sets would be impossible to maintain.
From what I understand, such a language fails the pumping lemma for context free languages, so the solution will not be context free or regular.
Update
What I'm after is called a "permutation language", which Benedek Nagy has done some theoretical work on as an extension to context free languages.
Regarding a parser generator, I've only found talk of implementing parsers with a permutation phase (link). Such permutation phrases evidently have an exponential lower bound on the size of the resulting CFG, and I haven't found any parser generators that support them anyhow.
A sort-of solution to this problem was written in ANTLR. It uses semantic predicates to 'code around' the issue.
Assuming that the set of alternative strings is fixed and known in advance, say of size n, one can come up with a (non context-free) grammar of size O(n!). This is not asymptotically smaller than enumerating all permutations, so I suppose it cannot be considered a good solution. I believe that this grammar can be reformulated as a context-sensitive grammar (although in the form I'm suggesting below it is not).
For the example {a, b, c} mentioned in the question, one such grammar is the following. I'm using lower case letters for terminal symbols and upper case letters for non-terminals, as is customary. S is the initial non-terminal symbol.
S ::= XabcY
XabcY ::= aXbcY | bXacY | cXabY
XabY ::= ab | ba
XacY ::= ac | ca
XbcY ::= bc | cb
Non-terminals X and Y enclose the part of the sentential form which has not been finalized yet; this part will eventually be replaced by a permutation of the terminals that appear between X and Y (in some arbitrary order).
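For example, the string bca is derived as

S ⇒ XabcY ⇒ bXacY ⇒ bca

using the productions S ::= XabcY, then XabcY ::= bXacY, and finally XacY ::= ca.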