[Nottingham Internship] Type Theory: a brief outline
I’ve begun my internship at the University of Nottingham, under the supervision of Prof. Thorsten Altenkirch, Nicolai Kraus, and Paolo Capriotti.
The main aim of this internship is to show that
Type Theory forms a weak $𝜔$-groupoid
with a particular emphasis on formalization aspects in Agda.
I’ll use these blog posts to set down some of the ideas and concepts I’m supposed to learn.
\[\newcommand{\rec}{\mathop{\rm rec}\nolimits} \newcommand{\ind}{\mathop{\rm ind}\nolimits} \newcommand{\inl}{\mathop{\rm inl}\nolimits} \newcommand{\inr}{\mathop{\rm inr}\nolimits}\]

Type Theory: Basics
Throughout the $𝜆$-calculus course at the ENS Paris-Saclay^{1}, we first used a type system (given by a set of inference rules) to ensure some nice computational properties of the $𝜆$-calculus, such as strong normalization and subject reduction^{2}.
Then, we observed that type disciplines have logical counterparts via the Curry-Howard correspondence, which enabled us to enrich the $𝜆$-calculus to study logical systems.
| Type-theoretical concept | Logic counterpart |
|---|---|
| Type discipline | Logical system |
| Types | Formulas |
| $𝜆$-terms | Proofs |
| $𝛽$-reduction | Cut elimination |

The Curry-Howard correspondence, also known as the proofs-as-programs / propositions-as-types interpretation
But what we didn’t quite insist on was that Type Theory is a self-contained deductive system that can serve as a foundation for mathematics. By self-contained, I mean that it can be regarded as a bootstrapping system, in which everything ultimately boils down to types: the mathematical objects, as well as the logical propositions!
In set theory, by contrast, you’d better make a clear distinction between the mathematical objects (sets, numbers, groups, spaces, etc.) and the metatheory you use to talk about these objects: as a matter of fact, they’re two clearly distinct layers.
Incidentally, that might explain why showing something as intuitive as the consistency of first-order Peano arithmetic $PA_1$^{3} is so nettlesome^{4}: to show that $PA_1$ is consistent, you might want to:
- either show, from a semantic standpoint, that $PA_1$ has models: rather easy, right? (what about… let’s say… $ℕ$?)
- or use syntactic arguments, resorting to the $𝜆\nabla R$-calculus, by showing that there exists no $𝜆\nabla R$-term of type $\bot$.
But in either case, there might be a hidden fallacy:
One might unwittingly resort to inconsistent models, that is, models which already assume the consistency of the very theory we’re trying to prove consistent, as exemplified by naive set theory (the set of all sets: cf. Russell’s paradox).
Far more subtly: we use non-finitist arguments (i.e. we implicitly resort to infinite sets: reducibility candidates) to prove that the $𝜆\nabla R$-calculus is strongly normalizing, which we aren’t allowed to do if we haven’t already demonstrated that $PA_1$ is consistent, for the proof takes place in a larger theory, whose consistency is even harder to show (and which might itself be inconsistent). So as long as the strong normalization of the $𝜆\nabla R$-calculus isn’t shown by resorting to finite objects only, we can’t be quite sure that we haven’t already assumed the consistency of Peano arithmetic for the proof to be valid (see these pages of Jean Goubault-Larrecq’s course (in French)).
Now that we have laid the background, let’s dive into the gist of the subject.
Judgements
Types are primitive objects that can be formed according to classic inference rules (cf. those of the simply typed $𝜆$-calculus for the most straightforward ones), by means of which judgements of the following form are derived:
| Judgement | Interpreted as |
|---|---|
| $a:A$ | $a$ is an object of type $A$. If $A$ is a type representing a proposition, then $a$ is called a witness (or evidence) of the truth of $A$ |
| $a ≡ b:A$ | $a$ and $b$ are definitionally equal (that is, by construction) objects of type $A$ |

Whenever $a, b: A$ can be derived, we have a type $a =_A b$; $a$ and $b$ are said to be propositionally equal whenever $a =_A b$ is inhabited by a witness.
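These judgements can be sketched in one concrete type theory, say Lean (a hedged illustration; the names `two` and `witness` are mine):

```lean
-- `a : A` — the term `two` is an object (witness) of type `Nat`.
def two : Nat := 2

-- Definitional equality `a ≡ b : A`: `2 + 2` and `4` reduce to the same
-- normal form, so reflexivity (`rfl`) is accepted by the type checker.
example : 2 + 2 = 4 := rfl

-- Propositional equality `a =_A b` is itself a type; a proof term
-- inhabiting it is a witness of the equality.
def witness : two = 2 := rfl
```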
Example:

If $a:A$, then, for any types $X, Y$, we can derive

\[f \underbrace{:≡}_{\rlap{\text{“is equal by definition”}}} 𝜆x.\,𝜆y.\,a \;:\; X ⟶ Y ⟶ A\]

and, by the computation rule, $f(\_, \_) ≡ a : A$.
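The same derivation can be sketched in Lean (the name `f` is mine):

```lean
-- Given a : A, the constant function f :≡ λx. λy. a inhabits
-- X → Y → A for any types X, Y.
def f {A X Y : Type} (a : A) : X → Y → A :=
  fun _ _ => a

-- By the computation rule (β-reduction), f a x y ≡ a:
example {A X Y : Type} (a : A) (x : X) (y : Y) : f a x y = a := rfl
```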
Dependent types
How on earth does one introduce a brand-new type? There’s a recurring pattern:

- Formation rules: how to form the type itself
- Constructors / Introduction rules: how to construct elements of that type
- Eliminators / Elimination rules: how to use elements of that type
- Computation rules: how eliminators are supposed to act on constructors
- [Optional] Uniqueness principle:
  - either every function into that type is uniquely determined: i.e. how constructors act on eliminators (every element is uniquely determined by what you get after applying the eliminators, and can then be reconstructed by applying the constructors)
  - or every function from that type is uniquely determined

NB: when the uniqueness principle does not hold as a judgmental equality, the corresponding propositional equality can often be proved instead.
Example: for function types (i.e. those of the form $A ⟶ B$):

- Formation rule: if $A, B: {\cal U}$ (that is, $A$ and $B$ are types), then we can form the type
\[A ⟶ B\]
- Constructor / Introduction rule ($𝜆$-abstraction): if $A, X: {\cal U}$ and $u: A$ (where $u$ may involve a variable $x : X$), then we can derive:
\[\underbrace{𝜆}_{\rlap{\text{the only constructor for function types}}} x.u : X ⟶ A\]
- Eliminator / Elimination rule (function application): if $A, X: {\cal U}$, $u: A$, and $y: X$, then we can derive
\[(𝜆x.u)(y): A\]
- Computation rule: $𝛽$-reduction, i.e. $(𝜆x.u)(y) ≡ u[x := y] : A$
- Uniqueness principle: $𝜂$-expansion, i.e. $f ≡ 𝜆x.f(x)$ (each function is uniquely determined by its values)
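The four rules above can be sketched in Lean, where $𝛽$ and $𝜂$ both hold judgmentally (the names `constFn` and `apply` are mine):

```lean
-- Formation: if X A : Type, then X → A : Type.
-- Introduction (λ-abstraction): a term of A, here constantly a, abstracted over X.
def constFn {X A : Type} (a : A) : X → A := fun _x => a

-- Elimination (application): from g : X → A and y : X, derive g y : A.
def apply {X A : Type} (g : X → A) (y : X) : A := g y

-- Computation (β-reduction): (λx.u)(y) ≡ u[x := y], checked by rfl.
example {X A : Type} (a : A) (y : X) : (fun _x : X => a) y = a := rfl

-- Uniqueness (η-expansion): g is definitionally equal to λx. g x.
example {X A : Type} (g : X → A) : g = fun x => g x := rfl
```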
$\Pi$Types / Dependent function types
Let’s suppose that

- $A: {\cal U}$ (that is, $A$ is a type)
- $B : A ⟶ {\cal U}$

and

- $y:A$
- $u:B(x)$ for any $x:A$

Then we can derive:
Introduction
\[𝜆x.u : \underbrace{\prod\limits_{x:A} B(x)}_{\rlap{\text{also written as } \prod x:A, B(x)}}\]

Elimination, Computation

\[(𝜆x.u)(y) \underbrace{≡}_{\text{computation}} u[x:=y] : B(y)\]

Boolean Type: $\mathbf{2}$
It comprises two elements:
\[0_\mathbf{2}, 1_\mathbf{2} : \mathbf{2}\]

Introduction
We can derive
\[x \mapsto \begin{cases} c_0 \text{ if } x = 0_\mathbf{2} \\ c_1 \text{ else if } x = 1_\mathbf{2} \end{cases} : \mathbf{2} ⟶ C\]provided that $C: {\cal U}$ and $c_0, c_1 : C$
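This case analysis (and the recursor $\rec_\mathbf{2}$ and induction principle $\ind_\mathbf{2}$ it gives rise to) can be sketched in Lean, where the inductive type `Two` plays the role of $\mathbf{2}$ (names are mine):

```lean
-- Formation and introduction: Two has exactly two constructors.
inductive Two : Type where
  | zero : Two
  | one  : Two

-- The map x ↦ (c₀ if x = 0₂, c₁ if x = 1₂), i.e. rec₂, acting as an
-- "if-then-else":
def recTwo (C : Type) (c0 c1 : C) : Two → C
  | .zero => c0
  | .one  => c1

-- Computation rules: the eliminator reduces on each constructor.
example (C : Type) (c0 c1 : C) : recTwo C c0 c1 .zero = c0 := rfl
example (C : Type) (c0 c1 : C) : recTwo C c0 c1 .one  = c1 := rfl

-- The dependent version ind₂ eliminates into a family C : Two → Type;
-- its result type is a Π-type, C x depending on the scrutinee x.
def indTwo (C : Two → Type) (c0 : C .zero) (c1 : C .one) : (x : Two) → C x
  | .zero => c0
  | .one  => c1
```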
Recursor
It acts as an “ifthenelse” statement:
\[\rec_\mathbf{2} :≡ C, c_0, c_1, x \mapsto \begin{cases} c_0 \text{ if } x = 0_\mathbf{2} \\ c_1 \text{ else if } x = 1_\mathbf{2} \end{cases} : \prod\limits_{C: {\cal U}} C ⟶ C ⟶ \mathbf{2} ⟶ C\]

Induction principle
\[\ind_\mathbf{2} :≡ C, c_0, c_1, x \mapsto \begin{cases} c_0 \text{ if } x = 0_\mathbf{2} \\ c_1 \text{ else if } x = 1_\mathbf{2} \end{cases} : \prod\limits_{C: \mathbf{2} ⟶ {\cal U}} C(0_\mathbf{2}) ⟶ C(1_\mathbf{2}) ⟶ \prod\limits_{x:\mathbf{2}} C(x)\]

Product Type: $A×B$
Introduction
If $A, B : {\cal U}$ :
\[A × B :≡ \prod\limits_{x:\mathbf{2}} \rec_\mathbf{2}({\cal U}, A, B, x)\]

Pairs

\[(a, b) :≡ \ind_\mathbf{2}(\rec_\mathbf{2}({\cal U}, A, B, \_), a, b, \_)\]

Projections

\[𝜋_1 :≡ p \mapsto p(0_\mathbf{2}) \\ 𝜋_2 :≡ p \mapsto p(1_\mathbf{2})\]

NB: we could define the product type from scratch in this way:
\[\rec_{A×B} :≡ C, f, (a, b) \mapsto f \; a \; b: \prod\limits_{C: {\cal U}} (A ⟶ B ⟶ C) ⟶ A×B ⟶ C\\ \ind_{A×B}:≡ C, f, (a, b) \mapsto f \; a \; b: \prod\limits_{C: A×B ⟶ {\cal U}} \bigg( \prod\limits_{x: A}\prod\limits_{y: B} C((x,y)) \bigg) ⟶ \prod\limits_{p: A×B} C(p)\\ 𝜋_1 :≡ \rec_{A×B}(A, 𝜆a, b. a, \_ ) \\ 𝜋_2 :≡ \rec_{A×B}(B, 𝜆a, b. b, \_ ) \\\]
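This “from scratch” definition can be sketched in Lean with a standalone inductive type (a sketch; `Prod'`, `recProd`, `fst`, `snd` are my names, to avoid clashing with Lean’s built-in product):

```lean
-- Formation and introduction: a product type with one pair constructor.
inductive Prod' (A B : Type) : Type where
  | pair : A → B → Prod' A B

-- Recursor rec_{A×B}: to use a pair, say what to do with its components.
def recProd {A B : Type} (C : Type) (f : A → B → C) : Prod' A B → C
  | .pair a b => f a b

-- Projections π₁, π₂ defined through the recursor, as in the post:
def fst {A B : Type} : Prod' A B → A := recProd A (fun a _ => a)
def snd {A B : Type} : Prod' A B → B := recProd B (fun _ b => b)

-- Computation: projections reduce on pairs.
example {A B : Type} (a : A) (b : B) : fst (.pair a b) = a := rfl
example {A B : Type} (a : A) (b : B) : snd (.pair a b) = b := rfl
```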
$\Sigma$Types / Dependent pair types
Let’s suppose that

- $A: {\cal U}$
- $B : A ⟶ {\cal U}$

and

- $a:A$
- $b:B(a)$

Then we can derive:
Introduction
\[(a, b) : \underbrace{\sum\limits_{x:A} B(x)}_{\rlap{\text{also written as } \sum x:A, B(x)}}\]

Recursor

\[\rec_{\sum\limits_{x:A} B(x)} :≡ C, g, (a, b) \mapsto g \, a \, b : \prod\limits_{C : {\cal U}} \bigg( \prod\limits_{x : A } B(x) ⟶ C \bigg) ⟶ \bigg( \sum\limits_{x : A } B(x) \bigg) ⟶ C\]

Induction principle

\[\ind_{\sum\limits_{x:A} B(x)} :≡ C, g, (a, b) \mapsto g \, a \, b : \prod\limits_{C : \sum\limits_{x : A } B(x) ⟶ {\cal U}} \bigg( \prod\limits_{a : A}\prod\limits_{b : B(a)} C((a,b)) \bigg) ⟶ \prod\limits_{p : \sum\limits_{x : A} B(x)} C(p)\]

Product Type $A×B$
If $B$ is constant:
\[A×B ≡ \sum\limits_{x:A} B\]

Projections

\[𝜋_1 :≡ (a,b) \mapsto a : \bigg(\sum\limits_{x:A} B\bigg) ⟶ A \\ 𝜋_2 :≡ (a,b) \mapsto b : \prod\limits_{p: \sum\limits_{x:A} B(x)} B(𝜋_1(p))\]

Coproduct Type / Disjoint union: $A+B$
Introduction
If $A, B : {\cal U}$ :
\[A + B :≡ \sum\limits_{x:\mathbf{2}} \rec_\mathbf{2}({\cal U}, A, B, x)\]

Left and Right Injections
\[\inl :≡ a \mapsto (0_\mathbf{2}, a) \\ \inr :≡ b \mapsto (1_\mathbf{2}, b)\]
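The coproduct-as-$\Sigma$-type construction can be sketched in Lean, using its built-in dependent pair `(x : α) × β x` (a sketch; `Two`, `pick`, `Coprod` are my names):

```lean
-- The two-element index type standing in for 𝟐.
inductive Two : Type where
  | zero : Two
  | one  : Two

-- The family x ↦ rec₂(U, A, B, x): A over 0₂, B over 1₂.
def pick (A B : Type) : Two → Type
  | .zero => A
  | .one  => B

-- A + B :≡ Σ (x : 𝟐), rec₂(U, A, B, x), as a dependent pair type:
-- the first component tags the side, the second lives in A or B.
def Coprod (A B : Type) : Type := (x : Two) × pick A B x

-- Left and right injections: inl a :≡ (0₂, a) and inr b :≡ (1₂, b).
def inl {A B : Type} (a : A) : Coprod A B := ⟨Two.zero, a⟩
def inr {A B : Type} (b : B) : Coprod A B := ⟨Two.one, b⟩
```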
the handout of which can be found on Jean Goubault-Larrecq’s personal website, as well as some notes on this website (in French) ↩

the fact that types are preserved under $𝛽$-reduction, also known as subject reduction ↩

whose axioms are:

- those of the elementary arithmetic $Q$, that is:
  - Theory of equality: $=$ is an equivalence relation (reflexivity, symmetry, transitivity) which is compatible with the ${\cal F, P}$-structure
  - Definition of $+$ with respect to the left argument:
    - $∀x, 0+x = x$
    - $∀x, ∀y, S(x)+y = S(x+y)$
  - Definition of $×$ with respect to the left argument:
    - $∀x, 0*x = 0$
    - $∀x, ∀y, S(x)*y = (x*y)+y$
  - $S$ is injective: $∀x, ∀y, S(x) = S(y) ⇒ x=y$
  - Every nonzero element is a successor: $∀x, x≠0 ⇒ ∃ y, x=S(y)$
  - Zero is not a successor: $∀x, ¬(S(x)=0)$
- First-order axiom schema of induction: for every formula $𝜑(x)$: \((𝜑(0) ∧ ∀x.\,(𝜑(x) ⟶ 𝜑(S(x)))) ⟶ ∀x.\, 𝜑(x)\) ↩


see Hubert Comon’s course for further information (in French) ↩