Lecture 5: Language-oriented MELL
Teacher: Delia Kesner
From $λ$-terms to MELL Proof nets
If you go directly from the $λ$-calculus to proof nets, you cannot stick to Girard’s original proof nets (it is more complicated), so to use Girard’s nets we go through an intermediate low-level language (in which substitutions are performed in detail):
$λ$-calculus ⟶ Intermediate language ⟶ MELL Proof nets
Intermediate languages:
- $λs$-calculus
- $λlxr$-calculus: explains proof nets in a more algebraic way
$α$-conversion may become problematic in and of itself: it is subtle (to avoid trouble: rename variables every time)
Natural deduction (ND) for minimal intuitionistic logic: Curry-Howard isomorphism
\[(λx.t)u ⟶_β t \lbrace x ← u\rbrace\]Here, substitution happens in the meta-language: but if you want to implement it, you need to make it explicit.
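To see what “substitution in the meta-language” means concretely, here is a minimal Haskell sketch (not from the lecture; the names `Term`, `fv`, `fresh`, `subst`, `betaRoot` are illustrative choices) of capture-avoiding substitution implemented as an ordinary function over $λ$-terms:

```haskell
import Data.List (union, delete)

-- Plain λ-terms; the substitution below is a meta-level operation
-- (a Haskell function), not part of the term syntax.
data Term = Var String | App Term Term | Lam String Term
  deriving Show

-- Free variables.
fv :: Term -> [String]
fv (Var x)   = [x]
fv (App t u) = fv t `union` fv u
fv (Lam x t) = delete x (fv t)

-- A name not occurring in the given list (hypothetical helper).
fresh :: [String] -> String
fresh used = head [v | n <- [0 :: Int ..], let v = "x" ++ show n, v `notElem` used]

-- Capture-avoiding substitution  t { x <- u }.
subst :: String -> Term -> Term -> Term
subst x u (Var y)     | y == x    = u
                      | otherwise = Var y
subst x u (App t1 t2) = App (subst x u t1) (subst x u t2)
subst x u (Lam y t)
  | y == x        = Lam y t                            -- x is shadowed: stop
  | y `elem` fv u = let y' = fresh (x : fv u ++ fv t)  -- α-rename to avoid capture
                    in Lam y' (subst x u (subst y (Var y') t))
  | otherwise     = Lam y (subst x u t)

-- β-reduction at the root: (λx.t) u ⟶β t { x <- u }.
betaRoot :: Term -> Maybe Term
betaRoot (App (Lam x t) u) = Just (subst x u t)
betaRoot _                 = Nothing
```

For instance, `subst "y" (Var "x") (Lam "x" (App (Var "x") (Var "y")))` α-renames the binder and returns `Lam "x0" (App (Var "x0") (Var "x"))`, i.e. $λx'.x'x$ rather than the wrong $λx.xx$.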
Properties expected from a functional programming language:
- confluence for $β$-reduction
    - we want to have it as well for the $λs$-calculus, etc.
- subject reduction: type preservation
    - key point: in the $λ$-calculus, type preservation takes place in the same environment (in LL, this situation will change)
- strong normalization: every reduction sequence from $t$ terminates
    - NB: there is a weaker notion, normalization for a fixed reduction strategy
    - we want to keep strong normalization in our implementations
Let construction added to the $λ$-calculus grammar: $t[x/u]$ (which can be thought of as $\texttt{let } x = u \texttt{ in } t$).
Operational Semantics:
\[(λx.t)u ⟼_B t[x/u]\\ t[x/u] ⟼_{subs} t \lbrace x ← u\rbrace\]The substitution is not performed right away, but this is not as silly as it seems: in LL,
- the first one is multiplicative cut-elimination
- the second one is exponential cut-elimination
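A corresponding sketch for the intermediate language, again hypothetical and not from the lecture: terms are extended with an explicit substitution constructor (`ESub t x u` stands for $t[x/u]$), the B-rule merely creates it, and a separate `subs` step fires it via a naive meta-level substitution (assuming, in the spirit of “rename variables every time”, that bound names are kept distinct from the free variables of the argument):

```haskell
-- Terms with an explicit substitution constructor: ESub t x u  ≈  t[x/u].
data Term = Var String | App Term Term | Lam String Term
          | ESub Term String Term
  deriving Show

-- Naive meta-level substitution t { x <- u }; for simplicity it assumes all
-- bound names are distinct from the free names of u (Barendregt convention).
meta :: String -> Term -> Term -> Term
meta x u (Var y)      = if y == x then u else Var y
meta x u (App t1 t2)  = App (meta x u t1) (meta x u t2)
meta x u (Lam y t)    = if y == x then Lam y t else Lam y (meta x u t)
meta x u (ESub t y v) = if y == x then ESub t y (meta x u v)
                        else ESub (meta x u t) y (meta x u v)

-- Root steps of the operational semantics:
--   (λx.t) u  ⟼_B     t[x/u]        (multiplicative cut-elimination)
--   t[x/u]    ⟼_subs  t { x <- u }  (exponential cut-elimination)
stepB, stepSubs :: Term -> Maybe Term
stepB (App (Lam x t) u) = Just (ESub t x u)
stepB _                 = Nothing
stepSubs (ESub t x u)   = Just (meta x u t)
stepSubs _              = Nothing
```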
$α$-conversion happens everywhere:
\[λx.xy =_α λx'.x'y\\ (yx)[x/z] =_α (yx')[x'/z]\]
NB: $α$-conversion is crucial:
\[(λx. xy) \lbrace y / x\rbrace ≠_α λx.xx\\ (λx'. x'y) \lbrace y / x\rbrace =_α λx'.x'x\]
Milner’s calculus (linear substitution calculus)
- Contexts: a term with a “hole” $\square$, where another term can be plugged
- Linear: one replacement of a variable occurrence at a time (see the sketch after the example below)
\[xyx = (xy\square)⟦x⟧ = (\square yx)⟦x⟧\]
and
\[(xy)x = (\square x)⟦xy⟧\]
Example:
\[(λx. xyx)u ⟼_B (xyx)[x/u]\\ \underbrace{⟼_{cont}}_{\text{contraction}} (xyu)[x/u]\\ ⟼_{cont} (uyu)[x/u]\\ \underbrace{⟼_{gc}}_{\text{garbage collector}} (uyu)\]
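A hypothetical Haskell sketch of this linearity (one occurrence rewritten per step), under the same convention on bound names as before: `replaceOne` finds a single free occurrence of $x$ (here the leftmost one; the calculus allows any) and replaces it, which amounts to choosing a context $C$ with $C⟦x⟧ = t$ and rewriting $t[x/u]$ to $C⟦u⟧[x/u]$.

```haskell
-- Terms with explicit substitutions, as before (hypothetical constructor names).
data Term = Var String | App Term Term | Lam String Term
          | ESub Term String Term
  deriving Show

-- Replace exactly ONE free occurrence of x by u, if there is one.
-- Nothing means: no free occurrence of x was found.
replaceOne :: String -> Term -> Term -> Maybe Term
replaceOne x u (Var y)
  | y == x    = Just u
  | otherwise = Nothing
replaceOne x u (App t1 t2) =
  case replaceOne x u t1 of
    Just t1' -> Just (App t1' t2)            -- this sketch picks the leftmost occurrence
    Nothing  -> App t1 <$> replaceOne x u t2
replaceOne x u (Lam y t)
  | y == x    = Nothing                      -- x is bound here, no free occurrence
  | otherwise = Lam y <$> replaceOne x u t
replaceOne x u (ESub t y v)
  | y == x    = ESub t y <$> replaceOne x u v
  | otherwise = case replaceOne x u t of
      Just t'  -> Just (ESub t' y v)
      Nothing  -> ESub t y <$> replaceOne x u v

-- One "cont" step at the root: t[x/u] rewrites one occurrence of x inside t
-- and keeps the explicit substitution around for the remaining occurrences.
stepCont :: Term -> Maybe Term
stepCont (ESub t x u) = (\t' -> ESub t' x u) <$> replaceOne x u t
stepCont _            = Nothing
```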
$λs$-calculus
Syntax: still the same
But now we work not only modulo $α$-conversion, but also modulo:
\[t[x/v][y/u] ≡ t[y/u][x/v]\]if $x ∉ fv(u)$ and $y ∉ fv(v)$
Recall the substitution lemma (for $x ≠ y$ and $x ∉ fv(u)$): \(t \overbrace{\lbrace x / v\rbrace}^{\text{meta-operation}} \lbrace y / u\rbrace = t \lbrace y / u \rbrace \lbrace x/v \lbrace y / u\rbrace \rbrace\)
If $y ∉ fv(v)$: \(t \lbrace x / v\rbrace \lbrace y / u\rbrace = t \lbrace y / u \rbrace \lbrace x/v \rbrace\)
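A tiny worked instance, with $t = xy$, $v = y$, $u = z$ (so $x ≠ y$ and $x ∉ fv(u)$, while $y ∈ fv(v)$): both sides of the lemma give the same term,
\[(xy) \lbrace x/y\rbrace \lbrace y/z\rbrace = (yy)\lbrace y/z\rbrace = zz \qquad (xy)\lbrace y/z\rbrace \lbrace x/\, y\lbrace y/z\rbrace\rbrace = (xz)\lbrace x/z\rbrace = zz\]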
So in a way: we’re encoding the substitution lemma in the language. Actually, we’re splitting it into 3 cases:
- $y ∉ fv(v)$: not implemented as a rewriting rule, because oriented in either direction it would not terminate; it is kept as the commutation equation above
- $y ∈ fv(v)$ and $y ∈ fv(t)$
- $y ∈ fv(v)$ and $y ∉ fv(t)$
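Concretely, the last two cases give rise to composition rewriting rules of (roughly) the following shape, which are exactly the steps used in the proof of Full Composition below, while the first case remains the equation above:
\[t[x/v][y/u] ⟶ t[y/u][x/v[y/u]] \quad \text{if } y ∈ fv(v), y ∈ fv(t)\\ t[x/v][y/u] ⟶ t[x/v[y/u]] \quad \text{if } y ∈ fv(v), y ∉ fv(t)\]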
One-step reduction relation: reduction can happen anywhere inside terms (closure under contexts)
Full Composition (FC): A calculus with explicit substitutions $R$ has the Full Composition property iff \(t[x/u] ⟶^\ast_R t \lbrace x ← u \rbrace\) for all terms $t,u$
NB:
- the $λσ$-calculus (by Curien, Levy, Amadio) does not have this property
- $t[y/v] \lbrace x / u\rbrace = t \lbrace x/u\rbrace [y / v \lbrace x / u\rbrace]$
- example of FC: \(t = (y[y/x])[x/u] ⟶^\ast y[y/u]\): no need to destruct $y[y/x]$! Whereas in $λx$ for instance: \((y[y/x])[x/u] ⟶_{λx} x[x/u] ⟶_{λx} u ≠ (y[y/x]) \lbrace x/u \rbrace\)
Sketch of the proof: we show \(t[x/u] ⟶^\ast t \lbrace x/u \rbrace\) by induction on $t$; the only interesting cases are:
- $t = (t_1 t_2)[x/u]$: then three cases
    - $x ∈ fv(t_1)$ only: $⟶ t_1 [x/u]\, t_2 ⟶_{IH}^\ast t_1 \lbrace x/u\rbrace\, t_2 = (t_1 t_2) \lbrace x/u\rbrace$
    - $x ∈ fv(t_2)$ only: $⟶ t_1\, t_2[x/u] ⟶_{IH}^\ast t_1\, t_2 \lbrace x/u\rbrace = (t_1 t_2) \lbrace x/u\rbrace$
    - $x ∈ fv(t_1)$ and $x ∈ fv(t_2)$: $⟶ t_1[x/u]\, t_2[x/u]$: similar
- $t = t_1 [y/t_2] [x/u]$: then three cases
    - $x ∉ fv(t_2)$: $≡ t_1 [x/u][y/t_2] ⟶_{IH}^\ast t_1 \lbrace x/u \rbrace [y/t_2] = (t_1 [y/t_2]) \lbrace x/u\rbrace$
    - $x ∈ fv(t_2), x ∈ fv(t_1)$: $⟶ t_1 [x/u] [y/t_2 [x/u]] ⟶_{IH}^\ast t_1 \lbrace x/u\rbrace [y/ t_2 \lbrace x/u\rbrace] = ⋯$
    - $x ∈ fv(t_2), x ∉ fv(t_1)$: $⟶ t_1 [y/t_2 [x/u]] ⟶_{IH}^\ast t_1 [y/ t_2 \lbrace x/u\rbrace] = ⋯$
Rewriting rules: $s$ (confluent) and $B$.
- $s$ is confluent and terminating ⟹ every term has a unique $s$-normal form
- $B$ is terminating
Problem: putting two terminating systems together doesn’t always yield a terminating system! (Toyama counter-example)
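Recall Toyama’s counterexample: the two systems
\[R_1 = \lbrace f(0,1,x) ⟶ f(x,x,x)\rbrace \qquad R_2 = \lbrace g(x,y) ⟶ x,\; g(x,y) ⟶ y\rbrace\]
are each terminating, but their union admits the loop
\[f(0,1,g(0,1)) ⟶ f(g(0,1),g(0,1),g(0,1)) ⟶^\ast f(0,1,g(0,1)) ⟶ ⋯\]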
Here, with $s$ and $B$ together, we recover the $λ$-calculus, and it is not terminating either!
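For instance, the usual looping term is simulated:
\[Ω = (λx.xx)(λx.xx) ⟼_B (xx)[x/λx.xx] ⟼^\ast_s (λx.xx)(λx.xx) ⟼_B ⋯\]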
Simulation:
\[t ⟶_β t' ⟹ t ⟶_{λs}^+ t'\]
For the root case: \((λx. u)v ⟶_β u \lbrace x/v \rbrace\) is simulated by \((λx.u)v ⟶_B u[x/v] \underbrace{⟶^\ast_s}_{\text{FC}} u \lbrace x/ v\rbrace\).
Projection:
\[t ⟶_{λs} t' ⟹ \texttt{proj } t ⟶^\ast_β \texttt{proj } t'\]
- $\texttt{proj } t = \texttt{proj } t'$:
    - if $t ⟶_s t'$: $\texttt{proj } t = \underbrace{s(t)}_{s\text{-normal form}} = s(t') = \texttt{proj } t'$
    - if $t ⟶_B t'$: \(\underbrace{x[z/(λy. u)v]}_{t} ⟶_B \underbrace{x[z/u[y/v]]}_{t'} ⟹ \texttt{proj } t = x = \texttt{proj } t'\)
- $\texttt{proj } t ⟶_β \texttt{proj } t'$:
    - $t = (λy.u)v ⟶_B u[y/v] = t'$: \(\texttt{proj } t = (λy. \texttt{proj } u)\, \texttt{proj } v ⟶_β (\texttt{proj } u) \lbrace y / \texttt{proj } v\rbrace = \texttt{proj } t'\)
- $\texttt{proj } t ⟶_β^+ \texttt{proj } t'$:
    - $t = (xx)[x/ (λz.z)w] ⟶_B (xx)[x/ z[z/w]] = t'$: \(\texttt{proj } t = ((λz.z)w)\,((λz.z)w) ⟶_β ⟶_β ww = \texttt{proj } t'\)
With these two properties (simulation and projection), we can show confluence of $λs$ with the interpretation method (i.e. using the confluence of another calculus, here the $λ$-calculus):
Suppose $t ⟶_{λs}^\ast u$ and $t ⟶_{λs}^\ast v$.
Then, by confluence of $λ$-calculus:
\[\begin{xy} \xymatrix{ & \texttt{proj } t \ar[ld]^{β}_\ast \ar[rd]_{β}^\ast & \\ \texttt{proj } u \ar@{.>}[rd]_{β}^\ast & & \texttt{proj } v \ar@{.>}[ld]^{β}_\ast\\ & s & } \end{xy}\]So:
\[\begin{xy} \xymatrix{ & t \ar[ld]^{λs}_\ast \ar[rd]_{λs}^\ast & \\ u \ar@{->}[dd]^{s}_\ast & & v \ar@{->}[dd]_{s}^\ast\\ & \texttt{proj } t \ar[ld]^{β}_\ast \ar[rd]_{β}^\ast & \\ \texttt{proj } u \ar@{.>}[rd]_{β}^\ast \ar@/_2pc/[rd]_{λs}^\ast & & \texttt{proj } v \ar@{.>}[ld]^{β}_\ast \ar@/^2pc/[ld]^{λs}_\ast\\ & s & } \end{xy}\]because $s(u) = \texttt{proj } u$
Preservation of Strong Normalization
Typable ⟹ Strongly Normalizing
Ex: $λx. xx$ untypable
\[t ⟶^\ast_β λx.xx\]
Such a $t$ can be strongly normalizing for $β$ yet untypable, so we cannot use “typable ⟹ strongly normalizing”; but if $⟶_{λs}$ preserves $SN(⟶_β)$, then we can still ensure that $t$ is SN for $λs$.
Proposition: if $t ∈ SN(β)$, then $t ∈ SN(λs)$
NB:
- Melliès showed that this is not true for the $λσ$-calculus
- $λx$ has preservation of strong normalization, but is not confluent on open terms (which matters for proof assistants)
Define $λxc ≝ λx ∪ \lbrace Comp \rbrace$, where:
\[t[x/u][y/v] ⟶_{Comp} t[x/u[y/v]] \text{ if } y ∉ fv(t)\]
- $λx$ enjoys PSN
- BUT $λxc$ does not enjoy PSN: there exists $t$ s.t.
    - $t ∈ SN(β)$
    - $t ∉ SN(λxc)$
Perpetual reduction strategy: if a term is not strongly normalizing, then the perpetual strategy doesn’t terminate on this term.
Typing rule for the $λs$-calculus:
\[\cfrac{x:C ⊢ x:C}{x:C ⊢ λy.x: D → C}\]
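For the new constructor $t[x/u]$ itself, the typing rule is standardly of the form of a cut:
\[\cfrac{Γ ⊢ u : A \qquad Γ, x:A ⊢ t : B}{Γ ⊢ t[x/u] : B}\]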