Lecture 1: Introduction

Teacher: Paul-André Melliès

What happens when you plug different pieces of programs together?

  • Semantics: looks like chemistry: take a programming language ⟶ decompose it into many meaningful pieces

  • Blurring/obfuscating information so as not to give access to personal data ⇐ comes from algorithmics

Semantics: versatile field where the idea of programming is turned into algebra-based scientific methods

Ex: Haskell has a lot to do with functors, monads, etc… ⟹ category theory

Teaching plan: how are categories related to linear logic? (unified picture → purpose = to unify areas of theoretical CS)

Linear logic: changed the landscape of how we understand programs. Linear logic vs. Separation logic

Separation logic:

proving that low-level programs are correct (with locks, etc…)


types are interpreted as cells/domains/coherence spaces of programs


mathematical investigation of prog. languages and of their compilation schemes

Functional and imperative languages: based on $λ$-calculus

  • PCF: $λ$-calculus, higher-order typing, recursion
  • Algol: states
  • ML: exceptions, references
  • OCaml: modules, objects, …

Rust: you have more information about the data you manipulate so that you can prove properties of programs

Aim: preservation of meaning during compilation ⇐ compositional/modular techniques

For big programs: very hard to use verification ⟹ study little chunks and then glue them together (composition)

Weak memory models:

describe how microprocessors work

A programming language = lines of code & meaning

\[\underbrace{\begin{xy} \xymatrix{ \cdot \ar[r]^b \ar@{<-}[d]_a & \cdot \ar@{<-}[d]^a \\ \cdot \ar[r]_b & \cdot } \end{xy}}_{\text{Syntax (triangulation)}} ⟼ \underbrace{\text{ a torus }}_{\text{Semantics}}\]

what corresponds – for programming – to relativity in physics (change the world representation).

⟶ homology: detect in programs the structure that emerges from lines of code

Girard in the 80’s: we can interpret programs based on their terminal states (ex: in automata: terminal states to check if words are accepted)

⟶ procedure to turn a program into its terminal states

What the syntax is computing = an intersection between the terminal states of the program and those of the environment (describing interactions as intersections of trajectories ⟶ very geometric: a program becomes a manifold of possible traces of executions)

Other example: speaking about recursion in PL (programming languages)

Recursion looks like a feedback loop (fixed point)

Matrix trace has a lot to do with feedback loops

Trace in linear algebra can be generalized:

\[\mathrm{Tr}_{A, B}^{U} \quad \cfrac{A \otimes U ⟶ B \otimes U}{A ⟶ B}\]
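As a concrete (and executable) instance, one can take the category of sets and partial functions with disjoint union instead of $\otimes$ as the monoidal structure: there the trace operator is literal feedback, i.e. iteration. A minimal Python sketch under that assumption; the tags `'A'`, `'B'`, `'U'` and the summation example are illustrative choices, not the lecture's construction:

```python
def trace(f):
    """Tr: (A + U -> B + U)  ⟼  (A -> B), by iterating the U feedback wire."""
    def traced(a):
        out = f(('A', a))          # enter through the A wire
        while out[0] == 'U':       # as long as we land on U, feed it back
            out = f(out)
        return out[1]              # exit through the B wire
    return traced

# Example morphism A + U -> B + U computing the sum 1 + 2 + ... + a,
# with U = (counter, accumulator):
def step(x):
    tag, v = x
    if tag == 'A':
        return ('U', (v, 0))       # start the loop
    n, acc = v
    if n == 0:
        return ('B', acc)          # exit into B
    return ('U', (n - 1, acc + n)) # one more turn around the loop

sum_to = trace(step)
```

Each pass through the `'U'` wire is one turn around the feedback loop; with $\otimes$ in linear algebra, the same rule specialises to the usual matrix trace.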

In the cartesian closed category $Set$:

\[\begin{xy} \xymatrix{ A \ar[r]^f \ar[d]_{Δ_A} & B \ar[d]^{Δ_B} \\ A × A \ar[r]_{f × f} & B × B } \end{xy}\]

where $Δ$ is the diagonal:

\[\begin{cases} A ⟶ A×A \\ a \mapsto (a, a) \end{cases}\]
digraph {
    // left-hand side: f followed by the duplicator Δ_B
    a1[label="" shape=none];
    b1[label="" shape=none];
    b2[label="" shape=none];
    a1 -> f[label="A"];
    f -> "Δ_B"[label="B"];
    "Δ_B" -> b1[label="B"];
    "Δ_B" -> b2[label="B"];
}


digraph {
    // right-hand side: the duplicator Δ_A followed by two copies of f
    a1[label="" shape=none];
    b1[label="" shape=none];
    b2[label="" shape=none];
    a1 -> "Δ_A"[label="A"];
    "Δ_A" -> f1[label="A"];
    "Δ_A" -> f2[label="A"];
    f1 -> b1[label="B"];
    f2 -> b2[label="B"];
}

The diagonal is a duplicator

NB: this is the way contractions duplicate boxes in linear logic

If we have duplication and feedbacks, then we have fixed-points
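This slogan can be made concrete: self-application provides the duplication, and looping the result back into the function provides the feedback. A minimal Python sketch using the strict-order Z fixed-point combinator; the factorial example is an illustrative assumption:

```python
# Z combinator: a fixed point built from duplication (x(x)) plus feedback.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Tie the knot: recursion without any recursive definition.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

Here `x(x)` is the duplication and handing the resulting function back to `f` is the feedback loop; `Z` hides the self-application under a lambda so that it also terminates in a strict language.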

cf. Christian Kassel’s “Quantum Groups” book (Springer)

Domain Semantics

Domain theory: how finite “beings” interact with infinite ones

Girard’s story: when you’re in Roma waiting for the bus: is it worth waiting longer, or had you better walk back home? (⇒ semi-decidable)

Key idea: the semantics of a program

\[P: A ⟶ B\]

is a function

\[[P]: [A] ⟶ [B]\]

from the domain of inputs to the domain of outputs.
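One way this cashes out, as a sketch: a recursive program denotes the least fixed point of a continuous functional on a domain of partial functions, reached as the limit of finite approximations (Kleene iteration). The factorial functional and the dict encoding below are illustrative assumptions:

```python
# A partial function on naturals is modelled as a dict; ⊥ is the empty dict.
def F(g):
    """Functional for: fact(n) = 1 if n == 0 else n * fact(n - 1)."""
    new = {}
    for n in range(10):
        if n == 0:
            new[n] = 1
        elif n - 1 in g:           # defined only where g already answers
            new[n] = n * g[n - 1]
    return new

# Kleene chain: ⊥ ⊑ F(⊥) ⊑ F(F(⊥)) ⊑ …  Each step defines one more input.
approx = {}
for _ in range(10):
    approx = F(approx)
```

The finite approximants are the “finite beings”; the infinite object (the full factorial function) only appears in the limit of the chain.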

Game Semantics

Key idea: A program $P: A ⟶ B$

is interpreted as an interactive strategy:

\[[P]: [A] \multimap [B]\]

which plays on the input game $A$ and output game $B$.

The meaning of a program is an automaton!

Game semantics = idealized and compositional compilation

Example: Game to be won: $A ∨ ¬ A$: how is it won?

Logic is interactive: using games, the negation of a formula is the formula your opponent sees.

$∨$: we only want to win on one of the boards $A$ or $¬A$ ($⅋$ in linear logic). From our opponent’s point of view, he/she wants to win on both boards ($\otimes$ in linear logic). All we have to do is replay the opponent’s move on the other board ⟹ we will win on one of the boards (copycat strategy = the identity).
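A toy sketch of the copycat strategy in Python; representing a move as a (board, content) pair is an assumption made purely for illustration:

```python
# Copycat on A ⅋ ¬A: whatever Opponent plays on one board,
# replay the very same move on the other board.
def copycat(move):
    board, content = move
    other = {'A': 'not-A', 'not-A': 'A'}[board]
    return (other, content)
```

Since every position reached on one board is mirrored on the other, the strategy wins on (at least) one of the two boards; categorically, copycat is the identity morphism.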

cf. Borel games in set theory

The evaluation of a program $P$ against its environment $E$ may be understood as the interactive exploration of $P$ (resp. $E$) by $E$ (resp. $P$).

Evaluation = interactive pattern matching

$β$-reduction is just a sequence of interactions between two programs trying to “know each other”.

Revolving around cut elimination:

  • Geometry: $λ$-calculus and natural deduction
  • Algebra: Cartesian closed categories
  • Static: Scott models
  • Dynamic: Concrete data structures
  • Compilation: Krivine machine and Categorical Abstract Machine (CAM)
  • Syntax: Sequent calculus

Idea of Curien and Berry (giving birth to the CAM): instead of interpreting $λ$-terms as functions, interpret them as compiled code

Frege: natural numbers are inherent to us, as they mean “doing something a certain number of times” ⟶ gave birth to Church numerals

  • zero: erases the argument
  • one: linear
  • two: duplicates the argument
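These three behaviours can be read off directly from a λ-calculus encoding; a sketch in Python lambdas, where the decoding helper `to_int` is an assumption added for inspection:

```python
# Church numerals: the number n is "apply f n times to x".
zero = lambda f: lambda x: x            # erases the argument f
one  = lambda f: lambda x: f(x)         # uses f exactly once: linear
two  = lambda f: lambda x: f(f(x))      # duplicates f
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Decode back to a machine integer by counting applications.
to_int = lambda n: n(lambda k: k + 1)(0)
```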

Linear $λ$-calculus

Idea: extract the syntactic atoms from the existing molecules and build a Mendeleev table out of them.

  • $A ⊸ B$: use the input only once
  • $!A ⊸ B$: use an “infinite bag” of $A$’s as an input

Linear shift:

Revolving around cut elimination:

  • Geometry: $λ$-calculus and natural deduction ⟶ Proof nets
  • Algebra: Cartesian closed categories ⟶ Monoidal categories
  • Static: Scott models ⟶ Denotational semantics
  • Dynamic: Concrete Data Structures (CDS) ⟶ Game semantics
  • Compilation: Krivine machine and CAM ⟶ Abstract machines
  • Syntax: Sequent calculus

$λ$-terms interpretation:

  • static: domains seen as coherence spaces
  • dynamic: CDSs seen as dialogue games
\[A ⊸ B = \underbrace{B^⊥ ⊸ A^⊥}_{\text{program from continuations/environments of } B \text{ to cont. of } A}\]
  • $A ⊸ B$: \(1 ⟶ A \overset{f}{⟶} B\)
  • $B^⊥ ⊸ A^⊥$: \(A \overset{f}{⟶} B \overset{K}{⟶} ⊥\)
\[A ⊸ B ≅ A^⊥ ⅋ B ≅ (A ⊗ B^⊥)^⊥\]
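The contrapositive reading of the arrow used above follows from involutive negation ($A^{⊥⊥} ≅ A$) and commutativity of $⅋$:

\[B^⊥ ⊸ A^⊥ \;=\; (B^⊥)^⊥ ⅋ A^⊥ \;≅\; B ⅋ A^⊥ \;≅\; A^⊥ ⅋ B \;=\; A ⊸ B\]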

Semantics today

  • semantics of low-level languages
  • concurrency
  • resource allocation, side effects, complexity
  • new generation of languages and proof assistants:

    • Dream: being able to use effects in proof assistants (ex: when you prove something at the board, you use effects)
  • Realizability models of Zermelo-Fraenkel set theory
  • Homotopy Type Theory
  • Knot theory and Physics
