Final presentation: Bayesian Machine Learning via Category Theory

Main article: Bayesian Machine Learning via Category Theory (J. Culbertson & K. Sturtz)

Outline

Category of conditional probabilities $𝒫$:
  • objects: countably generated measurable spaces $(X, Σ_X)$

  • morphisms: Markov kernels $T: \underbrace{(X, Σ_X)}_{\text{for brevity, denoted by } X} ⟶ (Y, Σ_Y)$, i.e. functions $\begin{cases} Σ_Y × X &⟶ [0, 1] \\ (B, x) &⟼ T(B \mid x) \end{cases}$ such that:

    1. for all $B ∈ Σ_Y$, $T_B ≝ T(B \mid \bullet): X ⟶ [0, 1]$ is measurable
    2. for all $x ∈ X$, $T_x ≝ T(\bullet \mid x): Σ_Y ⟶ [0, 1]$ is a perfect probability measure on $Y$, i.e. a probability measure $ℙ: Σ_Y ⟶ [0, 1]$ such that for every measurable function $f: Y ⟶ ℝ$, there exists a Borel set $E ⊆ f(Y)$ such that $ℙ(f^{-1}(E)) = 1$
  • composition of arrows: if $X \overset{T}{⟶} Y \overset{U}{⟶} Z$, then $U \circ T : \begin{cases} Σ_Z × X &⟶ [0, 1] \\ (C, x) &⟼ (U \circ T)(C \mid x) ≝ 𝔼_{T_x}(U_C) = \int_{y ∈ Y} U(C \mid y) \, {\rm d}T_x(y) \end{cases}$
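When $X$, $Y$, $Z$ are finite spaces with their discrete σ-algebras, a Markov kernel is just a row-stochastic matrix and the composition integral degenerates to a finite sum, i.e. matrix multiplication (the Chapman–Kolmogorov rule). A minimal NumPy sketch, with made-up spaces and kernel entries for illustration:

```python
import numpy as np

# Finite spaces: X = {x0, x1}, Y = {y0, y1, y2}, Z = {z0, z1}.
# A kernel T: X -> Y is a row-stochastic matrix T[i, j] = T({y_j} | x_i).
T = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.1, 0.3]])   # kernel X -> Y
U = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.5, 0.5]])        # kernel Y -> Z

# Composition (U ∘ T)(C | x) = Σ_y U(C | y) · T({y} | x):
# the integral over Y is a finite sum, i.e. a matrix product.
UT = T @ U                        # kernel X -> Z

# Each row of the composite is still a probability measure on Z.
assert np.allclose(UT.sum(axis=1), 1.0)
print(UT)  # → [[0.53 0.47]
           #    [0.73 0.27]]
```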

Category of measurable spaces $Meas$:
  • objects: measurable spaces $(X, Σ_X)$

  • morphisms: measurable functions $f: X ⟶ Y$
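Any measurable function $f: X ⟶ Y$ induces a deterministic Markov kernel $δ_f(B \mid x) ≝ 1_B(f(x))$, i.e. the Dirac measure at $f(x)$, relating $Meas$ to $𝒫$. A minimal sketch in the finite setting (the function below is made up for illustration):

```python
import numpy as np

# Hypothetical finite example: f: X -> Y with X = {x0, x1, x2}, Y = {y0, y1}.
f = [1, 0, 1]   # f(x_i) = y_{f[i]}

# Deterministic kernel delta_f(B | x) = 1 if f(x) ∈ B else 0:
# as a matrix, row x is the Dirac (point-mass) measure at f(x).
delta_f = np.zeros((3, 2))
for x, y in enumerate(f):
    delta_f[x, y] = 1.0

print(delta_f)  # → [[0. 1.]
                #    [1. 0.]
                #    [0. 1.]]
```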
