Lecture 6: Graphical Models and Inference
Lecturer: Pantelis Leptourgos
Dominant theories for the brain:
- Bayesian brain hypothesis: based on these ideas (same for hidden Markov models)
- sampling hypothesis: the brain uses samples to approximate the posterior
- predictive coding theory: the brain updates its beliefs based on the discrepancy between the evidence and the prediction (the prediction error)
Generative models and Inference
You can do almost anything you want just by using the graphical model. But let us first recall generative models and inference.
- Bayes' theorem:

$P(X \mid S) = \dfrac{P(S \mid X)\, P(X)}{P(S)}$, i.e. posterior $\propto$ likelihood × prior
We only have some low-level evidence about the objects in the world (sensory data): based on that, the brain tries to predict what could have caused this input by creating an internal model ⇒ generative model (learnt by the brain). Then the brain does inference, i.e. it inverts this model.
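As a toy numerical illustration of this inversion (all numbers below are made up), Bayes' rule over a discrete hidden cause $X$ given an observation $S$ is a one-line computation:

```python
import numpy as np

# Hypothetical discrete example: the hidden cause X has 3 possible values.
# All numbers are made up for illustration.
prior = np.array([0.5, 0.3, 0.2])        # P(X)
likelihood = np.array([0.1, 0.7, 0.4])   # P(S = s_obs | X), one entry per value of X

unnormalized = likelihood * prior              # P(S | X) P(X)
posterior = unnormalized / unnormalized.sum()  # P(X | S) by Bayes' theorem

print(posterior)  # sums to 1; mass shifts toward causes that explain the data
```

Note how the second cause, unlikely a priori, dominates the posterior once the evidence favors it.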
Today, we’ll focus on the graphical representation of this generative model:
digraph {
rankdir=TB;
X -> S [label=" P(S | X)"];
}
You can represent a whole bunch of problems with graphs, as above.
Graphical model: it’s a detailed representation of the joint probability:
Graphical models:
 Bayesian networks
 Markov Random Fields
 Factor graphs
Probabilistic Graphical Models
- Graphical model:

a graph whose nodes represent variables and whose edges represent statistical dependencies.
NB: you can represent any distribution as a graphical model.
- Conjugate prior:

when multiplied by the likelihood, the resulting posterior is of the same “kind” as the prior (ex: Gaussian distributions).
NB: we’ll most often use Gaussian and Discrete random variables.
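A classic instance of conjugacy (an assumed example, not worked through in the lecture) is the Beta prior with a Bernoulli likelihood: the posterior is again a Beta, and the update reduces to adding the observed counts to the prior pseudo-counts:

```python
# Minimal conjugacy check (assumed example): a Beta(alpha, beta) prior on a
# coin's bias, updated with Bernoulli observations, yields a Beta posterior --
# the same "kind" of distribution as the prior.
alpha, beta = 2.0, 2.0   # Beta prior pseudo-counts
heads, tails = 7, 3      # observed Bernoulli outcomes

# Conjugate update: add the data counts to the pseudo-counts.
alpha_post, beta_post = alpha + heads, beta + tails

print(alpha_post, beta_post)                  # Beta(9, 5)
print(alpha_post / (alpha_post + beta_post))  # posterior mean = 9/14
```

No integration is needed; that is exactly why conjugate priors are convenient.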
Why are they useful?
- better visualization
- properties of the joint distribution / computations made easier
- when it comes to computation: used wisely, graphical models can take you from exponential to linear cost
- biologically plausible solutions
Graphical models: Bayesian networks (BN)
Directed Acyclic Graphs (DAGs) representing causality, where
$x_1 ⟶ x_2$
means that $x_1$ causes $x_2$
Warning: you mustn’t have directed cycles! (otherwise: circular argument)
Ex: used for generative models
Constructing a Bayesian Network:
digraph {
rankdir=LR;
a -> b; a -> c;
b -> c;
}
NB:
- it’s indeed acyclic
- we could have used a different factorization
- interesting properties appear when we start removing links
Factorization
Given a BN, the joint distribution factorizes as a product of each variable conditioned on its parents:

$P(x_1, \ldots, x_M) = \prod_{i=1}^{M} P\big(x_i \mid \mathrm{pa}(x_i)\big)$

The problem with fully connected graphs is that they have no interesting properties. If you remove some links:
- you restrict the class of distributions
- you reduce the number of parameters
Ex:
digraph {
rankdir=LR;
x_1 -> x_2;
}
Factorization: $\underbrace{P(x_2 \mid x_1)}_{K_1 (K_2 - 1)}\underbrace{P(x_1)}_{K_1 - 1} = P(x_1, x_2) ⟶ K_1 (K_2 - 1) + (K_1 - 1) = K_1 K_2 - 1 \text{ parameters}$
digraph {
rankdir=LR;
x_1; x_2;
}
Factorization: $\underbrace{P(x_1)}_{K_1 - 1}\underbrace{P(x_2)}_{K_2 - 1} ⟶ (K_1 - 1) + (K_2 - 1) \text{ parameters}$
Likewise:
- Fully connected graph with $M$ variables: $K^M - 1$ parameters
- Chain $x_1 ⟶ ⋯ ⟶ x_M$: $(K-1) + (M-1)K(K-1)$ parameters, i.e. $O(MK^2)$: linear in $M$ instead of exponential
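These counts are easy to check with a couple of lines (the values of $K$ and $M$ are arbitrary):

```python
# Parameter counts for M discrete variables with K states each
# (standard counting; the joint table has one non-free parameter
# because probabilities sum to 1).
def full_joint_params(K, M):
    return K**M - 1                          # full table: exponential in M

def chain_params(K, M):
    # P(x_1) needs K-1 params; each P(x_i | x_{i-1}) needs K*(K-1).
    return (K - 1) + (M - 1) * K * (K - 1)   # linear in M

K, M = 10, 5
print(full_joint_params(K, M))  # 99999
print(chain_params(K, M))       # 9 + 4*10*9 = 369
```

Even for these small values, the chain needs roughly 300 times fewer parameters than the full joint.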
Conditional independence
Removing links introduces conditional independences:
EX1:
digraph {
rankdir=TB;
c -> a; c -> b;
}
Are $a$ and $b$ independent? Not in general.
But for a given $c$, they are conditionally independent: $P(a, b \mid c) P(c) = P(a, b, c) = P(a \mid c) P(b \mid c) P(c)$, hence $P(a, b \mid c) = P(a \mid c) P(b \mid c)$
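This tail-to-tail case can be verified numerically; the conditional tables below are made up, only the graph structure matters:

```python
import numpy as np

# Numeric check of the tail-to-tail case c -> a, c -> b (made-up tables):
# a and b are dependent marginally, but independent given c.
P_c = np.array([0.6, 0.4])
P_a_given_c = np.array([[0.9, 0.1],   # rows: value of c, columns: value of a
                        [0.2, 0.8]])
P_b_given_c = np.array([[0.7, 0.3],
                        [0.1, 0.9]])

# Joint P(a, b, c) = P(a|c) P(b|c) P(c)
joint = np.einsum('c,ca,cb->abc', P_c, P_a_given_c, P_b_given_c)

P_ab = joint.sum(axis=2)                      # marginal P(a, b)
P_a = P_ab.sum(axis=1); P_b = P_ab.sum(axis=0)
print(np.allclose(P_ab, np.outer(P_a, P_b)))  # False: dependent marginally

cond = joint[:, :, 0] / joint[:, :, 0].sum()  # P(a, b | c = 0)
print(np.allclose(cond, np.outer(cond.sum(1), cond.sum(0))))  # True: independent
```

The conditional factorizes exactly because, once $c$ is fixed, the joint is $P(a \mid c) P(b \mid c)$ by construction.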
EX2:
digraph {
rankdir=LR;
a -> c -> b;
}
Are $a$ and $b$ independent? No: the head-to-tail path $a ⟶ c ⟶ b$ is open.
Is there independence for a fixed $c$? Yes: observing $c$ blocks the path.
Ex: $a$ = tree, $c$ = leaf, $b$ = green
EX3:
digraph {
rankdir=LR;
a -> c;
b -> c;
}

- $a$ and $b$ are independent
- For a fixed $c$: $a$ and $b$ become dependent when conditioned on $c$ (“explaining away”)
⟹ D-separation theorem
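The head-to-head case can also be checked numerically (tables again made up): marginal independence holds, and conditioning on the common child $c$ creates dependence:

```python
import numpy as np

# Numeric check of the head-to-head case a -> c <- b (made-up tables).
P_a = np.array([0.3, 0.7])
P_b = np.array([0.5, 0.5])
P_c_given_ab = np.array([[[0.99, 0.01], [0.3, 0.7]],
                         [[0.4, 0.6],   [0.05, 0.95]]])  # indexed [a, b, c]

# Joint P(a, b, c) = P(a) P(b) P(c|a, b)
joint = np.einsum('a,b,abc->abc', P_a, P_b, P_c_given_ab)

P_ab = joint.sum(axis=2)
print(np.allclose(P_ab, np.outer(P_a, P_b)))  # True: independent marginally

cond = joint[:, :, 1] / joint[:, :, 1].sum()  # P(a, b | c = 1)
print(np.allclose(cond, np.outer(cond.sum(1), cond.sum(0))))  # False: now dependent
```

Summing out $c$ leaves $P(a)P(b)$, which is why removing the observation restores independence.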
Notion of Markov Blanket
Graphical models: Markov Random Fields
Undirected graphs whose edges represent soft constraints:
knowing $x_1$ incurs a constraint on $x_2$
We have theorems analogous to BN.
Ex: in computer vision, image denoising.
The MRF is a graph where each node is a pixel of the original image, on top of which you attach the pixels of the noisy image.
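The lecture does not fix a particular inference algorithm for this MRF; as one simple sketch, iterated conditional modes (ICM) greedily flips binary pixels to agree with their neighbours and with the noisy observation. All parameter values below are hypothetical:

```python
import numpy as np

# Binary image denoising on a grid MRF with pixels in {-1, +1}, using
# iterated conditional modes (ICM). beta/eta weights are hypothetical.
rng = np.random.default_rng(0)

clean = np.ones((8, 8), dtype=int)
clean[2:6, 2:6] = -1                      # a simple "true" image
noisy = np.where(rng.random(clean.shape) < 0.1, -clean, clean)  # flip ~10% of pixels

beta, eta = 1.0, 2.0   # smoothness vs. data-fidelity weights (assumed)
x = noisy.copy()
for _ in range(5):                        # a few full sweeps
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # sum of the 4-neighbourhood (missing neighbours at borders are skipped)
            neigh = sum(x[u, v] for u, v in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                        if 0 <= u < x.shape[0] and 0 <= v < x.shape[1])
            # pick the value that best agrees with neighbours and the noisy pixel
            x[i, j] = 1 if beta * neigh + eta * noisy[i, j] > 0 else -1

print((x != clean).mean())  # fraction of wrong pixels after denoising
```

ICM only finds a local optimum, but on this kind of piecewise-constant image it typically removes most isolated noise flips.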
Link with BN: a directed graph becomes an undirected one by “moralization”: marry the parents of each node (connect them) and drop the arrow directions.
Inference: message passing algorithms
Inference on a chain
Inference = marginalization (since the posterior is a marginal given an observation):

$P(x_M) = \sum_{x_1} \cdots \sum_{x_{M-1}} P(x_1, \ldots, x_M)$

Summing naively over all $K^{M-1}$ joint configurations ⟹ computational nightmare
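On a chain, the factorization rescues us: pushing each sum inside the product gives a forward message that is linear in $M$. A minimal sketch with made-up transition tables, comparing brute force against message passing:

```python
import itertools
import numpy as np

# Exact marginal of the last node of a chain x_1 -> x_2 -> ... -> x_M,
# computed two ways (random made-up tables).
K, M = 3, 6
rng = np.random.default_rng(1)
P1 = rng.random(K); P1 /= P1.sum()            # P(x_1)
T = rng.random((M - 1, K, K))
T /= T.sum(axis=2, keepdims=True)             # T[i, a, b] = P(x_{i+2}=b | x_{i+1}=a)

# Brute force: sum the joint over all K**M configurations (exponential cost).
brute = np.zeros(K)
for cfg in itertools.product(range(K), repeat=M):
    p = P1[cfg[0]]
    for i in range(M - 1):
        p *= T[i, cfg[i], cfg[i + 1]]
    brute[cfg[-1]] += p

# Message passing: propagate a forward message, one matrix product per link.
msg = P1.copy()
for i in range(M - 1):
    msg = msg @ T[i]                          # sums out the previous node

print(np.allclose(brute, msg))  # True: same marginal at linear cost
```

The brute-force loop touches $K^M = 729$ configurations here, while message passing does $M-1 = 5$ small matrix products.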
Exercise (cf. Exercise Sheet)
digraph {
rankdir=TB;
m -> r; a -> r;
r -> i;
}
1. Factorize the BN
2. If none of the variables is observed, show that a mosquito bite is independent of an alien abduction. What happens if we observe an itching sensation?

No variable observed: head-to-head link ⟹ the path $m ⟶ r ⟵ a$ is blocked ⇒ independence

Itching sensation observed: $m$ and $a$ are no longer independent (observing $i$, a descendant of the collider $r$, unblocks the path)
3. Consider a particular instance of such a graph. A mosquito bite and an alien abduction might have happened or not (\lbrace 1,0 \rbrace), independently of each other, and with prior probabilities:
Given the state of the MB and AA, a red spot appears with probabilities given by
a. What is the probability that an alien abduction really happened, if we observe a red spot?
it’s larger than the prior $P(a=1)$, because now we have some evidence in favor of an abduction
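The actual probability tables are on the exercise sheet and not reproduced in these notes, so the numbers below are stand-ins; the structure of the computation is the point:

```python
import numpy as np

# Posterior of an alien abduction given a red spot, P(a=1 | r=1).
# The priors and the CPT below are hypothetical placeholders.
P_m1, P_a1 = 0.3, 0.01                # stand-in priors P(m=1), P(a=1)
P_r1 = np.array([[0.05, 0.8],         # stand-in P(r=1 | m, a), indexed [m, a]
                 [0.9,  0.99]])

P_m = np.array([1 - P_m1, P_m1])
P_a = np.array([1 - P_a1, P_a1])

# P(r=1, m, a) = P(r=1|m,a) P(m) P(a); then condition on r=1 and sum out m.
joint_r1 = P_r1 * np.outer(P_m, P_a)
posterior_a1 = joint_r1[:, 1].sum() / joint_r1.sum()

print(posterior_a1 > P_a1)  # True: the red spot raises the abduction probability
```

Whatever the actual numbers, as long as a red spot is more likely under an abduction, conditioning on $r=1$ pushes the posterior above the prior.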
Factor graph:
graph {
f_a[shape=box];
f_m[shape=box];
f_am[shape=box];
f_r[shape=box];
a -- f_a;
m -- f_m;
m -- f_am; a -- f_am;
f_am -- r;
r -- f_r;
f_r -- i;
}