Lecture 8: Coding and computing with balanced spiking networks
Lecturer: Sophie Denève
Cortical spike trains
Spike trains: highly variable ⇒ really hard to guess if there has been a stimulus based on ONE spike train
Spike-count variance vs. spike-count mean ⟶ approximately linear (as expected from a Poisson process)
\[\text{Probability to fire during } dt: \underbrace{f}_{\text{firing rate}}\, dt\]
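A quick numerical check of both points above (a minimal sketch; the rate, bin size, and trial count are arbitrary choices, not values from the lecture):

```python
# Bernoulli approximation of a Poisson spike train: a spike in each
# small bin with probability f*dt, then the variance-vs-mean check.
import numpy as np

rng = np.random.default_rng(0)
f, dt, T, n_trials = 20.0, 1e-3, 1.0, 1000   # rate (Hz), bin (s), duration (s), trials

# One row per trial; a spike in each bin with probability f*dt
spikes = rng.random((n_trials, int(T / dt))) < f * dt
counts = spikes.sum(axis=1)

print(counts.mean(), counts.var())           # both ≈ f*T = 20 ⟹ variance ∝ mean
```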
- But where does the variability come from? External noise? Internal dynamics?
- How does the brain deal with such variability? ⟹ What matters is not individual spikes, but rather firing rates
Integrate-and-Fire
Very naive neuron model:
\[τ \dot{V} = - V + I_{exc} + I_{inh}\]
where
- $I_{exc}$: excitatory synapse current (Poisson)
- $I_{inh}$: inhibition synapse current (Poisson)
In practice there’s also a noise term ⟶ but with a large number of synaptic inputs, the noise averages out.
⇒ How does Poisson-like variability survive?
- One possibility: large excitatory synaptic weights, but input spikes are rare, so several of them have to accumulate for the output neuron to spike
- Other possibility: $I_{inh}$ and $I_{exc}$ almost compensate one another ⇒ the membrane potential performs a random walk → its variance increases over time until it reaches the threshold potential, and then the neuron fires. In this case: exponentially-distributed interspike intervals → indicative of a Poisson process (see the sketch below)
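A minimal simulation contrasting the two regimes (all parameters are illustrative assumptions, not values from the lecture): a mean-driven LIF neuron fires regularly, while a balanced one produces irregular, Poisson-like spiking:

```python
# LIF neuron with Poisson excitatory/inhibitory inputs delivered as
# instantaneous voltage jumps; compare the ISI coefficient of variation
# in a mean-driven vs. a balanced regime.
import numpy as np

def lif_isi_cv(w_e, w_i, rate_e, rate_i, tau=20e-3, dt=1e-4, T=20.0, seed=1):
    """Simulate tau*dV/dt = -V + I_exc + I_inh (threshold normalized to 1)
    and return the coefficient of variation of the interspike intervals."""
    rng = np.random.default_rng(seed)
    V, spikes = 0.0, []
    for step in range(int(T / dt)):
        V += -V * dt / tau                     # leak
        V += w_e * rng.poisson(rate_e * dt)    # excitatory kicks
        V += w_i * rng.poisson(rate_i * dt)    # inhibitory kicks
        if V > 1.0:                            # threshold crossing
            spikes.append(step * dt)
            V = 0.0                            # reset
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

print("mean-driven:", lif_isi_cv(0.02, -0.02, 6000.0, 1000.0))  # regular: CV well below 1
print("balanced:   ", lif_isi_cv(0.20, -0.20, 2000.0, 2000.0))  # irregular: CV ≈ 1
```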
E/I balance
Stimulus-driven response
But in practice it’s not as simple as that: sometimes both excitation and inhibition increase, and then the overall input current increases ⟹ spike
Spontaneous activity
Two neighboring cells have very correlated inh/exc currents ⟶ you can measure
- excitation on one
- inhibition on the other
as if it were for the same cell.
Observation: each time the neuron receives an exc current, it receives a strongly correlated inh current at the same time.
Two types of balanced E/I
- feedforward inhibition: input (from the thalamus for example) ⟶ both the excitatory and the inhibitory populations, then from inhibitory to excitatory (inhibition arrives with a delay, as it goes through two links)
- recurrent inhibition: the excitatory and inhibitory populations are reciprocally connected
Balanced neural networks generate their own variability
Constant $I_{ext}$ ⟶ Balance in the network:
\[J_{EE} ν_E - J_{IE} ν_I = 0\\ J_{EI} ν_E - J_{II} ν_I = 0\]
And then integrate-and-fire:
\[\frac{dV_E^i}{dt} = - V_E^i + J_{EE} \sum\limits_{ j ∈ \lbrace \text{input spikes} \rbrace } o_j^E - J_{IE} \sum\limits_{ j ∈ \lbrace \text{input spikes} \rbrace } o_j^I\]
(the firing rates $ν$ are computed from the spikes $o$)
Asynchronous irregular regime: if you shift one spike by $0.1$ ms, it changes everything else!
⟹ Chaotic system dynamics: not satisfactory, as any slight change in initial conditions leads to completely different results ⟹ very hard to code information
E/I variability: the system dynamics lives on a low-dimensional submanifold (cf. the Lorenz attractor, essentially a 2D manifold)
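To make the “generate their own variability” point concrete, here is a small sketch of a recurrent E/I network of LIF neurons with constant external drive (the architecture, weights, and sizes are my own illustrative assumptions): two runs differing by a small perturbation of one voltage quickly produce different spike rasters, the sensitivity described above:

```python
# Recurrent E/I network of LIF neurons with constant I_ext; run it
# twice, perturbing one initial voltage, and measure how much the
# two spike rasters differ.
import numpy as np

N_E, N_I = 80, 20
rng = np.random.default_rng(2)
# Random sparse weights; inhibition strong enough to balance excitation
W = np.zeros((N_E + N_I, N_E + N_I))
W[:, :N_E] = 0.08 * (rng.random((N_E + N_I, N_E)) < 0.2)    # E -> all
W[:, N_E:] = -0.35 * (rng.random((N_E + N_I, N_I)) < 0.2)   # I -> all

def simulate(V0, T=1.0, dt=1e-3, tau=20e-3, I_ext=1.2):
    """Leaky integration with delta synapses; returns the spike raster."""
    V = V0.copy()
    raster = np.zeros((int(T / dt), len(V)), dtype=bool)
    for t in range(raster.shape[0]):
        spiking = V > 1.0                      # threshold normalized to 1
        raster[t] = spiking
        V[spiking] = 0.0                       # reset
        V += dt / tau * (-V + I_ext) + W @ spiking.astype(float)
    return raster

V0 = rng.random(N_E + N_I)
r1 = simulate(V0)
V0[0] += 1e-2                                  # small perturbation of one voltage
r2 = simulate(V0)
# Fraction of time bins where the two runs' spike patterns differ:
print((r1 != r2).any(axis=1).mean())
```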
Efficient coding by sensory neurons
Example: real image $\textbf{x}$, reconstructed one: $\hat{\textbf{x}}$
\[\hat{x}_i = \sum\limits_{ j } Γ_{ij} r_j\]
- One neuron ≃ one feature
- You want to minimize the cost
⟹ Neural network: linear decoder
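As a sketch of what the linear decoder does (random $Γ$ and a random toy “image”; $μ$ is the cost weight introduced further below): the rate vector minimizing the quadratic cost $\Vert \textbf{x} - Γ \textbf{r}\Vert^2 + μ \Vert \textbf{r} \Vert^2$, ignoring spiking constraints, is the ridge-regression solution:

```python
# Linear reconstruction x̂ = Γ r with one feature (column of Γ) per
# neuron, and the cost-minimizing rates via ridge regression.
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_neurons, mu = 64, 100, 0.1
Gamma = rng.normal(size=(n_pixels, n_neurons))   # one feature per neuron
x = rng.normal(size=n_pixels)                    # "real image" (toy stand-in)

# Rates minimizing ||x - Γr||² + μ||r||² (closed-form ridge solution):
r = np.linalg.solve(Gamma.T @ Gamma + mu * np.eye(n_neurons), Gamma.T @ x)
x_hat = Gamma @ r                                # reconstructed image
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # small relative error
```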
A pure top-down approach
Two types of constraints:
- biological ones (synaptic, etc.)
- optimization one: reduce the cost
Setup:
- $\textbf{x}$ is an internal state variable governed by an unknown dynamical system
- $\textbf{c}(t)$ is the input (or command variable) ⟶ controlled externally
Ex:
- $\textbf{c}(t)$: motor stimuli / motor command
- $\textbf{x}$: direction of motion / state of your arm
The state variable is decoded linearly from the output spike trains:
\[\hat{\textbf{x}} = D \textbf{r}\]
where
- $D$: decoding weights
- $\textbf{r}$: filtered spike trains
  \[\dot{\textbf{r}} = - \textbf{r} + \textbf{s} ⟹ r_j(t) = \sum\limits_{ k } e^{-(t - t_j^k)}\]
  (sum over the past spike times $t_j^k$ of neuron $j$)
- $\textbf{s}$: the spike trains ($s_j ∈ \lbrace 0, 1 \rbrace$ in each time bin)
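A sketch of this filtering (the spike times are made up; time is in units of the decay constant): integrating $\dot{\textbf{r}} = -\textbf{r} + \textbf{s}$ makes $r$ jump by 1 at each spike and decay exponentially in between:

```python
# Exponentially filtered spike train: Euler integration of ṙ = -r + s.
import numpy as np

dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
s = np.zeros_like(t)
s[[1000, 1500, 4000, 7500]] = 1.0      # spikes at t = 1, 1.5, 4, 7.5

r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k - 1] * (1 - dt) + s[k]  # Euler step of ṙ = -r + s
# Equivalently: r(t) = Σ_k exp(-(t - t_k)) over the past spike times t_k
```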
Goal: minimize:
\[L = 𝔼(\underbrace{\Vert \textbf{x} - \hat{\textbf{x}}\Vert^2}_{\text{error}} + \underbrace{μ \Vert \textbf{r} \Vert^2}_{\text{cost}})\]
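Side note (a standard step in this framework, not spelled out in the notes): a greedy spiking rule follows from $L$. Neuron $j$ should spike iff spiking decreases the loss. A spike increments $r_j$ by one, so $\hat{\textbf{x}} → \hat{\textbf{x}} + D_j$ (with $D_j$ the $j$-th column of $D$), and the condition is
\[\Vert \textbf{x} - \hat{\textbf{x}} - D_j \Vert^2 + μ \Vert \textbf{r} + \textbf{e}_j \Vert^2 < \Vert \textbf{x} - \hat{\textbf{x}} \Vert^2 + μ \Vert \textbf{r} \Vert^2\]
Expanding both sides and simplifying:
\[\underbrace{D_j^T(\textbf{x} - \hat{\textbf{x}}) - μ r_j}_{\text{acts as a voltage } V_j} > \underbrace{\frac{\Vert D_j \Vert^2 + μ}{2}}_{\text{threshold } T_j}\]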
Example of a dynamical system
\[\dot{\textbf{x}} = - \textbf{x} + \textbf{c}(t)\]
So if you multiply the $\textbf{r}$-defining equation by $D$:
\[\underbrace{D\dot{\textbf{r}}}_{= \dot{\hat{\textbf{x}}}} = -\underbrace{D\textbf{r}}_{=\hat{\textbf{x}}} + D \textbf{s}\]
So with the input:
\[I = \textbf{c} - D\textbf{s} = \textbf{c} - (\dot{\hat{\textbf{x}}} + \hat{\textbf{x}})\]
Integrate-and-fire:
\[\dot{V} = -V + I\]
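Putting the whole loop together, a minimal sketch (my own implementation with assumed sizes and weights: a 1-D state $x$ and the greedy threshold rule derived above; instead of integrating $\dot{V} = -V + I$ explicitly, it uses the equivalent instantaneous form $V_j = D_j(x - \hat{x}) - μ r_j$):

```python
# Spike-coding network tracking ẋ = -x + c(t): neuron j fires when
# V_j = D_j (x - x̂) - μ r_j crosses T_j = (D_j² + μ)/2, and x̂ = D r.
import numpy as np

rng = np.random.default_rng(4)
N, dt, T, mu = 20, 1e-3, 5.0, 1e-3
D = rng.choice([-1.0, 1.0], N) * 0.1        # 1-D decoding weights D_j
thresh = (D**2 + mu) / 2                    # thresholds T_j

t = np.arange(0.0, T, dt)
c = np.sin(2 * np.pi * 0.5 * t)             # command signal c(t)
x = np.zeros_like(t)                        # true state, ẋ = -x + c
x_hat = np.zeros_like(t)                    # decoded estimate x̂ = D r
r = np.zeros(N)                             # filtered spike trains

for k in range(1, len(t)):
    x[k] = x[k - 1] + dt * (-x[k - 1] + c[k - 1])  # integrate true dynamics
    r *= 1.0 - dt                                  # ṙ = -r between spikes
    V = D * (x[k] - D @ r) - mu * r                # voltages V_j
    j = int(np.argmax(V - thresh))                 # at most one spike per step
    if V[j] > thresh[j]:                           # greedy spiking rule
        r[j] += 1.0                                # spike: r_j jumps by 1
    x_hat[k] = D @ r

print(np.mean(np.abs(x - x_hat)))           # tracking error stays ≲ |D_j|/2
```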