Lecture 8: Coding and computing with balanced spiking networks
Lecturer: Sophie Denève
Cortical spike trains
Spike trains: highly variable ⇒ really hard to guess if there has been a stimulus based on ONE spike train
Count variance vs. count mean ⟶ ≃ linear (as expected for a Poisson process, where the count variance equals the count mean)
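This linearity can be checked numerically with a quick sketch (the rates, counting window, and trial number below are illustrative assumptions, not values from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# For a Poisson process the spike-count variance equals the count mean,
# so plotting variance against mean across conditions gives a line of slope 1.
rates = [2.0, 5.0, 10.0, 20.0]   # firing rates in Hz (illustrative)
T = 1.0                          # counting window (s)
n_trials = 20000

for rate in rates:
    counts = rng.poisson(rate * T, size=n_trials)
    fano = counts.var() / counts.mean()   # Fano factor, ~1 for Poisson
    print(f"rate={rate:5.1f} Hz  mean={counts.mean():6.2f}  "
          f"var={counts.var():6.2f}  Fano={fano:4.2f}")
```

The variance/mean ratio (Fano factor) staying near 1 across rates is the signature of the linear relationship.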

But where does the variability come from? External noise, internal dynamics?

How does the brain deal with such variability? ⟹ What matters is not individual spikes, but rather firing rates
Integrate-and-fire
Very naive neuron model (leaky integrate-and-fire): $\dot{V} = -V + I_{exc} - I_{inh}$, with a spike emitted and $V$ reset whenever $V$ crosses a threshold $T$,
where
 $I_{exc}$: excitatory synaptic current (Poisson)
 $I_{inh}$: inhibitory synaptic current (Poisson)
In practice: there’s also a noise term ⟶ but if there is a large number of synaptic inputs, the noise averages out.
⇒ How does Poisson-like variability survive?

One possibility: large excitatory synaptic weights, but presynaptic spikes are rare, and several of them have to accumulate for the output neuron to spike

Other possibility: $I_{inh}$ and $I_{exc}$ almost compensate one another ⇒ the membrane potential then performs a random walk → its variance increases over time, until the potential reaches the threshold and the neuron fires
In this case: exponentially-distributed interspike intervals → indicative of a Poisson process
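The random-walk picture can be sketched numerically: an integrate-and-fire neuron driven by balanced excitatory and inhibitory Poisson inputs fires irregularly (all parameters below are illustrative assumptions, not values from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1e-4            # time step (s)
tau = 20e-3          # membrane time constant (s)
rate_in = 5000.0     # total input rate per stream (Hz), illustrative
w = 0.03             # synaptic weight, same magnitude for E and I
thresh = 0.3         # spike threshold, illustrative
duration = 20.0      # simulated time (s)
steps = int(duration / dt)

# Poisson input spike counts per time step for each stream
exc = rng.poisson(rate_in * dt, size=steps)
inh = rng.poisson(rate_in * dt, size=steps)

V = 0.0
spike_times = []
for t in range(steps):
    # E and I cancel on average, so V performs a leaky random walk
    V += dt / tau * (-V) + w * (exc[t] - inh[t])
    if V >= thresh:
        spike_times.append(t * dt)
        V = 0.0   # reset

isis = np.diff(spike_times)
print(f"{len(spike_times)} spikes, ISI CV = {isis.std() / isis.mean():.2f}")
```

A coefficient of variation (standard deviation / mean of the interspike intervals) near 1 is what an exponential ISI distribution, i.e. Poisson-like output, would give.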
E/I balance
Stimulus driven response
But in practice it is not as simple as that: sometimes both excitation and inhibition increase, and then the overall input current increases ⟹ spike
Spontaneous activity
Two neighboring cells have very correlated inh/exc currents ⟶ you can measure
 exc on one
 inh on the other
as if both were measured in the same cell.
Observation: each time the neuron receives an exc current, it receives a strongly correlated inh current at the same time.
Two types of balanced E/I

feedforward inhibition: the input (e.g. from the thalamus) drives both the excitatory and inhibitory populations, and the inhibitory population then projects to the excitatory one (inhibition arrives with a delay, since it travels through two synapses)

recurrent inhibition: the excitatory and inhibitory populations are recurrently coupled to each other
Balanced neural networks generate their own variability
Constant $I_{ext}$ ⟶ balance in the network: the mean recurrent excitation and inhibition approximately cancel the external drive, so each neuron sees only fluctuations.
And then integrate-and-fire for each neuron $i$: $\dot{V}_i = -V_i + \sum_j W_{ij} o_j + I_{ext}$, spiking when $V_i$ crosses threshold
(the firing rates $ν$ are computed out of the spike trains $o$, e.g. by low-pass filtering)
Asynchronous irregular regime: if you shift one spike by $0.1$ ms, it changes everything else!
⟹ Chaotic system dynamics: not satisfactory, as any slight change in initial condition leads to completely different results ⟹ very hard to code information
E/I variability: the system dynamics is confined to a low-dimensional submanifold (cf. the Lorenz attractor, an essentially 2-D manifold)
Efficient coding by sensory neurons
Example: real image $\textbf{x}$, reconstructed one: $\hat{\textbf{x}}$
 One neuron ≃ one feature
 You want to minimize the cost $\| \textbf{x} - \hat{\textbf{x}} \|^2$
⟹ Neural network: linear decoder
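As a sketch of the decoding step alone (the random feature matrix and the least-squares fit below are illustrative assumptions, not the network dynamics): each neuron contributes one feature, a column of $D$, and the reconstruction is linear in the rates:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_neurons = 64, 100                 # flattened image size, population size
D = rng.normal(size=(n_pix, n_neurons))    # one feature per neuron (columns)
x = rng.normal(size=n_pix)                 # stand-in for the real image

# Rates minimizing the cost ||x - D r||^2 (here via least squares;
# the point of the lecture is that a spiking network can do this itself)
r, *_ = np.linalg.lstsq(D, x, rcond=None)
x_hat = D @ r                              # linear decoder
cost = np.sum((x - x_hat) ** 2)
print(f"reconstruction cost: {cost:.3e}")
```

With more neurons than pixels the representation is overcomplete, so the cost is essentially zero.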
A pure top-down approach
Two types of constraints:
 biological ones (synaptic, etc.)
 optimization one: reduce the cost
The state variable obeys a dynamical system, $\dot{\textbf{x}} = A\textbf{x} + \textbf{c}(t)$ (linear, in the simplest case),
where
 $\textbf{x}$ is an internal state variable governed by an unknown dynamical system
 $\textbf{c}(t)$ is the input (or command variable) ⟶ controlled externally
Ex:
 $\textbf{c(t)}$: Motor stimuli / Motor command
 $\textbf{x}$: Direction of motion / State of your arm
The state variable is decoded linearly from the output spike trains: $\hat{\textbf{x}}(t) = D\,\textbf{r}(t)$
where
 $D$: decoding weights

$\textbf{r}$: filtered spike trains, obeying $\dot{\textbf{r}} = -\textbf{r} + \textbf{s}$ ⟹ a single spike in $\textbf{s}_j$ at $t = 0$ contributes $\textbf{r}_j(t) = \exp(-t)$
$\textbf{s}$ is a spike train ($\textbf{s}_j ∈ \lbrace 0, 1 \rbrace$)
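The filter can be checked with a forward-Euler sketch (time in units of the filter time constant; representing each spike as a discrete delta pulse of height $1/\mathrm{d}t$ is a discretization choice):

```python
import numpy as np

dt = 0.001
steps = 5000
s = np.zeros(steps)
s[[500, 2000, 2100]] = 1.0 / dt   # spikes as discrete delta pulses

# Euler integration of r_dot = -r + s
r = np.zeros(steps)
for t in range(1, steps):
    r[t] = r[t - 1] + dt * (-r[t - 1] + s[t - 1])

# Each spike makes r jump by 1, then decay as exp(-t): one time constant
# after the isolated spike at step 500, r has dropped by a factor e.
print(r[1501] / r[501])   # ≈ exp(-1) ≈ 0.37
```

Close spikes (steps 2000 and 2100 here) simply sum, which is what makes $\textbf{r}$ behave like an instantaneous rate estimate.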
Goal: minimize the reconstruction cost $E = \int \| \textbf{x}(t) - \hat{\textbf{x}}(t) \|^2 \, dt$ (possibly plus a cost term penalizing spiking)
Example of a dynamical system
So if you multiply the $\textbf{r}$-defining equation by $D$: $\dot{\hat{\textbf{x}}} = -\hat{\textbf{x}} + D\textbf{s}$
So with the input: comparing with the target dynamics of $\textbf{x}$, the spike trains $\textbf{s}$ must supply exactly the drive that makes $\hat{\textbf{x}}$ track $\textbf{x}$
Integrate-and-fire: a neuron should fire whenever its spike reduces the cost, which amounts to a membrane potential $V_j = D_j^\top (\textbf{x} - \hat{\textbf{x}})$ crossing a threshold $T_j = \| D_j \|^2 / 2$
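Putting the pieces together, here is a minimal sketch of such a predictive spiking network (taking leaky target dynamics $\dot{\textbf{x}} = -\textbf{x} + \textbf{c}$, greedy one-spike-per-step updates, and illustrative sizes and weights; all of these are simplifying assumptions, not the lecture's exact construction):

```python
import numpy as np

rng = np.random.default_rng(4)
n_x, n_neurons = 2, 20
D = rng.normal(size=(n_x, n_neurons))
D /= 10.0 * np.linalg.norm(D, axis=0)   # small decoding weights, column norm 0.1
T = 0.5 * np.sum(D**2, axis=0)          # thresholds T_j = ||D_j||^2 / 2

dt = 1e-3
c = np.array([1.0, -0.5])               # constant command input (illustrative)
x = np.zeros(n_x)                       # target state, x_dot = -x + c
r = np.zeros(n_neurons)                 # filtered spike trains

for t in range(3000):
    x = x + dt * (-x + c)               # target dynamics
    V = D.T @ (x - D @ r)               # membrane potentials: projected error
    j = np.argmax(V - T)
    if V[j] > T[j]:                     # spike only if it reduces the cost
        r[j] += 1.0                     # spike -> unit jump in r_j
    r = r + dt * (-r)                   # leak of the filtered spike trains

print("x     =", x)
print("x_hat =", D @ r)                 # tracks x up to the spike resolution
```

Each spike moves $\hat{\textbf{x}}$ by one decoding vector $D_j$, and the threshold condition $V_j > T_j$ is exactly the condition that this move decreases $\|\textbf{x} - \hat{\textbf{x}}\|^2$.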