Lecture 3: Population Coding and Decision-Making
Population Coding: how neural activities relate to an animal’s behavior
How do we relate neural activity to the behavior of the animal?
Recall the motion discrimination task: a monkey watches a field of moving dots and must judge whether they are moving to the right or to the left.
⟶ Decoding with one MT neuron and a decision threshold
Neurometric curves closely match the monkey’s psychometric curves (some single neurons are even more informative about the stimulus than the animal’s behavior)
How do we make a decision based on the relative strength of the responses of two neurons?
Let’s say you have 2 neurons: plot their activity (spikes/sec), the second one against the first one.
If the dots move to the right/left ⟶ we get two nicely separated clouds of points. The decision boundary is the identity line.
We can measure how well the clouds of points are separated: compute a normal (orthogonal) vector and a tangent (colinear) vector for the decision line.
Ex for the identity line:
Normal vector: $\textbf{n} = \frac{1}{\sqrt 2}(1, -1)$
Tangent vector: $\textbf{t} = \frac{1}{\sqrt 2}(1, 1)$
Projection on the normal vector ⟶ gives one distribution of the scalar $\textbf{n} \cdot \textbf{r}$ per motion direction,
so that we have a decision rule to decide whether it’s rightward or leftward motion: compare $\textbf{n} \cdot \textbf{r}$ to a threshold (here, $0$).
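This projection rule can be sketched numerically (all firing-rate statistics below are made-up illustrative values, not data from the experiment):

```python
# Two-neuron decoding by projecting onto the normal of the identity line.
# Means and noise levels are hypothetical illustrative values.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000

# Neuron 1 (hypothetically) prefers rightward motion, neuron 2 leftward.
right = rng.normal(loc=[30.0, 10.0], scale=5.0, size=(n_trials, 2))
left = rng.normal(loc=[10.0, 30.0], scale=5.0, size=(n_trials, 2))

# Unit normal vector of the identity line r2 = r1.
normal = np.array([1.0, -1.0]) / np.sqrt(2.0)

# Project every trial onto the normal; decide "right" if the projection > 0.
proj_right = right @ normal
proj_left = left @ normal
accuracy = 0.5 * ((proj_right > 0).mean() + (proj_left <= 0).mean())
print(f"decoding accuracy: {accuracy:.3f}")
```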
PROBLEM: this works if the two neurons are independent from one another ⟶ but what about correlated neurons?
If the two neurons are correlated (e.g. the responses of neuron 1 are anticorrelated with those of neuron 2) ⟹ then if we keep the same projection, we’ll make a lot of errors.
 $r_{k, i}$: firing rate of the $k$th neuron in $i$th trial
 Average:

\bar{r}_k ≝ \frac 1 N \sum\limits_{ i } r_{k, i}
 Variance:

Var(r_k) ≝ \frac 1 {N-1} \sum\limits_{ i } (r_{k,i} - \bar{r}_k)^2
 Covariance:

Cov(r_1, r_2) ≝ \frac 1 {N-1} \sum\limits_{ i } (r_{1,i} - \bar{r}_1) (r_{2,i} - \bar{r}_2)
 Correlation coefficient:

r ≝ \frac{Cov(r_1, r_2)}{\sqrt{Var(r_1)Var(r_2)}}
 Covariance matrix:

Σ ≝ (Cov(r_i, r_j))_{i, j}
Warning: $r = 0$ doesn’t mean that there’s no dependence at all between the two neurons: $r$ only captures *linear* correlation.
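As a sketch, the definitions above map directly onto NumPy (the firing rates are made-up numbers; `ddof=1` gives the $\frac{1}{N-1}$ normalization used here):

```python
# Trial-by-trial statistics of two neurons' firing rates (made-up data).
import numpy as np

r1 = np.array([12.0, 15.0, 11.0, 14.0, 13.0])  # neuron 1, spikes/s per trial
r2 = np.array([20.0, 18.0, 22.0, 19.0, 21.0])  # neuron 2, spikes/s per trial

mean1 = r1.mean()                # \bar{r}_1
var1 = r1.var(ddof=1)            # Var(r_1), with the 1/(N-1) normalization
cov = np.cov(r1, r2, ddof=1)     # 2x2 covariance matrix Σ
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])  # correlation coefficient r

print(mean1, var1, corr)  # → 13.0 2.5 -0.9 (these two neurons anticorrelate)
```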
Key question: how do you find a hyperplane separating the clusters of data?
Linear decoding with $N$ neurons
We want to find $\textbf{a}$ such that the decision boundary is the hyperplane orthogonal to $\textbf{a}$.
How to find it?

Linear Discriminant Analysis

Support Vector Machines
Centroids of the two distributions (for $N = 2$):
 $\textbf{r}_{right}$: mean response vector for rightward motion
 $\textbf{r}_{left}$: mean response vector for leftward motion
then a simple choice is $\textbf{a} = \textbf{r}_{right} - \textbf{r}_{left}$; LDA refines this to $\textbf{a} = Σ^{-1}(\textbf{r}_{right} - \textbf{r}_{left})$, which accounts for the correlations between neurons.
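A minimal sketch of such a linear decoder, using the standard LDA weighting $\textbf{a} = Σ^{-1}(\textbf{r}_{right} - \textbf{r}_{left})$ with a threshold at the projected midpoint (all response statistics are invented for illustration):

```python
# LDA-style linear decoder for N neurons with correlated noise.
# Mean responses and the noise covariance are made-up illustrative values.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 500, 4

mu_right = np.array([30.0, 25.0, 10.0, 12.0])   # spikes/s, rightward motion
mu_left = np.array([12.0, 10.0, 28.0, 26.0])    # spikes/s, leftward motion
noise_cov = 4.0 * np.eye(n_neurons) + 2.0       # shared, correlated noise

R = rng.multivariate_normal(mu_right, noise_cov, size=n_trials)
L = rng.multivariate_normal(mu_left, noise_cov, size=n_trials)

# Estimate centroids and the shared covariance from the "training" trials.
c_right, c_left = R.mean(axis=0), L.mean(axis=0)
sigma = np.cov(np.vstack([R - c_right, L - c_left]).T)

a = np.linalg.solve(sigma, c_right - c_left)    # a = Σ^{-1}(r̄_right - r̄_left)
threshold = a @ (c_right + c_left) / 2.0        # midpoint between the centroids

accuracy = 0.5 * ((R @ a > threshold).mean() + (L @ a <= threshold).mean())
print(f"LDA accuracy: {accuracy:.3f}")
```

Projecting onto $Σ^{-1}(\textbf{r}_{right} - \textbf{r}_{left})$ rather than onto the raw centroid difference is what protects the decoder against the correlated-noise problem discussed above.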
The cercal system of the cricket
Record from 4 sensory neurons (in the last ganglion) of crickets ⟶ blow wind on its sensory organs and record the response fields of these neurons:
 the fields are shifted
 the 4 neurons cover the full 360° circle (so the direction of any wind can be encoded)
 the tuning of these neurons is uniformly spaced (which is pretty miraculous: it’s exactly what you would want for a decoding system)
 the tuning curves have a “bell shape” (unbiased)
 Tuning curves:

f_a(s) ≝ r_{max} [\cos(s - s_a)]_+
where $s$ is the wind direction in degrees, $s_a$ is the preferred direction of neuron $a$, and $[\cdot]_+$ denotes rectification (negative values are set to zero)
You have vector representations $\textbf{c}_a$ of the four preferred directions. Then, for any unit vector $\textbf{v}$ pointing in the wind direction $s$:

\textbf{v} \cdot \textbf{c}_a = \cos(s - s_a)

So that:

\frac{f_a(s)}{r_{max}} = [\textbf{v} \cdot \textbf{c}_a]_+

Reconstruction: sum the preferred-direction vectors, each weighted by the normalized firing rate of the corresponding neuron. Therefore, for a population vector:

\textbf{v}_{pop} ≝ \sum\limits_{ a } \frac{r_a}{r_{max}} \textbf{c}_a

(An alternative reconstruction chooses the decoding vectors to minimize the squared error instead.)
This is an example of a simple linear decoder!
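The cercal decoding scheme can be sketched as follows (the preferred angles 45°, 135°, 225°, 315° and the value of $r_{max}$ are assumptions matching the standard textbook setup):

```python
# Population-vector decoding for the cricket cercal system: four neurons
# with tuning f_a(s) = r_max [cos(s - s_a)]_+ and uniformly spaced
# preferred directions (assumed here to be 45°, 135°, 225°, 315°).
import numpy as np

r_max = 40.0
s_pref = np.deg2rad([45.0, 135.0, 225.0, 315.0])
c = np.stack([np.cos(s_pref), np.sin(s_pref)], axis=1)  # vectors c_a

def rates(s_deg):
    """Noise-free firing rates of the four neurons for wind direction s (deg)."""
    s = np.deg2rad(s_deg)
    return r_max * np.clip(np.cos(s - s_pref), 0.0, None)  # [.]_+ rectification

def decode(r):
    """Population vector: preferred directions weighted by r_a / r_max."""
    v_pop = (r[:, None] / r_max * c).sum(axis=0)
    return np.rad2deg(np.arctan2(v_pop[1], v_pop[0])) % 360.0

print(decode(rates(80.0)))  # → 80.0 (exact in the noise-free case)
```

With exactly 90° spacing, the two active neurons’ rectified cosines sum back to $(\cos s, \sin s)$, so the noise-free reconstruction is exact.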
Simple Decision-Making: how neural activities relate to an animal’s behavior
Again the motion discrimination task
Fixed viewing duration paradigm ⟶ fixes how much time the monkey has to make the decision; this enables us to control how much information the monkey gets before deciding
The monkey needs to:
 extract the motion information
 make a decision based on that
 execute a motor command at the required time
From sensory to motor systems
Between the moment the monkey sees the dots and the moment it makes a decision:
Early sensory stages ⟹ Middle Temporal cortex (MT) ⟹ Lateral Intraparietal cortex (LIP) ⟹ later motor stages
Plot the mean response of MT neurons as a function of the stimulus correlation: the bigger the stimulus correlation, the stronger the response to the preferred direction, and the weaker the response to the null direction (inhibition).
 Likelihood ratio:

l(x) = \frac{p(x \mid +)}{p(x \mid -)} \overset{?}{>} 1
Log-likelihood ratio: the logarithm thereof
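As an illustration of the likelihood-ratio rule, assuming (hypothetically) Gaussian response distributions for the two motion directions:

```python
# Likelihood-ratio decision l(x) = p(x|+)/p(x|-) > 1 for a single response x,
# assuming Gaussian response distributions with made-up means (preferred vs.
# null direction) and a shared standard deviation.
import math

mu_plus, mu_minus, sigma = 30.0, 18.0, 6.0  # hypothetical parameters

def gauss(x, mu, sd):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def decide(x):
    """Return '+' if the likelihood ratio exceeds 1, else '-'."""
    l = gauss(x, mu_plus, sigma) / gauss(x, mu_minus, sigma)
    return '+' if l > 1 else '-'

# With equal variances, l(x) > 1 exactly when x is above the midpoint 24.
print(decide(28.0), decide(20.0))  # → + -
```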
Reaction time task
How long does the animal take to make a decision?
Drift-diffusion model: accumulate the log-likelihood ratio of the successive observations $x_1, \dots, x_N$ until it crosses one of two decision bounds:

y_n ≝ y_{n-1} + \log l(x_n)

where $y_N = \log l(\textbf{x})$ (assuming independent observations).
Or in continuous time, $y$ follows a drift-diffusion process, $dy = A \, dt + \sigma \, dW$, and a decision is made when $y$ hits one of the two bounds.
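A minimal simulation of this accumulate-to-bound process (the drift, noise, and bound values are arbitrary illustrative choices):

```python
# Drift-diffusion model: noisy evidence accumulates until it hits one of
# two bounds; the bound determines the choice, the hitting time the RT.
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift=0.1, noise=1.0, bound=10.0, dt=1.0, max_steps=10_000):
    """Return (choice, reaction time in steps); choice 0 means no decision."""
    y = 0.0
    for t in range(1, max_steps + 1):
        y += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if y >= bound:
            return +1, t   # e.g. "rightward" decision
        if y <= -bound:
            return -1, t   # e.g. "leftward" decision
    return 0, max_steps

choices, rts = zip(*(ddm_trial() for _ in range(500)))
choices, rts = np.array(choices), np.array(rts)
print("P(choose +):", (choices == 1).mean())
print("mean RT:", rts.mean())
```

With a positive drift, the model chooses “+” on most trials, and raising the bound trades speed for accuracy, which is the qualitative pattern seen in the reaction time task.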
Area LIP encodes the decision variable: the accumulated difference in evidence between the preferred and anti-preferred directions
LIP: where the drift diffusion model lives
LIP neurons integrate the difference