Lecture 5: The Binary Neuron model
Teacher: Grégory Dumont
“Information” in neurons ⟺ receiving/producing action potentials ⟶ spikes
At the synapse, neurotransmitters are released
→ We’ll try to describe the neural dynamics
The Binary Neuron
- $x_1, ⋯, x_N$: synaptic inputs
- $w_1, ⋯, w_N$: synaptic weights
- $b$: bias (assumed to be $0$ here)
Output:
\[y = \underbrace{H}_{\text{Heaviside function}}\left(\sum\limits_{ k=1 }^N w_k x_k - b\right)\]
⟹ Binary classifier: hyperplane separation
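For concreteness, a minimal Python sketch of this output rule (NumPy assumed; the name `binary_neuron`, the example weights, and the convention $H(0) = 0$ are illustrative assumptions, not from the lecture):

```python
import numpy as np

def binary_neuron(x, w, b=0.0):
    """Binary neuron output: H(w·x - b), with the convention H(0) = 0."""
    return int(np.dot(w, x) - b > 0)

# The weights define the separating hyperplane w·x = b
w = np.array([1.0, -1.0])
print(binary_neuron(np.array([2.0, 1.0]), w))   # 1: on the positive side
print(binary_neuron(np.array([1.0, 2.0]), w))   # 0: on the other side
```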
Ex: two-alternative choice task (a monkey judging the direction of moving dots) ⟶ in the temporal region of the brain, there are motion-sensitive neurons
What does the binary neuron do in this case? There is no proof that this is what happens in the brain, and in fact, experimentally, the monkey makes a lot of mistakes at the beginning but gets better and better afterwards.
⇒ How can we make the binary neuron behave like that?
Synaptic plasticity: the brain learns by modifying the synaptic weights.
Rosenblatt (1958): Perceptron learning rule
Training set of patterns:
\[\lbrace (x^{(0)}, d_0), ⋯, (x^{(p)}, d_p) \rbrace\]
where
- $x^{(k)}$: input pattern
- $d_k = 0$ or $1$: desired output (label)
At every step, for each pattern $k$:
- compute the output
\[y_k = H\left(\sum\limits_{ i=1 }^N w_i x_i^{(k)} - b\right)\]
- if $y_k ≠ d_k$, update the weights:
\[w_i(t+1) = w_i(t) + (d_k - y_k)\, x_i^{(k)}\]
⟹ Convergence in a finite number of steps if there is a solution
Example:
\[D_{\text{blue}} = \lbrace (1, 2), (1, 3), (3, 4), (2, 4) \rbrace \qquad \text{label: } 1\\ D_{\text{red}} = \lbrace (4,0), (6, 2), (2, -3), (6, -2) \rbrace \qquad \text{label: } 0\]
Then if you initialize $\textbf{w} ≝ (0, 0)$:
- as $y = H\Bigl(\textbf{w} \cdot \underbrace{(1, 2)}_{∈ D_{\text{blue}}}\Bigr) = 0 ≠ 1$:
- \[\textbf{w} → \textbf{w} + (1-0) (1, 2) = (1, 2)\]
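A minimal sketch of this training loop in Python (NumPy assumed), run on the blue/red points above; as in the worked example, $b = 0$, $\textbf{w}$ starts at $(0, 0)$, and $H(0) = 0$ is the assumed convention:

```python
import numpy as np

# Training patterns: blue points labelled 1, red points labelled 0
X = np.array([(1, 2), (1, 3), (3, 4), (2, 4),
              (4, 0), (6, 2), (2, -3), (6, -2)], dtype=float)
d = np.array([1, 1, 1, 1, 0, 0, 0, 0])

w = np.zeros(2)           # w := (0, 0), bias b assumed 0
for epoch in range(100):  # converges in finitely many steps if a solution exists
    errors = 0
    for x_k, d_k in zip(X, d):
        y_k = int(np.dot(w, x_k) > 0)   # y_k = H(w·x_k), with H(0) = 0
        if y_k != d_k:
            w += (d_k - y_k) * x_k      # Rosenblatt update
            errors += 1
    if errors == 0:        # every pattern classified correctly: stop
        break

print(w)  # a separating weight vector for the two point clouds
```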
Can we compute anything with a perceptron?
Can any function:
\[f: \lbrace 0, 1 \rbrace^n ⟶ \lbrace 0, 1 \rbrace\]
be computed by a perceptron?

No: a single perceptron can only compute linearly separable functions (XOR, for instance, cannot be separated by a hyperplane).

⟹ Multilayer perceptrons are Turing-complete
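To illustrate why a second layer helps, here is a small sketch of two binary neurons feeding a third one that together compute XOR; the weights and thresholds are illustrative choices, not taken from the lecture:

```python
def H(z):
    # Heaviside step, with H(0) = 0 as an assumed convention
    return int(z > 0)

def xor_net(x1, x2):
    """Two-layer network of binary neurons computing XOR (illustrative weights)."""
    h1 = H(x1 + x2 - 0.5)    # fires if at least one input is 1   (OR)
    h2 = H(x1 + x2 - 1.5)    # fires only if both inputs are 1    (AND)
    return H(h1 - h2 - 0.5)  # OR minus AND  ==  XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # matches a XOR b
```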
Network dynamics in discrete time
\[y_i(t+1) = H\left[\sum\limits_{ j=1 }^N w_{i,j} y_j(t) + x_i(t)\right]\]
- $y_i(t+1)$: activity of neuron $i$ at time step $t+1$
- $\sum\limits_{ j=1 }^N w_{i,j} y_j(t)$: total input from the network
- $x_i(t)$: external input (assumed to be $0$ from now on)
Dynamics:
\[\vec{y}(t+1) = \mathrm{sign}\left[W \vec{y}(t)\right]\]
Fixed points = output of the network
Attractor: when the dynamics always converge to a given fixed point, no matter the starting activity ⟶ this pattern of activity is said to be memorized by the network.
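A minimal sketch of these dynamics in Python (NumPy assumed), iterating the map until a fixed point is reached; ±1 activities and the convention $\mathrm{sign}(0) := +1$ are assumptions:

```python
import numpy as np

def run_dynamics(W, y0, max_steps=100):
    """Iterate y(t+1) = sign(W y(t)) until a fixed point is reached (±1 states)."""
    y = np.where(np.asarray(y0) >= 0, 1, -1)     # force ±1 activities
    for _ in range(max_steps):
        y_next = np.where(W @ y >= 0, 1, -1)     # sign, with sign(0) := +1
        if np.array_equal(y_next, y):            # fixed point: the network's output
            break
        y = y_next
    return y
```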
Hopfield learning rule
Set of $p$ desired outcomes:
\[\lbrace ξ^{(1)}, ⋯, ξ^{(p)} \rbrace\]
(i.e. desired fixed points)
Set weights to (Hopfield):
\[w_{i,j} = \frac 1 N \sum\limits_{ k=1 }^p ξ_i^{(k)} ξ_j^{(k)}\]
When you memorize a pattern, its inverse $-ξ^{(k)}$ is memorized as well!
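A minimal sketch of this rule in Python (NumPy assumed; ±1 patterns, synchronous updates, and $\mathrm{sign}(0) := +1$ are assumptions), checking that a stored pattern and its inverse are both fixed points:

```python
import numpy as np

def hopfield_weights(patterns):
    """Hopfield rule: w_ij = (1/N) * sum_k xi_i^(k) xi_j^(k)."""
    patterns = np.asarray(patterns, dtype=float)   # shape (p, N), entries ±1
    N = patterns.shape[1]
    return patterns.T @ patterns / N

def step(W, y):
    """One synchronous update y -> sign(W y), with sign(0) := +1."""
    return np.where(W @ y >= 0, 1, -1)

xi = np.array([[1, -1, 1, -1, 1],
               [1, 1, -1, -1, 1]])       # two illustrative ±1 patterns
W = hopfield_weights(xi)

print(np.array_equal(step(W, xi[0]), xi[0]))     # True: the pattern is a fixed point
print(np.array_equal(step(W, -xi[0]), -xi[0]))   # True: its inverse is memorized too
```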