Lecture 3: Neurorobotic models of spatial cognition
Teacher: Angelo Arleo
Introduction
Stimulus ⟶ Response
- Experimental approach: tuning curves: plotting activity (spikes/sec) as a function of the stimulus (ex: the angle); see the tuning-curve sketch after this list
- How to extrapolate? ⇒ Model of the system
- Model of the system: a set of equations underlying the $S-R$ relation that provides testable predictions
- Parallel aim: reduce the complexity of the model
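To make the tuning-curve idea concrete, here is a minimal sketch (not from the lecture) that plots the firing rate of one simulated neuron against stimulus angle; the rectified-cosine shape, peak rate and preferred angle are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical tuning curve: firing rate as a function of the stimulus angle.
# r_max, preferred_angle and the rectified-cosine shape are illustrative
# choices, not values given in the lecture.
r_max = 50.0                 # peak firing rate (spikes/s)
preferred_angle = np.pi / 4  # angle eliciting the strongest response (rad)

angles = np.linspace(-np.pi, np.pi, 200)
rates = r_max * np.maximum(0.0, np.cos(angles - preferred_angle))

plt.plot(np.degrees(angles), rates)
plt.xlabel("stimulus angle (deg)")
plt.ylabel("firing rate (spikes/s)")
plt.title("Tuning curve of one simulated neuron")
plt.show()
```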
Equations:
- analytical solutions, but very rare
- simulations (95% of the time)
- implement the model on a real robot, and make sure it works in practice, in the real world (not just the simulated one)
A computational neuroscientist is not necessarily a good programmer: nowadays, we have a myriad of libraries/tools to help with the simulation
The difficulty lies in the model itself
Complex system ⟶ Experimental protocol ⟶ Experimental data
Then compare with the mathematical model ⟶ simulations
NB: the aim is to make the model good at predicting, not just describing
Neural activity and neural coding
Stimulus ⟶ Encoding ⟶ Response
Encoding: the process that encodes the stimulus in the brain
What we want is to be able to decode: go from brain activity back to inferring the stimulus
Nowadays: we can record activity of $≃ 200$ neurons at the same time (with one electrode)
Decoding
Two approaches:
1. Firing rate decoding approaches ⟶ have been used for ages
Problem with computing the average, variance, moments ⟹ we lose the time-related information
Ex: 4 spikes fired at different times carry different information, a priori!
1968 experiment (Hubel & Wiesel) ⟶ neurons in the $V_1$ area are sensitive to orientation
You might
- get all the tuning curves, for each neuron, one by one
- use the “winner-take-all” approach ⟶ with one electrode, you record the simultaneous activity of $≃ 200$ neurons, then plot a “combined tuning curve” and keep only the most active ones
- use “population vector” decoding ⟶ weighted sum (cooperative)
Ex: if you average the response vectors (in the polar plane: angle = preferred direction, norm = firing rate), you can infer the direction the agent is aiming for ⇒ neuroprostheses
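A minimal sketch of population-vector decoding, assuming each neuron is summarized by a preferred direction and a firing rate (all numbers below are toy values, not recorded data):

```python
import numpy as np

def population_vector(preferred_dirs, rates):
    """Decode a direction as the rate-weighted sum of the neurons'
    preferred-direction unit vectors (angles in radians)."""
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return np.arctan2(y, x)

# Toy example: 8 neurons with evenly spaced preferred directions.
preferred_dirs = np.linspace(0, 2 * np.pi, 8, endpoint=False)
true_dir = np.pi / 3
# Rectified-cosine tuned responses plus a little noise (illustrative).
rates = np.maximum(0, 30 * np.cos(preferred_dirs - true_dir)) + np.random.rand(8)

print(np.degrees(population_vector(preferred_dirs, rates)))  # ≈ 60 degrees
```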
Neural feedback: you get feedback (e.g. from the decoded output), which lets you adjust your neural activity
Now we even do the same for vision, to overcome blindness! Instead of decoding, we encode (ex: light captured by the eyes), then send the information to the brain
Firing rate neuronal model
For each neuron $i$
- $V_i(t)$: membrane potential
- $I_i(t)$: synaptic input
- $f(V_i)$: transfer function
- $r_i(t)$: firing rate
where
\[I_i(t) = \sum\limits_{j} w_{i, j}\, r_j(t)\]
Integrate & Fire: more advanced model, takes timing into account (up to 3 differential equations per neuron)
Hodgkin & Huxley: even more complex, up to 25 equations per neuron
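A minimal simulation sketch of the firing-rate model above, using leaky integration $\tau \dot V_i = -V_i + I_i(t)$ and $r_i = f(V_i)$; the time constant, random weights and sigmoid transfer function are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def f(V):
    """Sigmoid transfer function mapping membrane potential to firing rate."""
    return 1.0 / (1.0 + np.exp(-V))

N = 3                              # number of neurons (toy network)
tau, dt = 10.0, 0.1                # membrane time constant and Euler step (ms)
w = np.random.randn(N, N) * 0.5    # synaptic weights w_ij (random toy values)
I_ext = np.array([1.0, 0.0, 0.0])  # constant external drive to neuron 0

V = np.zeros(N)
for _ in range(int(200 / dt)):     # simulate 200 ms with Euler steps
    r = f(V)                       # r_i(t) = f(V_i(t))
    I = w @ r + I_ext              # I_i(t) = sum_j w_ij r_j(t) + external input
    V += dt / tau * (-V + I)       # leaky integration of the synaptic input

print("steady-state firing rates:", f(V))
```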
Types of learning
- Supervised: at first, not thought to happen at the neuronal level, but it has recently been shown to happen in the cerebellum
- Reinforcement: happens all the time
- Unsupervised: clustering, happens a lot
Hebbian model: «Neurons that fire together wire together» ⟹ you can get a simple model of memory (associative memory)
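A minimal sketch of Hebbian associative memory in the “fire together, wire together” spirit: an outer-product rule stores two binary patterns, and one of them can then be recalled from a corrupted cue (patterns, cue and the one-step threshold readout are illustrative choices).

```python
import numpy as np

# Two binary (+1/-1) patterns to memorize (toy values).
p0 = np.array([ 1,  1, -1, -1,  1, -1])
p1 = np.array([-1,  1,  1, -1, -1,  1])

# Hebbian learning: strengthen w_ij when units i and j are active together.
W = np.outer(p0, p0) + np.outer(p1, p1)
np.fill_diagonal(W, 0)                    # no self-connections

cue = p0.copy()
cue[0] = -cue[0]                          # corrupt one bit of pattern 0
recall = np.where(W @ cue >= 0, 1, -1)    # one-step threshold update
print(np.array_equal(recall, p0))         # True: the memory is completed
```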
Spatial cognition
Ex (Morris water maze): put a mouse in a pool, where there is
- a visible platform ⟶ the mouse goes directly to the platform
- an invisible platform ⟶ the mouse takes more time to find the platform, then learns faster and faster across trials
Spatial cognition: a neuron may fire only at specific positions/locations in space.
Neural basis of spatial cognition
Experiment: identify neurons that fire in
- specific locations (place cells) ⟶ you can then reconstruct a map of the animal's movements from these neurons' activity (see the sketch after this list)
- specific directions (head direction cells) ⟶ these neurons act as a compass, independently of the position of the animal
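A minimal sketch of the position-reconstruction idea from the first bullet, assuming idealized place cells with Gaussian firing fields (centres, field width and peak rate are illustrative, not recorded data):

```python
import numpy as np

rng = np.random.default_rng(0)
centres = rng.uniform(0, 1, size=(50, 2))   # place-field centres in a 1 m x 1 m arena
sigma = 0.1                                 # field width (m), assumed

def rates_at(pos):
    """Gaussian place fields, peak 20 spikes/s at the field centre (assumed)."""
    d2 = np.sum((centres - pos) ** 2, axis=1)
    return 20.0 * np.exp(-d2 / (2 * sigma ** 2))

def decode(rates):
    """Estimate position as the rate-weighted average of the field centres."""
    return (rates[:, None] * centres).sum(axis=0) / rates.sum()

true_pos = np.array([0.3, 0.7])
print(decode(rates_at(true_pos)))           # roughly [0.3, 0.7]
```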
Grid cells: discovered in 2005 ⟶ the neuron's firing fields form a “grid” hardcoded in the brain
And before the experimental evidence in 2005, the existence of these grids was already predicted by many models of path integration (when you keep your eyes closed while moving, you can still track your position ⟹ path integration)
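A common idealization (an assumption here, not a formula given in the lecture) models a grid cell's firing map as the sum of three cosine gratings oriented 60° apart, which yields the hexagonal pattern of firing fields:

```python
import numpy as np
import matplotlib.pyplot as plt

# Idealized grid-cell firing map: sum of three plane waves at 0°, 60°, 120°.
spacing = 0.5                               # distance between firing fields (m), assumed
k = 4 * np.pi / (np.sqrt(3) * spacing)      # wave number giving that spacing
orientations = np.array([0, np.pi / 3, 2 * np.pi / 3])

x, y = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
rate = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in orientations)
rate = np.maximum(rate, 0)                  # keep only the firing bumps

plt.imshow(rate, origin="lower", extent=[0, 2, 0, 2])
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.title("Idealized grid-cell firing map")
plt.show()
```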
Border cells: neurons that respond to the agent’s distance to walls
- Allothetic sensory inputs: come from the external environment
- Idiothetic sensory inputs: come from the agent’s body
Environmental landmarks: very precise, you have visual cues to find your way
Path integration: far less precise, errors accumulate and you quickly lose track of where you came from
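A minimal sketch of why path integration drifts: integrating noisy self-motion estimates step by step makes the estimated position wander away from the true one (noise levels and speed are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, speed = 500, 0.02                         # illustrative values
headings = np.cumsum(rng.normal(0, 0.1, n_steps))  # true heading (random walk)

true_pos = np.zeros(2)
est_pos = np.zeros(2)
for theta in headings:
    true_pos += speed * np.array([np.cos(theta), np.sin(theta)])
    # The agent senses its own motion with small errors that accumulate.
    noisy_theta = theta + rng.normal(0, 0.05)
    noisy_speed = speed * (1 + rng.normal(0, 0.05))
    est_pos += noisy_speed * np.array([np.cos(noisy_theta), np.sin(noisy_theta)])

print("position error after 500 steps:", np.linalg.norm(est_pos - true_pos))
```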
Berthoz: vection illusion (when you think you are moving backwards on a train because you see another train going in the opposite direction)
Experiment: a planetarium with light dots is rotated around the animal; the animal assumes the dots are static and that it is the one moving, so its sense of direction gets confused.
⇒ visual cues override motor cues
Use Hebbian learning/STDP to make a robot navigate in space ⟹ combine visual and motor cues to strike a balance and prevent ambiguity
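For reference, a minimal sketch of the pairwise STDP weight-update rule mentioned above (exponential learning window; amplitudes and time constant are illustrative assumptions):

```python
import numpy as np

def stdp_dw(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).
    Pre before post -> potentiation; post before pre -> depression."""
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * np.exp(-dt / tau)
    return -A_minus * np.exp(dt / tau)

print(stdp_dw(10.0, 15.0))   # pre leads post by 5 ms -> positive weight change
print(stdp_dw(15.0, 10.0))   # post leads pre by 5 ms -> negative weight change
```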
Books:
- Neuronal Dynamics
- Theoretical Neuroscience
- Spikes