Lecture 6: Perception and Emotion for Interaction
Teacher: Philippe Gaussier
Uncanny valley: when a robot looks almost, but not quite, human, it is even more disturbing than a clearly non-human-looking one
Two competing approaches to AI:
- Functional approach: one function/algorithm per problem
- But problem: in the case of the brain, the brain is not tuned to solve one particular problem
- Weak approach: one algorithm per problem ⟶ how to combine them?
Rachid Alami (LAAS): studying human-robot interaction.
Ex: a robot performing a handshake ⟶ sometimes non-natural behaviors (shaking hands behind the back to follow the shortest path)
Learning with logic rules (cf. the researcher Lena) ⟶ problem: more and more rules, to the point that we end up with a tremendous number of rules
Importance of double functions:
- Emotions: meta-controlling, but also to communicate
- Gazing: to collect visual information, but also to communicate
Does an isolated robot need emotions?
To interact with humans, a robot had better have emotions (at least fake ones, to communicate)
But: when the robot is alone?
Selection of behavior
Ex: a rabbit and its carrots
- Reactive system: when the rabbit is right in front of the carrot ⟶ head directly to it
- Motivated system: the carrots aren't seen by the rabbit: how can it conceive a plan to reach them?
- Emotional system: if there is a predator near the carrots: escape quickly! (reflex-based mechanisms + a meta-control system deciding which plan to follow)
Emotional mechanisms can help decide which plan to use ⟶ gating for particular kinds of behaviors
And beware of deadlocks! ⟶ we should be able to detect deadlocks and get out of them
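The rabbit example can be sketched as a minimal gating loop. This is a hypothetical illustration: the behavior names, the fear threshold, and the deadlock test are all made-up assumptions, not the lecture's actual architecture.

```python
# Sketch of emotion-based meta-control over behavior selection.
# All names, thresholds, and the deadlock heuristic are illustrative.

def reactive(state):
    """Reactive system: head directly to the visible carrot."""
    return "approach_carrot"

def motivated(state):
    """Motivated system: follow a plan toward carrots not currently seen."""
    return "follow_plan"

def escape(state):
    """Emotional system: reflex-like flight behavior."""
    return "flee"

def select_behavior(state, fear, history, patience=5):
    """Emotional gating: fear overrides everything; otherwise choose
    between reactive and planned behavior. A deadlock (the same action
    repeated `patience` times with no progress) forces a switch."""
    if fear > 0.8:                      # emotional gate: escape quickly
        return escape(state)
    if len(history) >= patience and len(set(history[-patience:])) == 1:
        return "explore"                # detected a deadlock: break out of it
    if state.get("carrot_visible"):
        return reactive(state)
    return motivated(state)

history = []
for step in range(8):
    state = {"carrot_visible": False}
    history.append(select_behavior(state, fear=0.1, history=history))
print("explore" in history)  # True: the plan loop was detected and broken
```

The point of the sketch is the layering: the emotional signal gates behaviors before any planning happens, and the meta-controller only needs to detect that something is wrong (a stuck plan), not why.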
Humans shouldn't be the ones controlling the learning (even turning it on and off, building the database, etc.)
Need for emotions:
- to react quickly
- modify plans
- meta-control mechanism
- communication
Proprioception (feedback from the sensors) ⟶ Motor control
Log-polar transformation: $\ln\Big(\underbrace{\sqrt{(x-x_0)^2 + (y-y_0)^2}}_{≝\, ρ}\Big)$ as a function of $θ = \arctan \frac{y-y_0}{x-x_0}$
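The mapping above can be computed directly; a minimal sketch (the function name and the centered example points are mine):

```python
import math

# Log-polar mapping from the notes: a point (x, y) relative to a center
# (x0, y0) maps to (theta, ln(rho)). Resolution is effectively high near
# the center and coarse in the periphery.

def log_polar(x, y, x0=0.0, y0=0.0):
    rho = math.hypot(x - x0, y - y0)     # rho = sqrt((x-x0)^2 + (y-y0)^2)
    theta = math.atan2(y - y0, x - x0)   # theta = arctan((y-y0)/(x-x0))
    return theta, math.log(rho)

# Doubling the distance to the center adds ln(2) to the radial
# coordinate, whatever the distance: equal steps in ln(rho) correspond
# to multiplicative steps in rho.
t1, r1 = log_polar(1.0, 1.0)
t2, r2 = log_polar(2.0, 2.0)
print(abs((r2 - r1) - math.log(2)) < 1e-12)  # True
print(t1 == t2)                              # True: same direction, same theta
```

Note that `atan2` is used instead of a bare `arctan` so the angle is well-defined in all four quadrants.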
Imitation to learn
When there is reinforcement with rewards ⟶ too much human monitoring
Imitation could be a good trick to teach a robot. There is still a big debate in psychology as to whether animals are able to imitate.
- Perfect prescriptive training (computer teacher)
  - trying to teach the perfect trajectory: not good, deviates a lot
- Proscriptive training (computer teacher)
  - teaching to correct the trajectory when it's wrong (learning happens only when something is going wrong)
- Interactive training (human teacher)
  - much better results
⟹ it is better to learn to avoid doing things than to learn to do the perfect thing
Bottom line: be able to detect what is wrong at some point
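The prescriptive/proscriptive contrast can be illustrated with a toy 1-D trajectory follower. The lecture gives no algorithm, so the gains and the error tolerance below are invented for illustration:

```python
# Toy contrast between prescriptive and proscriptive correction.
# Entirely illustrative: gains and tolerance are made-up assumptions.

def prescriptive_step(position, target, gain=1.0):
    """Prescriptive: always pulled toward the 'perfect' trajectory,
    whether or not anything is wrong."""
    return position + gain * (target - position)

def proscriptive_step(position, target, tolerance=0.5, gain=1.0):
    """Proscriptive: correct only when the error exceeds a tolerance,
    i.e. learning happens only when something is going wrong."""
    error = target - position
    if abs(error) <= tolerance:
        return position               # nothing is wrong: no correction
    return position + gain * error

pos = 3.0
corrections = 0
for _ in range(10):
    new_pos = proscriptive_step(pos, target=0.0)
    corrections += (new_pos != pos)
    pos = new_pos
print(pos, corrections)  # 0.0 1 — one correction, then silence within tolerance
```

The asymmetry matches the bottom line: the proscriptive teacher only needs to detect that the trajectory is wrong, not to specify the one perfect trajectory at every step.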
For humans: imitation is present from birth on! (newborn babies are able to imitate)
- Mirror neurons can emerge from sensorimotor learning, so that the former may be a consequence of the latter
- Is imitation really so important for learning?
Is the ability to inhibit the action part of the architecture?
A baby communicating with his mother via a video call ⟶ if there is too much video delay, the baby starts crying, as he sees his mother is not responding to him in real time.