AI Robotics
Summary
- Introduction
- From teleoperation to autonomy
- The hierarchical paradigm
- The biological/reactive paradigm
- Perception
Introduction
Main coursebook: Introduction to AI Robotics, MIT Press.
This course focuses on algorithms for robotics, from the classical sense-plan-act paradigm to the behaviorist sense-act approach, and on the hybridizations between the two.
We will not build a robot in this course; that is not the goal. Instead, we will program and experiment with existing research robots, such as the Khepera robot.
But first, what is a robot, for the purposes of this course?
A robot is a machine able to gather information and use knowledge to achieve its goals. Mobile robots, like the ones we will work with, operate in the "real world", move around, and have a high degree of autonomy.
A mobile robot is made of sensors (infrared, camera, bumpers...), actuators (legs, arms, wheels...), and of course a brain to hold everything together.
Looking back a little, we see that the first "mobile robots" were created as early as the 1950s: in 1953, Grey Walter, a neurologist interested in electronics, built a machine, known today as the Grey Walter turtle, that was capable of producing fairly complex behaviour in a suitable environment, with only very simple logic involved (there were no transistors back then, kids!).
The principles deduced from it were: simplicity, attraction/repulsion, and discernment.
The field evolved slowly until the 80s. This first behavioural approach was abandoned in favour of the standard plan-act approach, but it came back at the end of the eighties under the name of "reactive robotics". Nowadays the field is oriented towards a hybrid approach: react when you can, plan when you must.
Behavioural machines are simple reflex agents that map inputs to outputs using inhibitory or excitatory signals.
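As a minimal sketch of such a mapping (the sensor layout, weights, and function names are assumptions for illustration, not a real Khepera API), a reflex agent can be a direct weighted sum from sensors to motors, with positive weights acting as excitation and negative weights as inhibition:

    # Minimal reflex agent sketch: one sense-act cycle, no model, no planning.
    # Sensor readings are in [0, 1]; weights and speeds are made up.
    def reflex_step(left_ir, right_ir):
        base = 0.5
        # Crossed excitation/inhibition: a high reading on the left speeds up
        # the left wheel and slows the right one, steering the robot away.
        left_motor = base + 0.8 * left_ir - 0.8 * right_ir
        right_motor = base + 0.8 * right_ir - 0.8 * left_ir
        return left_motor, right_motor

    # Obstacle close on the left: the robot veers right.
    print(reflex_step(0.9, 0.1))  # -> approximately (1.14, -0.14)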
Starting in 1995, research on a Generic Robot Architecture followed the principle of abstraction: planning at the top level, reaction at the low level.
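A sketch of that layering under assumed interfaces (HybridController, planner, and reactive are hypothetical names, not from the coursebook): the deliberative layer produces a plan occasionally, while the reactive layer runs at every cycle.

    # Hypothetical two-layer architecture: "plan when you must" at the top,
    # "react when you can" at the bottom. All interfaces are illustrative.
    class HybridController:
        def __init__(self, planner, reactive):
            self.planner = planner    # slow: world model -> list of waypoints
            self.reactive = reactive  # fast: (sensors, waypoint) -> command
            self.plan = []

        def step(self, sensors, world_model):
            if not self.plan:                  # plan when you must
                self.plan = self.planner(world_model)
            # React when you can: track the next waypoint, dodge obstacles
            # locally, without going back to the planner
            # (waypoint popping omitted for brevity).
            return self.reactive(sensors, self.plan[0])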
From teleoperation to autonomy.
Teleoperation means control at a distance, usually by a human operator. It has many drawbacks: it requires feedback (it is hard to tell when the machine is stuck), it needs an external viewpoint for localization, and the link between human and machine is usually low-bandwidth, which causes problems for the human such as cognitive fatigue or dropouts.
Several degrees of autonomy exist:
- a self-contained robot is independent of any external power supply or computer.
- automatic robots work well as long as no unpredictable event occurs.
- semi-autonomous robots have their goals fixed by the operator, but make decisions themselves.
- autonomous robots govern themselves.
The hierarchical paradigm.
A hierarchical robot usually builds an internal representation of the world in which it operates. This world model contains the physical world, as well as the machine's own beliefs about it (dangerous areas, for example).
The main drawback of this approach is the cost of world modelling. Since planning happens inside the world model (and not "in the real world"), one of the most important goals is to keep the difference between the two as small as possible.
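A sketch of the resulting sense-plan-act cycle (the robot interface and the update_model and plan_in_model callables are placeholders, not a real API): planning searches the model, and re-sensing after each action is what keeps the model from drifting away from the real world.

    # Hypothetical sense-plan-act skeleton: all collaborators are passed in.
    def sense_plan_act(robot, world_model, update_model, plan_in_model):
        while True:
            readings = robot.sense()              # SENSE: read every sensor
            update_model(world_model, readings)   # fold readings into the model
            plan = plan_in_model(world_model)     # PLAN: search runs in the
                                                  # model, not the real world
            robot.execute(plan[0])                # ACT: one step, then re-sense
                                                  # to limit model/world drift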
The biological/reactive paradigm.
This approach is based on the study of animal behaviour.
Animal behaviour can be divided into three classes:
- Reflexive behaviour (a stimulus directly triggers a response),
- Reactive behaviour (movements made unconsciously while pursuing a goal, like walking (muscle memory)),
- Conscious behaviour (a lion's hunting techniques, for example).
The origin of a behaviour can be innate, the result of a sequence of innate behaviours, driven by memory, or learnt.
Animals often chain behaviours (a cockroach exposed to light, for example: flee, then on meeting a wall follow it, then hide, then wait until no longer scared, then come out again), but this chaining should not be thought of as a sequential program - concurrent threads are a better model.
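A sketch of that concurrency in plain Python threads (the behaviours, the shared state, and the releaser conditions are all made up for illustration): each behaviour runs on its own and acts only while its releaser holds, instead of being called in a fixed order.

    import threading
    import time

    # Illustrative shared state; a real robot would read sensors instead.
    state = {"light": True, "at_wall": True, "scared": True}

    def flee():
        while state["scared"]:
            if state["light"]:
                print("flee: running from the light")
            time.sleep(0.1)

    def follow_wall():
        while state["scared"]:
            if state["at_wall"]:
                print("follow_wall: hugging the wall")
            time.sleep(0.1)

    # Each behaviour is its own thread, woken by its releaser,
    # not a step in one sequential program.
    for behaviour in (flee, follow_wall):
        threading.Thread(target=behaviour, daemon=True).start()
    time.sleep(0.5)
    state["scared"] = False  # releaser gone: both behaviours stop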
So, if we want to program a "robot-animal", we have to:
- Break its behaviour down into individual behaviours,
- Find "releasers" for each behaviour (a stimulus, or some form of memory),
- Design an action and sensing sequence,
- And organize it all as a sequence (or as threads) - see the sketch below.
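That recipe can be sketched as data (the percept keys and behaviour names are hypothetical): each behaviour is stored with its releaser, and a tiny coordinator fires whichever behaviours are currently released.

    # Hypothetical (releaser, behaviour) table for a cockroach-like robot.
    behaviours = [
        (lambda p: p["light"] > 0.8, "flee"),
        (lambda p: p["wall_contact"], "follow_wall"),
        (lambda p: p["hidden"] and p["scared"], "wait"),
    ]

    def released(percept):
        # A releaser can be a stimulus or a memory flag; either way it is
        # just a predicate over the current percept.
        return [name for releaser, name in behaviours if releaser(percept)]

    print(released({"light": 0.9, "wall_contact": False,
                    "hidden": False, "scared": True}))  # -> ['flee']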
Perception.
Ah, perception. This was the field of cognitive psychologists long before computer scientists came to steal their ideas :). It deals with the problem of sensing and stimuli.
Take a homing pigeon, for example: the animal uses the sun and its biological clock to locate itself in the world. But on a cloudy day it doesn't get lost: it falls back on magnetic fields. And sometimes it also uses light polarization, smell, and storm detection to find its way.
Perception is used to release behaviours and to guide their execution. It works by extracting the information specific to the task from the environment. The perceivable potentialities for action are called affordances.
Direct perception doesn't require any understanding of the environment: for example, a robot following the left wall of a maze doesn't need to know that it is in a maze, nor that what it is following is a wall.
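A sketch of such a wall-follower (sensor names, target distance, and gains are invented): the control law only keeps a raw side distance near a setpoint; nothing in the code represents "maze" or "wall".

    # Direct perception sketch: steer from raw distance readings only.
    def follow_left(side_dist, front_dist, target=0.2):
        if front_dist < target:      # something ahead: rotate right in place
            return 0.3, -0.3
        base = 0.5
        error = side_dist - target   # > 0: drifting away from the left side
        turn = max(-0.4, min(0.4, 2.0 * error))
        # error > 0 slows the left wheel, steering back toward the wall.
        return base - turn, base + turn

    print(follow_left(side_dist=0.3, front_dist=1.0))  # -> approximately (0.3, 0.7)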
Recognition, on the other hand, requires a bit of "understanding". We use recognition, for example, to find our car in a parking lot - which requires memory of its immediate vicinity as well as memory of the car itself.
We can view behaviours as schemas mapping inputs to outputs, with releasers that fire the behaviour. Perception produces percepts (the inputs), which can be recursively defined, and can even be contradictory.
The conflict between percepts often causes what we call "emergent behaviour", i.e. an action unforeseen by the designer, a consequence of the interaction between the perceptions.
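One classic way such conflicts combine is vector summation of motor schemas (the schemas and numbers below are made up for illustration): each behaviour votes with a velocity vector, and the sum traces a path that no single behaviour computed.

    # Illustrative motor-schema combination: conflicting percepts sum into
    # an emergent motion vector.
    def avoid_obstacle(p):
        return (-p["obstacle_x"], -p["obstacle_y"])  # push away from obstacle

    def move_to_goal(p):
        return (p["goal_x"], p["goal_y"])            # pull toward the goal

    def combine(p):
        vectors = [avoid_obstacle(p), move_to_goal(p)]
        return tuple(sum(c) for c in zip(*vectors))

    # Goal dead ahead, obstacle ahead-left: the sum veers right around the
    # obstacle - an emergent path neither schema produced on its own.
    print(combine({"goal_x": 1.0, "goal_y": 0.0,
                   "obstacle_x": 0.8, "obstacle_y": 0.3}))
    # -> approximately (0.2, -0.3)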