Autonomous artiﬁcial intelligent agents
Răzvan V. Florian
Center for Cognitive and Neural Studies (Coneural)
Str. Saturn 24, 3400 Cluj-Napoca, Romania
Technical Report Coneural-03-01
February 4, 2003
This paper reviews the current state of the art in research concerning the
development of autonomous artificial intelligent agents. First, the meanings
of specific terms, like agency, automaticity, autonomy, embodiment, situat-
edness, and intelligence, are discussed in the context of this domain. The
motivations for conducting research in this area are then presented. We focus,
in particular, on the importance of autonomous embodied agents as support
for genuine artiﬁcial intelligence. Several principles that should guide au-
tonomous agent research are reviewed. Of particular importance are the em-
bodiment and situatedness of the agent, the principle of sensorimotor coor-
dination, and the need for epigenetic development and learning capabilities.
They ensure the adaptability, ﬂexibility and robustness of the agent. Several
design and evaluation considerations are then discussed. Four approaches to
the design of autonomous agents—the subsumption architecture, evolution-
ary methods, biologically-inspired methods and collective approaches—are
presented and illustrated with examples. Finally, a brief discussion men-
tions the possible role of autonomous agents as a framework for the study
of computational applications of the far-from-equilibrium systems theory.
Contents

1 Introduction
2 What is an autonomous intelligent agent?
   2.1 Agency, automaticity, autonomy
   2.2 Situatedness
   2.3 Embodiment
   2.4 Intelligence
3 Reasons for studying artificial autonomous agents
   3.1 Applications
   3.2 Autonomous agents as support for genuine artificial intelligence
       3.2.1 Classical artificial intelligence
       3.2.2 Limits of classical AI
       3.2.3 Fundamental problems of classical AI
       3.2.4 Embodiment as a condition for learning and adaptability
       3.2.5 Embodied, interactivist-constructivist cognitive science
   3.3 Biological modelling
4 Design principles for autonomous agents
   4.1 The three-constituents principle
   4.2 Autonomy, embodiment, situatedness
   4.3 Emergence, self-organization
   4.4 Epigenesis, online learning
   4.5 Parallel, loosely coupled processes
   4.6 Sensorimotor coordination
   4.7 Goal directedness
   4.8 Cheap design
   4.9 Redundancy
   4.10 Ecological balance
   4.11 Grounded internal representation
   4.12 Grounded symbolic communication
   4.13 Interdependencies between the principles
5 Evaluation and analysis
6 Approaches in autonomous agent research
   6.1 The subsumption architecture
   6.2 Evolutionary methods
   6.3 Biologically inspired, engineered models
   6.4 Collective behavior, modular robotics
7 Embodied agents as far-from-equilibrium systems
Autonomous intelligent agent research is a domain situated at the forefront
of artificial intelligence. As shown below, it has been argued that genuine
intelligence can emerge only in embodied, situated cognitive agents. It is a
highly interdisciplinary research area, connecting results from theoretical
cognitive science, neural networks, evolutionary computation, neuroscience,
and engineering. Besides its scientiﬁc importance, there are also important
applications of this domain in the development of robots used in industry,
defense and entertainment.
We will ﬁrst attempt to delimit the scope covered by the term “au-
tonomous artiﬁcial agent”. The scientiﬁc importance of the study of em-
bodied agents will then be stressed. The paper will continue with the pre-
sentation of the principles used in the design of artiﬁcial autonomous agents.
Design and evaluation considerations will also be discussed. Several design
methods will then be illustrated. Finally, we will briefly discuss the possible
role of autonomous agents as a framework for the study of computational
applications of the theory of far-from-equilibrium systems.
What is an autonomous intelligent agent?
Agency, autonomy, and intelligence are notions that are all fuzzy and hard
to define. Also, agency is tightly connected to qualities like autonomy,
situatedness, and embodiment. Most authors refrain from giving precise
definitions, as such definitions are inevitably either too broad or too narrow.
For example, Russell and Norvig (1995) consider: “The notion of an agent is
meant to be a tool for analyzing systems, not an absolute characterization
that divides the world into agents and non-agents.” Moreover, the different
definitions available in the literature are often not consistent with one another.
Without attempting to explain precisely these terms, we will outline here
their meaning, in order to delineate the scope of this paper.
Agency, automaticity, autonomy
We generally consider humans and most other animals as being agents. Sci-
entists and engineers have also built robots, systems and software programs
that can be considered to be artiﬁcial agents. But what really distinguishes
an agent from other artiﬁcial systems?
Luc Steels (1995), a preeminent artiﬁcial intelligence researcher, consid-
ers that the essence of agency is that “an agent can control to some extent
its own destiny”. This requires automaticity: the agent must have mechanisms
that allow it to sense the environment and act upon it, without requiring
the intervention of other agents. A thermostat or a virus can thus be
considered an agent.
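To make the distinction concrete, the thermostat example can be sketched as a minimal automatic agent. The following toy simulation (our own illustration; the class names, setpoint, and dynamics are invented) shows a sense-act loop that needs no intervention from other agents:

```python
class Room:
    """A toy environment with a single state variable: temperature."""

    def __init__(self, temperature):
        self.temperature = temperature

    def step(self, heating_on):
        # The room cools slowly on its own; the heater warms it when on.
        self.temperature += 1.0 if heating_on else -0.5


class Thermostat:
    """Senses the temperature and acts by switching the heater,
    without the intervention of any other agent."""

    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, sensed_temperature):
        return sensed_temperature < self.setpoint


room = Room(temperature=15.0)
agent = Thermostat(setpoint=20.0)
for _ in range(20):
    heating = agent.act(room.temperature)  # sense the environment
    room.step(heating)                     # act upon it
print(round(room.temperature, 1))          # settles near the setpoint
```

Such a system is automatic but not autonomous: its setpoint and mechanism are fixed by its designer, and it cannot adapt its principles of behavior.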
Autonomy is a characteristic that enhances the viability of an agent in
a dynamic environment. For autonomous agents, “the basis of self-steering
originates (at least partly) from the agent’s own capacity to form and adapt
its principles of behavior. Moreover, the process of building up or adapting
competence is something that takes place while the agent is operating in
the environment” (Steels, 1995). Autonomy requires automaticity, but goes
beyond it, implying some adaptability. However, autonomy is a matter of
degree, not a clear-cut property (Smithers, 1995; Steels, 1995). Most animals
and some robots can be considered autonomous agents.
Other authors consider that agents are implicitly autonomous. In a
study seeking to draw the distinction between software agents and other
software systems, Franklin and Graesser (1996) have made a short survey of
the meaning of “agent” in the computer science and artificial intelligence
literature. In the papers surveyed there, agency is considered to be
inseparable from autonomy. As a conclusion of their survey, Franklin and
Graesser attempt a definition:
“An autonomous agent is a system situated within and a part of an envi-
ronment that senses that environment and acts on it, over time, in pursuit
of its own agenda and so as to eﬀect what it senses in the future.”
Ordinary computer applications, such as an accounting program, could
be considered to sense the world via their input and act on it via their
output, but they are not considered agents because their output would
not normally affect what they sense later. All software agents are computer
programs, but not all programs are agents (Franklin & Graesser, 1996).
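This criterion can be illustrated with a minimal sketch (our own, not Franklin and Graesser's): an agent whose actions change the very quantity it will sense next, closing the loop that an ordinary input-output program lacks. All names and dynamics here are invented for illustration:

```python
def run_agent(position, target, steps):
    """Hypothetical agent that senses its distance to a target and moves
    toward it; each action alters what the agent will sense next."""
    for _ in range(steps):
        sensed = target - position           # sense the environment
        if sensed == 0:
            break                            # its agenda is achieved
        position += 1 if sensed > 0 else -1  # act: this changes the value
                                             # that will be sensed next
    return position

print(run_agent(position=0, target=7, steps=100))  # reaches 7
```

An accounting program, by contrast, would compute its output from its input once, with no effect on its future inputs.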
Agents differ from the objects of object-oriented computer programs
in their autonomy and flexibility, and in having their own control
structure. They also differ from the expert systems of classical artificial
intelligence by interacting directly with an environment, rather than just
processing human-provided symbols, and by their autonomous learning
(Iantovics & Dumitrescu, in press).
Pattie Maes from the MIT Media Lab, one of the pioneers of agent research,
also defines artificial autonomous agents (Maes, 1995) as “computational
systems that inhabit some complex dynamic environment, sense and act
autonomously in this environment, and by doing so realize a set of goals or
tasks for which they are designed.”
Situatedness

From the definitions above, situatedness—the quality of a system of being
situated in an environment and interacting with it—seems to be regarded
as an implicit property of most agents.
Embodiment

Embodiment is an important quality of many autonomous agents. It refers
to their property of having a body that interacts with the surrounding
environment. This property is important for their cognitive capabilities, as
we will see below. While this generally refers to a real physical body, like
those of animals and robots, several studies (Quick, Dautenhahn, Nehaniv,
& Roberts, 1999; Riegler, 2002; Oka et al., 2001) have argued that the im-
portance of embodiment is not necessarily given by materiality, but by its
special dynamic relation with the environment. A body can both be inﬂu-
enced by the environment and act on it. Some of its actions can change the
environment, thus changing the inﬂuence of the environment over it, in a
closed loop structural coupling. This can also happen in environments other
than the material world, such as computational ones. The environment can
be a simulated physical environment, or a genuinely computational one, such
as the internet or an operating system. Embodiment is thus deﬁned extend-
edly by Quick et al. (1999): “A system X is embodied in an environment
E if perturbatory channels exist between the two. That is, X is embodied
in E if for every time t at which both X and E exist, some subset of E’s
possible states have the capacity to perturb X’s state, and some subset of
X’s possible states have the capacity to perturb E’s state.” This is closely
related to the biologically inspired idea of structural coupling from the work
of Maturana and Varela (1987).
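Quick et al.'s condition of mutual perturbatory channels can be illustrated with a toy pair of coupled scalar systems (our own sketch; the update rule and coefficients are arbitrary, not from the cited paper): at every step, each one's next state depends on the other's current state.

```python
def coupled_run(x, e, steps):
    """Closed structural coupling between a system X and an environment E:
    E perturbs X's state and X perturbs E's state at every time step."""
    history = []
    for _ in range(steps):
        # Simultaneous update: each new state is a mix of its own previous
        # state and a perturbation from the other.
        x, e = 0.5 * x + 0.4 * e, 0.5 * e + 0.4 * x
        history.append((x, e))
    return history

# Changing E's initial state changes X's whole trajectory, and vice versa.
with_e0 = coupled_run(x=1.0, e=0.0, steps=5)
with_e2 = coupled_run(x=1.0, e=2.0, steps=5)
print(with_e0[-1][0] != with_e2[-1][0])  # True: E's state perturbed X
```

Cutting either direction of the coupling (setting a coefficient to zero) would leave the system merely reactive or merely acting, no longer embodied in Quick et al.'s sense.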
Ziemke (2001a, 2001b) also discusses other forms of embodiment:
“organismoid” embodiment, i.e., organism-like bodily form (e.g., humanoid
robots), and the organismic embodiment of autopoietic, living systems. He
also notes that embodiment may be considered a historical quality, in the
sense that systems may not only be structurally coupled to their environ-
ment in the present, but that their embodiment is in fact a result or reﬂection
of a history of agent-environment interaction. In our interpretation of the
term, embodiment must be historical, but not necessarily organismoid or
organismic.
Embodiment is tightly connected to situatedness—a body is not sufficient
for embodiment if it is not situated in an environment. Moreover,
the body must be adapted to the environment, in order to have a mutual
interaction. In this interpretation, a robot standing idle on a shelf, a robot
having only visual sensors but inhabiting an environment without light, or
a robot which does not perceive its environment, acting according to a
predefined plan or remotely controlled, are not considered to be embodied
or situated.
Intelligence

Intelligence is another hard-to-define notion, and even a controversial one.
Various authors consider it to be an ability to learn from experience, to
adapt to new situations and changes in the environment, or to carry on
abstract thinking (Pfeifer & Scheier, 1999). The MIT Encyclopedia of the
Cognitive Sciences states: “An intelligent agent is a device that interacts with its
environment in ﬂexible, goal-directed ways, recognizing important states of
the environment and acting to achieve desired results” (Rosenschein, 1999).
However, in practice intelligence is a relative attribute, evaluated in
comparison with human capabilities. For example, we would not normally
consider a rat to be intelligent (implying a comparison to a human), but we
would recognize it as more intelligent than a cockroach.
In agreement with these considerations, in this paper we consider an
agent to be intelligent if it is capable of performing non-trivial, purposeful
behavior that adapts to changes in the environment. However, the evaluation
of the behavior is done, somewhat arbitrarily, by a human observer, and
thus intelligence is a subjectively assigned property.
Reasons for studying artificial autonomous agents

Applications

Many artificial agents are developed to perform physical tasks that
directly serve human purposes. Scientists and engineers are trying to
build robots that can relieve people of dangerous, physically demanding,
or monotonous jobs. Many robots automate work in the manufacturing
industries; however, they are usually neither autonomous nor intelligent. Other
robots, with various degrees of autonomy, are used for exploring remote or
inaccessible locations. For example, they might investigate distant plan-
ets (like the Mars Sojourner1) or the ocean ﬂoor, they might inspect oil
pipelines (like iRobot’s MicroRig system2) or sewer pipes (like MAKRO;
Kolesnik & Streich, 2002). Their autonomy may eliminate the need for ex-
pensive remote control equipment (like kilometers of cable and machines
for manipulating the cable, in the case of pipe or sewer inspection), or for
human surveillance operators. Autonomy may also protect them in the case
of unexpected events, when a remote operator cannot respond fast enough
because of communication delays, as in planetary exploration. Research is
also being carried out on robots that can rescue people from collapsed
buildings or perform demining operations.
Consumer robotics is expected to become a huge market, especially in the
context of the increasing number of aged people in the developed countries.
iRobot’s Roomba3, launched in 2002, is the first consumer robotic vacuum
cleaner.
Artiﬁcial intelligent agents are also used for entertainment, as virtual
companions or in movies and graphics. For example, the computer game
Creatures4 features artiﬁcial characters that grow, learn from the user, and
develop their own personality. The Sony Aibo robotic dog5 behaves like an
artificial pet, entertaining its owners, who may even become emotionally
attached to it through its interactive behavior. Artificially evolved neural network
controllers for computer simulated ﬁsh were used for generating realistic
computer graphics (Terzopoulos, 1999).
There are thus important possible applications for autonomous intelli-
gent agents. However, the degree of autonomy and intelligence of current
artificial agents is quite low in comparison with biological ones, like
mammals. Research is being carried out to improve the autonomy and
intelligence of artificial agents. This paper will next present some principles,
many of which are biologically inspired, that should be followed to develop
more competent artificial intelligent agents (Section 4).
Autonomous agents as support for genuine artificial intelligence
Autonomous agent research is interesting not only for its immediate
applications to physical tasks, but also for the more general purpose of developing
genuine artiﬁcial intelligence. As we will show next, it is currently consid-
ered that genuine intelligence can emerge only in situated, embodied agents,
which can interact directly with an environment.
Classical artiﬁcial intelligence
At the beginning of these disciplines, starting in the 1950s, most
researchers in artificial intelligence (AI), and in cognitive science in
general, considered reasoning a disembodied process. These first years of
cognitive studies were particularly marked by the influence of the computer,
which was a relatively new technology at the time. Intelligent behavior was
often viewed as computation. It was thought that human intelligence is
achieved by symbolizing external and internal situations and events and
by manipulating these symbols according to syntactic rules (Fodor, 1975;
Pylyshyn, 1980; Simon & Kaplan, 1989). The supporters of this so-called
cognitivist or functionalist approach maintained that once the right algorithms
and ways of representing knowledge in symbols were found, intelligence
could be implemented in any kind of computing machine, such as computer
software, regardless of the hardware implementation. In this framework, the
body of the cognitive agent is not regarded as having any particular relevance:
it may provide symbolic information as input, or act out the result of the
computation, like a peripheral device, or it may be absent altogether. The only
important process is considered to be the symbol manipulation in the central
system.
Until the 1980s, most of the models in cognitive science and cognitive
psychology were inspired by the functioning of the computer and phrased
in computer science and information processing terminology; some of these
models continue to be backed today by their supporters. Representational
structures such as feature lists, schemata and frames (knowledge structures
that contain ﬁxed structural information, with slots that accept a range of
values), semantic networks (lists and trees representing connections between
words) and production systems (a set of condition-action pairs used as rules
in the execution of actions) were used to explain and simulate on computers
cognitive processes (Anderson, 1993; Newell, 1990). It was proposed that
problem solving is accomplished by humans through representing achievable
situations in a branching tree and then searching in this problem space
(Newell & Simon, 1972). It was also proposed that objects are recognized
by analysis of discrete features or by decomposing them into primitive
geometrical components (Biederman, 1987).
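As a rough illustration of the production-system idea (a toy of our own devising, not one of the cited models), a set of condition-action pairs is repeatedly matched against a working memory; the first rule whose condition holds fires, until no rule applies:

```python
# Each rule is a (condition, action) pair over a working memory of facts.
# The facts "goal", "plan", "done" are invented for this example.
rules = [
    (lambda wm: "goal" in wm and "plan" not in wm,
     lambda wm: wm.add("plan")),
    (lambda wm: "plan" in wm and "done" not in wm,
     lambda wm: wm.add("done")),
]

def run(working_memory, max_cycles=10):
    """Fire the first matching rule on each cycle, until quiescence."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(working_memory):
                action(working_memory)
                break
        else:
            break  # no rule fired: the system is quiescent
    return working_memory

print(sorted(run({"goal"})))  # ['done', 'goal', 'plan']
```

Cognitive architectures such as ACT-R and Soar, cited above, elaborate this basic match-fire cycle with far richer memories and conflict-resolution schemes.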
In robotics, efforts were directed towards building internal models of
the world, on which the program could operate to produce a plan of action
for the robot. Perception, planning and action were performed serially.
Perception updated the state of the internal model, which was predefined by
the designer of the robot. Because of this, perception recovered predetermined
properties of the environment, rather than exploring it. The environments in
which the robots operated were often fixed; otherwise, the internal model would
have failed to represent reality. Planning was achieved through symbolic
manipulation in the internal world model. A classical example of this sense-
model-plan-act (SMPA) approach (Brooks, 1995, p. 28) is the robot Shakey,
built in the 1960s at the Stanford Research Institute6.
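The SMPA cycle can be caricatured in a few lines (our own schematic sketch with a hypothetical grid world, not Shakey's actual architecture): perception fills the slots of a predefined model, planning operates only on the model, and action is executed last, serially.

```python
# Hypothetical world model, predefined by the designer: the only properties
# perception can recover are the ones the model already has slots for.
world_model = {"robot": (0, 0), "goal": (2, 1)}

def sense(true_robot_position):
    # SENSE + MODEL: perception merely updates the predefined slot.
    world_model["robot"] = true_robot_position

def plan():
    # PLAN: symbolic manipulation entirely inside the internal model
    # (axis-aligned moves toward the goal; no obstacles assumed).
    (rx, ry), (gx, gy) = world_model["robot"], world_model["goal"]
    moves = [(1 if gx > rx else -1, 0)] * abs(gx - rx)
    moves += [(0, 1 if gy > ry else -1)] * abs(gy - ry)
    return moves

def act(position, move):
    # ACT: execute one step of the finished plan.
    return (position[0] + move[0], position[1] + move[1])

position = (0, 0)
sense(position)        # sense
steps = plan()         # model + plan
for move in steps:     # act, serially, only after planning is complete
    position = act(position, move)
print(position)  # (2, 1)
```

The fragility criticized below follows directly from this structure: if the world changes after `plan()` returns, the stale plan is executed anyway.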
Limits of classical AI
The methods of this so-called Good Old Fashioned Artiﬁcial Intelligence
(GOFAI) had some impressive successes in certain domains; however, these
successes are limited. Based on those methods, programs were built that
solved problems and proved theorems from logic and geometry. However,
they depend on humans to convert the problem into a representation
suitable for them, and are confined to domains where knowledge can be easily