Criteria for Consciousness in Artificial Intelligent Agents
Universidad Carlos III de Madrid
Avda. Universidad, 30
28911 Leganés, Spain
+34 91 6249111
ABSTRACT
Accurately testing for consciousness is still an unsolved problem when applied to humans and other mammals. The inherent subjective nature of conscious experience makes it virtually unreachable to classic empirical approaches. Therefore, alternative strategies based on behavior analysis and neurobiological studies are being developed in order to determine the level of consciousness of biological organisms. However, these methods cannot be directly applied to artificial systems. In this paper we propose both a taxonomy and some functional criteria that can be used to assess the level of consciousness of an artificial intelligent agent. Furthermore, a list of measurable levels of artificial consciousness, ConsScale, is defined as a tool to determine the potential level of consciousness of an agent. Both the mapping of consciousness to AI and the role of consciousness in cognition are controversial and unsolved questions; in this paper we aim to approach these issues with the notions of I-Consciousness and embodied intelligence.

Categories and Subject Descriptors
I.2 [Artificial Intelligence]: General – cognitive simulation.

Keywords
Machine Consciousness, Artificial Consciousness, cognitive agents, Cognitive Modeling.

1. INTRODUCTION
Determining the level of consciousness of a living organism is a hard problem. One could think that some sort of Turing test might be a plausible solution. It is indeed what we do every day when we perceive other subjects as conscious beings. These kinds of tests, which we are used to performing unconsciously, are based on verbal report and observed behavior. We perceive other humans acting as if they were conscious and thus we infer they actually are. However, we do not have any scientific proof that others experience any subjective life because we cannot perceive it directly. Therefore, from a pure scientific point of view, zombies (organisms behaving as conscious beings but without any inner feeling) are conceivable, although probably not possible.

Some approaches have been proposed in order to overcome the issue of scientific proof of consciousness. From a philosophical standpoint, Dennett has proposed the heterophenomenology method, which consists of the application of the scientific method to both third-person behavior analysis and first-person self report. From the neuroscience perspective, Seth, Baars and Edelman propose a set of criteria for consciousness in humans and other mammals. A number of these criteria are based on neurobiological aspects. If the neuronal structures and the associated activity patterns that give rise to consciousness are identified, then we can look for them in animals endowed with a central nervous system. Analogously, if some behavior patterns are identified as uniquely produced by a conscious subject, we can design experiments where these behaviors are tested. However, when it comes to artificial agents, most of the assumptions mentioned above cannot be directly applied. The following are the main reasons why we think the former criteria should not be used for evaluating artificial agents:

- Artificial agents have different underlying machinery. At the biological level, the behavior of mammals is controlled by the endocrine and nervous systems. Even though some artificial agents are inspired by or try to simulate the biological nervous system, their design is quite far from a realistic emulation. Therefore, it does not make sense, for instance, to look for a strong connection between thalamus and cortex as a possible sign of an underlying mechanism for consciousness in an artificial implementation (given the case that the implementation under study is endowed with a simulated thalamocortical structure).

- Artificial agents' behavior produces different patterns. Moving the observer's point of view from the biological level to the behavioral level, human behavior can be seen as regulated by cultural rules. Human ontogeny gives rise to different behavioral patterns as a subject develops situated in a cultural environment. Given that the development of artificial agents differs from that, their behavior should not be analyzed following the same criteria that are applied to humans.

- Lack of verbal report. This is one of the key differences between humans' behavior and artificial agents' behavior. Accurate verbal report (AVR) is probably the main way we can find out about the inner life experienced by a human subject. Given the lack of this kind of communication skills in artificial systems, AVR as we know it cannot be used to evaluate artificial agents.

ALAMAS+ALAg 2008 – Workshop at AAMAS 2008, Estoril, May 12, 2008, Portugal.
Taking into account the reasons mentioned above and the fact that human culture strongly determines the production of consciousness in humans, we argue that the kind of consciousness that could potentially be produced in artificial agents would be of a different nature (although we think it still could be called consciousness, machine consciousness or artificial consciousness). Consequently, we believe that criteria for machine consciousness should be studied from the perspective of a specifically defined taxonomy of artificial agents. Even though some of the classes of artificial agents defined in this taxonomy cannot be directly compared with a corresponding example of biological organisms, both biological phylogenetic and human ontogenetic analogies can often be used to better understand the level of consciousness that can be associated to a particular class of agents.

In the next section we aim to provide a comprehensive description of the main aspects of consciousness and their basic roles in cognition. Additionally, we redefine the dimensions of consciousness in terms of artificial intelligent agents, and thereby characterize machine consciousness by analyzing the fundamental building blocks required in an agent's architecture in order to produce the functionality associated with consciousness. Subsequently, in section 3, we discuss key particular functions of consciousness and their interaction in the agent's cognitive processes. In section 4, we take into account both the key functions of consciousness and the agent's basic architectural features to propose a taxonomy for artificial agents, where a concrete level of machine consciousness is assigned to each agent category. Section 5 provides a framework for classifying agents under the light of the proposed taxonomy. Finally, we conclude in section 6 with a brief discussion of the current state of the art in terms of our proposed criteria.

2. CHARACTERIZING MACHINE CONSCIOUSNESS
Setting aside the discussion about whether or not a categorical implementation of an artificial form of consciousness is possible, we have adopted an incremental approach in which we consider that certain aspects of consciousness can be successfully modeled in artificial agents, while other aspects might still be out of reach given the current state of the art in the field of machine consciousness. In this scenario, we need to define which are the conceptual building blocks integrated in a possible machine consciousness implementation. Then we could test the presence of these functional components and their interrelation within a given system in order to assess its potential level of machine consciousness. However, the definition of these components would require a complete understanding of 'natural' consciousness, and given that the quest for consciousness has not yet come to a successful end, a more modest framework has to be established in the realm of artificial systems. But, what are the components of consciousness that we are not able to explain or concretely define so far? We need to decompose, or at least conceptually decouple, the dimensions of consciousness in order to be able to answer this question.

2.1 The Dimensions of Consciousness
An extremely complex phenomenon like consciousness can be seen as a whole, or more conveniently, it can be analyzed as if it were composed of two interrelated dimensions. A conceptual division can be outlined when a distinction is made between phenomenology and access. While the access dimension (A-Consciousness) refers to the accessibility of mind contents for conscious reasoning and volition, phenomenology (P-Consciousness) is related to the subjective experience or qualia, i.e. how it feels to be thinking about something, or what it is like to be someone else, as Nagel would formulate it. Understanding how P-Consciousness is produced by biological organisms is a controversial problem usually regarded as the explanatory gap, which still remains to be closed (if that is ever possible). While the access dimension of consciousness has an obvious function, namely guiding conscious reason and action, the phenomenal dimension lacks a generally accepted function. Qualia could be just a side effect produced by access mechanisms, or they could play a key role in the integration of multimodal perception. What is generally accepted is that, rather than binary properties, both access and phenomenal aspects of consciousness come in various degrees. Therefore, we think it is possible to represent a range of degrees of consciousness in a bi-dimensional space defined by the phenomenal and access dimensions (see Figure 1). The access dimension represents the informational aspect of consciousness, while the phenomenal dimension represents the subjective aspect.

Figure 1. Consciousness bi-dimensional space in biological organisms.

The questions of having A-Consciousness without P-Consciousness and vice versa are typically controversial issues in the study of consciousness. In the present work, we have adopted the assumption that machine consciousness and 'biological' consciousness are actually different phenomena. Therefore, different kinds of consciousness could be present in artificial agents, and these new versions of machine consciousness could follow different rules in terms of the conceptual link between A-Consciousness and P-Consciousness. While we assume that both A-Consciousness and P-Consciousness increase uniformly at the same rate in biological phylogeny (as depicted in Figure 1), we consider that all combinations are a priori possible in artificial agents. We believe that the evolutionary forces involved in the design of biological organisms have always produced functionally coherent machinery; hence zombies or P-Unconscious individuals (A-Consciousness without P-Consciousness) and A-Unconscious individuals (P-Consciousness without A-Consciousness) do not naturally exist. Nevertheless, there exist cases of individuals that after suffering cerebral vascular accidents or traumatic brain injury become P-Unconscious or A-Unconscious in some respects and degrees. For instance, brain-injured patients who have developed prosopagnosia are unable to consciously recognize faces despite being able to recognize any other visual stimuli. Even though prosopagnosic patients are unable to experience any feeling of familiarity at the view of the faces of their closest relatives (loss of P-Consciousness), other cognitive operations are still performed with the perceived faces (a covert face recognition takes place) but their output fails to reach consciousness (a disorder of A-Consciousness). However, some A-Consciousness capability remains in many patients, as they are usually able to implicitly access knowledge derived from 'P-Unconsciously' unrecognized faces.

It is also important to distinguish between consciousness as it is applied to creatures and consciousness as it is applied to mental states. Essentially, a conscious subject can have conscious and unconscious mental states. In the prosopagnosia example discussed above, conscious individuals fail to have P-Consciousness of faces at view and their A-Consciousness is also impaired in that respect. However, these subjects can perfectly be A-Conscious and P-Conscious of the voice and speech of their relatives or any other person. In this paper, we generally refer to creature consciousness, hence evaluating the potential level of consciousness of individuals as per their ability to have P-Conscious and A-Conscious states. The particular contents of the mental states will be analyzed later as part of the method to establish a taxonomy for machine consciousness.

2.2 A Computational Approach to Consciousness in Intelligent Agents
The possible functionality of P-Consciousness and the possibility of effectively having one dimension of consciousness without the other remain unanswered questions. Therefore, the interrelation between access and phenomenology remains highly unclear and controversial. Some authors even consider P-Consciousness an epiphenomenal process, hence independent of behavior, while others tend to identify a key functional role for it. Following a pure computational approach we could consider both A-Consciousness and P-Consciousness as being the same functional process, thus neglecting the possibility of subjective experience in artificial agents. However, we think that a different dimensional decomposition is to be made in the realm of machine consciousness (see Figure 2). Although the nature of, and required underlying machinery for, qualia are not known, we believe that some functional characterization of P-Consciousness can be made. Therefore, we have adopted a functional point of view, in which we introduce a redefined dimension of consciousness called Integrative Consciousness (I-Consciousness). In our conception of machine consciousness, we have taken the assumption that I-Consciousness represents the functional aspect of P-Consciousness that exists in conscious biological organisms. In computational terms, consciousness can be regarded as a unique sequential thread that integrates concurrent multimodal sensory information and coordinates voluntary action. Hence, consciousness is closely related with sensorimotor coordination. Our aim is to establish a classification of agents according to the realization of the functions of consciousness in the framework of the agent's sensorimotor coordination.

In order to characterize consciousness as a property of agents we need to formally define the basic components of an artificial situated agent. Such an agent interacts with the environment by retrieving information both from its own body and from its surroundings, processing it, and acting accordingly. Following Wooldridge's definition of abstract architectures for intelligent agents, and taking into account the embodiment aspect of situated agents, we have identified a set of essential architectural modules: sensors, sensorimotor coordination, internal state, and effectors. These modules implement the following processes: perception, reason, and action. Consequently, the following abstract architectural components can be identified:

- Body (B). Embodiment is a key feature of a situated agent. The agent's body can be physical or software simulated (as can its environment). A boundary is established between the agent's body and its environment (E). The rest of the components are usually located within this boundary. We believe that it is important to make a distinction between the agent's body (or plant, if we take a control theory standpoint) and the environment, as the first is directly controlled while the latter is indirectly controlled. The definition of the body of an agent is important as it determines what sensors it can use, how its effectors work, and ultimately how its perception and behavior are affected by its physical embodiment. Owning an active body is essential for the acquisition of consciousness.

- Sensory Machinery (S). The agent's sensors are in charge of retrieving information from the environment (exteroceptive sensors) or from the agent's own body (proprioceptive sensors).

- Action Machinery (A). In order to interact with the environment the agent uses its effectors. The agent's behavior is composed of the actions ultimately performed by this machinery.

- Sensorimotor Coordination Machinery (R). From purely reactive agents to deliberative ones, the sensorimotor coordination module is in charge of producing a concrete behavior as a function of both external stimuli and internal state.

- Memory (M). The agent's internal state is represented both by its own structure and by stored information. Memory is the means to store both perceived information and newly generated knowledge. We consider that even agents that do not maintain state can be said to have a minimal state represented by their own structure, i.e. preprogrammed sensorimotor coordination rules.

As Wooldridge has pointed out, different classes of agents can be obtained depending on the concrete implementation of the abstract architecture. Following the notation that we have adopted, we could say that different sensorimotor coordination functions give rise to different classes of agents, for instance, reactive agents or BDI agents. While the sensorimotor coordination of reactive agents is characterized by a direct mapping from situation to action, BDI agents' decision making is based on internal state representing beliefs, desires, and intentions.
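The abstract components above (B, S, A, R, M) can be sketched, for illustration only, as a minimal Python agent skeleton. All class and method names below are our own illustrative choices, not part of the original formalism:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of the abstract architecture: Body (B), Sensory
# machinery (S), sensorimotor coordination (R), Action machinery (A),
# and Memory (M). Names are hypothetical, chosen for readability.

@dataclass
class Percept:
    modality: str          # e.g. "exteroceptive" or "proprioceptive"
    content: object

@dataclass
class SituatedAgent:
    # M: even a stateless agent has a minimal "state" in its structure;
    # here, remembered percepts plus the coordination rule itself.
    memory: List[Percept] = field(default_factory=list)

    def sense(self, environment: Dict, body: Dict) -> List[Percept]:
        # S: exteroceptive sensors read the environment (E),
        # proprioceptive sensors read the agent's own body (B).
        return ([Percept("exteroceptive", v) for v in environment.values()]
                + [Percept("proprioceptive", v) for v in body.values()])

    def coordinate(self, percepts: List[Percept]) -> str:
        # R: produces a concrete behavior as a function of both
        # external stimuli and internal state (memory).
        self.memory.extend(percepts)
        return "approach" if percepts else "idle"

    def act(self, action: str) -> str:
        # A: the effectors ultimately execute the selected action.
        return action
```

A perception-coordination-action cycle would then read `agent.act(agent.coordinate(agent.sense(env, body)))`, with the body/environment boundary made explicit by the two `sense` arguments.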
3. FUNCTIONS OF CONSCIOUSNESS
As mentioned above, the question of what qualia do in biological organisms is a controversial one. In this paper we propose that a naturalistic approach to the origin of consciousness can be applied to machine consciousness, and therefore we can identify the functions that can render an agent conscious (in the sense of Artificial Consciousness). In a vast ocean of information where A-Consciousness provides access to virtually any content of the agent's mind, I-Consciousness provides the mechanism for the emergence of a unique and coherent story out of the chaos. This story is the stream of consciousness, the metaphorical movie that is playing within our heads. As Dennett has pointed out, the process of making this narrative could be based on a kind of pandemonium, where different narrative versions suffer reiterative editing and review until they are presented as the official published content of the mind, i.e. they become conscious contents of the mind.

In order to determine the functionality that has to be included as part of I-Consciousness we have analyzed the very basic functions that need to be considered in the making of a story out of sensory information. Note that different functions can be considered depending on the problem domain, the agent's physical capabilities, and the richness of its internal state representation. In fact, each specific class of organism is designed to perceive different realities from the world, thus limiting what can be available to consciousness. For instance, while some animals (including humans) have the ability to perceive social relations, other animals endowed with similar senses are unable to internally represent such complex relations.

Figure 2. Machine Consciousness bi-dimensional space.

According to the Global Workspace Theory, loads of information are acquired by the senses continuously, and many interim coalitions of specialized processors run concurrently, collaborating and competing for space in the working memory, which is the arena where the serial mechanism of attention selects the contents that will be conscious at any given time. In this scenario, A-Consciousness refers to the accessibility of contents for their usage in conscious processing. In accordance with the Global Access Hypothesis, the output of unconscious processors, like for instance a face recognition module, can be accessed by other processors and finally used to form the conscious contents of the mind. Baars argues that the aggregation of processors is produced by the application of contexts.

In this work, we have adopted the assumption that single-modality percepts acquired by the agent are combined using contextualization in order to form complex multimodal percepts. Understanding how this process is performed in the brain, subsequently giving rise to a unique version (or story) of conscious perception, is known as the binding problem. From a machine consciousness perspective, the binding problem is solved functionally by applying a contextualization mechanism. This contextualization process alone can generate multiple complex percepts. However, it is the combination of A-Consciousness and I-Consciousness which permits the construction of coherent and adaptive complex percepts. The set of finally accepted percepts forms a unique and coherent stream of consciousness, which the agent exploits to develop other higher level cognitive functions.

However, access is not the only feature that is required to form a conscious experience. Coherent context criteria need to be selected and applied adaptively. We argue that I-Consciousness is the mechanism that allows the formation of coherent contents of consciousness. A coherent content of consciousness is one that provides a desired functionality which successfully adapts to the current environment situation. For example, given the access to the recognition of a face, a conscious content should be formed including a feeling of familiarity (or a familiarity flag, setting aside the phenomenal dimension) if the face belongs to a known person. This is a desired functionality for a social agent, and the access property alone cannot provide it. Basically, we argue that the I-Conscious dimension of machine consciousness represents the functionality that caused qualia to be selected by evolution in biological organisms.

Out of the set of cognitive functions that an intelligent agent could potentially exhibit, the following functions specifically characterize the behavior of a conscious agent: Theory of Mind (ToM) and Executive Function (EF). ToM is the ability to attribute mental states to oneself and others. From a human developmental standpoint, Lewis suggests four stages in the acquisition of ToM: (1) "I know", (2) "I know I know", (3) "I know you know", and finally (4) "I know you know I know". The term EF includes all the processes responsible for higher level action control, in particular those that are necessary for maintaining a mentally specified goal and for implementing that goal in the face of distracting alternatives. Attention is an essential feature of EF. It represents the ability of the agent to direct its perception and action, i.e. selecting the contents of the working memory out of the entire mind's accessible content. Planning, coordination, and set shifting (the ability to move back and forth between tasks) are also key processes included in EF.

We argue that the integration of all of these cognitive functions could build an artificial conscious mind. However, each of the mentioned functions could also be implemented independently or partly integrated with other cognitive functions, thus giving rise to different levels of implementation of artificial consciousness, as discussed in the next section.
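The contextualization idea can be illustrated with a toy sketch: concurrent unimodal percepts sharing a context are bound into one multimodal percept, and a serial attention step selects a single winner as the conscious content. The context labels and the "most modalities wins" selection rule below are our own assumptions, not part of the original proposal:

```python
from collections import defaultdict

# Toy sketch of contextualization-based binding: unimodal percepts that
# share a context label are bound into one multimodal complex percept,
# and a serial attention step selects one winner as the conscious content.

def bind_by_context(percepts):
    """Group (modality, context, content) triples by shared context."""
    complexes = defaultdict(list)
    for modality, context, content in percepts:
        complexes[context].append((modality, content))
    return dict(complexes)

def attend(complexes):
    """Serially select one coherent multimodal percept; here the winner
    is simply the context integrating the most modalities."""
    return max(complexes.items(), key=lambda kv: len(kv[1]))

percepts = [
    ("vision", "relative", "familiar face"),
    ("audition", "relative", "familiar voice"),
    ("vision", "background", "tree"),
]
winner_context, bound = attend(bind_by_context(percepts))
# winner_context is "relative": vision and audition are bound into one
# coherent content, which would feed the single stream of consciousness.
```

In this reading, `bind_by_context` stands for the access-driven contextualization that can generate multiple complex percepts, while the serial `attend` step stands for the integrative selection of one coherent content.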
4. LEVELS OF MACHINE CONSCIOUSNESS
Table 1 describes ConsScale, which is a list of potential levels of consciousness for artificial agents. This scale has been defined in terms of reference agent abstract architectures and characteristic behaviors. The characteristic behavior assigned to each level has been derived from the functionality of consciousness discussed above. As an illustrative analogy, machine consciousness levels are assigned a comparable level of consciousness in biological phylogeny and human ontogeny.

The first level in the scale, Disembodied, refers to a 'proto-agent' and serves as an initial reference that remarks the importance of a defined body as a requirement for defining a situated agent. The rest of the scale comprises a set of twelve ranks, where lower levels are subsumed by higher ones. Therefore, each stage of the incremental development of an artificial agent could be identified by a concrete level. Levels 0 and 1, Isolated and Decontrolled respectively, are also conceptual references which help characterize situatedness in terms of the relation with the environment. Both classes represent inert bodies lacking any functionality or interaction with the medium except the inevitable interaction derived from the physical properties of their inactive bodies. Therefore, these classes cannot be defined as situated agents.

Level 2, Reactive, defines a classical reactive agent which lacks any explicit memory or learning capabilities. From level 2 onwards the agents make use of the environment as the means to close the feedback loop between action and perception. Hence, all agent types above level 1 can be regarded as situated agents. Although we are explicitly focusing on individual agent evaluation, it is important to note that additional learning or adaptation processes could exist at an evolutionary plane (assuming that agents are able to replicate, mutate, and evolve). For instance, although reactive rules are fixed for a level 2 individual, adaptation of reactive responses in a population of agents could take place over the generations.

Level 3, Rational, can be identified as the simplest form of a classical deliberative agent. At this level, the agent's internal state is maintained by a memory system and sensorimotor coordination is a function of both perceived and remembered information. Proprioceptive sensing can be present at this level; however, it does not produce any self-awareness. The next level, Attentional, is characterized by an attention mechanism, which allows the agent to select specific contents both from the sensed and the stored state.
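The difference between levels 2 and 3 can be sketched as two versions of the coordination function R. The rule tables and the habituation example below are our own illustrative assumptions:

```python
# Illustrative contrast between the sensorimotor coordination function R
# at ConsScale levels 2 and 3: a level 2 agent maps sensed input directly
# to action, while a level 3 agent's action also depends on memory (M).

def reactive_R(sensed: str) -> str:
    # Level 2: A is a predetermined function of S (a fixed reflex table).
    reflexes = {"obstacle": "turn", "light": "approach"}
    return reflexes.get(sensed, "idle")

def rational_R(sensed: str, memory: list) -> str:
    # Level 3: action is a dynamic function of both current input (S)
    # and stored state (M); here, repeated stimuli change the response.
    memory.append(sensed)
    if memory.count(sensed) > 2:
        return "ignore"          # habituation requires memory
    return reactive_R(sensed)

m = []
responses = [rational_R("light", m) for _ in range(4)]
# responses: ["approach", "approach", "ignore", "ignore"]
# The fixed reactive rule alone could never produce the later responses.
```

The same stimulus thus yields different actions over time at level 3, which is exactly what the level 2 architecture, lacking explicit memory, cannot exhibit.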
Table 1. Artificial Agents Consciousness Scale (ConsScale)
[Table 1 lists, for each ConsScale level (Disembodied; 0 Isolated; 1 Decontrolled; 2 Reactive; 3 Rational; 4 Attentional; 5 Executive; 6 Emotional; 7 Self-Conscious; 8 Empathic; 9 Social; 10 Human-Like; 11 Super-Conscious), the reference abstract architecture, the characteristic behavior, and an analogous level of consciousness in biological phylogeny and human ontogeny. Recoverable examples: the first three levels describe non-situated agents (e.g. boundaries confounded with the environment, or sensors/actuators with no relation between them); at level 2, R establishes the output of A as a predetermined function of S (fixed reactive responses based on reflexes); at level 3, actions are a dynamic function of both memory and current information acquired by S; at level 4, R selects Ei contents from S and M, allowing attack and escape behaviors; level 5 adds set shifting for multiple interleaved goals; levels 6 to 9 add complex emotions and support for ToM stages 1 to 4; level 10 adds accurate verbal report, culture, and human-like consciousness; level 11 adds the ability to synchronize and coordinate several streams of consciousness in one self.]
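The ranks of the scale can be transcribed directly as an ordered enumeration; the level names and numbers follow the text, while giving the pre-scale Disembodied reference the value -1 is our own convenience choice:

```python
from enum import IntEnum

# The ConsScale ranks as named in the text. Lower levels are subsumed
# by higher ones, so ordered integer comparison is meaningful.

class ConsScale(IntEnum):
    DISEMBODIED = -1     # pre-scale reference; -1 is our own convention
    ISOLATED = 0
    DECONTROLLED = 1
    REACTIVE = 2
    RATIONAL = 3
    ATTENTIONAL = 4
    EXECUTIVE = 5
    EMOTIONAL = 6
    SELF_CONSCIOUS = 7
    EMPATHIC = 8
    SOCIAL = 9
    HUMAN_LIKE = 10
    SUPER_CONSCIOUS = 11
```

Using `IntEnum` makes the subsumption ordering explicit: `ConsScale.SOCIAL > ConsScale.EMPATHIC` holds, matching the rule that each rank subsumes all lower ones.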
A level 5 agent, Executive, includes a more complex internal state representation, which provides set shifting capabilities. The achievement of multiple goals is sought thanks to a higher coordination mechanism that shifts attention from one task to another. Level 6, Emotional, is the first level in which an agent can, to a certain extent, be regarded as conscious in the sense of self-awareness. The main characteristic of this level is the support for ToM stage 1, "I know". Complex emotions are built as a combination of basic emotions, and they are not only used to evaluate external objects but also to assess the internal agent status.

Level 7, Self-Conscious, corresponds to the emergence of self-consciousness. At this level the agent is able to develop higher order thoughts, i.e. thoughts about thoughts, and more specifically thoughts about itself. Consequently it presents support for ToM stage 2, "I know I know". Progressing to the next level, Empathic, the internal representation of the agent is enriched by inter-subjectivity. In addition to the model of the self, others are also seen as selves; hence, they are consequently assigned a model of subjectivity. This is the seed for complex social interaction. The next step is represented by level 9, Social, where ToM is fully supported. Level 10, Human-Like, represents the sort of agent that is endowed with the same level of consciousness as a healthy adult human. Therefore, the formation of a complex culture is a feature of this level. Finally, level 11, or Super-Conscious, refers to a kind of agent able to internally manage several streams of consciousness while coordinating a single body and physical attention. A mechanism for coordination between the streams and synchronized access to physical resources would be required at this level.

5. CLASSIFYING AGENTS USING CONSSCALE
The levels of artificial consciousness defined in ConsScale are characterized by abstract architectural components and agent behavior. The architecture components represent functional modules whose integration makes possible the emergence of a characteristic behavior. Therefore, at least one behavior-based test can be associated to each level in order to assess whether a particular agent fulfills the minimum required behavioral pattern for that level. In fact, an agent can only be assigned a concrete level if and only if it is able to show the behavioral pattern of that level as well as the behavioral patterns of all lower levels; e.g. even though an agent is able to pass the ConsScale level 7 behavior test, it does not necessarily follow that it can be regarded as Self-Conscious in terms of ConsScale: it would also need to comply with all lower levels.

As discussed above, the first three reference levels (Disembodied, Isolated, and Decontrolled) are a special case as they do not actually describe situated agents. Therefore, there are no behavioral tests associated to any of these first three levels. A given agent could be assigned any of these initial reference levels just by analyzing its architectural components. In contrast, from level 2 onwards a characteristic behavior pattern is defined per ConsScale level. This characteristic pattern should be taken as the base of any behavior test that can be assigned to a particular level. Reference behavior patterns for levels 2 to 11 are discussed next.

The characteristic behavior of level 2, Reactive, is the reflex; hence an agent able to autonomously react to any given environment situation is said to comply with level 2. When the response to a given environment state is not fixed, but is a function of both the information acquired by S and the agent's internal state, then the agent is said to comply with level 3, Rational (note that some proprioceptive sensing mechanism is required to make the agent's internal state available in R, so it can be an input of the sensorimotor coordination function). Most BDI-type agents could be classified as level 3 in these terms.

If the agent is able to direct attention to a selected subset of the environment state (Ei) while other environmental variables are also sensed but ignored in R, and the selected perception is evaluated in terms of the agent's goals so that subsequent responses are adapted (primitive emotions), then the agent is said to comply with level 4, Attentional. Level 4 agents are able to show specific attack or escape behaviors and trial and error learning. The ability to pay attention toward specific objects or events gives rise to the formation of directed behavior, i.e. the agent can develop […]

[…] means that the agent generalizes the learned lessons to its general behavior; furthermore, emotions are also assigned to the self, and self-status monitoring and evaluation gives rise to a sense of "I know" (support for ToM stage 1). Even though a representation of the self is considered as an input of the sensorimotor coordination function, this is an implicit symbol. However, level 7 (Self-Conscious) is characterized by an explicit symbol for the self, which enables self-recognition. The reference behavior test for this level would be the mirror test, which although originally applied to primates, has also been adapted to other mammals and even artificial agents. Takeno et al. have proposed a specific experiment design to test whether a robot is able to recognize its own image reflected in a mirror. Planning capabilities are extended as the self is integrated both in the current state representation and in future state estimation. Behavior at this level is also illustrated by the ability to use tools.

ConsScale Level 8 (Empathic) is achieved by an agent when it shows that it maintains a model of others, and therefore collaborates accordingly with other agents in the pursuit of a common goal. In fact, joint goals require this, and the need for socially aware plans in BDI agents was pointed out some time ago.

In level 9, Social, the internal model of other selves is enhanced with full support of ToM. This means that the characteristic behavior of this level is defined by sophisticated Machiavellian strategies (or social intelligence) involving social behaviors like lying, cunning, and leadership. In other words, an agent A could be aware that another agent B could be aware of A's beliefs, intentions, and desires. Advanced communication skills are also part of the characterization of this level's behavior, where, for the first time, an agent would be able to purposely tell lies. There exist mathematical models of the dynamics of Machiavellian intelligence that could be used to test these sorts of behaviors with artificial agents.

While the obvious test for level 10, Human-Like, is the Turing test, accurate communication skills (language) and the creation of a culture would also be clear features of level 10. Other key characteristics are that the agent is able to profoundly modify […]
its environment and society. The fluidity between social and
behaviors clearly related to specific targets, like following or
technical intelligence permits the extension of its own knowledge
running away. Additionally, level 4 agents can have primitive
using external media (like written communication) and
emotion mechanisms in the sense that the objects to which
technological advances are also possible.
attention is paid are elementally evaluated as positive or negative.
Finally, we cannot envisage any conclusive behavior test for level
A positive emotion triggers decrease of distance behavior or
11 due to the lack of known exemplifying references.
bonding to selected object, while negative emotion triggers
increase of distance and reinforcement of boundaries toward
selected object .
We have proposed ConsScale as a machine consciousness
If an agent that can be successfully classified as Attentional in
taxonomy for artificial agents, which can be used as a conceptual
terms of ConsScale also exhibits set shifting and basic emotional
framework for evaluating the potential level of consciousness of a
learning capabilities, then it can be regarded as Executive
given agent. Most of current implementations of artificial agents
(ConsScale level 5). In addition to advanced planning, emotional
fall between levels 2 and 4 inclusive. The classification of any
learning is another characteristic that can be observed in some
current implementation as fully belonging to level 5 could be
degree at this level, as the most emotionally rewarding tasks are
thoughtfully discussed elsewhere; nonetheless, we think these
assigned more time and effort.
kinds of agents are within current technology possibilities.
By basic emotional learning we mean that the agent is able to
Identifying consciousness by means of interpreting behavior
learn basic rules from one task and adapt its behavior
remains an open problem that is being currently addressed
consequently in the performance of that particular task. In
primarily in mammals, cephalopods, and birds [12, 29]. However,
contrast, Emotional (ConsScale level 6) agents are characterized
more effort should be put in the domain of artificial agents.
by complex emotions and complex emotional learning. This
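Because the scale is cumulative (as noted above, passing the mirror test alone does not make an agent Self-Conscious; it must also comply with all lower levels), the level-assignment rule can be sketched in a few lines. This is a minimal illustration, not part of ConsScale itself: the level names and their ordering follow the text, while the boolean capability flags and the function `consscale_level` are our own hypothetical encoding; level 11 is omitted since no conclusive behavior test exists for it.

```python
# Illustrative sketch (hypothetical encoding, not part of ConsScale):
# an agent is assigned the highest level whose criteria -- and all
# lower-level criteria -- it satisfies. Level names follow the text.
LEVELS = [
    "Disembodied", "Isolated", "Decontrolled",   # no behavioral tests
    "Reactive", "Rational", "Attentional", "Executive", "Emotional",
    "Self-Conscious", "Empathic", "Social", "Human-Like",
]

def consscale_level(meets_criteria):
    """meets_criteria maps level name -> bool (did the agent pass that
    level's architectural analysis or behavior test?)."""
    assigned = LEVELS[0]
    for name in LEVELS:
        if not meets_criteria.get(name, False):
            break                # a failed level blocks all higher ones
        assigned = name
    return assigned

# A robot that passes the mirror test but lacks, e.g., executive set
# shifting is not Self-Conscious: it is capped at the first unmet level.
robot = {name: True for name in LEVELS[:6]}   # up to Attentional
robot["Self-Conscious"] = True                # passes the mirror test
print(consscale_level(robot))                 # -> Attentional
```

The early-exit loop directly encodes the cumulative requirement: a capability demonstrated at a high level is ignored unless every lower level is also satisfied.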
Acknowledgments

This research has been supported by the Spanish Ministry of Education and Science under project TRA2007-67374-C02-02.

References

[1] Arp R. The Environments of Our Hominin Ancestors, Tool-usage, and Scenario Visualization. Biology and Philosophy, 21, 1 (2006), 95-117.
[2] Arrabales R., Ledezma A. and Sanchis A. Modeling Consciousness for Autonomous Robot Exploration. In IWINAC 2007, 2007.
[3] Baars B. J. The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Science, 6 (2002), 47-
[4] Baars B. J. A Cognitive Theory of Consciousness. Cambridge University Press, New York, 1993.
[5] Bauer R. In Search of a Neuronal Signature of Consciousness – Facts, Hypotheses and Proposals. Synthese, 141, 2 (2004).
[6] Block N. On a Confusion about a Function of Consciousness. Behav. Brain Sci., 18 (1995), 227-287.
[7] Chalmers D. Consciousness and its place in nature. In Chalmers D. ed. Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press, New York.
[8] Chalmers D. Moving forward on the problem of consciousness. Journal of Consciousness Studies, 4, 1.
[9] Ciompi L. Reflections on the role of emotions in consciousness and subjectivity, from the perspective of affect-logic. Consciousness & Emotion, 4, 2 (2003), 181-
[10] Dennett D. C. Consciousness Explained. Little, Brown and Co, Boston, 1991.
[11] Dobbyn C. and Stuart S. The Self as an Embedded Agent. Minds and Machines, 13, 2 (2003), 187-201.
[12] Edelman D. B., Baars B. J. and Seth A. K. Identifying hallmarks of consciousness in non-mammalian species. Consciousness and Cognition, 14, 1 (2005), 169-187.
[13] Gallup G. G. Self-recognition in primates: A comparative approach to the bidirectional properties of consciousness. American Psychologist, 32 (1977), 329-337.
[14] Gavrilets S. and Vose A. The dynamics of Machiavellian intelligence. PNAS, 103, 45 (2006), 16823-16828.
[15] Jack A. and Roepstorff A. Why Trust the Subject? Journal of Consciousness Studies, 10, 9-10 (2003).
[16] Kitamura T., Otsuka Y. and Nakao T. Imitation of Animal Behavior with Use of a Model of Consciousness-Behavior Relation for a Small Robot. In 4th IEEE International Workshop on Robot and Human Communication, Tokyo.
[17] Levine J. Materialism and Qualia: The Explanatory Gap. Pacific Philosophical Quarterly, 64 (1983).
[18] Lewis M. The Emergence of Consciousness and Its Role in Human Development. Ann NY Acad Sci, 1001, 1 (2003).
[19] Manson N. State consciousness and creature consciousness: a real distinction. Philosophical Psychology, 13 (2000), 405-
[20] Nagel T. What Is It Like To Be a Bat? The Philosophical Review, 83, 4 (1974), 435-450.
[21] Nichols S. and Grantham T. Adaptive Complexity and Phenomenal Consciousness. Philosophy of Science, 67, 4.
[22] Perner J. and Lang B. Development of theory of mind and executive control. Trends in Cognitive Sciences, 3, 9 (1999).
[23] Rao A. S. and Georgeff M. P. Modeling Rational Agents within a BDI Architecture. In Allen J., Fikes R. and Sandewall E. eds. Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning. Morgan Kaufmann Publishers Inc., San Mateo, CA, USA, 1991, 473-484.
[24] Rao A. S., Georgeff M. P. and Sonenberg E. A. Social Plans: A Preliminary Report. In Proceedings of the Third European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Elsevier Science B.V., 1992, 57-76.
[25] Revonsuo A. and Newman J. Binding and Consciousness. Consciousness and Cognition, 8, 2 (1999), 123-127.
[26] Rosenthal D. M. Metacognition and Higher-Order Thoughts. Consciousness and Cognition, 9, 2 (2000), 231-242.
[27] Schacter D. L. On the relation between memory and consciousness. In Varieties of memory and consciousness: Essays in honor of Endel Tulving. Erlbaum Associates, Hillsdale, NJ, 1989, 355-389.
[28] Sergent J. and Signoret J. Implicit Access to Knowledge Derived from Unrecognized Faces in Prosopagnosia. Cereb. Cortex, 2, 5 (1992), 389-400.
[29] Seth A., Baars B. and Edelman D. Criteria for consciousness in humans and other mammals. Consciousness and Cognition, 14, 1 (2005), 119-139.
[30] Takeno J., Inaba K. and Suzuki T. Experiments and examination of mirror image cognition using a small robot. CIRA 2005 (2005), 493-498.
[31] Turing A. Computing Machinery and Intelligence. Mind.
[32] Wegner D. M. and Wheatley T. Apparent mental causation: Sources of the experience of will. American Psychologist, 54, 7 (1999), 480-492.
[33] Wooldridge M. Intelligent Agents. In Weiss G. ed. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. The MIT Press, 1999, 27-78.