
Consciousness and Cognition

Volume 45, October 2016, Pages 210-225

Review article
Artificial consciousness and the consciousness-attention dissociation

https://doi.org/10.1016/j.concog.2016.08.011

Highlights

  • A dissociation between human attention and consciousness has implications for AI.

  • Attention routines can be programmed in AI, but emotional routines cannot.

  • AI can only simulate human emotion and will never be able to have empathy.

  • Human motivation and agency cannot be used to determine routine halting in AI.

  • It is unlikely that artificial consciousness will ever be realized.

Abstract

Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

Introduction

One of the most compelling topics currently debated is how it may be possible to develop consciousness in machines. While related questions have been discussed academically in the cognitive sciences for some time now, the idea of artificial consciousness has received more attention in popular culture recently. There is a growing number of articles in magazines and newspapers that discuss related advances in Artificial Intelligence (AI), from self-driving cars to the “internet of things” where common household objects can be intelligently connected through a centralized control system. Most recently, Google’s DeepMind group developed an artificial agent capable of winning the game of Go against humans, which is considered to be a huge accomplishment in AI since it goes beyond brute-force search heuristics and uses deep learning models (Silver et al., 2016). This advance promises more impressive innovations in the future.

Along with these advancements is a growing fear that we may be creating intelligent systems that will harm us (Rinesi, 2015). This topic has been addressed in many settings, from international conferences to popular books (e.g., Bostrom, 2014, Brooks et al., 2015). Films and television shows, such as Ex Machina, Her, and Battlestar Galactica, present scenarios where AI systems go rogue and threaten humans. While these fictional accounts remain unachievable with today’s technology, they are beginning to feel more and more possible given how fast computers are “evolving”.

The increase of these discussions in the mainstream media is telling, and one could say that Artificial Intelligence is truly at a turning point. Some think that the so-called ‘singularity’ (the moment at which AI surpasses human intelligence) is near. Others say there is now a Cambrian explosion in robotics (Pratt, 2015). Indeed, there is a surge in AI research across the board, looking for breakthroughs not only in modeling specific forms of intelligence through brute-force search heuristics but also in reproducing genuinely human features of intelligence, including the capacity for emotional intelligence and learning. Graziano (2015), for example, has recently claimed that artificial consciousness may simply be an engineering problem: once we overcome some technical challenges we will be able to see consciousness in AI. Even without accomplishing this lofty goal of machine sentience, it is still easy to find examples of human-like rational intelligence implemented in computers with the aim of completing tasks more efficiently.

What we will argue, however, is that phenomenal consciousness, which is associated with the first-person perspective and subjectivity, cannot be reproduced in machines, especially in relation to emotions. This presents a serious challenge to AI’s recent ambitions because of the deep relation between emotion and cognition in human intelligence, particularly social intelligence. While we may be able to program AI with aspects of human conscious cognitive abilities, such as forms of ethical reasoning (e.g., “do not cause bodily harm”, “do not steal”, “do not deceive”), we will not be able to create actual emotions by programming certain monitoring and control systems—at best, these will always be merely simulations. Since human moral reasoning is based on emotional intelligence and empathy, this is a substantial obstacle to AI that has not been discussed thoroughly.

Before proceeding, however, it is crucial to distinguish the present criticism of AI from a very influential one made by Searle (Searle, 1980, Searle, 1998). Searle famously criticized AI for its inability to account for intentionality (i.e., the feature of mental states that makes them about something, essentially relating them to semantic contents), which he takes to be exclusively a characteristic of conscious beings. He also argues that a consequence of this criticism is that phenomenal consciousness (i.e., what it is like to have an experience) is necessarily a biological phenomenon. Searle, therefore, takes the limitations of AI to be principled ones that will not change, regardless of how much scientific progress there might be.

Critics have argued that the intuition that only biological beings can have intentional minds may be defeated (e.g., see Block, 1995a) and that cyborg systems or an adequate account of how the brain computes information could refute the Chinese room thought experiment (Churchland and Churchland, 1990, Pylyshyn, 1980). These criticisms have merit, and we largely agree with them, but only with respect to the kind of consciousness that Block (1995b) calls ‘access consciousness’. Thus, we believe there is a very important ambiguity in this debate. While we agree with Searle that phenomenal consciousness is essentially a biological process and that AI is severely limited with respect to simulating it, we agree with his critics when they claim that AI may be capable of simulating, and even truly achieving, access consciousness. This is why the consciousness-attention dissociation is crucial for our purposes: it states that attention is essentially related to accessing information (see Montemayor & Haladjian, 2015).

Our criticism of AI, therefore, is more nuanced than Searle’s in three important respects. First, we limit our criticism exclusively to the type of consciousness that is characteristic of feelings and emotions, independently of how they are related to semantic contents or conceptual categories (i.e., phenomenal consciousness). Second, the limitations of AI with respect to simulating phenomenal consciousness are independent of considerations about understanding the meaning of sentences; the limitations we outline extend to other species, which do not manifest the capacity for language but which very likely have phenomenal consciousness. Thus, our criticism of AI is more firmly grounded in biological considerations than Searle’s. Third, and quite importantly, we grant that AI may successfully simulate intelligence, rationality, and linguistic behavior, and that AI agents may become intelligent and competent speakers just like us; what we challenge is the idea that they will experience feelings or emotions in the way humans do. This has the interesting implication that AI agents lack moral standing, assuming that experiencing emotions and feelings is a necessary condition for moral standing.

Our criticism of AI rests on views that are not uncontroversial. For example, some would object to the distinction between access and phenomenal consciousness or, like Searle, to separating intentionality from phenomenality. We hope to show, however, that our assumptions and criticism have several advantages over other views, including Searle’s. One advantage is that they avoid the ambiguity mentioned above. Another is the emphasis on empathy: empathy and the intensity of emotions have not been treated as central considerations in challenges to AI. This is a puzzling situation, given the importance of phenomenal consciousness for empathy, moral standing, and moral behavior. A contribution of this paper is to improve this situation by taking the intrinsic moral value of consciousness as fundamental.

To fully appreciate how the intrinsic moral value of phenomenal consciousness matters to AI’s limitations, consider that although semantic information can easily be copied, and programs with syntactic features can be reproduced many times over, the way a subject experiences the intensity of an emotion cannot be replicated. This non-semantic uniqueness may be the most important aspect of phenomenal consciousness. It certainly seems more important than the fact that the mind relates to semantic contents, however those contents are defined (e.g., see Aaronson, 2016). Having made these clarifications, we now turn to our criticism of AI, based not on semantics but on the importance of emotions and their intrinsic normative value.

While the idea that phenomenal consciousness cannot be realized in machines may seem like an obvious conclusion, there are reasons to explore the issue further and more carefully. Advances in AI are quickening in pace, and as software and hardware technologies continue to progress, more powerful machines capable of more sophisticated computing will become increasingly accessible. In the field of biocomputing, there are even efforts to use enzymes to create “genetic logic gates” (Bonnet, Yin, Ortiz, Subsoontorn, & Endy, 2013) that could be used to build biological microprocessors for potentially controlling biological systems (Moe-Behrens, 2013). A related fear is that, if we use living materials to build and run software, we cannot be certain that such organic-based technologies will not eventually become conscious. Of course, this is a compelling topic in science fiction that is not likely to be realized any time soon, but a proper discussion of this potential situation is important at this stage.

A key issue is that AI has expanded its goals beyond the original Turing test for intelligence and now tries to include more complex functions such as perception and emotion. This is a natural progression, given that perception and emotion modulate and give rise to many forms of cognitive activity associated with human intelligence (Pessoa, 2013). One may think that if this project succeeds and artificial agents pass not only Turing intelligence tests but also emotional Turing tests (Picard, 1997, Reichardt, 2007), they may achieve a level of conscious awareness similar to that of human beings. In fact, according to the most optimistic interpretation of AI research (e.g., Kurzweil, 1999), artificial agents may become sources of ethical and rational flourishing because they would not be subject to the biological constraints that humans inherit from their genetic lineage, thereby enhancing the possibilities for improvement in ways that are impossible for us mortals.

As mentioned, to clarify the potential of AI systems, it is helpful to frame this issue in terms of how human consciousness and attention are dissociated, or what we call CAD, for the “consciousness-attention dissociation” (Montemayor & Haladjian, 2015). Since human visual attention is increasingly used to examine the nature of conscious experience, it is critical to understand how attention and consciousness are related. Examining this relationship reveals a strong case for a dissociation between attention and consciousness in humans; that is, the basic forms of attention do not require consciousness to operate successfully. Perception is supported by many mechanisms that operate outside of phenomenal consciousness, such as attention routines (Cavanagh, 2004). AI systems, like those used in computer vision, can be said to possess forms of intelligence related to attention-based selective information processing for monitoring or search routines. According to CAD, such attention routines would not entail conscious awareness in humans. This means that even if AI reached similar or superior levels of intelligence based on attention routines, machines would still lack consciousness, since consciousness is unnecessary for these functions. Moreover, even if consciousness could be identified unambiguously in machines (itself no easy task), there may be different types of phenomenal consciousness (Kriegel, 2015), perhaps only some of which are susceptible to reductive computational implementation in AI. These would be the types related to how attention occurs without phenomenal consciousness.
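
To make the notion of an attention routine concrete, the following is a minimal sketch in Python (using NumPy) of the kind of attention-based selective information processing just described. The saliency measure and function names are our own illustrative assumptions, not taken from any cited system; the point is only that such selection can be specified as pure computation, with nothing in it that requires conscious awareness.

```python
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Crude saliency proxy: local contrast against the global mean intensity.
    (Illustrative assumption; real systems use far richer feature maps.)"""
    return np.abs(image - image.mean())

def attend(image: np.ndarray, patch: int = 3):
    """Winner-take-all selection: return the location and contents of the
    most salient patch. Purely selective information processing; nothing
    here requires (or produces) conscious awareness."""
    sal = saliency_map(image)
    y, x = map(int, np.unravel_index(np.argmax(sal), sal.shape))
    half = patch // 2
    region = image[max(0, y - half): y + half + 1,
                   max(0, x - half): x + half + 1]
    return (y, x), region

# Usage: a dim random "scene" with one bright, salient item.
scene = np.random.rand(32, 32) * 0.2
scene[20, 11] = 1.0                      # the salient item
loc, region = attend(scene)
print("attended location:", loc)         # -> (20, 11)
```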

In support of a dissociation between consciousness and attention, consider that the sort of phenomenal consciousness experienced by humans must be a relatively recent development in evolutionary terms: although it is related to visual attention, the two are largely separate processes, since the more basic forms of attention developed prior to those associated with conscious attention (Haladjian & Montemayor, 2015). Abilities related to the selective processing of visual information, such as color, shape, and motion, are basic abilities found in animals and humans. They can be thought of as modules of perception that are activated according to environmental and task demands (Pylyshyn, 1999), and can be described as attention routines (Cavanagh, 2004). From a computer science perspective, the halting problem (understood here as the question of when a routine should terminate once its task is complete) is not an issue for such basic forms of attention: there are computer programs that perform shape detection, object tracking, and face recognition, as sketched below. In contrast to these attention routines, phenomenally conscious experience does face this halting problem, since consciousness is not something that clearly ends once a task is executed; it is an integrated, unified experience that runs at varying degrees of activation (though its activation can be reduced during sleep or under anesthesia).
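
To illustrate why halting is unproblematic for such routines, here is a hypothetical Python sketch of a serial visual search: the routine has an explicit termination condition and stops as soon as its task is complete (target found or display exhausted). The display encoding and threshold are illustrative assumptions, not a model drawn from the cited literature.

```python
import numpy as np

def visual_search(display: np.ndarray, target: float, threshold: float = 1e-6):
    """Serial search with an explicit termination condition: the routine
    halts as soon as a target-matching item is found, or once every item
    has been inspected. (Hypothetical sketch for illustration only.)"""
    for index, item in np.ndenumerate(display):
        if abs(item - target) < threshold:
            return index             # task complete -> routine halts
    return None                      # target absent -> routine still halts

# Usage: a uniform display with a single target item.
display = np.full((8, 8), 0.3)
display[5, 2] = 0.9                  # the target
print(visual_search(display, target=0.9))   # -> (5, 2)
```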

Another point related to evolution is that dexterous, complex actions, shaped by millions of years of evolution, are notoriously difficult for AI and machines to simulate. To use a familiar example, one can program a computer to beat any human at chess, but it is very difficult to program a robot that can dexterously move the pieces on the chessboard the way a human does. Thus, even the attention routines associated with unconscious or implicit motor control will not be easy to reproduce, let alone their integration with conceptual knowledge and the kind of conscious contents that humans manifest in language.

This idea is related to Moravec’s paradox: while abstract, complex thought is relatively easy to compute, basic perceptual and motor skills are very hard to model computationally. Hans Moravec (1988) explained this puzzling asymmetry precisely by appealing to evolution: our species had millions of years to develop finely tuned sensorimotor skills, which operate unconsciously or automatically, whereas complex rational thought is a recent addition to our cognitive abilities. This line of reasoning must be considered carefully. One critical consequence of developing this point is that conceptual conscious attention must have evolved later than basic perceptual attention (Haladjian & Montemayor, 2015).

Yet one does not need to accept such evolutionary arguments to appreciate how CAD makes the unqualified AI proposal problematic. The main issue is that while simulated intelligence may be intelligence, simulated emotion cannot be emotion (Turkle, 2005/1984). This is because intelligence is basically computation and is not necessarily dependent on phenomenal consciousness, whereas human feelings do depend on it, a distinction that has not been properly appreciated in the AI literature. This issue, as mentioned, is likely related to the fact that while programs can easily be copied, the phenomenal experience of a subject cannot. There is a divide between those who say that simulated thinking will be enough to produce a human mind and those who argue that, although simulated thinking can be considered thinking, simulated emotion can never truly be emotion (e.g., these ideas are presented in an episode of the Radiolab podcast, WNYC Studios, 2012). In essence, this is the distinction between access and phenomenal consciousness discussed in the philosophical literature. In addition, human intelligence has an evolving purpose with ever-changing goals, something that AI has not yet achieved on its own, since it always relies on its programmer. These motivation and goal-setting abilities also face the problem of when to stop operating (i.e., a halting function), which is certainly not well-defined in human consciousness. Regardless of what one thinks about evolution and Moravec’s paradox, these problems confront future research. Ultimately, we believe that there is no possibility of creating an artificial phenomenal consciousness, because of the empirically grounded implications of CAD for AI.

In the next sections, we will outline the challenges for creating an artificial intelligence system that has any sort of conscious experience. Henceforth, we simply refer to phenomenal consciousness as ‘consciousness’. This discussion will require an examination of what AI systems are capable of now, how they are related to human abilities (particularly attention, emotions, and motivation), and the implications of the human consciousness-attention dissociation for AI.

Section snippets

The art of human intelligence

Since the age of digital computing, the human brain has been compared to the computer in attempts to better understand how it works. The field of cognitive science grew out of this tradition (e.g., see Pylyshyn, 1984). Scientists continue to explore the relationship between the brain and computers, which has generally taken the form of computational models of brain processes and has led to a better understanding of perception and cognition (e.g., see Reggia, 2013). On the flip side, this

The art of artificial intelligence

The challenge of modeling emotions in AI and the dissociation between consciousness and attention do not mean that AI systems will not be incredibly transformative and useful. On the contrary, artificial intelligence, in terms of storing facts and performing rule-based reasoning, has already changed our world through sophisticated computing systems. Following advancements in the mid-twentieth century, which include Alan Turing’s proposal for a general computing machine (Turing, 1950),

Conclusion

As Sherry Turkle argues, simulated thinking is (may be) thinking, but simulated feeling is not (can never be) feeling (Turkle, 2005/1984). Intelligence is essentially computation—something that machines were designed to do. Emotions that generate feelings, on the other hand, must involve phenomenal subjective experience. Machines may be able to self-reference and reason based on computational procedures, thus performing intelligently without conscious awareness. But to have empathy and social

Acknowledgements

We would like to thank an anonymous reviewer for suggestions that significantly improved this paper. HHH received postdoctoral research support from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement No. AG324070 awarded to Patrick Cavanagh.

References (132)

  • D. Kahneman et al. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology.
  • R.W. Kentridge et al. (2008). Attended but unseen: Visual attention is not sufficient for visual awareness. Neuropsychologia.
  • C. Koch et al. (2007). Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences.
  • C. Koch et al. (2012). Attention and consciousness: Related yet different. Trends in Cognitive Sciences.
  • V.A.F. Lamme (2004). Separate neural definitions of visual consciousness and visual attention; a case for phenomenal awareness. Neural Networks.
  • J.E. LeDoux (2012). Rethinking the emotional brain. Neuron.
  • M. Lisi et al. (2015). Dissociation between the perceptual and saccadic localization of moving objects. Current Biology.
  • J.H. Maunsell et al. (2006). Feature-based attention in visual cortex. Trends in Neurosciences.
  • L. Mudrik et al. (2014). Information integration without awareness. Trends in Cognitive Sciences.
  • M. Mulckhuyse et al. (2010). Unconscious attentional orienting to exogenous cues: A review of the literature. Acta Psychologica.
  • M.J. Pauers et al. (2012). Changes in the colour of light cue circadian activity. Animal Behaviour.
  • L. Pessoa (2005). To what extent are emotional visual stimuli processed without attention and awareness? Current Opinion in Neurobiology.
  • L. Pessoa et al. (2002). Attentional control of the processing of neutral and emotional stimuli. Cognitive Brain Research.
  • Z.W. Pylyshyn (2000). Situating vision in the world. Trends in Cognitive Sciences.
  • J.A. Reggia (2013). The rise of machine consciousness: Studying consciousness with computational models. Neural Networks.
  • P. Rochat (2003). Five levels of self-awareness as they unfold early in life. Consciousness and Cognition.
  • P. Rochat et al. (2000). Perceived self in infancy. Infant Behavior and Development.
  • S. Aaronson. Can computers become conscious?
  • H.-I. Ahn & R.W. Picard (2014b). Modeling subjective experience-based learning under uncertainty and frames. Paper...
  • H.-I. Ahn et al. (2014). Measuring affective-cognitive experience and predicting market success. IEEE Transactions on Affective Computing.
  • G.A. Alvarez et al. (2008). The representation of simple ensemble visual features outside the focus of attention. Psychological Science.
  • R.M. Axelrod (1984). The evolution of cooperation.
  • H. Bauer et al. Moore’s law: Repeal or renewal?
  • T. Bayne (2007). Conscious states and conscious creatures: Explanation in the scientific study of consciousness. Philosophical Perspectives.
  • N. Block (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences.
  • N. Block. The mind as the software of the brain.
  • J. Bonnet et al. (2013). Amplifying genetic logic gates. Science.
  • N. Bostrom (2014). Superintelligence: Paths, dangers, strategies.
  • T. Bradshaw (2016). Apple buys emotion-detecting AI start-up. The Financial Times....
  • C. Breazeal & B. Scassellati (1999). A context-dependent attention system for a social robot. Paper presented at the...
  • R. Brooks, A. Gupta, A. McAfee, & N. Thompson (2015). Artificial intelligence and the future of humans and robots in...
  • B. Bruya (2010). Effortless attention: A new perspective in the cognitive science of attention and action.
  • E. Burke (1757). A philosophical enquiry into the origin of our ideas of the sublime and beautiful.
  • P. Cavanagh. Attention routines and the architecture of selection.
  • Z. Chen (2012). Object-based attention: A tutorial review. Attention, Perception, & Psychophysics.
  • P.S. Churchland et al. (1990). Could a machine think? Scientific American.
  • Cisco Systems Inc. The Zettabyte Era—Trends and analysis.
  • L. Cosmides et al. (2013). Evolutionary psychology: New perspectives on cognition and motivation. Annual Review of Psychology.
  • M. Csikszentmihalyi (1997). Finding flow: The psychology of engagement with everyday life.
  • A.R. Damasio (1994). Descartes’ error: Emotion, reason, and the human brain.