AI Assistants and the Paradox of Internal Automaticity

  • Original Paper
  • Published in Neuroethics

Abstract

What is the ethical impact of artificial intelligence (AI) assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes advantage of this paradox of internal automaticity to downplay the threats of external automaticity to our autonomy. We respond in this paper by challenging the legitimacy of the paradox. While Danaher assumes that internal and external automaticity are roughly equivalent, we argue that there are reasons why we should accept a large degree of internal automaticity, that it is actually essential to our sense of autonomy, and as such it is ethically good; however, the same does not go for external automaticity. Therefore, the similarity between the two is not as powerful as the paradox presumes. In conclusion, we make practical recommendations for how to better manage the integration of AI assistants into society.

Notes

  1. Because Robin trusts the AI assistant in the moment, we deny that it helps to object that the AI assistant is clearly doing something wrong from an objective point of view and therefore should not be trusted. Objective errors have been made because the goal is not always to give the most geographically efficient directions but, e.g., to fulfill a contract to direct the driver to a certain business, as one of the authors can personally attest. Reiner and Nagel [1, p. 115] also worry about examples of this kind where there are conflicts of interest in which a corporation is being paid by some business to direct a person to that business. They go on to argue that “it becomes harder to accept the premise that as our devices become TEMs [technologies of extended minds], autonomy violations become less likely; if anything, they become more insidious” [1, p. 115]. We agree: the more that AI assistants are integrated into our cognition, the greater their threat to our autonomy. (Thanks to an anonymous reviewer for prompting this footnote.)

  2. There are cases where the human driver overtrusts an AI assistant (trusts despite contravening reason)—e.g., drivers in Australia followed their GPS navigator into the ocean! [2]. However, in the case of Robin, they were at least initially warranted in trusting the AI (and they didn’t have disconfirming evidence like an ocean in front of them). (Thanks to an anonymous reviewer for prompting this footnote.)

  3. Clark and Chalmers [4] give the seminal argument in favor of the extended mind thesis. For an illuminating discussion concerning the ramifications of the extended mind thesis on our conception of the mind and persons—and an argument that one’s mind is not identical to the person, which we will discuss in the “Two Objections” section—see Buller [5]. For helpful discussions of the implications of the extended mind thesis based on AI technologies, see Reiner and Nagel [1] and Hernández-Orallo and Vold [6].

  4. If an AI device is implanted in the brain, this obviously raises a challenge to the internal/external distinction, but that’s an outstanding problem we will not address here.

  5. This doesn’t mean that we shouldn’t carefully weigh the risks and benefits of AI assistants and sometimes favor their use. We agree with Danaher [7, pp. 650–651] that we should employ principles of risk/reward in our reasoning about the acceptability of AI assistants, vis-à-vis their impact on three key areas of our lives: our cognitive capacities, autonomy, and interpersonal interactions. However, we are inclined to see more risk than reward. We will explore this theme more in the “Two Objections” section.

  6. It is not clear that those inclined to see significant moral trouble with E-automaticity would necessarily maintain that I-automaticity is problematic too. However, Carr [11], for instance, does argue at length against the over-automatization of our natural cognition—he thinks that completing tasks with intentional, non-automatic cognitive power is more acceptable.

  7. There could be other responses, such as that neither I- nor E-automaticity poses significant threats, or that E-automaticity does not genuinely represent a cognitive process.

  8. Control over decision-making and action is at least a necessary condition for possessing autonomy and is inherently connected to automaticity. (Thanks to an anonymous reviewer for prompting us to clarify the relationship between control and autonomy.)

  9. These points reflect the idea that different features or facets of I-automaticity are justified in different ways: some through metacognitive approval, some through socialization, and some through evolutionary heritage.

  10. Reiner and Nagel [1, pp. 110, 114] suggest that perception is essential to whether some technological device counts as part of the extended mind. For them, the brain is a thing whereas the mind is a concept [1, p. 109], and the concept has some fluidity: so it seems, then, that as one’s concept changes, what one perceives as technologies of the extended mind (or TEMs, as they call them) will change, and vice versa. They implicitly give some support for Buller’s argument that the mind is not identical to the person. Not every algorithmic function, operating in some external device, counts as a TEM. What matters is “that there is a relatively seamless interaction between brain and algorithm such that a person perceives of the algorithm as being a bona fide extension of a person’s mind” [1, p. 110]. Notice that on their implied view, a person has a mind the parameters of which can change based on the person’s perception: the person has to perceive the TEM as something extending their mind. In a sense, we suggest, the person (and the internal mind, the mind within the skull) is ontologically primary, whereas the extended mind has a secondary existence, so they cannot be identical.

  11. See Hernández-Orallo and Vold [6] for an illuminating, broad discussion about the many types and implications of AI technologies that extend our cognition.

  12. While ethics bots don’t necessarily prevent us from using our brains for decision-making and self-discovery, as an anonymous reviewer pointed out, we contend that there remains a significant worry that in using ethics bots, we could become over-reliant on them as we’ve seen with other technology. For example, when people rely on global positioning systems for navigation, there is a worry that their basic navigation skills and sense of direction will diminish (see Carr [11, pp. 126–137] for discussion), although the extent of this may depend on the individual’s subjective evaluation of their navigation skills [19, pp. 21, 23]. So, overuse of ethics bots could mask our cognitive abilities and therefore weaken them over time. The same reviewer usefully points out that ethics bots could prompt us to metacognize, thus actually enhancing our autonomy. This is possible but has its downsides too, as it could diminish our spontaneity, moments when our core desires manifest in a process of surprising self-discovery. Overall, we recognize that ethics bots could be helpful but they have their own set of risks and rewards, as with all AI assistants, requiring further evaluation.

  13. We recommend taking seriously the proposal to implement Rahwan’s “society-in-the-loop” model [20]. This model is grounded simultaneously in the idea of a human-in-the-loop (from modeling and simulation) and in social contract theory; it is geared towards embedding “the general will into an algorithmic social contract” [20, p. 8] in order for society—groups of humans with shared interests—to better exert their collective will over AI technology. This or some similar framework is necessary, we contend, to ensure society’s normative expectations are implemented in our governance structures. Of course, we actually have “to build institutions and tools” [20, p. 13] that implement and promote the society-in-the-loop model. The time for such action is now.

  14. See, for example, Hernández-Orallo and Vold [6], who warn us to “be careful about only regulating autonomous systems (e.g., a ban on autonomous weapons)” when other forms of AI (particularly of the kind that can extend our cognition) can be just as (potentially negatively) impactful on human lives.

  15. In the USA, for instance, private businesses are more or less cognizant of the need for greater transparency and ethical reflection on the impact of AI—e.g., entities like the Partnership on AI are positive—though likely insufficient [22]—steps in the right direction.

References

  1. Reiner, P.B., and S.K. Nagel. 2017. Technologies of the extended mind: defining the issues. In Neuroethics: Anticipating the Future, ed. J. Illes, 108–122. New York: Oxford University Press.

  2. Fujita, A. 2012. GPS tracking disaster: Japanese tourists drive straight into the Pacific. ABC News. https://abcnews.go.com/blogs/headlines/2012/03/gps-tracking-disaster-japanese-tourists-drive-straight-into-the-pacific/ (Accessed 24 May 2019.)

  3. Etzioni, A., and O. Etzioni. 2016. AI assisted ethics. Ethics and Information Technology 18: 149–156. https://doi.org/10.1007/s10676-016-9400-6.

  4. Clark, A., and D. Chalmers. 1998. The extended mind. Analysis 58 (1): 7–19.

  5. Buller, T. 2013. Neurotechnology, invasiveness and the extended mind. Neuroethics 6 (3): 593–605.

  6. Hernández-Orallo, J., and K. Vold. 2019. AI extenders: the ethical and societal implications of humans cognitively extended by AI. Association for the Advancement of Artificial Intelligence.

  7. Danaher, J. 2018. Toward an ethics of AI assistants: an initial framework. Philosophy & Technology 31: 629–653. https://doi.org/10.1007/s13347-018-0317-3.

  8. Dubljević, V. 2013. Autonomy in neuroethics: Political and not metaphysical. AJOB Neuroscience 4 (4): 44–51.

  9. Bell, E., V. Dubljević, and E. Racine. 2013. Nudging without ethical fudging: clarifying physician obligations to avoid ethical compromise. American Journal of Bioethics 13 (6): 18–19.

  10. Dubljević, V. 2016. Autonomy is political, pragmatic and post-metaphysical: a reply to open peer commentaries on ‘Autonomy in Neuroethics’. AJOB Neuroscience 7 (4): W1–W3.

  11. Carr, N.G. 2014. The glass cage: Automation and us. New York: W.W. Norton.

  12. Krakauer, D. 2016. Will A.I. harm us? Better to ask how we’ll reckon with our hybrid nature. Nautilus. http://nautil.us/blog/will-ai-harm-us-better-to-ask-how-well-reckon-with-our-hybrid-nature. (Accessed 31 July 2018.)

  13. Raz, J. 1986. The morality of freedom. New York: Oxford University Press.

  14. Ellis, B. 2013. The power of agency. In Powers and capacities in philosophy, ed. R. Groff and J. Greco, 186–206. New York: Routledge.

  15. Rawls, J. 1985. Justice as fairness: political not metaphysical. Philosophy & Public Affairs 14 (3): 223–251.

  16. Nagel, S.K. 2013. Autonomy—a genuinely gradual phenomenon. AJOB Neuroscience 4 (4): 60–61.

  17. Dubljević, V., S. Sattler, and E. Racine. 2018. Deciphering moral intuition: how agents, deeds and consequences influence moral judgment. PLoS One. https://doi.org/10.1371/journal.pone.0204631.

  18. Vohs, K.D., R.F. Baumeister, B.J. Schmeichel, J.M. Twenge, N.M. Nelson, and D.M. Tice. 2008. Making choices impairs subsequent self-control: a limited-resource account of decision making, self-regulation, and active initiative. Journal of Personality and Social Psychology 94 (5): 883–898.

  19. Hejtmánek, L., I. Oravcová, J. Motýl, J. Horáček, and I. Fajnerová. 2018. Spatial knowledge impairment after GPS guided navigation: eye-tracking study in a virtual town. International Journal of Human-Computer Studies 116: 15–24. https://doi.org/10.1016/j.ijhcs.2018.04.006.

  20. Rahwan, I. 2017. Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology 20: 5–14.

  21. European Union [EU]. 2016. Regulation 2016/679 of the European Parliament and of the Council. Official Journal of the European Union.

  22. Metz, C. 2019. Is ethical A.I. even possible? The New York Times. https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html. (Accessed 29 March 2019.)

Acknowledgments

The authors would like to thank two anonymous reviewers for detailed, helpful comments, and audience members at a presentation at the North Carolina Philosophical Society meeting in Greensboro, NC (March 8, 2019), for helpful questions and comments. Special thanks to Sean Douglas and Matthew Ferguson for valuable research assistance.

Author information

Corresponding author

Correspondence to William A. Bauer.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Bauer, W.A., Dubljević, V. AI Assistants and the Paradox of Internal Automaticity. Neuroethics 13, 303–310 (2020). https://doi.org/10.1007/s12152-019-09423-6
