Ethics for things

Ethics and Information Technology

Abstract

This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE’s move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett’s views on ‘as if’ intentionality and Magnani’s characterization of ‘moral mediators’. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE’s treatment of things. Indeed, the support they lend to the extension of moral status to non-human objects can be seen as part of a trend towards the accommodation of non-humans into our moral and social networks. A number of parallels are drawn between philosophical arguments over artificial intelligence and information ethics.

References

  • A. Adam. Artificial Knowing: Gender and the Thinking Machine. Routledge, London and New York, 1998.

  • A. Adam. Cyborgs in the Chinese Room: Boundaries Transgressed and Boundaries Blurred. In J. Preston and M. Bishop, editors, Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, pages 319–337. Oxford University Press, Oxford, 2002.

  • A. Adam. Delegating and Distributing Morality: Can We Inscribe Privacy Protection in a Machine? Ethics and Information Technology, 7(4): 233–242, 2005.

  • M. Akrich. The De-Scription of Technical Objects. In W. E. Bijker and J. Law, editors, Shaping Technology/Building Society: Studies in Sociotechnical Change, pages 205–224. MIT Press, Cambridge, MA and London, 1997.

  • R. A. Brooks. Intelligence Without Representation. Artificial Intelligence, 47: 139–160, 1991.

  • H. M. Collins. Artificial Experts: Social Knowledge and Intelligent Machines. MIT Press, Cambridge, MA, 1990.

  • H. M. Collins and M. Kusch. The Shape of Actions: What Machines and Humans Can Do. MIT Press, Cambridge, MA and London, 1998.

  • D. C. Dennett. Is there an Autonomous ‘Knowledge Level’? In Z. Pylyshyn and W. Demopoulos, editors, Meaning and Cognitive Structure, pages 51–54. Ablex, Norwood, NJ, 1986.

  • D. C. Dennett. Review of Newell, Unified Theories of Cognition. Artificial Intelligence, 59(1–2): 285–294, 1993.

  • D. C. Dennett. The Myth of Original Intentionality. In E. Dietrich, editor, Thinking Computers and Virtual Persons: Essays on the Intentionality of Machines, pages 91–107. Academic Press, San Diego, CA and London, 1994.

  • B. Easlea. Science and Sexual Oppression: Patriarchy’s Confrontation with Woman and Nature. Weidenfeld and Nicolson, London, 1981.

  • L. Floridi. Information Ethics, its Nature and Scope. In J. van den Hoven and J. Weckert, editors, Moral Philosophy and Information Technology, pages 40–65. Cambridge University Press, Cambridge, UK, 2007a.

  • L. Floridi. Distributed Morality in Multiagent Systems. Paper presented at CEPE 2007, San Diego, 2007b. Available at http://cepe2007.sandiego.edu/abstractDetail.asp?ID=40. Accessed August 30, 2007.

  • L. Floridi. The Philosophy of Presence: From Epistemic Failure to Successful Observability. Presence: Teleoperators and Virtual Environments, 14(6):656–667, 2005.

  • K. E. Himma. There’s Something About Mary: The Moral Value of Things Qua Information Objects. Ethics and Information Technology, 6(3): 145–159, 2004.

  • K. E. Himma. Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent? Paper presented at CEPE 2007, San Diego, 2007. Available at http://cepe2007.sandiego.edu/abstractDetail.asp?ID=2. Accessed August 30, 2007.

  • B. Latour. Where are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker and J. Law, editors, Shaping Technology/Building Society: Studies in Sociotechnical Change, pages 225–258. MIT Press, Cambridge, MA and London, 1997.

  • L. Magnani. Distributed Morality and Technological Artifacts. Paper presented at the 4th International Conference on Human Being in Contemporary Philosophy, Volgograd, 2007. Available at http://volgograd2007.goldenideashome.com/2%20Papers/Magnani%20Lorenzo%20p.pdf. Accessed August 30, 2007.

  • G. Ryle. The Concept of Mind. Hutchinson, London, 1963.

  • J. R. Searle. Minds, Brains and Programs. In R. Born, editor, Artificial Intelligence: The Case Against, pages 18–40. Croom Helm, London and Sydney, 1987 (first published 1980).

  • M. Siponen. A Pragmatic Evaluation of the Theory of Information Ethics. Ethics and Information Technology, 6(4): 279–290, 2004.

  • T. Williamson. Knowledge and its Limits. Oxford University Press, Oxford, 2002.

Author information

Correspondence to Alison Adam.

Cite this article

Adam, A. Ethics for things. Ethics Inf Technol 10, 149–154 (2008). https://doi.org/10.1007/s10676-008-9169-3
