
Killer Robot Arms: A Case-Study in Brain–Computer Interfaces and Intentional Acts

Published in Minds and Machines.

Abstract

I use a hypothetical case study of a woman who replaces her biological arms with prostheses controlled through a brain–computer interface to explore how a BCI might interpret and misinterpret intentions. I define pre-veto intentions and post-veto intentions and argue that a failure of a BCI to differentiate between the two could lead to troubling legal and ethical problems.

Fig. 1 Adapted from Mason and Birch (2003)

Notes

  1. Many bioethicists have addressed the issue of whether one can truly separate therapeutic uses and enhancement uses into distinct categories. I will not retread that ground here. Rather, I will take the common-sense approach that certain uses of BCI fall clearly into the category of therapies, while other uses fall clearly into the category of enhancements. For example, a therapeutic use of BCI would be using the technology to enable someone who lacks the ability to speak to communicate through a computerized speech-generating device. On the other hand, using BCI to enable someone with fully functional limbs to operate a robotic arm capable of lifting two tons would qualify as enhancement. Whether or not the distinction between therapy and enhancement can be made in all cases is not germane to my argument in this paper. See Kass et al. (2003, p. 15) for a concise overview of the therapy versus enhancement debate.

  2. Also, it does not seem that all intentions must be conscious. Mele (2008, p. 3) writes: “The last time you signaled for a turn in your car, were you conscious of an intention to do that?” The only circumstance in which we would think that signaling was not intentional is if we genuinely felt surprise at the signal, either because we hit the signal by accident or because we felt that something outside of our control had forced us to signal.

  3. Note that I say that these intentions arise spontaneously but that we are consciously aware of them. This is distinct from something like unconsciously signaling for a turn, as in the note above. In that case, the intention to signal for a turn is neither conscious nor spontaneous; rather, it is part of a larger plan to get somewhere. In the case of pre-veto intentions, it can be said that the intentions are formed unconsciously, i.e., spontaneously, yet we are still consciously aware that they have formed.

  4. There is some research indicating that pre-veto intentions may result in action occurring before we are even aware that the pre-veto intention has been formed. See Mele (2008, pp. 1–12) for an overview. For my purposes, I will be assuming that there are at least some cases where we form pre-veto intentions and have the opportunity to veto them.

  5. This distinction in responsibility is tracked by law, which provides greater punishment for murder committed with deliberation (first-degree murder) than for murder committed without deliberation, in the heat of the moment (manslaughter or second-degree murder, depending on the jurisdiction).

  6. I.e., the scenarios I will present are plausible given current trends in BCI technology.

  7. See Glannon (2007, p. 142): “Those with chips implanted in their brains can think about executing a bodily movement, and that thought alone can cause the movement…. But forming an intention or plan and executing it are two separate mental acts…. One can form an intention to act but not execute that intention… by changing one’s mind at the last moment”.

  8. Ariz. Rev. Stat. Ann. § 13-1103(a)(2).

  9. Ariz. Rev. Stat. Ann. § 13-1104(a)(1).

  10. There are other questions that might be raised, such as whether the manufacturer of the BCI device bears any responsibility for the outcome here. For example, in Human Values, Ethics, and Design, Friedman and Kahn (2003) argue that human–computer interface designers have an obligation to implement these technologies in an ethical manner. As such, designers of human–computer interfaces must take human values into account when designing systems. While this topic could form its own paper using the same case study presented above, it is outside the scope of the issue presented here.

  11. Ariz. Rev. Stat. Ann. § 13-201.

  12. Circumstances where a defendant can be said to have had the mens rea but not the actus reus are usually those where the defendant has attempted to commit some crime but been unsuccessful in doing so, e.g., attempted murder.

  13. Ariz. Rev. Stat. Ann. § 13-105(2).

  14. Id. (10)(a).

  15. The Oxford English Dictionary (2nd ed.). (1989). Oxford, England: Oxford University Press.

References

  • Bratman, M. (1984). Two faces of intention. Philosophical Review, 93(3), 375–405.

  • Friedman, B., & Kahn, P., Jr. (2003). Human values, ethics, and design. In J. Jacko & A. Sears (Eds.), The human–computer interaction handbook (pp. 1177–1201). Mahwah, NJ: Lawrence Erlbaum Associates Inc.

  • Gardener, J. (2012, December 18). Paralyzed mom controls robotic arm using her thoughts. ABC News. Retrieved from http://new.yahoo.com.

  • Glannon, W. (2007). Bioethics and the brain. New York, NY: Oxford University Press.

  • Kass, L. et al. (2003). Beyond therapy: Biotechnology and the pursuit of happiness. Washington, DC: President’s Council on Bioethics.

  • Klose, C. (2007). Connections that count: Brain–computer interface enables the profoundly paralyzed to communicate. NIH Medline Plus, 2(3), 20–21.

  • Long, J., Li, Y., Yu, T., et al. (2012). Hybrid brain–computer interface to control the direction and speed of a simulated or real wheelchair. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(5), 720–729.

  • Mason, S., & Birch, G. (2003). A general framework for brain–computer interface design. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(1), 70–85.

  • Mele, A. (2008). Proximal intentions, intention-reports, and vetoing. Philosophical Psychology, 21(1), 1–14.

  • Sunny, T. D., Aparna, T., Neethu, P., et al. (2016). Robotic arm with brain–computer interface. Procedia Technology, 24, 1089–1096.

  • Vallabhaneni, A., Wong, T., & He, B. (2005). Brain–computer interface. In B. He (Ed.), Neuronal engineering (pp. 85–121). New York: Springer.

  • van de Laar, B., Gurkok, H., & Plass-Oude Bos, D. (2013). Experiencing BCI control in a popular computer game. IEEE Transactions on Computational Intelligence and AI in Games, 5(2), 176–184.

  • Wan, W. (2017, November 15). New robotic hand named after Luke Skywalker helps amputee touch and feel again. Washington Post. Retrieved from http://www.washingtonpost.com.

Author information

Correspondence to David Gurney.

Cite this article

Gurney, D. Killer Robot Arms: A Case-Study in Brain–Computer Interfaces and Intentional Acts. Minds & Machines 28, 775–785 (2018). https://doi.org/10.1007/s11023-018-9462-9
