Ethical Operating Systems

Reflections on Programming Systems

Part of the book series: Philosophical Studies Series (PSSP, volume 133)

Abstract

A well-ingrained and recommended engineering practice in safety-critical software systems is to separate safety concerns from other aspects of the system. Along these lines, there have been calls for operating systems (or computing substrates, termed ethical operating systems) that implement ethical controls in an ethical layer separate from, and not amenable to tampering by, developers and modules in higher-level intelligence or cognition layers. There have been no implementations that demonstrate such a marshalling of ethical principles into an ethical layer. To address this, we present three different tracks for implementing such systems, and offer a prototype implementation of the third track. We end by addressing objections to our approach.

Notes

  1.

    We here use the word ‘theory’ as it is used in formal logic and mathematics; there, a theory is any set of formulae Γ (which may e.g. be the closure under deduction of some set of core axioms). Hence, for us, an ethical theory is a set of formulae that governs ethical behavior. Coverage of such theories ranges from the simple, such as a list of prohibitions, to the more complex, e.g. the doctrine of double effect (discussed later herein), and beyond. Our conception of an ethical theory is in the end simply a rigorization of the concept of an ethical theory as employed by analytic ethicists, an exemplar being Feldman (1978); a synoptic explanation of this is given in Footnote 11. Our sense of ‘ethical theory,’ then, is a formal version of what systematic ethicists refer to when they discuss such ethical theories as utilitarianism, ethical egoism, contractualism, etc.
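    To make the set-of-formulae conception concrete, the following minimal sketch (ours, and purely illustrative; the predicate names and variables are hypothetical) represents a toy prohibition-style theory as s-expressions in Common Lisp:

      ;; Illustrative only: an ethical theory as a set of formulae,
      ;; here a plain list of s-expressions.
      (defparameter *toy-ethical-theory*
        '((forbidden (deceive ?agent ?patient))
          (forbidden (steal ?agent ?object))
          (forall (a) (implies (harms a innocent) (forbidden a)))))

      ;; Naive membership test. A real system would instead ask a theorem
      ;; prover whether a formula lies in the deductive closure of the
      ;; theory, not merely whether it appears in the list.
      (defun in-theory-p (formula theory)
        (member formula theory :test #'equal))

    Here (in-theory-p '(forbidden (steal ?agent ?object)) *toy-ethical-theory*) returns a true value; as the comment flags, deductive closure rather than list membership is the real criterion.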

  2.

    It is quite easy to see how Dijkstra’s principle still applies when we want to engineer ethical machines, for we read:

    We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained—on the contrary!—by tackling these various aspects simultaneously. It is what I sometimes have called ‘the separation of concerns,’ which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of. (Dijkstra 1982, p. 60)

  3.

    One calculus that enables much of this is the deontic cognitive event calculus (with provision for modeling access/informational self-awareness), or DCℰC for short, which has now been used in its implemented form to guide and control the actions of a number of real-life versions of what r denotes in the present paper; e.g. see Bringsjord et al. (2014). The earliest work of this kind started over a decade ago (Bringsjord et al. 2006; Arkoudas et al. 2005) and has been steadily improving, but hitherto has not been connected to operating systems. An overview of DCℰC can be found at this URL: http://www.cs.rpi.edu/~govinn/dcec.pdf.
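    For a feel of what such formulae look like in machine-readable form, here is a hedged sketch assuming an s-expression rendering of two DCℰC-style operators; the operator names and argument orders below are our illustrative assumptions, not the implemented system’s official syntax:

      ;; Illustrative only: DCEC-style formulae as s-expressions.
      ;; "Agent r believes at time t1 that lying is forbidden."
      '(Believes! r t1 (Forbidden (action r lying)))

      ;; "It is obligatory for agent r at time t2, given context phi,
      ;;  that the action a happens at t2."
      '(Ought! r t2 phi (Happens (action r a) t2))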

  4.

    See also the earlier Ganascia (2007).

  5.

    At the moment, among formally verified operating-system kernels, the clear frontrunner is apparently seL4 (https://sel4.systems). It runs on both x86 and ARM platforms, and can even run the Linux user-space, though currently only within a virtual machine. It is also open-source, including the proofs, and those proofs can be combined with our own proofs for ethical control. For a remarkable success story in formal verification at the OS level, and one more in line with the formal logics and proof theories our lab is inclined to use, see Arkoudas et al. (2004).

  6.

    At the conceptual level, there is some historical precedent for at least the first steps of what we are seeking: Flatt et al. (1999) showed that “MrEd,” while not a “bare-metal” OS, is a Lisp-flavored virtual machine that counts as an OS.

  7.

    ‘ACL2’ abbreviates ‘A Computational Logic for Applicative Common Lisp.’ The home page is: http://www.cs.utexas.edu/~moore/acl2.

  8.

    For summary and references, see Bringsjord (2015b), which includes a defense of a particular way to seek verification.

  9.

    In distributed systems, there can be multiple such components.

  10.

    The definition that immediately follows does not distinguish between virtual operating systems and meta-operating systems and does not account for nested meta-operating systems.

  11.

    While the focus of the present paper is on Step 3, we provide a brief explanation of the mysterious-to-most-readers phrase “run through \(\mathcal{EH}\)” that appears in the graphic of Fig. 8.6: An ethical theory T in the four-step process is formalized as a conjunction of robust biconditionals \(\beta(x_1, \ldots, x_k)\) that specify when actions, in general, are obligatory (and forbidden and morally neutral); here, the \(x_i\) are the variables appearing in the biconditional, and serve the purpose of allowing for the fixing of particular times, places, and so on. The general form of each definiendum of each biconditional refers to some action being \(\mathcal{M}\) for some agent in some particular context; the definiens then supplies the conditions that must hold for the action to be \(\mathcal{M}\). This is a rigorization of the approach to pinning down an ethical theory taken e.g. in Feldman (1978). The variable \(\mathcal{M}\) is a placeholder for the basic categories captured by modal operators in our calculi. For instance, \(\mathcal{M}\) can be obligatory, or forbidden, or civil, etc. Now, the ethical hierarchy \(\mathcal{EH}\) introduced in Bringsjord (2015a) explains that this trio needs to be expanded to nine different deontic operators for \(\mathcal{M}\) (six in addition to the standard three of forbidden, morally neutral, and obligatory). (For example, some actions are right to do, but not obligatory; a classic example is the category of civil actions. There are also heroic actions. The expansion of deontic operators to cover these additional categories was first expressed systematically in Chisholm (1982).) To “run a given ethical theory through \(\mathcal{EH}\)” is to expand the activity of Feldman (1978), for a given ethical theory, to biconditionals \(\beta(x_1, \ldots, x_k)\) for each of the nine operators. (Feldman only considers one.) A particular code \(C_T\) based on an ethical theory T, if configured in keeping with \(\mathcal{EH}\), would include use of any of the nine operators in order to e.g. permit or proscribe a particular kind of action in a particular domain for a given agent under T.
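    Schematically (our reconstruction from the description just given, not a formula quoted from the chapter), each biconditional has the following form, and running T “through \(\mathcal{EH}\)” yields one such biconditional per deontic operator:

      % One biconditional per deontic status M, over variables x_1 ... x_k:
      % action alpha has status M for agent a in context c iff the
      % theory-specific condition phi_M holds of the same variables.
      \forall x_1 \ldots \forall x_k\,
        \bigl[\, \mathcal{M}\bigl(\alpha(a, c, x_1, \ldots, x_k)\bigr)
          \;\leftrightarrow\; \varphi_{\mathcal{M}}(x_1, \ldots, x_k) \,\bigr]

      % T is then the conjunction over the nine operators of EH:
      T \;=\; \bigwedge_{\mathcal{M} \in \mathcal{EH}} \beta_{\mathcal{M}}(x_1, \ldots, x_k)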

  12.

    Yes, even this family can be used for machine/robot ethics; see e.g. (Bringsjord and Taylor 2012).

  13.

    In concurrent computing, two or more different computational processes can be underway at the same time.

  14.

    The inclusion of an arbitrary formal language \(\mathcal{L}\) is where we differ from the strict \(\lambda_a\)-calculus as presented in, for instance, (Varela 2013, Chapter 4). This is merely for convenience and doesn’t sacrifice generality, as we can readily encode \(\mathcal{L}\) using primitives in just the λ-calculus and nothing more.
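    To illustrate the encoding claim, here is a minimal Common Lisp sketch (ours) of the standard Church-pair construction, showing how formulae of a small \(\mathcal{L}\) can be represented with nothing beyond lambdas; the tag symbols below are a readability convenience and could themselves be Church-encoded:

      ;; Church pairs: data built from nothing but lambdas.
      (defun church-pair (x y) (lambda (f) (funcall f x y)))
      (defun church-fst (p) (funcall p (lambda (x y) (declare (ignore y)) x)))
      (defun church-snd (p) (funcall p (lambda (x y) (declare (ignore x)) y)))

      ;; A formula of L such as (and p q) becomes a tagged pair; the tags
      ;; (here ordinary symbols, for readability) could themselves be
      ;; Church numerals, so no primitives beyond lambda are needed.
      (defun make-and (p q) (church-pair 'and-tag (church-pair p q)))

    For example, (church-fst (make-and 'p 'q)) evaluates to AND-TAG, and the two conjuncts are recovered with church-fst and church-snd on the second component.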

  15.

    We ignore stray actors that neither observe nor act upon the environment.

  16.

    A rapid, informal, but nonetheless nice overview of the doctrine is provided in McIntyre (2014).

  17.

    A quick note on the expressivity of the formal system needed to model \(\mathcal{DDE}\): It is well known that modeling knowledge in first-order logic can lead to fidelity problems by permitting inconsistencies; we show this explicitly in (Bringsjord and Govindarajulu 2012). This implies that modeling \(\mathcal{DDE}\) requires going beyond first-order logic to first-order modal logic (an intensional logic) with operators covering minimally the epistemic and deontic realms. An intensional model of \(\mathcal{DDE}\) can be found in our (Govindarajulu and Bringsjord 2017).
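    For concreteness, here is a minimal Common Lisp sketch (ours, and deliberately extensional; the footnote’s whole point is that a faithful model needs intensional operators) of the four informal \(\mathcal{DDE}\) clauses as a guard predicate, with all inputs assumed to be established elsewhere, e.g. by a prover over the modal calculus. This is not the formalization of Govindarajulu and Bringsjord (2017):

      ;; Illustrative-only guard over pre-computed judgments.
      (defun dde-permits-p (&key action-permissible good-intended
                                 bad-intended bad-is-means
                                 (good-utility 0) (bad-utility 0))
        (and action-permissible                   ; 1. the act itself is not forbidden
             good-intended                        ; 2. the good effect is intended
             (not bad-intended)                   ; 3a. the bad effect is not intended...
             (not bad-is-means)                   ; 3b. ...nor a means to the good effect
             (> (+ good-utility bad-utility) 0))) ; 4. net good strictly outweighs bad

    For instance, (dde-permits-p :action-permissible t :good-intended t :good-utility 5 :bad-utility -2) yields T, while omitting :good-intended makes the guard fail.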

  18.

    The system is available for experimentation at https://github.com/naveensundarg/zeus.

  19.

    Though \(E_C\) would make sense only when considering driver-specific information, to keep the model simple we show it being applied to cars rather than a car-and-driver combination.

  20.

    See (Banker 2016) for a description of work in which machine learning is used to predict truck accidents. Such information might be easier to compute in a future with millions of self-driving vehicles, with most of them connected to a handful of centralized networks; for a description of such a future, and discussion, see (Bringsjord and Sen 2016).

  21.

    Similar to formal libraries for mathematics; see e.g. (Naumowicz and Kornilowicz 2009).

  22.

    Stuart Russell and Thomas Dietterich, private communication with Selmer Bringsjord.

References

  • Annas, J. 2011. Intelligent virtue. Oxford: Oxford University Press.

  • Anscombe, G. 1958. Modern moral philosophy. Philosophy 33(124): 1–19.

  • Arkin, R. 2009. Governing lethal behavior in autonomous robots. New York: Chapman and Hall/CRC.

  • Arkoudas, K., K. Zee, V. Kuncak, and M. Rinard. 2004. Verifying a file system implementation. In Sixth International Conference on Formal Engineering Methods (ICFEM’04), Lecture notes in computer science (LNCS), vol. 3308, 373–390. Seattle: Springer.

  • Arkoudas, K., S. Bringsjord, and P. Bello. 2005. Toward ethical robots via mechanized deontic logic. In Machine Ethics: Papers from the AAAI Fall Symposium; FS–05–06, 17–23. Menlo Park: American Association for Artificial Intelligence. http://www.aaai.org/Library/Symposia/Fall/fs05-06.php

  • Banker, S. 2016. Using big data and predictive analytics to predict which truck drivers will have an accident. Available at: https://www.forbes.com/sites/stevebanker/2016/10/18/using-big-data-and-predictive-analytics-to-predict-which-truck-drivers-will-have-an-accident/

  • Bentzen, M.M. 2016. The principle of double effect applied to ethical dilemmas of social robots. In Frontiers in Artificial Intelligence and Applications, Proceedings of Robophilosophy 2016/TRANSOR 2016, 268–279. Amsterdam: IOS Press.

  • Berreby, F., G. Bourgne, and J.-G. Ganascia. 2015. Modelling moral reasoning and ethical responsibility with logic programming. In Logic for programming, artificial intelligence, and reasoning, 532–548. Berlin/Heidelberg: Springer.

  • Bojarski, M., D.D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L.D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba. 2016. End to end learning for self-driving cars. CoRR abs/1604.07316. http://arxiv.org/abs/1604.07316

  • Bonér, J. 2010. Introducing Akka—simpler scalability, fault-tolerance, concurrency & remoting through actors. http://jonasboner.com/introducing-akka/

  • Boolos, G.S., J.P. Burgess, and R.C. Jeffrey. 2003. Computability and logic, 4th edn. Cambridge: Cambridge University Press.

  • Bringsjord, S. 2015a. A 21st-century ethical hierarchy for humans and robots: \(\mathcal {EH}\). In A World With Robots: International Conference on Robot Ethics (ICRE 2015), ed. I. Ferreira, J. Sequeira, M. Tokhi, E. Kadar, and G. Virk, 47–61. Berlin: Springer.

  • Bringsjord, S. 2015b. A vindication of program verification. History and Philosophy of Logic 36(3): 262–277.

  • Bringsjord, S. 2016. Can phronetic robots be engineered by computational logicians? In Proceedings of Robophilosophy/TRANSOR 2016, ed. J. Seibt, M. Nørskov, and S. Andersen, 3–6. Amsterdam: IOS Press.

  • Bringsjord, S., and N.S. Govindarajulu. 2012. Given the Web, what is intelligence, really? Metaphilosophy 43(4): 361–532.

  • Bringsjord, S., and J. Taylor. 2012. The divine-command approach to robot ethics. In Robot ethics: The ethical and social implications of robotics, ed. P. Lin, G. Bekey, and K. Abney, 85–108. Cambridge: MIT Press.

  • Bringsjord, S., and A. Sen. 2016. On creative self-driving cars: Hire the computational logicians, fast. Applied Artificial Intelligence 30: 758–786.

  • Bringsjord, S., K. Arkoudas, and P. Bello. 2006. Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems 21(4): 38–44.

  • Bringsjord, S., J. Taylor, A. Shilliday, M. Clark, and K. Arkoudas. 2008. Slate: An argument-centered intelligent assistant to human reasoners. In Proceedings of the 8th International Workshop on Computational Models of Natural Argument (CMNA 8), ed. F. Grasso, N. Green, R. Kibble, and C. Reed, 1–10. Patras: University of Patras.

  • Bringsjord, S., N. Govindarajulu, D. Thero, and M. Si. 2014. Akratic robots and the computational logic thereof. In Proceedings of ETHICS 2014 (2014 IEEE Symposium on Ethics in Engineering, Science, and Technology), 22–29, Chicago. http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6883275

  • Chisholm, R. 1982. Supererogation and offence: A conceptual scheme for ethics. In Brentano and Meinong studies, ed. R. Chisholm, 98–113. Atlantic Highlands: Humanities Press.

  • Dijkstra, E.W. 1982. On the role of scientific thought. In Selected writings on computing: A personal perspective, 60–66. New York: Springer.

  • Feldman, F. 1978. Introductory ethics. Englewood Cliffs: Prentice-Hall.

  • Flatt, M., R. Findler, S. Krishnamurthi, and M. Felleisen. 1999. Programming languages as operating systems (or revenge of the son of the Lisp machine). In Proceedings of the International Conference on Functional Programming (ICFP 1999). http://www.ccs.neu.edu/racket/pubs/icfp99-ffkf.pdf

  • Ganascia, J.-G. 2007. Modeling ethical rules of lying with answer set programming. Ethics and Information Technology 9: 39–47.

  • Ganascia, J.-G. 2015. Non-monotonic resolution of conflicts for ethical reasoning. In A construction manual for robots’ ethical systems: Requirements, methods, implementations, ed. R. Trappl, 101–118. Basel: Springer.

  • Govindarajulu, N.S. 2010. Common Lisp actor system. http://www.cs.rpi.edu/~govinn/actors.pdf. See also: https://github.com/naveensundarg/Common-Lisp-Actors

  • Govindarajulu, N.S., and S. Bringsjord. 2015. Ethical regulation of robots must be embedded in their operating systems. In A construction manual for robots’ ethical systems: Requirements, methods, implementations, ed. R. Trappl, 85–100. Basel: Springer.

  • Govindarajulu, N.S., and S. Bringsjord. 2017. On automating the doctrine of double effect. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), ed. C. Sierra, 4722–4730, Melbourne.

  • Hursthouse, R., and G. Pettigrove. 2003/2016. Virtue ethics. In The Stanford encyclopedia of philosophy, ed. E. Zalta. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/ethics-virtue

  • Hutter, M. 2005. Universal artificial intelligence: Sequential decisions based on algorithmic probability. New York: Springer.

  • Johnson, G. 2016. Argument & inference: An introduction to inductive logic. Cambridge: MIT Press.

  • Kwiatkowska, M., G. Norman, and D. Parker. 2011. PRISM 4.0: Verification of probabilistic real-time systems. In International Conference on Computer Aided Verification, 585–591. Berlin: Springer.

  • McIntyre, A. 2014. Doctrine of double effect. In The Stanford encyclopedia of philosophy, Winter 2014 edn, ed. E.N. Zalta. Metaphysics Research Lab, Stanford University.

  • McKinsey, J., A. Sugar, and P. Suppes. 1953. Axiomatic foundations of classical particle mechanics. Journal of Rational Mechanics and Analysis 2: 253–272.

  • Naumowicz, A., and A. Kornilowicz. 2009. A brief overview of Mizar. In Theorem proving in higher order logics, Lecture notes in computer science (LNCS), vol. 5674, ed. S. Berghofer, T. Nipkow, C. Urban, and M. Wenzel, 67–72. Berlin: Springer.

  • Pereira, L. M., and A. Saptawijaya. 2016a. Counterfactuals, logic programming and agent morality. In Logic, argumentation and reasoning, ed. S. Rahman and J. Redmond, 85–99. Berlin: Springer.

  • Pereira, L., and A. Saptawijaya. 2016b. Programming machine ethics. Berlin: Springer.

  • Ramos, S., S.K. Gehrig, P. Pinggera, U. Franke, and C. Rother. 2016. Detecting unexpected obstacles for self-driving cars: Fusing deep learning and geometric modeling. CoRR, abs/1612.06573. http://arxiv.org/abs/1612.06573

  • Russell, S., and P. Norvig. 2009. Artificial intelligence: A modern approach, 3rd edn. Upper Saddle River: Prentice Hall.

  • Varela, C.A. 2013. Programming distributed computing systems: A foundational approach. Cambridge: MIT Press. http://wcl.cs.rpi.edu/pdcs

  • Varela, C., and G. Agha. 2001. Programming dynamically reconfigurable open systems with SALSA. ACM SIGPLAN Notices 36(12): 20–34.

  • Vaughan, R.T., B.P. Gerkey, and A. Howard. 2003. On device abstractions for portable, reusable robot code. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, vol. 3, 2421–2427.

Acknowledgements

We are indebted to seven anonymous reviewers (of the core of the present version, as well as its predecessor) for insightful comments, suggestions, and objections. In addition, we are grateful to ONR for its support of our work on making morally competent machines, and to AFOSR for its support of our pursuit of computational intelligence in machines, on the strength of novel modes of machine reasoning. Finally, without the energy, passion, intelligence, and wisdom of both Giuseppe Primiero and Liesbeth De Mol, any progress we have made in the direction of ethical OSs would be non-existent.

Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Govindarajulu, N.S., Bringsjord, S., Sen, A., Paquin, JC., O’Neill, K. (2018). Ethical Operating Systems. In: De Mol, L., Primiero, G. (eds) Reflections on Programming Systems. Philosophical Studies Series, vol 133. Springer, Cham. https://doi.org/10.1007/978-3-319-97226-8_8
