
Indecision and Buridan’s Principle


Abstract

The problem known as Buridan’s Ass says that a hungry donkey equipoised between two identical bales of hay will starve to death. Indecision kills the ass. Some philosophers worry about human analogs. Computer scientists since the 1960s have known about the computer versions of such cases. From what Leslie Lamport calls ‘Buridan’s Principle’—a discrete decision based on a continuous range of input-values cannot be made in a bounded time—it follows that the possibilities for human analogs of Buridan’s Ass are far more wide-ranging and securely provable than has been acknowledged in philosophy. We are never necessarily decisive. This is mathematically provable. I explore four consequences: first, increased interest of the literature’s solutions to Buridan’s Ass; second, a new asymmetry between responsibility for omissions and responsibility for actions; third, clarification of the standard account of akrasia; and, fourth, clarification of the role of credences in normative decision-theory.

Notes

  1. It is argued in Rescher (1960) that the problem is wrongly ascribed to Buridan. It may be that Buridan’s Ass is so-called not because Buridan came up with it but, rather, because it made trouble for “the most important philosopher at the most important university in the world for three decades in the mid-fourteenth century” (Pasnau 2017: 59).

  2. In a technical appendix to this paper, I summarize a mathematical strategy (used in, for instance, Lamport & Palais, 1976 but also relied on in Anderson & Gouda 1991; Golubcovs et al., 2019, and others) for proving the Glitch. I then show that all of the assumptions needed for the Glitch proof are also satisfied in human cases such as driving, walking, listening, and so on: we have continuous input for a discrete decision required in limited time. In short, I build on some results and convergences in computer science, electrical engineering, and mathematics to show that it is mathematically provable that human beings are never necessarily decisive. Leslie Lamport’s (2012) paper on Buridan’s Principle remains by far the best resource for seeing that the Glitch is just the computer-version of Buridan’s Principle.

  3. There are, of course, exceptions: Galen Strawson (1994) argues that no one is ultimately morally responsible for anything, including actions and omissions, because there is a vicious regress in the explanation of moral responsibility for any action or omission. Neil Levy (2011) argues that the influence of luck in shaping our characters and other causes of our decisions entails that no one is morally responsible. See also Waller (1990, 2011, 2015), Pereboom (2001, 2014), and Honderich (2002).

  4. John Martin Fischer argues against the symmetry of PPA and PAP in his (1985) as well as in his and Mark Ravizza’s (1991). They argue that PAP is false but PPA true. Peter van Inwagen (1978, 1983) holds similar views. Skipping over a great deal of insightful discussion, Taylor Cyr (2021) has recently argued that PAP and PPA stand or fall together.

  5. Claim (1) is commonly taken for granted in discussions of both regular akrasia and epistemic akrasia. But the claim that akrasia involves irrationality is explicitly taken as an important assumption in arguments from, for instance, James Fritz (2021, p. 103) and Declan Smithies (2019: ch 8).

  6. Claim (2) expresses a standard understanding of akrasia. That is, when akrasia is discussed in the literature, claim (2) roughly expresses what is meant. For recent examples of this standard usage, see Baker (2015), Ovenden (2018), and Hartford (2020).

  7. For helpful further discussions, see, for instance, Julia Staffel’s (2019) instructive summary of the debate followed by an argument for the claim that we should use knowledge norms in discussions of what to do under normative uncertainty. See also Thoma (2021), MacAskill (2016), and Trammell (2019).

  8. One might argue that S still did something morally wrong in that world. All Buridan’s Principle shows is that S might be off the hook. Blameworthiness of a person may be held distinct from matter-of-fact moral wrongness of an action. But the latter distinction may be granted without its being any less odd to hold that it is morally wrong for someone to behave in a way that is consistent with Buridan’s Principle.

  9. After Lamport & Palais’s proof in 1976, James Anderson and Mohamed Gouda (1991) showed that the possibility of an indecision-caused glitch must exist in computer circuits even without assuming continuity.

  10. For constructive suggestions and corrections, I am grateful to the Editor and two reviewers at Synthese.

References

  • Anderson, J. H., & Gouda, M. G. (1991). A new explanation of the glitch phenomenon. Acta Informatica, 28, 297–309.

  • Baker, D. C. (2015). Akrasia and the problem of the unity of reason. Ratio, 28, 65–80.

  • Catt, I. (1966). Time loss through gating of asynchronous logic signal pulses. IEEE Transactions on Electronic Computers (Short Notes), EC-15, 108–111.

  • Catt, I. (1972). My experience with the synchronizer problem. Washington University.

  • Chaney, T. J., & Littlefield, W. (1966). The glitch phenomenon. Technical Memorandum, Systems Laboratory, Washington University, St. Louis, MO.

  • Chaney, T. J., & Molnar, C. E. (1973). Anomalous behaviour of synchronizer and arbiter circuits. IEEE Transactions on Computers, 22, 421–422.

  • Chaney, T. J. (2012). My work on all things metastable OR: (Me and My Glitch). Online: https://www.arl.wustl.edu/~jon.turner/cse/260/glitchChaney.pdf.

  • Chislenko, E. (2016). A solution for Buridan’s Ass. Ethics, 126, 283–310.

  • Cyr, T. W. (2021). Semicompatibilism and moral responsibility for actions and omissions: In defence of symmetrical requirements. Australasian Journal of Philosophy, 99, 349–363.

  • Davidson, D. (1980). How is weakness of will possible? In Davidson (Ed.), Actions and events (pp. 21–42). Clarendon Press.

  • Denning, P. J. (1985). The science of computing: The arbitration problem. American Scientist, 73, 516–518.

  • de Montaigne, M. (1877). Essays, trans. Charles Cotton. London: Reeves & Turner.

  • Fischer, J. M. (1985). Responsibility and failure. Proceedings of the Aristotelian Society, 86, 251–270.

  • Fischer, J. M. (1994). The metaphysics of free will: An essay on control. Blackwell.

  • Fischer, J. M., & Ravizza, M. (1991). Responsibility and inevitability. Ethics, 101, 258–278.

  • Fritz, J. (2021). Akrasia and epistemic impurism. Journal of the American Philosophical Association, 7, 98–116.

  • Ginet, C. (2000). The epistemic requirements for moral responsibility. Philosophical Perspectives, 14, 267–277.

  • Golubcovs, S., Mokhov, A., Bystrov, A., Sokolov, D., & Yakovlev, A. (2019). Generalised asynchronous arbiter. In 2019 19th International Conference on Application of Concurrency to System Design (ACSD) (pp. 3–12). https://doi.org/10.1109/ACSD.2019.00005.

  • Gray, H. J. (1963). Digital computer engineering (pp. 198–201). Prentice-Hall.

  • Harman, E. (2011). Does moral ignorance exculpate? Ratio, 24, 443–468.

  • Harman, E. (2015). The irrelevance of moral uncertainty. Oxford Studies in Metaethics, 10.

  • Hartford, A. (2020). Complex akrasia and blameworthiness. Journal of Philosophical Research, 45, 15–33.

  • Hedden, B. (2016). Does MITE make right? Oxford Studies in Metaethics, 11, 102–128.

  • King, M. (2014). Traction without tracing: A (partial) solution for control-based accounts of moral responsibility. European Journal of Philosophy, 22, 463–482.

  • Lamport, L. (2003). Arbiter-free synchronization. Distributed Computing, 16, 219–237.

  • Lamport, L. (2012). Buridan’s Principle. Foundations of Physics, 42, 1056–1066.

  • Lamport, L., & Palais, R. (1976). On the glitch phenomenon. Technical Report CA-7611-0811, Massachusetts Computer Associates, Wakefield, Massachusetts, November 1976. Available at Microsoft Research: https://www.microsoft.com/en-us/research/publication/on-the-glitch-phenomenon/.

  • Leibniz, G. W. (1952). Theodicy, trans. E. M. Huggard. Yale University Press.

  • Levy, N. (2014). Consciousness and moral responsibility. Oxford University Press.

  • MacAskill, W., & Ord, T. (2020). Why maximize expected choice-worthiness? Noûs, 54, 327–353.

  • Mele, A. (2010). Moral responsibility for actions: Epistemic and freedom conditions. Philosophical Explorations, 13, 101–111.

  • Mintoff, J. (2001). Buridan’s Ass and reducible intentions. Journal of Philosophical Research, 26, 207–221.

  • Narveson, J. (1976). Utilitarianism, group actions, and coordination, or, must the utilitarian be a Buridan’s Ass? Noûs, 10, 173–194.

  • Nelkin, D., & Rickless, S. C. (2017). Moral responsibility for unwitting omissions: A new tracing view. In The ethics and law of omissions (pp. 106–129).

  • Ovenden, C. (2018). Guidance control and the anti-akrasia chip. Synthese, 195, 2001–2019.

  • Pasnau, R. (2017). After certainty. Oxford University Press.

  • Pechouček, M. (1976). Anomalous response times of input synchronizers. IEEE Transactions on Computers, 25, 133–139.

  • Pereboom, D. (2001). Living without free will. Cambridge University Press.

  • Podgorski, A. (2020). Normative uncertainty and the dependence problem. Mind, 129, 43–70.

  • Rescher, N. (1960). Choice without preference: A study of the history and logic of Buridan’s Ass. Kant-Studien, 51, 142–175.

  • Sharadin, N., & Dellsén, F. (2017). The beliefs and intentions of Buridan’s Ass. Journal of the American Philosophical Association, 3, 209–226.

  • Sliwa, P. (2017). On knowing what’s right and being responsible for it. In Robichaud and Wieland (2017), 127–145.

  • Smith, A. M. (2008). Control, responsibility, and moral assessment. Philosophical Studies, 138, 367–392.

  • Smith, H. M. (2017). Tracing cases of culpable ignorance. In Peels (2017), 95–119.

  • Smithies, D. (2019). The epistemic role of consciousness. Oxford University Press.

  • Spinoza, B. (2002). Complete works, trans. Samuel Shirley. Indianapolis: Hackett, 276.

  • Staffel, J. (2019). Normative uncertainty and probabilistic moral knowledge. Synthese, 198, 6739–6765.

  • Strawson, G. (1994). The impossibility of moral responsibility. Philosophical Studies, 75, 5–24.

  • Tarsney, C. (2019). Normative uncertainty and social choice. Mind, 128, 1285–1308.

  • Thoma, J. (2021). Judgmentalism about normative decision theory. Synthese, 198, 6767–6787.

  • Trammell, P. (2019). Fixed-point solutions to the regress problem in normative uncertainty. Synthese, 198, 1177–1199.

  • Ullmann-Margalit, E., & Morgenbesser, S. (1977). Picking and choosing. Social Research, 44, 759–760.

  • van Inwagen, P. (1978). Ability and responsibility. Philosophical Review, 87, 201–224.

  • van Inwagen, P. (1983). An essay on free will. Oxford University Press.

  • Waller, B. N. (1990). Freedom without responsibility. Temple University Press.

  • Waller, B. N. (2011). Against moral responsibility. MIT Press.

  • Waller, B. N. (2015). Restorative free will: Back to the biological base. Lexington Books.

  • Weatherson, B. (2014). Running risks morally. Philosophical Studies, 167, 141–163.

  • Weintraub, R. (2012). What can we learn from Buridan’s Ass? Canadian Journal of Philosophy, 42, 281–301.

  • Wolf, S. (2015). Character and responsibility. Journal of Philosophy, 112, 356–372.


Author information

Correspondence to Daniel Coren.

Ethics declarations

Conflict of interest

The author has no conflict of interest (financial or otherwise) to report.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 A brief summary of (a) Lamport & Palais’s proof of the computer version of Buridan’s Principle and (b) applications of their proof to human instantiations of Buridan’s Principle

Below I summarize the main elements of Lamport and Palais’s (1976) formal proof of the Principle of the Glitch, which is just the computer-version of Buridan’s Principle, before explaining how it provides the structure for proving the possibility of limit-exceeding indecision in human cases such as driving, walking, listening, speaking, and all of our other ordinary decision-requiring activities.

Today the Glitch or Arbiter Problem is widely acknowledged in computer science. It has been proved in several different ways with modest assumptions, and all the relevant similarities (continuity of input and a discrete decision required in limited time) exist for all day-to-day human activities. Recall that the Glitch states that for any device which is instructed to make a discrete decision based upon a continuous range of possible inputs, there are inputs for which it will take arbitrarily long to reach a decision (Lamport & Palais, 1976, p. 1). Lamport and Palais (1976, p. 2) begin their proof with the following stipulations:

  • Let R denote the set of real numbers, let I [for Input] and O [for Output] be two sets, let ℐ [script I] be a set of mappings from R to I, and 𝒪 [script O] a set of mappings from R to O.

  • The elements of I represent the possible values of inputs to the device. At an instant in time, any of the elements of I is a possible input value.

  • The elements of O represent the possible values of outputs of the device. At an instant in time, any of the elements of O is a possible output value, and 𝒪 is the set of possible outputs.

  • An element i of ℐ represents a possible input to the device; so, i(t) is the value of the input at time t.

  • Assume for simplicity that the device operates at all times (though a similar proof can be constructed for a device that operates at some specific time).

  • The device defines a mapping Δ: ℐ → 𝒪; namely, Δ(i) is the output produced by the input i. For the input value at time t, namely i(t), Δ(i)(t) is the corresponding output value at t. (The setup is restated compactly below.)
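
Collecting these stipulations, and using script letters to keep the sets of input and output functions distinct from the value sets I and O, the setup can be stated compactly as follows (in LaTeX notation):

    \[
    \begin{aligned}
    &I,\ O && \text{sets of possible input and output values;}\\
    &\mathcal{I} \subseteq \{\, i : \mathbb{R} \to I \,\} && \text{possible inputs: each } i \in \mathcal{I} \text{ assigns an input value } i(t) \text{ to each time } t;\\
    &\mathcal{O} \subseteq \{\, o : \mathbb{R} \to O \,\} && \text{possible outputs, understood in the same way;}\\
    &\Delta : \mathcal{I} \to \mathcal{O} && \text{the device: } \Delta(i)(t) \text{ is the output value at time } t \text{ on input } i.
    \end{aligned}
    \]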

Then the general strategy for proving the Principle of the Glitch, this computer-version of Buridan’s Principle, involves proving three things, as Lamport and Palais (1976, p. 5) explain:

  (1) Prove that Δ: ℐ → 𝒪 is continuous. Lamport & Palais use compactness and convergence to prove this.

  (2) Prove that the space ℐ is pathwise connected.

In general, a space is pathwise connected iff every two points are connected by a path in that space. Lamport & Palais put pathwise connectedness more precisely for the purposes of their proof as follows (1976, p. 4):

Let U be a subset of a metric space S, and let [0, 1] as usual denote the interval of real numbers t with 0 ≤ t ≤ 1. If u0 and u1 are points in U, then a path in U from u0 to u1 is a continuous mapping π : [0, 1] → S such that π(0) = u0, π(1) = u1, and π(t) is in U for all t in [0, 1]. We say that U is a pathwise connected subset of S if such a π can be found for each choice of u0 and u1 in U. If F is a continuous mapping of U into a metric space T and π : [0, 1] → U is as above, then the composition Fπ : [0, 1] → T is a path in T from F(u0) to F(u1). It follows that if U is a pathwise connected subset of S, then F(U) is a pathwise connected subset of T, where F(U), the image of U under F, is the set of all points in T of the form F(u) for some point u in U.
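
As a concrete illustration of the definition (a standard textbook example, not part of Lamport & Palais’s text): an interval of reals is pathwise connected, while a two-point set is not, and step (3) below turns on exactly this second kind of failure.

    \[
    \begin{aligned}
    &U = [0,1]: && \pi(t) = (1-t)\,u_0 + t\,u_1 \ \text{is a path in } U \text{ from any } u_0 \text{ to any } u_1 \text{ in } U.\\
    &U = \{0, 1\}: && \text{no path from } 0 \text{ to } 1 \text{ exists, since a continuous } \pi : [0,1] \to \mathbb{R} \text{ with } \pi(0)=0,\ \pi(1)=1\\
    & && \text{must, by the intermediate value theorem, take values strictly between } 0 \text{ and } 1, \text{ which lie outside } U.
    \end{aligned}
    \]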

And the third step of Lamport and Palais’s (1976, p. 5) proof is to show that:

  (3) The set of outputs in Δ(ℐ) for which the decision is made before some fixed time r is not pathwise connected.

They point out that, since (1) and (2) entail that Δ(ℐ) is pathwise connected, it follows that “(3) shows that for any finite time r there must be inputs in ℐ for which the device does not reach a decision by time r” (1976, p. 5). They argue that (2) and (3) are straightforward for most types of devices and contexts; (1) is the only complex part of the proof. By proving (1), (2), and (3), they prove that input-continuity makes it impossible to guarantee that the device will reach its discrete decision within any bounded time. And assuming that “an approximately correct theory will describe the approximate behaviour of a system”, it follows that “the device must occasionally take very much longer than usual to make a decision” (1976, p. 6). This does not tell us anything about how often the device will take very much longer to make a decision. For ‘occasionally’ is consistent with one case of limit-exceeding indecision for every trillion instructions on average. It is also consistent with one case of indecision for every thousand instructions on average, and so on (see note 9).
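
In outline, then, the three steps fit together as follows; this is a compact restatement of the argument just summarized, not an addition to it.

    \[
    \begin{aligned}
    &\text{(1) } \Delta \text{ continuous and (2) } \mathcal{I} \text{ pathwise connected} \;\Longrightarrow\; \Delta(\mathcal{I}) \text{ pathwise connected;}\\
    &\text{(3) } \{\text{outputs in } \Delta(\mathcal{I}) \text{ decided by time } r\} \text{ is not pathwise connected;}\\
    &\text{hence } \Delta(\mathcal{I}) \text{ cannot be contained in that set: some input is not decided by time } r,\\
    &\text{and since } r \text{ was arbitrary, there is no finite bound on decision time.}
    \end{aligned}
    \]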

Though I gave only a very quick summary of Lamport & Palais’s proof of the computer version of Buridan’s Principle, it is not difficult to see that we may construct analogous proofs for ordinary human cases involved in walking, driving, perception, listening, speaking, and so on. We have all the relevant similarities even if we grant that there are plenty of differences between humans and computers. This is what Lamport (2012) later observed when he gave a general mathematical expression of Buridan’s Ass: the possibility of limit-exceeding indecision applies not just to computers but to human activities as well, and this is precisely why Buridan’s Principle is of interest not just to computer scientists but also to philosophers:

The problem of Buridan’s Ass, named after the fourteenth century French philosopher Jean Buridan, states that an ass placed equidistant between two bales of hay must starve to death because it has no reason to choose one bale over the other. With the benefit of modern mathematics, the argument may be expressed as follows. …A continuous mechanism must either forgo discreteness, permitting a continuous range of decisions, or must allow an unbounded length of time to make the decision (Lamport, 2012, p. 1).
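
To get a feel for the ‘unbounded length of time’ in Lamport’s formulation, here is a minimal numerical sketch. It uses a toy unstable-equilibrium model borrowed from the metastability literature rather than anything in Lamport (2012) or in this paper: the ass’s position x lies between bales at −1 and +1, and it drifts toward whichever bale is nearer, with a pull that vanishes at the exact midpoint (dx/dt = x). On that hypothetical assumption, the time needed to reach a bale grows without bound as the starting position approaches the midpoint.

    # Toy illustration of Buridan's Principle: an unstable-equilibrium model.
    # Position x lives in [-1, 1]; the bales sit at -1 and +1; x = 0 is the
    # exact midpoint. Assumed dynamics: dx/dt = x, so the drift toward the
    # nearer bale vanishes at the midpoint. A standard metastability sketch,
    # not Lamport's own construction.

    import math

    def decision_time(x0: float, dt: float = 1e-4) -> float:
        """Integrate dx/dt = x by Euler steps until |x| reaches 1 (a bale)."""
        if x0 == 0.0:
            return math.inf  # perfectly balanced: no decision, ever
        x, t = x0, 0.0
        while abs(x) < 1.0:
            x += x * dt
            t += dt
        return t

    if __name__ == "__main__":
        for exponent in range(1, 16, 2):
            x0 = 10.0 ** (-exponent)      # start ever closer to the midpoint
            simulated = decision_time(x0)
            exact = math.log(1.0 / x0)    # closed form for dx/dt = x
            print(f"x0 = 1e-{exponent:02d}  simulated ≈ {simulated:7.3f}  exact ≈ {exact:7.3f}")

The closed form t = ln(1/x0) makes the point directly: each halving of the distance to perfect balance adds a fixed increment of deliberation time, so no finite bound covers every starting position.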

It is an irrelevant difference that in human cases such as figuring out which way to turn or how to understand a speaker, there is a different kind of continuous mechanism and a different kind of continuity of input than in computer cases. All the relevant similarities are still present: as long as the mechanism is continuous, the input is continuous, and a discrete decision is required, it follows that there cannot be any upper bound on the time required to make the decision. (Of course, sometimes the decision might be quick. But it is unbounded because it is always possible that there are some cases where the decision takes longer than any finite time allowed.) For, in general, since it is possible to prove that Δ: ℐ → 𝒪 is continuous, and since we always have some continuous range of input-values in walking cases and perception cases and so on, we may prove that analogous mappings from possible inputs to possible outputs for the walking cases and perception cases (and so on) will be continuous. For just as in the proof of the Glitch, the elements of ℐ and 𝒪 will be appropriately continuous functions. In the case where the person must decide between two dates, for example, there is no discontinuity in the range of inputs. In particular, for the range of input-values, the person may begin arbitrarily close to location 0, that is, arbitrarily close to the first date, or anywhere along the continuous range up to location 1, the second date. So, it is accurate to represent the inputs with the mappings from the reals, namely R, to the possible input-values, I, giving ℐ, where each element of ℐ represents a possible starting position for the snack-decider between the two dates. So, the person’s starting position is the analog of the possible input-value to the device (the computer) in the Glitch case. And recall that M_t(x) is a continuous function with respect to the person’s starting positions. So, it is accurate to say that the possible outputs in such cases are represented by a mapping from R to O: elements of O represent the person’s possible positions when hunger strikes.
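
Spelling out one natural way of setting up this correspondence for the date case (on the paper’s assumptions; the label D_r for the decided outputs is introduced here only for ease of reference):

    \[
    \begin{aligned}
    &I = [0,1] && \text{possible starting positions between date } 0 \text{ and date } 1;\\
    &O && \text{the person's possible positions when hunger strikes};\\
    &\Delta : \mathcal{I} \to \mathcal{O} && \text{the person's deliberation-and-movement process};\\
    &D_r \subseteq \Delta(\mathcal{I}) && \text{outputs for which one date or the other has been reached by time } r.
    \end{aligned}
    \]

Continuity of Δ and pathwise connectedness of ℐ then hold just as in the computer case, while D_r splits into a reached-date-0 part and a reached-date-1 part and so fails to be pathwise connected; the argument then runs exactly as before.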

So, it should be immaterial that there are plenty of differences between (a) the binary computer-system case analyzed in Gray (1963), Catt (1966), Chaney and Littlefield (1966), Catt (1972), Lamport and Palais (1976), Anderson and Gouda (1991), Golubcovs et al. (2019), and others, and (b) human cases such as the date case and other ordinary cases. All that matters is that we have the relevant similarities, namely, functional continuity derived from the continuous range of possible input-values and the discrete decision required in a limited time. This is clarified in Lamport (2012). Given those similarities, we may show that Δ: ℐ → 𝒪 is continuous where, as in Lamport & Palais’s proof, “continuity is defined for mappings between topological spaces” (1976, p. 3). We may also show that the space ℐ is pathwise connected and that the set of outputs in Δ(ℐ) for which the decision is made (before some fixed time r) is not pathwise connected. So, there must be some starting locations for which it will take a hungry snack-decider arbitrarily long to decide whether to pick one snack or another, some job interviewees who will take arbitrarily long to decide whether to go with Answer A or Answer B before the next question comes, and so on for listening, reading, writing, walking, and other everyday activities requiring discrete decisions based on a continuous range of input-values in a limited time. So, we cannot guarantee that our decision-time will stay within r (whether r is five seconds or five years). Therefore, we are never necessarily decisive (see note 10).

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Coren, D. Indecision and Buridan’s Principle. Synthese 200, 353 (2022). https://doi.org/10.1007/s11229-022-03843-3

