On the face of it ‘deterministic chance’ is an oxymoron: either an event is chancy or deterministic, but not both. Nevertheless, the world is rife with events that seem to be exactly that: chancy and deterministic at once. Simple gambling devices like coins and dice are cases in point. On the one hand they are governed by deterministic laws – the laws of classical mechanics – and hence, given the initial condition of, say, a coin toss, it is determined whether it will land heads or tails. On the other hand, we commonly assign probabilities to the different outcomes of a coin toss, and doing so has proven successful in guiding our actions. The same dilemma also emerges in less mundane contexts. Classical statistical mechanics (which is still an important part of modern physics) assigns probabilities to the occurrence of certain events – for instance to the spreading of a gas that is originally confined to the left half of a container – but at the same time assumes that the relevant systems are deterministic. How can this apparent conflict be resolved?
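To make the coexistence vivid, here is a minimal toy simulation, a sketch rather than any of the surveyed authors' models: each outcome is fixed deterministically by the initial spin rate and flight time, yet sampling those initial conditions from a smooth distribution yields heads about half the time. All parameter values are illustrative assumptions.

```python
import math
import random

def lands_heads(spin_rate, flight_time):
    """Deterministic toy coin: the outcome is fully fixed by the inputs."""
    angle = spin_rate * flight_time               # total rotation in radians
    return (angle % (2 * math.pi)) < math.pi      # which face is up at landing

random.seed(1)
# Smooth (assumed) distribution over initial conditions:
tosses = [lands_heads(random.gauss(200.0, 20.0),  # spin rate in rad/s
                      random.gauss(0.5, 0.1))     # seconds aloft
          for _ in range(100_000)]
print(sum(tosses) / len(tosses))                  # close to 0.5
```

Nothing in the dynamics is chancy; the 50/50 statistics come entirely from the fine-grained sensitivity of the deterministic map to its initial conditions.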
Determinism and Chance. Barry Loewer - 2001 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 32 (4): 609-620.
It is generally thought that objective chances for particular events other than 1 and 0 are incompatible with determinism. However, there are important scientific theories whose laws are deterministic but which also assign non-trivial probabilities to events. The most important of these is statistical mechanics, whose probabilities are essential to the explanations of thermodynamic phenomena. These probabilities are often construed as 'ignorance' probabilities representing our lack of knowledge concerning the microstate. I argue that this construal is incompatible with the role of probability in explanation and laws. This is the 'paradox of deterministic probabilities'. After surveying the usual list of accounts of objective chance and finding them inadequate, I argue that an account of chance sketched by David Lewis can be modified to solve the paradox of deterministic probabilities and provide an adequate account of the probabilities in deterministic theories like statistical mechanics.
Multialgebras have been much studied in mathematics and in computer science. In 2016 Carnielli and Coniglio introduced a class of multialgebras called swap structures, as a semantic framework for dealing with several Logics of Formal Inconsistency that cannot be semantically characterized by a single finite matrix. In particular, these LFIs are not algebraizable by the standard tools of abstract algebraic logic. In this paper, the first steps towards a theory of non-deterministic algebraization of logics by swap structures are given. Specifically, a formal study of swap structures for LFIs is developed, by adapting concepts of universal algebra to multialgebras in a suitable way. A decomposition theorem similar to Birkhoff’s representation theorem is obtained for each class of swap structures. Moreover, when applied to the 3-valued algebraizable logics J3 and Ciore, their classes of algebraic models are retrieved, and the swap structures semantics become twist structures semantics. This fact, together with the existence of a functor from the category of Boolean algebras to the category of swap structures for each LFI, suggests that swap structures can be seen as non-deterministic twist structures. This opens new avenues for dealing with non-algebraizable logics by the more general methodology of multialgebraic semantics.
Determinism is a perennial topic of philosophical discussion. Very little acquaintance with the philosophical literature is needed to reveal the Tower of ...
Can there be deterministic chance? That is, can there be objective chance values other than 0 or 1, in a deterministic world? I will argue that the answer is no. In a deterministic world, the only function that can play the role of chance is one that outputs just 0s and 1s. The role of chance involves connections from chance to credence, possibility, time, intrinsicness, lawhood, and causation. These connections do not allow for deterministic chance.
Rational agents face choices, even when taking seriously the possibility of determinism. Rational agents also follow the advice of Causal Decision Theory (CDT). Although many take these claims to be well-motivated, there is growing pressure to reject one of them, as CDT seems to go badly wrong in some deterministic cases. We argue that deterministic cases do not undermine a counterfactual model of rational deliberation, which is characteristic of CDT. Rather, they force us to distinguish between counterfactuals that are relevant and ones that are irrelevant for the purposes of deliberation. We incorporate this distinction into decision theory to develop ‘Selective Causal Decision Theory’, which delivers the correct recommendations in deterministic cases while respecting the key motivations behind CDT.
Bobzien presents the definitive study of one of the most interesting intellectual legacies of the ancient Greeks: the Stoic theory of causal determinism. She explains what it was, how the Stoics justified it, and how it relates to their views on possibility, action, freedom, moral responsibility, moral character, fatalism, logical determinism and many other topics. She demonstrates the considerable philosophical richness and power that these ideas retain today.
I argue that there are non-trivial objective chances (that is, objective chances other than 0 and 1) even in deterministic worlds. The argument is straightforward. I observe that there are probabilistic special scientific laws even in deterministic worlds. These laws project non-trivial probabilities for the events that they concern. And these probabilities play the chance role and so should be regarded as chances as opposed, for example, to epistemic probabilities or credences. The supposition of non-trivial deterministic chances might seem to land us in contradiction. The fundamental laws of deterministic worlds project trivial probabilities for the very same events that are assigned non-trivial probabilities by the special scientific laws. I argue that any appearance of tension is dissolved by recognition of the level-relativity of chances. There is therefore no obstacle to accepting non-trivial chance-role-playing deterministic probabilities as genuine chances.
Some have argued that chance and determinism are compatible in order to account for the objectivity of probabilities in theories that are compatible with determinism, like Classical Statistical Mechanics (CSM) and Evolutionary Theory (ET). Contrarily, some have argued that chance and determinism are incompatible, and so such probabilities are subjective. In this paper, I argue that both of these positions are unsatisfactory. I argue that the probabilities of theories like CSM and ET are not chances, but also that they are not subjective probabilities either. Rather, they are a third type of probability, which I call counterfactual probability. The main distinguishing feature of counterfactual probability is the role it plays in conveying important counterfactual information in explanations. This distinguishes counterfactual probability from chance as a second concept of objective probability.
I sketch a new constraint on chance, which connects chance ascriptions closely with ascriptions of ability, and more specifically with 'CAN'-claims. This connection between chance and ability has some claim to be a platitude; moreover, it exposes the debate over deterministic chance to the extensive literature on (in)compatibilism about free will. The upshot is that a prima facie case for the tenability of deterministic chance can be made. But the main thrust of the paper is to draw attention to the connection between the truth conditions of sentences involving 'CAN' and 'CHANCE', and argue for the context sensitivity of each term. Awareness of this context sensitivity has consequences for the evaluation of particular philosophical arguments for (in)compatibilism when they are presented in particular contexts.
The studies we report indicate that it is possible to manipulate explicit ascriptions of consciousness by manipulating whether an agent’s behavior is deterministically caused. In addition, we explore whether this impact of determinism on consciousness is direct, or mediated by notions linked to agency – notions like moral responsibility, free will, deliberate choice, and sensitivity to moral reasons. We provide evidence of mediation. This result extends work on attributions of consciousness and their connection to attributions of agency by Adam Arico, Brian Fiala, and Shaun Nichols (Arico et al. 2011, Fiala et al. 2014) and supports it against recent criticisms (e.g., Sytsma 2014).
I argue that free will and determinism are compatible, even when we take free will to require the ability to do otherwise and even when we interpret that ability modally, as the possibility of doing otherwise, and not just conditionally or dispositionally. My argument draws on a distinction between physical and agential possibility. Although in a deterministic world only one future sequence of events is physically possible for each state of the world, the more coarsely defined state of an agent and his or her environment can be consistent with more than one such sequence, and thus different actions can be “agentially possible”. The agential perspective is supported by our best theories of human behaviour, and so we should take it at face value when we refer to what an agent can and cannot do. On the picture I defend, free will is not a physical phenomenon, but a higher-level one on a par with other higher-level phenomena such as agency and intentionality.
A previously unrecognised argument against deterministic chance is introduced. The argument rests on the twin ideas that determined outcomes are settled, while chancy outcomes are unsettled, thus making cases of determined but chancy outcomes impossible. Closer attention to tacit assumptions about settledness makes available some principled lines of resistance to the argument for compatibilists about chance and determinism. Yet the costs of maintaining compatibilism may be higher with respect to this argument than with respect to existing incompatibilist arguments.
Adams presents an in-depth interpretation of three important parts of Leibniz's metaphysics, thoroughly grounded in the texts as well as in philosophical analysis and critique. The three areas discussed are the metaphysical part of Leibniz's philosophy of logic, his essentially theological treatment of the central issues of ontology, and his theory of substance. Adams' work helps make sense of one of the great classic systems of modern philosophy.
This article illustrates in which sense genetic determinism is still part of the contemporary interactionist consensus in medicine. Three dimensions of this consensus are discussed: kinds of causes, a continuum of traits ranging from monogenetic diseases to car accidents, and different kinds of determination due to different norms of reaction. On this basis, this article explicates in which sense the interactionist consensus presupposes the innate–acquired distinction. After a descriptive Part 1, Part 2 reviews why the innate–acquired distinction is under attack in contemporary philosophy of biology. Three arguments are then presented to provide a limited and pragmatic defense of the distinction: an epistemic, a conceptual, and a historical argument. If interpreted in a certain manner, and if the pragmatic goals of prevention and treatment (ideally specifying what medicine and health care is all about) are taken into account, then the innate–acquired distinction can be a useful epistemic tool. It can help, first, to understand that genetic determination does not mean fatalism, and, second, to maintain a system of checks and balances in the continuing nature–nurture debates.
Originally published in 1934, this book presents the content of an inaugural lecture delivered by the British philosopher Charles Dunbar Broad (1887-1971), upon taking up the position of Knightbridge Professor of Moral Philosophy at Cambridge University. The text presents a discussion of the relationship between determinism, indeterminism and libertarianism. This book will be of value to anyone with an interest in the writings of Broad and the history of philosophy.
This article focuses on three themes concerning determinism and indeterminism. The first theme is observational equivalence between deterministic and indeterministic models. Here I discuss several results about observational equivalence and present an argument on how to choose between deterministic and indeterministic models involving indirect evidence. The second theme is whether Newtonian physics is indeterministic. I argue that the answer depends on what one takes Newtonian mechanics to be, and I highlight how contemporary debates on this issue differ from those in the nineteenth century. The third major theme is how the method of arbitrary functions can be used to make sense of deterministic probabilities. I discuss various ways of interpreting the initial probability distributions and argue that they are best understood as physical, biological etc. quantities characterising the particular situation at hand. Also, I emphasise that the method of arbitrary functions deserves more attention than it has received so far.
The article puts forward a branching-style framework for the analysis of determinism and indeterminism of scientific theories, starting from the core idea that an indeterministic system is one whose present allows for more than one alternative possible future. We describe how a definition of determinism stated in terms of branching models supplements and improves current treatments of determinism of theories of physics. In these treatments, we identify three main approaches: one based on the study of equations, one based on mappings between temporal realizations, and one based on branching models. We first give an overview of these approaches and show that current orthodoxy advocates a combination of the mapping- and the equations-based approaches. After giving a detailed formal explication of a branching-based definition of determinism, we consider three concrete applications and end with a formal comparison of the branching- and the mapping-based approach. We conclude that the branching-based definition of determinism most usefully combines formal clarity, connection with an underlying philosophical notion of determinism, and relevance for the practical assessment of theories. 1 Introduction; 2 Determinism in Philosophy of Science: Three Approaches; 2.1 Determinism: The core idea and how to spell it out; 2.2 The three approaches in more detail; 2.3 Representing indeterminism; 3 Orthodoxy: DMAP, with Invocations of DEQN; 4 Branching-Style Determinism; 4.1 Models and realizations; 4.2 Faithfulness; 4.3 Two types of branching topologies; 5 Comparing the Approaches; 5.1 Case studies; 5.2 Formal comparison of the DMAP and DBRN frameworks; 6 Conclusions.
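The core idea stated above, that an indeterministic system is one whose present allows for more than one possible future, can be rendered as a tiny executable check. This is a hedged sketch, not the paper's formal explication: histories are modelled naively as equal-length tuples of states.

```python
def deterministic(histories):
    """A set of histories is deterministic iff no two of them share a past
    and then diverge (i.e., the tree of histories never branches)."""
    for h1 in histories:
        for h2 in histories:
            for t in range(1, len(h1)):
                if h1[:t] == h2[:t] and h1[t] != h2[t]:
                    return False   # branching point: same past, open future
    return True

print(deterministic({("a", "b", "c")}))                   # True
print(deterministic({("a", "b", "c"), ("a", "b", "d")}))  # False: branches at t = 2
```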
Legendary since his own time as a universal genius, Gottfried Wilhelm Leibniz (1646-1716) contributed significantly to almost every branch of learning. One of the creators of modern mathematics, and probably the most sophisticated logician between the Middle Ages and Frege, as well as a pioneer of ecumenical theology, he also wrote extensively on such diverse subjects as history, geology, and physics. But the part of his work that is most studied today is probably his writings in metaphysics, which have been the focus of particularly lively philosophical discussion in the last twenty years or so. The writings contain one of the great classic systems of modern philosophy, but the system must be pieced together from a vast and miscellaneous array of manuscripts, letters, articles, and books, in a way that makes especially strenuous demands on scholarship. This book presents an in-depth interpretation of three important parts of Leibniz's metaphysics, thoroughly grounded in the texts as well as in philosophical analysis and critique. The three areas discussed are the metaphysical part of Leibniz's philosophy of logic, his essentially theological treatment of the central issues of ontology, and his theory of substance (the famous theory of monads).
The hole argument contends that a substantivalist has to view General Relativity as an indeterministic theory. A recent form of substantivalist reply to the hole argument has urged the substantivalist to identify qualitatively isomorphic possible worlds. Gordon Belot has argued that this form of substantivalism is unable to capture other genuine violations of determinism. This paper argues that Belot's alleged examples of indeterminism should not be seen as a violation of a form of determinism that physicists are interested in. What is undetermined in these examples, and in the hole argument, is a haecceitistic feature of the world. It is argued that these features are not among those we should expect the physical state of the world to determine. This vindicates the substantivalist reply to the hole argument, but also illustrates that philosophers of physics cannot ignore metaphysics when characterizing determinism for a physical theory.
A deterministic weakening of the Belnap–Dunn four-valued logic is introduced to formalize the acceptance and rejection of a proposition at a state in a linearly ordered informational frame with persistent valuations. The logic is formalized as a sequent calculus. Its completeness and decidability with respect to relational semantics are shown in terms of normal forms. From an algebraic perspective, the class of all algebras for the logic is described, and found to be a subvariety of one of Berman’s varieties. Every linearly ordered frame is logically equivalent to its dual algebra. It is proved that the logic is the logic of a nine-element distributive lattice with a negation. Moreover, one of the logics involved is embedded into the other by Glivenko’s double-negation translation.
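The paper's symbols for the logics were lost in extraction (the stray marks in the entry above). As general background, here is a sketch of the base Belnap–Dunn four-valued semantics that the paper weakens; the pair encoding and helper names are ours, not the paper's. Each value is a pair (told true, told false): T = (1,0), B = (1,1), N = (0,0), F = (0,1).

```python
T, B, N, F = (1, 0), (1, 1), (0, 0), (0, 1)
NAMES = {T: "T", B: "B", N: "N", F: "F"}

def neg(v):
    t, f = v
    return (f, t)                      # negation swaps told-true and told-false

def conj(v, w):
    return (v[0] & w[0], v[1] | w[1])  # told true iff both are; told false iff either is

def disj(v, w):
    return (v[0] | w[0], v[1] & w[1])  # dual to conjunction

# Print the conjunction table; e.g. B and N = F, T and B = B:
for v in (T, B, N, F):
    print(" ".join(NAMES[conj(v, w)] for w in (T, B, N, F)))
```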
It has been argued that in a deterministic universe, no one has any reason to do anything. Since we ought to do what we have most reason to do, no one ought to do anything either. Firstly, it is argued that an agent cannot have reason to do anything unless she can do otherwise; secondly, that the relevant ‘can’ is incompatibilist. In this paper, I argue that even if the first step of the argument for reason incompatibilism succeeds, the second one does not. It is argued that reasons require alternative possibilities, because reasons are action-guiding. A supposed reason to do the impossible, or to do what was inevitable anyway, could not fill this function. I discuss different interpretations of the claim that reasons are action-guiding, and show that according to one interpretation it is sufficient that the agent believes that she has several alternative options. According to other interpretations, the agent must really have alternative options, but only in a compatibilist sense. I suggest that an interpretation of action-guidance according to which reasons can only guide actions when we have several options open to us in an incompatibilist sense cannot be found. We should therefore assume that reasons and obligations are compatible with determinism.
Twenty-first-century biology rejects genetic determinism, yet an exaggerated view of the power of genes in the making of bodies and minds remains a problem. What accounts for such tenacity? This article reports an exploratory study suggesting that the common reliance on Mendelian examples and concepts at the start of teaching in basic genetics is an eliminable source of support for determinism. Undergraduate students who attended a standard ‘Mendelian approach’ university course in introductory genetics on average showed no change in their determinist views about genes. By contrast, students who attended an alternative course which, inspired by the work of a critic of early Mendelism, W. F. R. Weldon, replaced an emphasis on Mendel’s peas with an emphasis on developmental contexts and their role in bringing about phenotypic variability, were less determinist about genes by the end of teaching. Improvements in both the new Weldonian curriculum and the study design are in view for the future.
We clarify the status of the so-called causal minimality condition in the theory of causal Bayesian networks, which has received much attention in the recent literature on the epistemology of causation. In doing so, we argue that the condition is well motivated in the interventionist (or manipulability) account of causation, assuming the causal Markov condition which is essential to the semantics of causal Bayesian networks. Our argument has two parts. First, we show that the causal minimality condition, rather than an add-on methodological assumption of simplicity, necessarily follows from the substantive interventionist theses, provided that the actual probability distribution is strictly positive. Second, we demonstrate that the causal minimality condition can fail when the actual probability distribution is not positive, as is the case in the presence of deterministic relationships. But we argue that the interventionist account still entails a pragmatic justification of the causal minimality condition. Our argument in the second part exemplifies a general perspective that we think commendable: when evaluating methods for inferring causal structures and their underlying assumptions, it is relevant to consider how the inferred causal structure will be subsequently used for counterfactual reasoning.
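A small numerical illustration of the second part of the argument (a hypothetical example of ours, not the authors'): when Y is a deterministic copy of X, the joint distribution is not strictly positive, and the proper subgraph X → Y → Z satisfies the causal Markov condition even if Z is in fact directly caused by X, so minimality no longer singles out the true structure.

```python
import itertools

def joint():
    """Binary X, Y, Z with Y := X (deterministic) and Z depending on X."""
    p = {}
    for x, y, z in itertools.product((0, 1), repeat=3):
        px = 0.5
        py = 1.0 if y == x else 0.0            # deterministic edge X -> Y
        pz1 = 0.9 if x == 1 else 0.2           # Z directly caused by X
        p[(x, y, z)] = px * py * (pz1 if z == 1 else 1 - pz1)
    return p

def independent_given(p, i, j, k, tol=1e-9):
    """Numerically test V_i independent of V_j given V_k."""
    for c in (0, 1):
        pc = sum(v for key, v in p.items() if key[k] == c)
        if pc < tol:
            continue
        for a, b in itertools.product((0, 1), repeat=2):
            pab = sum(v for key, v in p.items()
                      if key[i] == a and key[j] == b and key[k] == c) / pc
            pa = sum(v for key, v in p.items() if key[i] == a and key[k] == c) / pc
            pb = sum(v for key, v in p.items() if key[j] == b and key[k] == c) / pc
            if abs(pab - pa * pb) > tol:
                return False
    return True

p = joint()
# Z _||_ X | Y holds trivially, because conditioning on Y fixes X:
print(independent_given(p, i=2, j=0, k=1))     # True -> minimality can fail
```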
In his Philosophical Inquiry concerning Human Liberty (1717), the English deist Anthony Collins proposed a complete determinist account of the human mind and action, partly inspired by his mentor Locke, but also by elements from Bayle, Leibniz and other Continental sources. It is a determinism which does not neglect the question of the specific status of the mind but rather seeks to provide a causal account of mental activity and volition in particular; it is a ‘volitional determinism’. Some decades later, Diderot articulates a very similar determinism, which seeks to recognize the existence of “causes proper to man” (as he says in the Réfutation d’Helvétius). The difference with Collins is that now biological factors are being taken into account. Obviously both the ‘volitional’ and the ‘biological’ forms of determinism are noteworthy inasmuch as they change our picture of the nature of determinism itself, but my interest here is to compare these two determinist arguments, both of which are broadly Spinozist in nature – and as such belong to what Jonathan Israel called in his recent book “the radical Enlightenment,” i.e. a kind of underground Enlightenment constituted by Spinozism – and to see how Collins’ specifically psychological vision and Diderot’s specifically biological vision correspond to their two separate national contexts: determinism in France in the mid-1750s was a much more medico-biological affair than English determinism, which appears to be on a ‘path’ leading to Mill and associationist psychology.
A deterministic model that accounts for the statistical behavior of random samples of identical particles is presented. The model is based on some nonmeasurable distribution of spin values in all directions. The mathematical existence of such distributions is proved by set-theoretical techniques, and the relation between these distributions and observed frequencies is explored within an appropriate extension of probability theory. The relation between quantum mechanics and the model is specified. The model is shown to be consistent with known polarization phenomena and the existence of macroscopic magnetism. Finally..
The central question of this paper is: are deterministic and indeterministic descriptions observationally equivalent in the sense that they give the same predictions? I tackle this question for measure-theoretic deterministic systems and stochastic processes, both of which are ubiquitous in science. I first show that for many measure-theoretic deterministic systems there is a stochastic process which is observationally equivalent to the deterministic system. Conversely, I show that for all stochastic processes there is a measure-theoretic deterministic system which is observationally equivalent to the stochastic process. Still, one might guess that the measure-theoretic deterministic systems which are observationally equivalent to stochastic processes used in science do not include any deterministic systems used in science. I argue that this is not so because deterministic systems used in science even give rise to Bernoulli processes. Despite this, one might guess that measure-theoretic deterministic systems used in science cannot give the same predictions at every observation level as stochastic processes used in science. By proving results in ergodic theory, I show that also this guess is misguided: there are several deterministic systems used in science which give the same predictions at every observation level as Markov processes. All these results show that measure-theoretic deterministic systems and stochastic processes are observationally equivalent more often than one might perhaps expect. Furthermore, I criticise the claims of the previous philosophy papers Suppes (1993, 1999), Suppes and de Barros (1996) and Winnie (1998) on observational equivalence.
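The classic concrete instance of such equivalence (our illustration; it is standard in ergodic theory and in line with the Bernoulli-process results mentioned) is the doubling map: a deterministic system which, observed through a two-cell partition, produces a sequence indistinguishable from independent fair coin flips.

```python
from fractions import Fraction
import random

def orbit_bits(x0, n):
    """Iterate the deterministic doubling map x -> 2x (mod 1) exactly,
    recording only the coarse observation: left half (0) or right half (1)."""
    x, bits = Fraction(x0), []
    for _ in range(n):
        bits.append(1 if x >= Fraction(1, 2) else 0)
        x = (2 * x) % 1
    return bits

random.seed(3)
x0 = Fraction(random.getrandbits(600), 2**600)   # a "typical" initial condition
bits = orbit_bits(x0, 500)
print(sum(bits) / len(bits))   # close to 0.5: the output looks Bernoulli(1/2)
```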
While standard procedures of causal reasoning as procedures analyzing causal Bayesian networks are custom-built for (non-deterministic) probabilistic structures, this paper introduces a Boolean procedure that uncovers deterministic causal structures. Contrary to existing Boolean methodologies, the procedure advanced here successfully analyzes structures of arbitrary complexity. It roughly involves three parts: first, deterministic dependencies are identified in the data; second, these dependencies are suitably minimalized in order to eliminate redundancies; and third, one or—in case of ambiguities—more than one causal structure is assigned to the minimalized deterministic dependencies.
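A rough sketch of the first two of those three parts (identify deterministic dependencies, then minimalize them) might look as follows; the data table, the target effect E, and the factor names are invented for illustration, and this is not the paper's actual procedure.

```python
from itertools import combinations, product

# Hypothetical deterministic data: E = (A and B) or C, over binary factors.
rows = [dict(A=a, B=b, C=c, E=int((a and b) or c))
        for a, b, c in product((0, 1), repeat=3)]

def sufficient(cond):
    """Is the conjunction of factor values `cond` sufficient for E = 1?"""
    matching = [r for r in rows if all(r[f] == v for f, v in cond.items())]
    return bool(matching) and all(r["E"] == 1 for r in matching)

factors = ["A", "B", "C"]
suff = [dict(zip(names, vals))
        for k in range(1, len(factors) + 1)
        for names in combinations(factors, k)
        for vals in product((0, 1), repeat=k)
        if sufficient(dict(zip(names, vals)))]

# Minimalization: drop any sufficient condition with a sufficient proper part.
minimal = [c for c in suff
           if not any(set(d.items()) < set(c.items()) for d in suff)]
print(minimal)   # [{'C': 1}, {'A': 1, 'B': 1}]
```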
How can hard determinism deal with the need to punish, when coupled with the obligation to be just? I argue that even though hard determinists might find it morally permissible to incarcerate wrongdoers apart from lawful society, they are committed to the punishment’s taking a very different form from common practice in contemporary Western societies. Hard determinists are in fact committed to what I will call funishment, instead of punishment. But, by its nature, funishment is a practical reductio of hard determinism: it makes implementing hard determinism impossible to contemplate. Indeed, the social practices that hard determinism requires turn out to be morally bad even according to hard determinism itself. I conclude by briefly reflecting upon the implications.
In this paper, a concept of chance is introduced that is compatible with deterministic physical laws, yet does justice to our use of chance-talk in connection with typical games of chance. We take our cue from what Poincaré called "the method of arbitrary functions," and elaborate upon a suggestion made by Savage in connection with this. Comparison is made between this notion of chance and David Lewis' conception.
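The method of arbitrary functions is easy to exhibit numerically: once the dynamics is fast enough, (almost) any smooth density over the initial condition induces the same outcome probabilities. The wheel model and the three input distributions below are illustrative assumptions, not Poincaré's or Savage's own examples.

```python
import math
import random

def outcome(speed, spin_duration=10.0):
    """Deterministic wheel: the final rest angle fixes 'red' (1) vs 'black' (0)."""
    angle = speed * spin_duration
    return int(angle % (2 * math.pi) < math.pi)

random.seed(2)
samplers = {
    "gaussian":   lambda: random.gauss(30.0, 3.0),
    "uniform":    lambda: random.uniform(20.0, 40.0),
    "triangular": lambda: random.triangular(20.0, 40.0, 35.0),
}
for name, draw in samplers.items():
    freq = sum(outcome(draw()) for _ in range(100_000)) / 100_000
    print(name, round(freq, 3))   # all close to 0.5, whatever the input density
```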
In a famous passage drawing implications from determinism, Laplace introduced the image of an intelligence who knew the positions and momenta of all of the particles of which the universe is composed, and asserted that in a deterministic universe such an intelligence would be able to predict everything that happens over its entire history. It is not, however, difficult to establish the physical possibility of a counterpredictive device, i.e., a device designed to act counter to any revealed prediction of its behavior. What would happen if a Laplacean intelligence were put into communication with such a device and forced to reveal its prediction of what the device would do on some occasion? On the one hand, it seems that the Laplacean intelligence should be able to predict the device's behavior. On the other hand, it seems that the device should be able to act counter to the prediction. An examination of the puzzle leads to clarification of what determinism does entail, with some insights about various other things along the way.
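The counterpredictive device itself is trivial to exhibit (a toy rendering of the device described above): whatever prediction is revealed to it, it does the opposite, so no prediction that must first be announced to the device can be correct.

```python
def device(revealed_prediction: bool) -> bool:
    """Acts counter to whatever prediction of its behavior is revealed to it."""
    return not revealed_prediction

for prediction in (True, False):
    assert device(prediction) != prediction   # every revealed prediction fails
print("no revealed prediction about the device can come out true")
```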
Divine determinism, though affirmed by many Calvinists, implicates God in the decisions people make that ultimately damn them to the terrible destiny of hell. In this paper, the authors argue that this scenario is a problem for divine determinism. The article contends that determinism is inconsistent with God’s love and the Scriptures that explicitly state that God does not ‘desire’ anyone to go to hell. Even human love for others strongly suggests that God, who is ‘love’, will not determine anyone to hell. On the other extreme, those who argue for universalism, though appealing to Scripture, often do so with questionable exegesis.
A strongly deterministic theory of physics is one that permits exactly one possible history of the universe. In the words of Penrose (1989), "it is not just a matter of the future being determined by the past; the entire history of the universe is fixed, according to some precise mathematical scheme, for all time.” Such an extraordinary feature may appear unattainable in any realistic and simple theory of physics. In this paper, I propose a definition of strong determinism and contrast it with those of standard determinism and super-determinism. Next, I discuss its consequences for explanation, causation, prediction, fundamental properties, free will, and modality. Finally, I present the first example of a realistic, simple, and strongly deterministic physical theory – the Everettian Wentaculus. As a consequence of physical laws, the history of the Everettian multiverse could not have been different. If the Everettian Wentaculus is empirically equivalent to other quantum theories, we can never empirically find out whether or not our world is strongly deterministic. Even if strong determinism fails to be true, it is closer to the actual world than we have presumed, with implications for some of the central topics in philosophy and foundations of physics.
The first part of this paper reveals a conflict between the core principles of deterministic causation and the standard method of difference, which is widely seen as a correct method of causally analyzing deterministic structures. We show that applying the method of difference to deterministic structures can give rise to causal inferences that contradict the principles of deterministic causation. The second part then locates the source of this conflict in an inference rule implemented in the method of difference according to which factors that can make a difference to investigated effects relative to one particular test setup are to be identified as causes, provided the causal background of the corresponding setup is homogeneous. The paper ends by modifying the method of difference in a way that renders it compatible with the principles of deterministic causation.
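The inference rule at issue, difference-making relative to a single homogeneous test setup, can be stated compactly in code; the factor names and cases below are invented for illustration and do not reproduce the paper's own examples.

```python
def difference_inference(case1, case2, effect1, effect2):
    """If the two cases differ in exactly one factor (homogeneity of the
    causal background) and the effect differs, infer that factor as a cause."""
    differing = [f for f in case1 if case1[f] != case2[f]]
    if len(differing) == 1 and effect1 != effect2:
        return differing[0]
    return None   # homogeneity fails or the effect does not vary

case_a = {"spark": 1, "oxygen": 1, "fuel": 1}
case_b = {"spark": 0, "oxygen": 1, "fuel": 1}
print(difference_inference(case_a, case_b, effect1=1, effect2=0))  # "spark"
```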
In this paper we aim to disentangle the thesis that the future is open from theses that often get associated or even conflated with it. In particular, we argue that the open future thesis is compatible with both the unrestricted principle of bivalence and determinism with respect to the laws of nature. We also argue that whether or not the future (and indeed the past) is open has no consequences as to the existence of (past and) future ontology.
This book develops a new theory of determinism that offers fresh insights into questions of how intentions and other mental events relate to neural events, how both come about, and how both result in actions. Honderich tests his theory against neuroscience, quantum theory, and possible philosophical refutations, and discusses the consequences of determinism and near-determinism for life-hopes, knowledge, and personal feelings.
The proper limit to paternalist regulation of citizens' private lives is a recurring theme in political theory and ethics. In the present study, we examine the role of beliefs about free will and determinism in attitudes toward libertarian versus paternalist policies. Throughout five studies we find that a scientific deterministic worldview reduces opposition toward paternalist policies, independent of the putative influence of political ideology. We suggest that exposure to scientific explanations for patterns in human behavior challenges the notion of personal autonomy and, in turn, undermines libertarian arguments against state paternalism appealing to autonomy and personal choice.
This article surveys the difficulties in establishing determinism for classical physics within the context of several distinct foundational approaches to the discipline. It explains that such problems commonly emerge due to a deeper problem of ‘missing physics'. The Problems of Formalism; Norton's Example; Three Species of Classical Mechanics: 3.1 Mass point physics, 3.2 The physics of perfect constraints, 3.3 Continuum mechanics; Conclusion.
Scott Sehon recently argued that the standard notion of determinism employed in the Consequence Argument makes it so that, if our world turns out to be deterministic, then an interventionist God is logically impossible. He further argues that because of this, we should revise our notion of determinism. In this paper I show that Sehon’s argument for the claim that the truth of determinism, in this sense, would make an interventionist God logically impossible ultimately fails. I then offer and respond to a weaker version of the argument for the claim that we should revise our notion of determinism.
Bayesians since Savage (1972) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgments will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour (1980), Earman (1992) and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting that the theorems are too weak to underwrite notions of scientific objectivity and intersubjective agreement. I investigate, in a very general framework, the conditions under which updated probabilities actually converge to a settled opinion and the conditions under which the updated probabilities of two agents actually converge to the same settled opinion. I call this mode of convergence deterministic, and derive results that extend those found in Huttegger (2015b). The results here lead to a simple characterization of deterministic convergence for Bayesian learners and give rise to an interesting argument for what I call strong regularity, the view that probabilities of non-empty events should be bounded away from zero.
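Setting aside the believed-versus-actual distinction the entry turns on, the basic merging phenomenon is easy to simulate: two agents with different priors update on the same evidence and their opinions coalesce. A sketch under assumed Beta priors and an assumed true bias; all numbers are illustrative.

```python
import random

random.seed(0)
true_bias = 0.7
a1, b1 = 1, 1    # agent 1: uniform Beta(1, 1) prior
a2, b2 = 9, 1    # agent 2: strongly opinionated Beta(9, 1) prior

for _ in range(10_000):
    heads = random.random() < true_bias     # one shared coin flip
    a1, b1 = a1 + heads, b1 + (not heads)   # conjugate Bayesian update
    a2, b2 = a2 + heads, b2 + (not heads)

# Posterior means: both close to 0.7 and to each other.
print(a1 / (a1 + b1), a2 / (a2 + b2))
```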
In this paper I consider the view, held by some Thomistic thinkers, that divine determinism is compatible with human freedom, even though natural determinism is not. After examining the purported differences between divine and natural determinism, I discuss the Consequence Argument, which has been put forward to establish the incompatibility of natural determinism and human freedom. The Consequence Argument, I note, hinges on the premise that an action ultimately determined by factors outside of the actor’s control is not free. Since, I argue, divine determinism also entails that human actions are ultimately determined by factors outside of the actors’ control, I suggest that a parallel argument to the Consequence Argument can be constructed for the incompatibility of divine determinism and human freedom. I conclude that those who reject natural compatibilism on the basis of the Consequence Argument should also reject divine compatibilism.
Contrary to Lewis and Vihvelin, I argue that free will in a deterministic world is an ability to break a law of nature or to change the remote past. Even if it were true, as Lewis and Vihvelin think, that an agent who is predetermined to perform a particular act might not break a law or change the remote past by doing otherwise, it would nevertheless be true that he is able to do otherwise only if he is able to break a law or to change the remote past.
The purpose of this paper is to give a brief survey of the implications of the theories of modern physics for the doctrine of determinism. The survey will reveal a curious feature of determinism: in some respects it is fragile, requiring a number of enabling assumptions to give it a fighting chance; but in other respects it is quite robust and very difficult to kill. The survey will also aim to show that, apart from its own intrinsic interest, determinism is an excellent device for probing the foundations of classical, relativistic, and quantum physics. The survey is conducted under three major presuppositions. First, I take a realistic attitude towards scientific theories in that I assume that to give an interpretation of a theory is, at a minimum, to specify what the world would have to be like in order for the theory to be true. But we will see that the demand for a deterministic interpretation of a theory can force us to abandon a naively realistic reading of the theory. Second, I reject the “no laws” view of science and assume that the field equations or laws of motion of the most fundamental theories of current physics represent science’s best guesses as to the form of the basic laws of nature. Third, I take determinism to be an ontological doctrine, a doctrine about the temporal evolution of the world. This ontological doctrine must not be confused with predictability, which is an epistemological doctrine, the failure of which need not entail a failure of determinism. From time to time I will comment on ways in which predictability can fail in a deterministic setting. Finally, my survey will concentrate on the Laplacian variety of determinism, according to which the instantaneous state of the world at any time uniquely determines the state at any other time. The plan of the survey is as follows. Section 2 illustrates the fragility of determinism by means of a Zeno type example. Then sections 3 and 4 survey successively the fortunes of determinism in the Newtonian and the special relativistic settings..
It is often called “the problem of free will and determinism,” as if the only thing that might challenge free will is determinism and as if determinism is obviously a problem. The traditional debates about free will have proceeded accordingly. Typically, incompatibilists about free will and determinism suggest that their position is intuitive or commonsensical, such that compatibilists have the burden of showing how, despite appearances, the problem of determinism is not really a problem. Compatibilists, in turn, tend to proceed as if showing that determinism is not a problem thereby shows that we have free will, as if determinism is the only thing that might threaten free will. In this chapter, I reject both of these elements of the traditional debate; the question of whether we have free will should neither begin nor end with the so-called problem of determinism. I present and discuss evidence from a variety of studies that suggests that incompatibilism is not particularly intuitive. Most people do not have to be talked out of incompatibilism but rather talked into it. This provides some reasons—though certainly not decisive reasons—to think that compatibilism is true. I conclude by pointing out that, even if compatibilism were true, it would not dissolve the problem of free will, because there are problems other than determinism that need to be confronted—namely, challenges to free will suggested by current and “future science,” including neuroscience and psychology. The threats to free will suggested by these sciences are distinct from the traditional threat of determinism, and they are the ones that “ordinary persons” find intuitively threatening to free will. In fact, I will argue that the reason incompatibilism about free will and determinism appears to be intuitive is that determinism is often and easily misunderstood to involve these distinct threats to free will—threats that suggest that our rational, conscious mental activity is bypassed in the process of our making decisions and coming to act.
The inference from determinism to predictability, though intuitively plausible, needs to be qualified in an important respect. We need to distinguish between two different kinds of predictability. On the one hand, determinism implies external predictability, that is, the possibility for an external observer, not part of the universe, to predict, in principle, all future states of the universe. Yet, on the other hand, embedded predictability, as the possibility for an embedded subsystem in the universe to make such predictions, does not obtain in a deterministic universe. By revitalizing an older result—the paradox of predictability—we demonstrate that, even in a deterministic universe, there are fundamental, non-epistemic limitations on the ability of one subsystem embedded in the universe to predict the future behaviour of other subsystems embedded in the same universe. As an explanation, we put forward the hypothesis that these limitations arise because the predictions themselves are physical events which are part of the law-like causal chain of events in the deterministic universe. While the limitations on embedded predictability cannot in any direct way show evidence of free human agency, we conjecture that, even in a deterministic universe, human agents have a take-it-or-leave-it control over revealed predictions of their future behaviour.