It has recently been suggested that a distinctive metaphysical relation—'Grounding'—is ultimately at issue in contexts in which some goings-on are said to hold 'in virtue of', be 'metaphysically dependent on', or be 'nothing over and above' some others. Grounding is supposed to do good work in illuminating metaphysical dependence. I argue that Grounding is ill-suited to do this work. To start, Grounding alone cannot do this work, for bare claims of Grounding leave open such basic questions as whether Grounded goings-on exist, whether they are reducible to or rather distinct from Grounding goings-on, whether they are efficacious, and so on; but in the absence of answers to such basic questions, we are not in a position to assess the associated claims or theses concerning metaphysical dependence. There is no avoiding appeal to the specific metaphysical relations typically at issue in investigations into dependence—for example, type or token identity, functional realization, classical mereological parthood, the set membership relation, the proper subset relation, the determinable/determinate relation, and so on—which are capable of answering these questions. But, I argue, once the specific relations are on the scene, there is no need for Grounding.
On many currently live interpretations, quantum mechanics violates the classical supposition of value definiteness, according to which the properties of a given particle or system have precise values at all times. Here we consider whether either metaphysical supervaluationist or determinable-based approaches to metaphysical indeterminacy can accommodate quantum metaphysical indeterminacy (QMI). We start by discussing the standard theoretical indicator of QMI, and distinguishing three seemingly different sources of QMI (S1). We then show that previous arguments for the conclusion that metaphysical supervaluationism cannot accommodate QMI, due to Darby 2010 and Skow 2010, are unsuccessful, in leaving open several supervaluationist responses. We go on to provide more comprehensive argumentation for the negative conclusion. Here, among other results, we establish that the problems for supervaluationism extend far beyond the concern that is the focus of Darby's and Skow's discussions (according to which a supervaluationist approach is incompatible with the orthodox interpretation, in light of the Kochen-Specker theorem) to also attach to common understandings of other interpretations on which there is QMI (S2). We then argue that a determinable-based account can successfully accommodate all three varieties of QMI (S3). We close by observing the positive mutual bearing of our results on the coherence and intelligibility of both quantum mechanics and metaphysical indeterminacy (S4).
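As a rough formal gloss on the standard indicator mentioned above (a sketch, assuming the orthodox eigenstate-eigenvalue link, which the abstract itself does not spell out; the notation is illustrative):
\[
\text{(EEL)}\qquad \text{a system in state } |\psi\rangle \text{ has definite value } \lambda \text{ for observable } O \;\iff\; O|\psi\rangle = \lambda|\psi\rangle .
\]
On this indicator, when \(|\psi\rangle\) is not an eigenstate of \(O\) (for instance, a superposition of distinct \(O\)-eigenstates), it is indeterminate which value of \(O\) the system has, and a candidate case of QMI arises.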
Both the special sciences and ordinary experience suggest that there are metaphysically emergent entities and features: macroscopic goings-on (including mountains, trees, humans, and sculptures, and their characteristic properties) which depend on, yet are distinct from and distinctively efficacious with respect to, lower-level physical configurations and features. These appearances give rise to two key questions. First, what is metaphysical emergence, more precisely? Second, is there any metaphysical emergence, in principle and moreover in fact? Metaphysical Emergence provides clear and systematic answers to these questions. Wilson argues that there are two, and only two, forms of metaphysical emergence of the sort seemingly at issue in the target cases: 'Weak' emergence, whereby a dependent feature has a proper subset of the powers of the feature upon which it depends, and 'Strong' emergence, whereby a dependent feature has a power not had by the feature upon which it depends. Weak emergence unifies and illuminates seemingly diverse accounts of non-reductive physicalism; Strong emergence does the same as regards seemingly diverse anti-physicalist views positing fundamental novelty at higher levels of compositional complexity. After defending the in-principle viability of each form of emergence, Wilson considers whether complex systems, ordinary objects, consciousness, and free will are actually metaphysically emergent. She argues that Weak emergence is quite common, and that there is Strong emergence in the important case of free will.
Many phenomena appear to be indeterminate, including material macro-object boundaries and certain open future claims. Here I provide an account of indeterminacy in metaphysical, rather than semantic or epistemic, terms. Previous accounts of metaphysical indeterminacy have typically taken this to involve its being indeterminate which of various determinate states of affairs obtain. On my alternative account, MI involves its being determinate that an indeterminate state of affairs obtains. I more specifically suggest that MI involves an object's having a determinable property, but not having any unique determinate of that determinable. I motivate the needed extension of the traditional understanding of determinables, then argue that a determinable-based account of MI accommodates, in illuminating fashion, both 'glutty' and 'gappy' cases of MI, while satisfactorily treating concerns about MI stemming from Evans' argument and the problem of the many.
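The core condition can be sketched as follows (schematic notation mine, not the paper's; the gappy/glutty gloss is a hedged reading of the abstract):
\[
\text{(Determinable-based MI)}\qquad \exists o\,\exists D\,\big[\,o \text{ has determinable } D \;\wedge\; \neg\exists!\,d\,\big(d \text{ is a determinate of } D \,\wedge\, o \text{ has } d\big)\,\big].
\]
The uniqueness condition can fail in two ways: 'gappy' cases, where the object has no determinate of \(D\) at all, and 'glutty' cases, where it has more than one (suitably relativized) determinate of \(D\).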
Motivated by the seeming structure of the sciences, metaphysical emergence combines broadly synchronic dependence with some degree of ontological and causal autonomy. Reflecting the diverse, frequently incompatible interpretations of the notions of dependence and autonomy, however, accounts of emergence diverge into a bewildering variety. Here I argue that much of this apparent diversity is superficial. I first argue, by attention to the problem of higher-level causation, that two and only two strategies for addressing this problem accommodate the genuine emergence of special science entities. These strategies in turn suggest two distinct schemas for metaphysical emergence---'Weak' and 'Strong' emergence, respectively. Each schema imposes a condition on the powers of entities taken to be emergent: Strong emergence requires that higher-level features have at least one token power not had by their dependence base features, whereas Weak emergence requires that higher-level features have a proper subset of the token powers of their dependence base features. Importantly, the notion of "power" at issue here is metaphysically neutral, primarily reflecting commitment just to the plausible thesis that the effects an entity may bring about are associated with how the entity is---that is, with its features.
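Schematically, with \(\mathcal{P}(x)\) the set of token powers of a feature \(x\), \(S\) a higher-level feature, and \(B\) its dependence base feature (the set-theoretic notation is mine, not the paper's):
\[
\text{(Weak emergence)}\qquad \mathcal{P}(S) \subsetneq \mathcal{P}(B)
\]
\[
\text{(Strong emergence)}\qquad \exists p\,\big[\,p \in \mathcal{P}(S) \;\wedge\; p \notin \mathcal{P}(B)\,\big]
\]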
Contemporary philosophers commonly suppose that any fundamental entities there may be are maximally determinate. More generally, they commonly suppose that, whether or not there are fundamental entities, any determinable entities there may be are grounded in, hence less fundamental than, more determinate entities. So, for example, Armstrong takes the physical objects constituting the presumed fundamental base to be "determinate in all respects" (1961, 59), and Lewis takes the properties characterizing things "completely and without redundancy" to be "highly specific" (1986, 60). Here I look at the usually cited reasons for these suppositions as directed against the case of determinable properties, in particular, and argue that none is compelling (Sections 1 to 3). The discussion in Section 3 moreover identifies positive reason for taking some determinable properties to be part of a fundamental (or relatively fundamental) base. I close (Section 4) by noting certain questions arising from the possibility of fundamental determinables, as directions for future research.
I argue that an adequate account of non-reductive realization must guarantee satisfaction of a certain condition on the token causal powers associated with (instances of) realized and realizing entities---namely, what I call the 'Subset Condition on Causal Powers' (first introduced in Wilson 1999). In terms of states, the condition requires that the token powers had by a realized state on a given occasion be a proper subset of the token powers had by the state that realizes it on that occasion. Accounts of non-reductive realization conforming to this condition are implementing what I call 'the powers-based subset strategy'. I focus on the crucial case involving mental and brain states; the results may be generalized, as appropriate. I first situate and motivate the strategy by attention to the problem of mental causation; I make the case, in schematic terms, that implementation of the strategy makes room (contra Kim 1989, 1993, 1998, and elsewhere) for mental states to be ontologically and causally autonomous from their realizing physical states, without inducing problematic causal overdetermination, and compatibly with both Physicalism and Non-reduction; and I show that several contemporary accounts of non-reductive realization (in terms of functional realization, parthood, and the determinable/determinate relation) are plausibly seen as implementing the strategy. As I also show, implementation of the powers-based strategy does not require endorsement of any particular accounts of either properties or causation---indeed, a categoricalist contingentist Humean can implement the strategy. The schematic location of the strategy in the space of available responses to the problem of mental (more generally, higher-level) causation, as well as the fact that the schema may be metaphysically instantiated, strongly suggests that the strategy is, appropriately generalized and instantiated, sufficient and moreover necessary for non-reductive realization. I go on to defend the sufficiency and necessity claims against a variety of objections, considering, along the way, how the powers-based subset strategy fares against competing accounts of purportedly non-reductive realization in terms of supervenience, token identity, and constitution.
The physicalist thesis that all entities are nothing over and above physical entities is often interpreted as appealing to a supervenience-based account of "nothing over and aboveness", where, schematically, the A-entities are nothing over and above the B-entities if the A-entities supervene on the B-entities. The main approaches to filling in this schema correspond to different ways of characterizing the modal strength, the supervenience base, or the supervenience connection at issue. I consider each approach in turn, and argue that the resulting formulation of physicalism is compatible with physicalism's best traditional rival: a naturalist emergentism. Others have argued that supervenience-based formulations of physicalism fail. My aim here, besides addressing the full spectrum of supervenience-based approaches, is to show how certain philosophical and scientific theses concerning naturalism, properties, and laws give us new reasons to think that supervenience-based formulations of physicalism are untenable.
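For reference, the schema at issue, in one common rendering (with modal strength, supervenience base, and supervenience connection left as the parameters the paper varies; the wording below is a standard gloss, not necessarily the paper's own):
\[
\text{(Supervenience)}\qquad A\text{-entities supervene on } B\text{-entities} \;\iff\; \text{there are no two possible worlds (or individuals) that differ in } A\text{-respects without differing in } B\text{-respects}.
\]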
Note: this is the first published presentation and defense of the 'proper subset strategy' for making sense of non-reductive physicalism or the associated notion of realization; this is sometimes, inaccurately, called "Shoemaker's subset strategy"; if people could either call it the 'subset strategy' or better yet, add my name to the mix I would appreciate it. Horgan claims that physicalism requires "superdupervenience" -- supervenience plus robust ontological explanation of the supervenient in terms of the base properties. I argue that Horgan's account fails to rule out physically unacceptable emergence. I rather suggest that this and other unacceptable possibilities may be ruled out by requiring that each individual causal power in the set associated with a given supervenient property be numerically identical with a causal power in the set associated with its base property. I go on to show that a wide variety of physicalist accounts, both reductive and non-reductive, are implicitly or explicitly designed to meet this condition, and so are more similar than they seem. In particular, non-reductive physicalist accounts typically appeal to a relation plausibly ensuring that the powers of a higher-level property are a proper subset of those of its physical base property.
How should physical entities be characterized? Physicalists, who have most to do with the notion, usually characterize the physical by reference to two components: (1) the physical entities are the entities treated by fundamental physics, with the proviso that (2) physical entities are not fundamentally mental (that is, they do not individually possess or bestow mentality). Here I explore the extent to which the appeals to fundamental physics and to the NFM ("no fundamental mentality") constraint are appropriate for characterizing the physical, especially for purposes of formulating physicalism. Ultimately, I motivate and defend a version of an account incorporating both components: the physics-based NFM account, on which an entity existing at a world w is physical iff (i) it is treated, approximately accurately, by current or future (in the limit of inquiry, ideal) versions of fundamental physics at w, and (ii) it is not fundamentally mental (that is, does not individually either possess or bestow mentality).
In Calosi and Wilson (Phil Studies 2019/2018), we argue that on many interpretations of quantum mechanics (QM), there is quantum mechanical indeterminacy (QMI), and that a determinable-based account of metaphysical indeterminacy (MI), as per Wilson 2013 and 2016, properly accommodates the full range of cases of QMI. Here we argue that this approach is superior to other treatments of QMI on offer, both realistic and deflationary, in providing the basis for an intelligible explanation of the interference patterns in the double-slit experiment. We start with a brief overview of the motivations for QMI and for a determinable-based account of MI (§1). We then apply a developed 'glutty' implementation of determinable-based QMI to the superposition-based QMI present in the double-slit experiment, and positively compare the associated explanation of double-slit interference with that available on a metaphysical supervaluationist account of QMI (§2). We then present and respond to objections, due to Glick (2017) and Torza (2017), either to QMI (§3) or to our specific account of QMI (§4); in these sections we also positively compare our treatment of double-slit interference to that available on Glick's deflationary treatment of QMI. We conclude with some dialectical observations (§5).
Hume's Dictum (HD) says, roughly and typically, that there are no metaphysically necessary connections between distinct, intrinsically typed, entities. HD plays an influential role in metaphysical debate, both in constructing theories and in assessing them. One should ask of such an influential thesis: why believe it? Proponents do not accept Hume's arguments for his dictum, nor do they provide their own; however, some have suggested either that HD is analytic or that it is synthetic a priori (that is: motivated by intuitions we have no good reason to question). Here I explore whether belief in HD is directly justified on either ground. I motivate and present more formal characterizations of HD; I show that there are good prima facie cases to be made for HD's being analytic and for its being synthetic a priori; I argue that each of the prima facie cases fails, some things considered. I close by offering two suggestions for how belief in HD might be indirectly justified on argumentative grounds.
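One common way of rendering HD more formally (a sketch only; the paper develops its own refinements, e.g. concerning the restriction to intrinsically typed entities):
\[
\text{(HD)}\qquad \forall x\,\forall y\,\big[\,x \text{ and } y \text{ are wholly distinct} \;\rightarrow\; \Diamond(x \text{ exists} \wedge y \text{ does not exist}) \;\wedge\; \Diamond(y \text{ exists} \wedge x \text{ does not exist})\,\big].
\]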
How can mental properties bring about physical effects, as they seem to do, given that the physical realizers of the mental goings-on are already sufficient to cause these effects? This question gives rise to the problem of mental causation (MC) and its associated threats of causal overdetermination, mental causal exclusion, and mental causal irrelevance. Some (e.g., Cynthia and Graham Macdonald, and Stephen Yablo) have suggested that understanding mental-physical realization in terms of the determinable/determinate relation (henceforth, 'determination') provides the key to solving the problem of MC: if mental properties are determinables of their physical realizers, then (since determinables and determinates are distinct, yet don't causally compete) all three threats may be avoided. Not everyone agrees that determination can do this good work, however. Some (e.g., Douglas Ehring, Eric Funkhouser, and Sven Walter) object that mental-physical realization can't be determination, since such realization lacks one or other characteristic feature of determination. I argue that on a proper understanding of the features of determination key to solving the problem of MC these arguments can be resisted.
Grounding, understood as a primitive posit operative in contexts where metaphysical dependence is at issue, is not able on its own to do any substantive work in characterizing or illuminating metaphysical dependence---or so I argue in 'No Work for a Theory of Grounding' (Inquiry, 2014). Such illumination rather requires appeal to specific metaphysical relations---type or token identity, functional realization, the determinable-determinate relation, the mereological part-whole relation, and so on---of the sort typically at issue in these contexts. In that case, why posit 'big-G' Grounding in addition to the 'small-g' grounding relations already in the metaphysician's toolkit? The best reasons for doing so stem from the Unity argument, according to which the further posit of Grounding is motivated as an apt unifier of the specific relations, and the Priority argument, according to which Grounding is needed in order to fix the direction of priority of the specific relations. I previously considered versions of these arguments, and argued that they did not succeed; in two papers, however, Jonathan Schaffer aims to develop a better version of the Unity argument, and offers certain objections to my reasons for rejecting the Priority argument. Here I consider these new arguments for Grounding.
I problematize Grounding-based formulations of physicalism. More specifically, I argue, first, that motivations for adopting a Grounding-based formulation of physicalism are unsound; second, that a Grounding-based formulation lacks illuminating content, and that attempts to imbue Grounding with content by taking it to be a strict partial order are unuseful and problematic; third, that conceptions of Grounding as constitutively connected to metaphysical explanation conflate metaphysics and epistemology, are ultimately either circular or self-undermining, and controversially assume that physical dependence is incompatible with explanatory gaps; fourth, that in order to appropriately distinguish physicalism from strong emergentism, a Grounding-based formulation must introduce one and likely two primitives in addition to Grounding; and fifth, that understanding physical dependence in terms of Grounding gives rise to 'spandrel' questions, including, e.g., "What Grounds Grounding?", which arise only due to the overly abstract nature of Grounding.
Here we challenge the orthodoxy according to which abduction is an a posteriori mode of inference. We start by providing a case study illustrating how abduction can justify a philosophical claim not justifiable by empirical evidence alone. While many grant abduction's epistemic value, nearly all assume that abductive justification is a posteriori, on grounds that our belief in abduction's epistemic value depends on empirical evidence about how the world contingently is. Contra this assumption, we argue, first, that our belief in abduction's epistemic value is not and could not be justified a posteriori, and second, that attention to the roles experience plays in abductive justification supports taking abduction to be an a priori mode of inference. We close by highlighting how our strategy for establishing the a priority of abduction positively contrasts with strategies in BonJour (1998), Swinburne (2001), and Peacocke (2004) aiming to establish the a priority of certain ampliative modes of inference or abductive principles.
Some claim that Non-reductive Physicalism is an unstable position, on grounds that NRP either collapses into reductive physicalism, or expands into emergentism of a robust or 'strong' variety. I argue that this claim is unfounded, by attention to the notion of a degree of freedom—roughly, an independent parameter needed to characterize an entity as being in a state functionally relevant to its law-governed properties and behavior. I start by distinguishing three relations that may hold between the degrees of freedom needed to characterize certain special science entities, and those needed to characterize their composing physical entities; these correspond to what I call 'reductions', 'restrictions', and 'eliminations' in degrees of freedom. I then argue that eliminations in degrees of freedom, in particular—when strictly fewer degrees of freedom are required to characterize certain special science entities than are required to characterize their composing physical entities—provide a basis for making sense of how certain special science entities can be both physically acceptable and ontologically irreducible to physical entities.
Here I compare two accounts of metaphysical indeterminacy (MI): first, the 'meta-level' approach described by Elizabeth Barnes and Ross Cameron in the companion to this paper, on which every state of affairs (SOA) is itself precise/determinate, and MI is a matter of its being indeterminate which determinate SOA obtains; second, my preferred 'object-level' determinable-based approach, on which MI is a matter of its being determinate---or just plain true---that an indeterminate SOA obtains, where an indeterminate SOA is one whose constitutive object has a determinable property, but no unique determinate of that determinable. In S1, I first note an important difference between our accounts, concerning whether MI is taken to induce propositional indeterminacy; in S2, I highlight and defend certain advantages of my account; in S3, I address certain of Barnes and Cameron's objections to my account.
What makes a biological entity an individual? Jack Wilson shows that past philosophers have failed to explicate the conditions an entity must satisfy to be a living individual. He explores the reason for this failure and explains why we should limit ourselves to examples involving real organisms rather than thought experiments. This book explores and resolves paradoxes that arise when one applies past notions of individuality to biological examples beyond the conventional range and presents an analysis of identity and persistence. The book's main purpose is to bring together two lines of research, theoretical biology and metaphysics, which have dealt with the same subject in isolation from one another. Wilson explains an alternative theory of biological individuality which solves problems that cannot be addressed by either field alone. He presents a more fine-grained vocabulary of individuation based on diverse kinds of living things, allowing him to clarify previously muddled disputes about individuality in biology.
Democracy establishes relationships of political equality, ones in which citizens equally share authority over what they do together and respect one another as equals. But in today's divided public square, democracy is challenged by political thinkers who disagree about how democratic institutions should be organized, and by antidemocratic politicians who exploit uncertainties about what democracy requires and why it matters. Democratic Equality mounts a bold and persuasive defense of democracy as a way of making collective decisions, showing how equality of authority is essential to relating equally as citizens. James Lindley Wilson explains why the US Senate and Electoral College are urgently in need of reform, why proportional representation is not a universal requirement of democracy, how to identify racial vote dilution and gerrymandering in electoral districting, how to respond to threats to democracy posed by wealth inequality, and how judicial review could be more compatible with the democratic ideal. What emerges is an emphatic call to action to reinvigorate our ailing democracies, and a road map for widespread institutional reform. Democratic Equality highlights the importance of diverse forms of authority in democratic deliberation and electoral and representative processes—and demonstrates how that authority rests equally with each citizen in a democracy.
Newtonian forces are pushes and pulls, possessing magnitude and direction, that are exerted (in the first instance) by objects, and which cause (in particular) motions. I defend Newtonian forces against the four best reasons for denying or doubting their existence. A running theme in my defense of forces will be the suggestion that Newtonian Mechanics is a special science, and as such has certain prima facie ontological rights and privileges that may be maintained against various challenges.
Some claim that the notion of strong emergence as involving ontological or causal novelty makes no sense, on grounds that any purportedly strongly emergent features or associated powers 'collapse', one way or another, into the lower-level base features upon which they depend. Here we argue that there are several independently motivated and defensible means of preventing the collapse of strongly emergent features or powers into their lower-level bases, as directed against a conception of strongly emergent features as having fundamentally novel powers. After introducing the project (Section 1), we motivate and present the powers-based account (Section 2); we then canvass the two main versions of the collapse objection, show how these apply to the powers-based account, and problematize certain strategies of response (Section 3); we then present and defend four better strategies of response (Section 4).
Many contemporary philosophers accept Hume's Dictum, according to which there are no metaphysically necessary connections between distinct, intrinsically typed entities. Tacit in Lewis's work is a potential motivation for HD, according to which one should accept HD as presupposed by the best account of the range of metaphysical possibilities---namely, a combinatorial account, applied to spatiotemporal fundamentalia. Here I elucidate and assess this Ludovician motivation for HD. After refining HD and surveying its key, recurrent role in Lewis's work, I present Lewis's appeal to HD as providing a broadly axiomatic generating basis for the space of metaphysical modality, and canvass the prima facie advantages of the resulting combinatorial principle---HD---as being principled, extensionally adequate and modally reductive. Most criticisms of Lewis's combinatorialism have targeted seeming ways in which the theory overgenerates the desired space; I rather argue that HD seriously undergenerates the desired space in three different ways. For each way I argue that available means of overcoming the undergeneration either fail to close the gap, undermine the claim that HD is a principled generator of metaphysical modal space, undermine the reductive status of Lewis's combinatorialism, or call into question the truth of HD.
Horgan (1993) proposed that "superdupervenience" - supervenience preserving physicalistic acceptability - is a matter of robust explanation. I argued against him (1999) that (as nearly all physicalist and emergentist accounts reflect) superdupervenience is a matter of the Condition on Causal Powers (CCP): every causal power bestowed by the supervenient property is identical with a causal power bestowed by its base property. Here I show that CCP is, as it stands, unsatisfactory, for on the usual understandings of causal power bestowal, it is trivially satisfied or falsified. I offer a revision of CCP which incorporates the evident fact that causal powers are grounded in fundamental forces.
Research involving human subjects is much more stringently regulated than many other nonresearch activities that appear to be at least as risky. A number of prominent figures now argue that research is overregulated. We argue that the reasons typically offered to justify the present system of research regulation fail to show that research should be subject to more stringent regulation than other equally risky activities. However, there are three often overlooked reasons for thinking that research should be treated as a special case. First, research typically involves the imposition of risk on people who do not benefit from this risk imposition. Second, research depends on public trust. Third, the complexity of the moral decision making required favors ethics committees as a regulative solution for research.
Every paper in this collection is worth reading, for one reason or another. Still, due to certain problematic metametaphysical presuppositions most of these discussions miss the deeper mark, on the pessimist as well as the optimist side. My reasons for thinking this come from considering how best to answer three metametaphysical questions. First, why be pessimistic about metaphysics – why be Carnapian in a post-positivist age? There is, I'll suggest, a post-positivist strategy for reviving Carnapian pessimism, but it is almost entirely neglected here; and the motivations that pessimists offer instead are not compelling. Second, why think that the best way to approach metametaphysical questions is by attention to features of language, and in particular to quantifier semantics, in ordinary or ontological language(s)? Here again we are offered little motivation for this supposition, which, notwithstanding its acceptance by nearly all contributors, faces clear difficulties. Third, granting that quantification is somehow bound up with first-order questions about what exists, what is the nature of this connection, and what are the associated implications for metametaphysics? Here I find the accounts of the connection on offer implausible, especially as compared to an alternative that makes better sense of metaphysical practice and disagreement. The moral, following consideration of these questions, is that real progress in metametaphysics is likely to occur less by attention to semantic issues pertaining to representation, translation and quantification, and more by attention to non-semantic issues pertaining to epistemology and metaphysical determinacy.
This article concerns the nature and limits of individuals' rights to privacy over information that they have made public. For some, even suggesting that an individual can have a right to privacy over such information may seem paradoxical. First, one has no right to privacy over information that was never private to begin with. Second, insofar as one makes once-private information public – whether intentionally or unintentionally – one waives one's right to privacy to that information. In this article, however, we suggest the moral situation is more complicated than this. Rather, we argue that there is a class of public information – namely, once-private information that individuals have made public unintentionally – which remains within the scope of an individual's right to privacy, even when it has passed into the public domain. Significantly, this class includes any information rights-holders were unaware could be inferred from information they have made public and which they would not otherwise have wanted to be in the public domain. As we show, as well as clarifying several everyday dilemmas with regard to individuals' privacy rights, this finding elucidates a number of problems in the ethics of Big Data.
In what does philosophical progress consist? 'Vertical' progress corresponds to development within a specific paradigm/framework for theorizing (of the sort associated, revolutions aside, with science); 'horizontal' progress corresponds to the identification and cultivation of diverse paradigms (of the sort associated, conservatism aside, with art and pure mathematics). Philosophical progress seems to involve both horizontal and vertical dimensions, in a way that is somewhat puzzling: philosophers work in a number of competing frameworks (like artists or mathematicians), while typically maintaining that only one of these is correct (like scientists). I diagnose this situation as reflecting that we are presently quite far from the end of inquiry into philosophical methodology. The good news is that we appear to be making advances on this score. The bad news is that failure to recognize or make explicit that our standards are in flux often leads to dogmatism, as I illustrate by attention to three assumptions presently operative in metaphysical and metametaphysical contexts. I close by identifying a tension between vertical and horizontal progress in philosophy, and suggesting an updated version of Carnap's principle of tolerance for new philosophical forms.
Do component forces exist in conjoined circumstances? Cartwright (1980) says no; Creary (1981) says yes. I'm inclined towards Cartwright's side in this matter, but find several problems with her argumentation. My primary aim here is to present a better, distinctly causal, argument against component forces: very roughly, I argue that the joint posit of component and resultant forces in conjoined circumstances gives rise to a threat of causal overdetermination, avoidance of which best proceeds via eliminativism about component forces. A secondary aim is to show that rejecting component forces does not require, pace Cartwright, rejecting certain attractive theses about what laws of nature express and the role such laws play in scientific explanations.
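The composition at issue is just vector addition: in conjoined circumstances the resultant force on a body is standardly taken to be the vector sum of the component forces contributed by each circumstance (illustrative notation mine):
\[
\vec{F}_{\mathrm{res}} \;=\; \sum_{i=1}^{n} \vec{F}_i, \qquad m\vec{a} \;=\; \vec{F}_{\mathrm{res}} .
\]
The overdetermination worry then arises if both the components \(\vec{F}_i\) and the resultant \(\vec{F}_{\mathrm{res}}\) are treated as real causes of one and the same acceleration.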
The nonlinearity of a composite system, whereby certain of its features (including powers and behaviors) cannot be seen as linear or other broadly additive combinations of features of the system's composing entities, has been frequently seen as a mark of metaphysical emergence, coupling the dependence of a composite system on an underlying system of composing entities with the composite system's ontological autonomy from its underlying system. But why think that nonlinearity is a mark of emergence, and moreover, of metaphysical rather than merely epistemological emergence? Are there diverse ways in which nonlinearity might enter into an account of properly metaphysical emergence? And what are the prospects for there actually being phenomena that are metaphysically emergent in any available sense? Here I explore the mutual bearing of nonlinearity and metaphysical emergence, with an eye towards answering these and related questions.
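A schematic gloss (notation mine): where \(s\) is a composite system with composing entities \(c_1, \ldots, c_n\), a feature \(F\) of \(s\) is linear (broadly additive) when it decomposes into contributions from the parts, and nonlinear when no such decomposition is available:
\[
F(s) \;=\; \sum_{i=1}^{n} f(c_i) \quad \text{(linear/additive)}, \qquad\qquad F(s) \;\neq\; \sum_{i=1}^{n} f(c_i) \;\text{ for any such } f \quad \text{(nonlinear)}.
\]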
Health systems that provide for universal patient access through a scheme of prepayments—whether through taxes, social insurance, or a combination of the two—need to make decisions on the scope of coverage that they secure. Such decisions are inherently controversial, implying, as they do, that some patients will receive less than comprehensive health care, or less than complete protection from the financial consequences of ill-health, even when there is a clinically effective therapy to which they might have access. Controversial decisions of this sort call for a public justification for covering or not covering a given treatment. Priority-setting agencies play a key role in providing such a justification. A recent...
Introduction: Philosophy and education. 'Philosophy of education' is a name for nothing clear; but despite this there seem already to be two bodies of opinion ...
Relativized Metaphysical Modality (RMM: Murray and Wilson, 'Relativized metaphysical modality', Oxford Studies in Metaphysics, 2012; Murray, Perspectives on Modal Metaphysics, 2017) exploits 'two-dimensionalist' resources to metaphysical, rather than epistemological, ends: the second dimension offers perspective-dependence without contingency, diverting attacks on 'Classical' analyses of modals (in effect, analyses validating S5 and the Barcan Formulae). Here, we extend the RMM program in two directions. First, we harvest resources for RMM from Lewis's 1980 'Context--Index' (CI) framework: (a) the ban in CI on binding into context-arguments (akin to Kaplan's 'monstrosity' ban) projects a bright line between perspective-dependence and contingency; and (b) CI-postulated connections among meaning, content, truth, argument-structure, context, and modality collectively generate a 'Generalized Humphrey Problem' for any non-Classical analysis (examples covered include appeals to accessibility, contingent domains, and counterpart relations). Second, we sharpen the tools of RMM-based metaphysical analysis, and extend their domain of coverage across familiar anomalies for Classical modals: we revisit earlier RMM-based bulwarks for S5 (against 'Chisholm's Paradox' for moderate flexibility of essence, and nomological necessitarianism); and we now similarly shore up the Barcan Formulae (against the apparent contingency of existence and nonexistence).
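For readers wanting the modal-logical background, the principles a 'Classical' analysis validates are, in their standard statements (not specific to this paper):
\[
\text{(S5, characteristic axiom)}\qquad \Diamond\varphi \rightarrow \Box\Diamond\varphi
\]
\[
\text{(Barcan Formula)}\qquad \Diamond\exists x\,\varphi \rightarrow \exists x\,\Diamond\varphi \qquad\qquad \text{(Converse Barcan Formula)}\qquad \exists x\,\Diamond\varphi \rightarrow \Diamond\exists x\,\varphi
\]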
Public health policies which involve active intervention to improve the health of the population are often criticized as paternalistic. This article argues that it is a mistake to frame our discussions of public health policies in terms of paternalism. First, it is deeply problematic to pick out which policies should count as paternalistic; at best, we can talk about paternalistic justifications for policies. Second, two of the elements that make paternalism problematic at an individual level—interference with liberty and lack of individual consent—are endemic to public policy contexts in general and so cannot be used to support the claim that paternalism in particular is wrong. Instead of debating whether a given policy is paternalistic, we should ask whether the infringements of liberty it contains are justifiable, without placing any weight on whether or not those infringements of liberty are paternalistic. Once we do so, it becomes apparent that a wide range of interventionist public health policies are justifiable.
Free will, if such there be, involves free choosing: the ability to mentally choose an outcome, where the outcome is 'free' in being, in some substantive sense, up to the agent of the choice. As such, it is clear that the questions of how to understand free will and mental causation are connected, for events of seemingly free choosing are mental events that appear to be efficacious vis-a-vis other mental events as well as physical events. Nonetheless, the free will and mental causation debates have proceeded largely independently of each other. Here we aim to make progress in determining how the free will and mental causation debates bear on one another. We first argue that the problems of free will and of mental causation can be seen as special cases of a more general problem, concerning whether and how mental events of a given type may be efficacious, qua the types of event they are---qualitative, intentional, freely deliberative---given their apparent causal irrelevancy for effects of the type in question; here we generalize what Horgan 1989 identifies as "the problem of mental quausation" (S1). We then build on this result to identify fruitful parallels between hard determinism and eliminative physicalism (S2) and soft determinism and non-reductive physicalism (S3).
This paper aims to shed some light on the difficulties we face in constructing a generally acceptable normative framework for thinking about public health. It argues that there are three factors that combine to make theorising about public health difficult, and which when taken together defeat simplistic top-down and bottom-up approaches to the design of public health policies. The first factor is the problem of complex systems, namely that the distribution of health both affects and is affected by the distribution of other goods. The second is the difficulty of defining the goals of public health: we still need to get clear about what we should mean by health in this context, and what the goals of public health should be. The third is that we stand in need of an account of how important health is relative to the importance of other goods that a just society should be trying to secure for its citizens. The paper argues that these problems should lead us to abandon the search for a 'one-size fits all' normative framework for thinking about public health. Rather, different approaches will be appropriate at different levels of abstraction.
Health inequities. James Wilson - 2011 - In Angus Dawson (ed.), Public Health Ethics: Key Concepts and Issues in Policy and Practice. Cambridge: Cambridge University Press. pp. 211-230.
The infant mortality rate in Liberia is 50 times higher than it is in Sweden, whilst a child born in Japan has a life expectancy at birth of more than double that of one born in Zambia. And within countries, we see differences which are nearly as great. For example, if you were in the USA and travelled the short journey from the poorer parts of Washington to Montgomery County Maryland, you would find that 'for each mile travelled life expectancy rises about a year and a half. There is a twenty-year gap between poor blacks at one end of the journey and rich whites at the other' (Marmot, 2004, p.2). There are two types of questions which it is important to ask about inequalities in health such as these. The first are social scientific questions about the extent of inequalities in health and the factors which are causally responsible for these inequalities. Examples of social scientific questions to ask might be: how do infant mortality rates in the UK differ according to social class? What is the difference in life expectancy between Japanese who emigrate to the US and those who remain in Japan? Why do civil servants in higher ranked jobs tend to live longer than civil servants in lower ranked jobs? The second type are normative questions about the reasons we have to care about inequalities in health. Important normative questions to answer are: which inequalities in health should we care about (all inequalities or merely some of them)? When is an inequality in health unjust? How should we weigh our concern for equality in health against other factors such as maximising the...
In his preface Mr Wilson writes 'I feel that a great many adults … would do better to spend less time in simply accepting the concepts of others uncritically, and more time in learning how to analyse concepts in general'. Mr Wilson starts by describing the techniques of conceptual analysis. He then gives examples of them in action by composing answers to specific questions and by criticism of quoted passages of argument. Chapter 3 sums up the importance of this kind of mental activity. Chapter 4 presents selections for the reader to analyse, followed by questions of university entrance/scholarship type. This is a book to be worked through, in a sense a text-book.
In Meaning and Necessity (1947/1950), Carnap advances an intensional semantic framework on which modal claims are true in virtue of semantical rules alone, and so are a priori. In 'Empiricism, Semantics, and Ontology' (1950), Carnap advances an epistemic-ontological framework on which metaphysical claims are either trivial or meaningless, since lacking any means of substantive confirmation. Carnap carried out these projects two decades before Kripke influentially argued, in Naming and Necessity (1972/1980), that some modal claims are true a posteriori. How should a neo-Carnapian respond to Kripke's results? Some (notably, Chalmers and Jackson, in their 2001) have suggested that an extension of intensional semantics along lines of "epistemic two-dimensionalism" can accommodate Kripke's results while largely preserving commitment to the semantics-based a priority of modal claims. Here we consider how best to implement this suggestion, and how the resulting semantics fits with Carnap's second project. We find that the most promising (and most Carnapian!) post-Kripke version of Carnap's semantics---abductive two-dimensionalism---presupposes an epistemology which undermines Carnap's metaphysical anti-realism.
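A minimal sketch of the two-dimensionalist apparatus alluded to (the standard picture; the paper's 'abductive two-dimensionalism' concerns how the relevant intensions are known, not this basic structure):
\[
\text{primary intension of } e:\;\; w \mapsto \text{extension of } e \text{ at } w \text{ considered as actual}
\]
\[
\text{secondary intension of } e:\;\; w \mapsto \text{extension of } e \text{ at } w \text{ considered as counterfactual}
\]
On this picture, a Kripkean necessary a posteriori claim such as 'water = H2O' has a necessary secondary intension but a contingent primary intension, which is how the framework preserves a semantics-based a priori dimension for modal claims.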
It is commonly supposed that metaphysical modal claims are to be evaluated with respect to a single domain of possible worlds: a claim is metaphysically necessary just in case it is true in every possible world, and metaphysically possible just in case it is true in some possible world. We argue that the standard understanding is incorrect; rather, whether a given claim is metaphysically necessary or possible is relative to which world is indicatively actual. We motivate our view by attention to discussions in Salmon 1989 and Fine 2005, in which various data are taken to support rejecting the transitivity of accessibility and modal monism; we argue that relativized metaphysical modality can accommodate these data compatibly with both standard modal logic and modal monism. Noting an analogy with two-dimensional semantics, we argue that metaphysical modality has a complex structure, reflecting what is counterfactually possible, relative to each indicatively actual world. In arguing for the need for relativization, we are broadly on the same side as Crossley and Humberstone and Davies and Humberstone; our contribution here is, first, to offer distinctively metaphysical reasons for relativization, and second, to show that relativization can be incorporated in ways minimally departing from standard modal logic.
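Schematically, the relativization can be put as follows (notation mine, tracking the abstract's gloss):
\[
\Box_{v}\,\varphi \;\iff\; \varphi \text{ holds at every world that is counterfactually possible relative to the indicatively actual world } v,
\]
\[
\Diamond_{v}\,\varphi \;\iff\; \varphi \text{ holds at some such world},
\]
so that whether \(\varphi\) counts as metaphysically necessary can vary with which world is indicatively actual.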
Three main claims are made in this paper. First, it is argued that Onora O'Neill has uncovered a serious problem in the way medical ethicists have thought about both respect for autonomy and informed consent. Medical ethicists have tended to think that autonomous choices are intrinsically worthy of respect, and that informed consent procedures are the best way to respect the autonomous choices of individuals. However, O'Neill convincingly argues that we should abandon both these thoughts. Second, it is argued that O'Neill's proposed solution to this problem is inadequate. O'Neill's approach requires that a more modest view of the purpose of informed consent procedures be adopted. In her view, the purpose of informed consent procedures is simply to avoid deception and coercion, and the ethical justification for informed consent derives from a different ethical principle, which she calls principled autonomy. It is argued that contrary to what O'Neill claims, the wrongness of coercion cannot be derived from principled autonomy, and so its credentials as a justification for informed consent procedures are weak. Third, it is argued that we do better to rethink autonomy and informed consent in terms of respecting persons as ends in themselves, and a characteristically liberal commitment to allowing individuals to make certain categories of decisions for themselves. Respect for autonomy is in trouble. In recent work in this journal and elsewhere, O'Neill has forcefully argued that respect for autonomy, as it has come to be used in medical ethics, is philosophically indefensible. If her arguments are sound, then, contrary to the standard view, respect for autonomy cannot be the source of the ethical requirement to seek informed consent before treating a patient or enrolling a participant in a trial. So her critique goes to the heart of contemporary medical ethics: if O'Neill is right, medical ethicists have systematically misunderstood two of the most fundamental concepts they deal with—respect for autonomy and informed consent. This paper has four sections. Section 1 distinguishes between three different ways of talking about respect for autonomy, and looks in more detail at the one that has come to be central to bioethical writing on informed consent—namely, the idea that we should respect autonomous choices. Section 2 argues, following O'Neill, that it is implausible to think that the purpose of informed consent requirements is to respect autonomous choices. Section 3 argues that O'Neill's proposed reworking of autonomy and informed consent is inadequate. O'Neill's approach requires us to adopt a more modest view of the purpose of informed consent procedures. In her view, the purpose of informed consent procedures is simply to avoid deception and coercion, and the ethical justification for informed consent derives from a different ethical principle, which she calls principled autonomy. I argue that contrary to what O'Neill claims, we cannot derive the wrongness of coercion from principled autonomy, and so its credentials as a justification for informed consent procedures are weak. Section 4 argues that we do better to rethink autonomy and informed consent in terms of respecting persons as ends in themselves, and a characteristically liberal commitment to allowing individuals to make certain categories of decisions for themselves.
Why believe Hume's Dictum, according to which there are, roughly speaking, no necessary connections between wholly distinct entities? Schaffer suggests that HD, at least as applied to causal or nomological connections, is motivated as required by the best account of counterfactuals---namely, a similarity-based possible worlds account, where the operative notion of similarity requires 'miracles'---more specifically, worlds where entities of the same type that actually exist enter into different laws. The main cited motivations for such an account of similarity are, first, that some salient contexts presuppose CF asymmetry, and second, that accounts of CFs failing to presuppose CF asymmetry are epistemologically problematic, such that under conditions of determinism, the variations in initial micro-conditions needed to implement a given counterfactual antecedent would result in so many changes to macro-states that evaluation of CFs would be rendered practically impossible. Against the first reason, I argue that no non-artificial contexts presuppose CF asymmetry; against the second, I observe that such micro-variation is compatible, in principle, with significant similarity as regards macroscopic states of affairs---enough, in particular, to allow CFs to be appropriately evaluated.
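For context, the similarity-based truth condition in the background is Lewis's standard one (stated schematically; not the paper's own formulation):
\[
A \;\Box\!\!\rightarrow\; C \text{ is true at } w \;\iff\; \text{either there is no accessible } A\text{-world, or some } (A \wedge C)\text{-world is closer to } w \text{ than any } (A \wedge \neg C)\text{-world}.
\]
The dispute then concerns whether the similarity ordering feeding this condition must rank 'miracle' worlds (same entities, different laws) as relevantly close, which is where HD is claimed to come in.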