Spatial cueing paradigms are popular tools for assessing human attention to emotional stimuli, but variants of these paradigms differ in participants' primary task. In one variant, participants indicate the location of the target, whereas in the other they indicate the shape of the target. In the present paper we test the idea that although these two variants produce seemingly comparable cue validity effects on response times, they rest on different underlying processes. Across four studies using both variants and manipulating the motivational relevance of cue content, diffusion model analyses revealed that cue validity effects in location tasks are primarily driven by response biases, whereas in identification tasks the same effect rests on delays due to attention to the cue. Based on this, we predict, and empirically support the prediction, that a symmetrical distribution of valid and invalid cues would reduce cue validity...
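For readers unfamiliar with diffusion model analyses of response times, the sketch below shows the standard two-boundary drift-diffusion model and where a "response bias" account versus an "attentional delay" account would locate a cue validity effect. The parameter names (v, a, z, T_er) and the mapping of bias to the starting point and of attentional delay to non-decision time are conventional modeling assumptions introduced here for illustration, not the authors' reported parameterization.

```latex
% Illustrative sketch only: a standard two-boundary drift-diffusion model of a speeded
% two-choice response, with conventional parameter names assumed for exposition.
\[
  dX(t) = v\,dt + s\,dW(t), \qquad X(0) = z, \qquad
  \text{respond when } X(t) \text{ first reaches } 0 \text{ or } a,
\]
\[
  \mathrm{RT} = T_{\mathrm{decision}} + T_{er}.
\]
% v: drift rate (quality of evidence); a: boundary separation (caution);
% z: starting point (response bias toward one boundary); T_{er}: non-decision time
% (stimulus encoding, attention, motor execution). A response-bias account of the cue
% validity effect would locate it mainly in z; an attention-to-the-cue account would
% locate it mainly in T_{er} (or in v).
```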
Humean accounts of natural lawhood have often been criticized as unable to account for the laws’ characteristic explanatory power in science. Loewer has replied that these criticisms fail to distinguish grounding explanations from scientific explanations. Lange has replied by arguing that grounding explanations and scientific explanations are linked by a transitivity principle, which can be used to argue that Humean accounts of natural law violate the prohibition on self-explanation. Lange’s argument has been sharply criticized by Hicks and van Elswyk, Marshall, and Miller. This paper shows how Lange’s argument can withstand these criticisms once the transitivity principle and the prohibition on self-explanation are properly refined. The transitivity principle should be refined to accommodate contrasts in the explanans and explanandum. The prohibition on self-explanation should be refined so that it precludes a given fact p from helping to explain why some other fact q helps to explain why p. In this way, the transitivity principle avoids having counterintuitive consequences in cases involving macrostates having multiple possible microrealizations. The transitivity principle is perfectly compatible with the irreducibility of macroexplanations to microexplanations and with the diversity of the relations that can underwrite scientific explanations.
In Lange 2004a, I argued that 'scientific essentialism' [Ellis 2001] cannot account for the characteristic relation between laws and counterfactuals without undergoing considerable ad hoc tinkering. In recent papers, Brian Ellis 2005 and Toby Handfield 2005 have defended essentialism against my charge. Here I argue that Ellis's and Handfield's replies fail. Even in ordinary counterfactual reasoning, the 'closest possible world' where the electron's electric charge is 5% greater may have less overlap with the actual world in its fundamental natural kinds than a 'more distant possible world' where the electron's charge is 5% greater. But more importantly, essentialism's flexibility in being able to accommodate virtually any relation between laws and counterfactuals is a symptom of essentialism's explanatory impotence as far as that relation is concerned.
Counterfactuals all the way down? Journal article by Jim Woodward (History and Philosophy of Science, University of Pittsburgh), Barry Loewer (Department of Philosophy, Rutgers University), John W. Carroll (Department of Philosophy and Religious Studies, North Carolina State University), and Marc Lange (Department of Philosophy, University of North Carolina at Chapel Hill). Metascience 20(1). DOI: 10.1007/s11016-010-9437-9.
In the last five years there have been a number of results about the computable content of the prime, saturated, or homogeneous models of a complete decidable theory T, in the spirit of Vaught's "Denumerable models of complete theories" combined with computability methods for degrees d ≤ 0′. First we recast older results by Goncharov, Peretyat'kin, and Millar in a more modern framework, which we then apply. Then we survey recent results by Lange, "The degree spectra of homogeneous models," which generalize the older results and which include positive results on when a certain homogeneous model of T has an isomorphic copy of a given Turing degree. We then survey Lange's "A characterization of the 0-basis homogeneous bounding degrees" for negative results about when such a homogeneous model does not have such copies, generalizing negative results by Goncharov, Peretyat'kin, and Millar. Finally, we explain recent results by Csima, Harizanov, Hirschfeldt, and Soare, "Bounding homogeneous models," about degrees d that are homogeneous bounding, and explain their relation to the PA degrees.
In Educations in Ethnic Violence, Matthew Lange explores the effects education has on ethnic violence. Lange contradicts the widely held belief that education promotes peace and tolerance. Rather, Lange finds that education commonly contributes to aggression, especially in environments with ethnic divisions, limited resources and ineffective political institutions. He describes four ways in which organized learning spurs ethnic conflicts. Socialization in school shapes students' identities and the norms governing intercommunal relations. Education can also increase students' frustration and aggression when their expectations are not met. Sometimes, the competitive atmosphere gives students an incentive to participate in violence. Finally, education provides students with superior abilities to mobilize violent ethnic movements. Lange employs a cross-national statistical analysis with case studies of Sri Lanka, Cyprus, the Palestinian territories, India, sub-Saharan Africa, Canada and Germany.
Certain scientific explanations of physical facts have recently been characterized as distinctively mathematical – that is, as mathematical in a different way from ordinary explanations that employ mathematics. This article identifies what it is that makes some scientific explanations distinctively mathematical and how such explanations work. These explanations are non-causal, but this does not mean that they fail to cite the explanandum’s causes, that they abstract away from detailed causal histories, or that they cite no natural laws. Rather, in these explanations, the facts doing the explaining are modally stronger than ordinary causal laws or are understood in the why-question’s context to be constitutive of the physical arrangement at issue. A distinctively mathematical explanation works by showing the explanandum to be more necessary than ordinary causal laws could render it. Distinctively mathematical explanations thus supply a kind of understanding that causal explanations cannot. 1 Introduction; 2 Some Distinctively Mathematical Scientific Explanations; 3 Are Distinctively Mathematical Explanations Set Apart by Their Failure to Cite Causes?; 4 Distinctively Mathematical Explanations Do Not Exploit Causal Powers; 5 How These Distinctively Mathematical Explanations Work; 6 Conclusion.
It is often presumed that the laws of nature have special significance for scientific reasoning. But the laws' distinctive roles have proven notoriously difficult to identify, leading some philosophers to question whether they hold such roles at all. This study offers original accounts of the roles that natural laws play in connection with counterfactual conditionals, inductive projections, and scientific explanations, and of what the laws must be in order for them to be capable of playing these roles. Particular attention is given to laws of the special sciences, levels of scientific explanation, natural kinds, ceteris-paribus clauses, and physically necessary non-laws.
Unlike explanation in science, explanation in mathematics has received relatively scant attention from philosophers. Whereas there are canonical examples of scientific explanations, there are few examples that have become widely accepted as exhibiting the distinction between mathematical proofs that explain why some mathematical theorem holds and proofs that merely prove that the theorem holds without revealing the reason why it holds. This essay offers some examples of proofs that mathematicians have considered explanatory, and it argues that these examples suggest a particular account of explanation in mathematics. The essay compares its account to Steiner's and Kitcher's. Among the topics that arise are proofs that exploit symmetries, mathematical coincidences, brute-force proofs, simplicity in mathematics, merely clever proofs, and proofs that unify what other proofs treat as separate cases.
It has often been argued that Humean accounts of natural law cannot account for the role played by laws in scientific explanations. Loewer (Philosophical Studies 2012) has offered a new reply to this argument on behalf of Humean accounts—a reply that distinguishes between grounding (which Loewer portrays as underwriting a kind of metaphysical explanation) and scientific explanation. I will argue that Loewer’s reply fails because it cannot accommodate the relation between metaphysical and scientific explanation. This relation also resolves a puzzle about scientific explanation that Hempel and Oppenheim (Philosophy of Science 15:135–75, 1948) encountered.
Although all mathematical truths are necessary, mathematicians take certain combinations of mathematical truths to be ‘coincidental’, ‘accidental’, or ‘fortuitous’. The notion of a ‘mathematical coincidence’ has so far failed to receive sufficient attention from philosophers. I argue that a mathematical coincidence is not merely an unforeseen or surprising mathematical result, and that being a misleading combination of mathematical facts is neither necessary nor sufficient for qualifying as a mathematical coincidence. I argue that although the components of a mathematical coincidence may possess a common explainer, they have no common explanation; it is two mathematical facts' having a unified explanation that makes their truth non-coincidental. I suggest that any motivation we may have for thinking that there are mathematical coincidences should also motivate us to think that there are mathematical explanations, since the notion of a mathematical coincidence can be understood only in terms of the notion of a mathematical explanation. I also argue that the notion of a mathematical coincidence plays an important role in scientific explanation. When two phenomenological laws of nature are similar, despite concerning physically distinct processes, it may be that any correct scientific explanation of their similarity proceeds by revealing their similarity to be no mathematical coincidence.
Several foundational documents of bioethics mention the special obligation researchers have to vulnerable research participants. However, the treatment of vulnerability offered by these documents often relies on enumeration of vulnerable groups rather than an analysis of the features that make such groups vulnerable. Recent attempts in the scholarly literature to lend philosophical weight to the concept of vulnerability are offered by Luna and Hurst. Luna suggests that vulnerability is irreducibly contextual and that Institutional Review Boards (Research Ethics Committees) can only identify vulnerable participants by carefully examining the details of the proposed research. Hurst, in contrast, defines the vulnerable as those especially at risk of incurring the wrongs to which all research participants are exposed. We offer a more substantive conception of vulnerability than Luna's but one that gives rise to a different rubric of responsibilities from Hurst's. While we understand vulnerability to be an ontological condition of human existence, in the context of research ethics we take the vulnerable to be research subjects who are especially prone to harm or exploitation. Our analysis rests on developing a typology of sources of vulnerability and showing how distinct sources generate distinct obligations on the part of the researcher. Our account emphasizes that the researcher's first obligation is not to make the research participant even more vulnerable than they already are. To illustrate our framework, we consider two cases: that of a vulnerable population involved in international research and that of a domestic population of people with diminished capacity.
A conservation law in physics can be either a constraint on the kinds of interaction there could be or a coincidence of the kinds of interactions there actually are. This is an important, unjustly neglected distinction. Only if a conservation law constrains the possible kinds of interaction can a derivation from it constitute a scientific explanation despite failing to describe the causal/mechanical details behind the result derived. This conception of the relation between “bottom-up” scientific explanations and one kind of “top-down” scientific explanation is motivated by several examples from classical and modern physics.
Many philosophers have believed that the laws of nature differ from the accidental truths in their invariance under counterfactual perturbations. Roughly speaking, the laws would still have held had q been the case, for any q that is consistent with the laws. (Trivially, no accident would still have held under every such counterfactual supposition.) The main problem with this slogan (even if it is true) is that it uses the laws themselves to delimit q's range. I present a means of distinguishing the laws (and their logical consequences) from the accidents, in terms of their range of invariance under counterfactual antecedents, that does not appeal to physical modalities (or any cognate notion) in delimiting the relevant range of counterfactual perturbations. I then argue that this approach explicates the sense in which the laws possess a kind of necessity.
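The slogan at issue, and the circularity worry about it, can be stated compactly. The notation below (L for the laws together with their logical consequences, > for the counterfactual conditional, Con for logical consistency) is introduced only to make explicit the point already stated in the abstract.

```latex
% A compact rendering of the slogan discussed above (notation assumed for exposition).
% L: the set of laws together with their logical consequences; q > p: "had q been the case,
% p would still have held"; Con(q, L): q is logically consistent with L.
\[
  \forall p \in L \;\; \forall q \;\bigl(\, \mathrm{Con}(q, L) \;\longrightarrow\; (q > p) \,\bigr)
\]
% The worry: the admissible antecedents q are picked out by consistency with L itself, so the
% laws are being used to delimit the very range of suppositions under which their invariance
% is supposed to distinguish them from accidents.
```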
Philosophers who regard some mathematical proofs as explaining why theorems hold, and others as merely proving that they do hold, disagree sharply about the explanatory value of proofs by mathematical induction. I offer an argument that aims to resolve this conflict of intuitions without making any controversial presuppositions about what mathematical explanations would be.
This paper argues that in at least some cases, one proof of a given theorem is deeper than another by virtue of supplying a deeper explanation of the theorem — that is, a deeper account of why the theorem holds. There are cases of scientific depth that also involve a common abstract structure explaining a similarity between two otherwise unrelated phenomena, making their similarity no coincidence and purchasing depth by answering why-questions that separate, dissimilar explanations of the two phenomena cannot correctly answer. The connections between explanation, depth, unification, power, and coincidence in mathematics and science are compared.
Among the niftiest arguments for scientific anti-realism is the ‘pessimistic induction’ (also sometimes called ‘the disastrous historical meta-induction’). Although various versions of this argument differ in their details (see, for example, Poincaré 1952: 160, Putnam 1978: 25, and Laudan 1981), the argument generally begins by recalling the many scientific theories that posit unobservable entities and that at one time or another were widely accepted. The anti-realist then argues that when these old theories were accepted, the evidence for them was quite persuasive – roughly as compelling as our current evidence is for our best scientific theories positing various unobservable entities. Nevertheless, the anti-realist argues, most of these old theories turned out to be incorrect in the unobservables they posited. Therefore, the anti-realist concludes that with regard to the theories we currently accept, we should believe that probably, most of them are likewise incorrect in the unobservable entities they posit. (This argument appeals to what our best current theories say about unobservables in order to show that the entities posited by some earlier theory are not real. So the argument takes the form of a reductio of the view that the apparent success of some scientific theory justifies our believing in its accuracy regarding unobservables.) Of course, this argument has been criticized on many grounds. Some have argued, for instance, that the scientific theories we currently accept are much better supported than were earlier scientific theories at the time they were accepted. In addition, some have argued that many scientific theories justly accepted in the past were in fact accurate.
Scientific essentialism aims to account for the natural laws' special capacity to support counterfactuals. I argue that scientific essentialism can do so only by resorting to devices that are just as ad hoc as those that essentialists accuse Humean regularity theories of employing. I conclude by offering an account of the laws' distinctive relation to counterfactuals that portrays laws as contingent but nevertheless distinct from accidents by virtue of possessing a genuine variety of necessity.
Why do forces compose according to the parallelogram of forces? This question has been controversial; it is one episode in a longstanding, fundamental dispute regarding which facts are not to be explained dynamically. If the parallelogram law is explained statically, then the laws of statics are separate from and “transcend” the laws of dynamics. Alternatively, if the parallelogram law is explained dynamically, then statical laws become mere corollaries to the dynamical laws. I shall attempt to trace the history of this controversy in order to identify what it would be for one or the other of these rival views to be correct. I shall argue that various familiar accounts of natural law not only make it difficult to see what the point of this dispute could have been, but also improperly foreclose some serious scientific options. I will sketch an alternative account of laws that helps us to understand what this dispute was all about.
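For orientation, the parallelogram law whose explanatory status is in dispute can be stated in its standard form; the formulation below is included only for reference, and nothing in the dispute turns on the particular notation.

```latex
% The parallelogram (vector-addition) law of force composition, stated for orientation.
% Two forces F_1 and F_2 applied at the same point, with angle \theta between their lines of
% action, compose to a single resultant:
\[
  \vec{F} = \vec{F}_1 + \vec{F}_2, \qquad
  \lVert \vec{F} \rVert = \sqrt{F_1^{\,2} + F_2^{\,2} + 2 F_1 F_2 \cos\theta},
\]
% i.e., the resultant is the diagonal of the parallelogram spanned by F_1 and F_2. The dispute
% concerns whether this law is to be explained statically or dynamically, not its content.
```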
Hempel and Giere contend that the existence of provisos poses grave difficulties for any regularity account of physical law. However, Hempel and Giere rely upon a mistaken conception of the way in which statements acquire their content. By correcting this mistake, I remove the problem Hempel and Giere identify but reveal a different problem that provisos pose for a regularity account — indeed, for any account of physical law according to which the state of affairs described by a law-statement presupposes a Humean regularity. These considerations suggest a normative analysis of law-statements. On this view, law-statements are not distinguished from accidental generalizations by the kind of Humean regularities they describe because a law-statement need not describe any Humean regularity. Rather, a law-statement says that in certain contexts, one ought to regard the assertion of a given type of claim, if made with justification, as a proper way to justify a claim of a certain other kind.
Sober 2011 argues that, contrary to Hume, some causal statements can be known a priori to be true: notably, some ‘would promote’ statements figuring in causal models of natural selection. We find Sober's argument unconvincing. We regard the Humean thesis as denying that causal explanations contain any a priori knowable statements specifying certain features of events to be causally relevant. We argue that not every ‘would promote’ statement is genuinely causal, and we suggest that Sober has not shown that his examples of ‘would promote’ statements manage to achieve a priori status without sacrificing their causal character.
Rosenberg has recently argued that explanations supplied by (what he calls) functional biology are mere promissory notes for macromolecular adaptive explanations. Rosenberg's arguments currently constitute one of the most substantial challenges to the autonomy, irreducibility, and indispensability of the explanations supplied by functional biology. My responses to Rosenberg's arguments will generate a novel account of the autonomy of functional biology. This account will turn on the relations between counterfactuals, scientific explanations, and natural laws. Crucially, in their treatment of the laws' relation to counterfactuals, Rosenberg's arguments beg the question against the autonomy of functional biology. This relation is considerably more subtle than is suggested by familiar slogans such as "Laws support counterfactuals; accidents don't."
We show that if H is an effectively completely decomposable computable torsion-free abelian group, then there is a computable copy G of H such that G has computable orders but not orders of every degree.
A long-lasting debate in the field of implicit learning is whether participants can learn without acquiring conscious knowledge. One crucial problem is that no clear criterion exists for identifying participants who possess explicit knowledge. Here, we propose a method for diagnosing, during a serial reaction time task, those participants who acquire conscious knowledge. We first validated this method by using Stroop-like material during training. Then we assessed participants’ knowledge with the Inclusion/Exclusion task and the wagering task. Both experiments confirmed that for participants diagnosed as having acquired conscious knowledge about the underlying sequence the Stroop congruency effect disappeared, whereas for participants not diagnosed as possessing conscious knowledge it only slightly decreased. In addition, both experiments revealed that only participants diagnosed as conscious were able to strategically use their acquired knowledge. Thus, our method reliably distinguishes between participants with and without conscious knowledge.
I identify the special sort of stability (invariance, resilience, etc.) that distinguishes laws from accidental truths. Although an accident can have a certain invariance under counterfactual suppositions, there is no continuum between laws and accidents here; a law's invariance is different in kind, not in degree, from an accident's. (In particular, a law's range of invariance is not "broader" – at least in the most straightforward sense.) The stability distinctive of the laws is used to explicate what it would mean for there to be multiple grades (or degrees) of physical necessity. Whether there are any is for science to discover.
Very little has been done to find out what corporations have done to build ethical values into their organizations. In this report on a survey of the 1984 Fortune 1000 industrial and service companies, the Center for Business Ethics reveals some facts regarding codes of ethics, ethics committees, social audits, ethics training programs, boards of directors, and other areas where corporations might institutionalize ethics. Based on the survey, the Center for Business Ethics is convinced that corporations are beginning to take steps to institutionalize ethics, while recognizing that in most cases more specific mechanisms and strategies need to be implemented to make their ethics efforts truly effective.
In a notable article entitled “What is the Theory of Relativity?”, written at the request of The Times and published in its November 28, 1919 edition, Albert Einstein famously distinguished “theories of principle” from “constructive theories.” Einstein placed relativity theory among the principle theories. His distinction has recently received increased attention, especially as it relates to scientific explanation. In particular, there has been considerable discussion of how to explain why the Lorentz transformations obtain, as well as of how to account for the Lorentz covariance of the dynamical laws. Some ...
I offer an argument regarding chances that appears to yield a dilemma: either the chances at time t must be determined by the natural laws and the history through t of instantiations of categorical properties, or the function ch(•) assigning chances need not satisfy the axioms of probability. The dilemma's first horn might seem like a remnant of determinism. On the other hand, this horn might be inspired by our best scientific theories. In addition, it is entailed by the familiar view that facts about chances at t are ontologically reducible to facts about the laws and the categorical history through t. However, that laws are ontologically prior to chances stands in some tension with the view that chances are governed by laws just as categorical-property instantiations are. The dilemma's second horn entails that if chances are in fact probabilities, then this is a matter of natural law rather than logical or conceptual necessity. I conclude with a suggestion for going between the horns of the dilemma. This suggestion involves a generalization of the notion that chances evolve by conditionalization. Sections: Introduction; "Chances evolve by conditionalization"; How might the lawful magnitude principle be defended?; A historical interlude; What if chances failed to be determined by the laws and categorical facts?
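The slogan "chances evolve by conditionalization" admits of a standard gloss, reproduced below in our own notation purely as a point of orientation; the paper's proposed generalization of the principle is not captured by this simple form.

```latex
% Standard gloss on "chances evolve by conditionalization" (our notation, for orientation only).
% ch_t: the chance function at time t; H_{t,t'}: the complete history of categorical-property
% instantiations between t and t' (with t < t').
\[
  ch_{t'}(A) \;=\; ch_{t}\!\bigl(A \mid H_{t,t'}\bigr)
\]
% Later chances are earlier chances conditionalized on the intervening categorical history.
% The suggestion for going between the dilemma's horns involves a generalization of this idea.
```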
After reviewing several failed arguments that laws cannot change, I use the laws' special relation to counterfactuals to show how temporary laws would have to differ from eternal but time-dependent laws. Then I argue that temporary laws are impossible and that neither Lewis's nor Armstrong's analysis of law nicely accounts for the laws' immutability.
Maher has offered a lovely example to motivate the intuition that a successful prediction has a kind of confirmatory significance that an accommodation lacks. This paper scrutinizes Maher's example. It argues that once the example is tweaked, the intuitive difference there between prediction and accommodation disappears. This suggests that the apparent superiority of prediction to accommodation is actually a side effect of an important difference between the hypotheses that tend to arise in each case.
Recently, biologists and computer scientists who advocate the "strong thesis of artificial life" have argued that the distinction between life and nonlife is important and that certain computer software entities could be alive in the same sense as biological entities. These arguments have been challenged by Sober (1991). I address some of the questions about the rational reconstruction of biology that are suggested by these arguments: What is the relation between life and the "signs of life"? What work (if any) might the concept of "life" (over and above the "signs of life") perform in biology? What turns on scientific disputes over the utility of this concept? To defend my answers to these questions, I compare "life" to certain other concepts used in science, and I examine historical episodes in which an entity's vitality was invoked to explain certain phenomena. I try to understand how these explanations could be illuminating even though they are not accompanied by any reductive definition of "life".
Why should science be so interested in discovering whether p is a law over and above whether p is true? The answer may involve the laws' relation to counterfactuals: p is a law iff p would still have obtained under any counterfactual supposition that is consistent with the laws. But unless we already understand why science is especially concerned with the laws, we cannot explain why science is especially interested in what would have happened under those counterfactual suppositions consistent with the laws. It is argued that the laws form the only non-trivially "stable" set, where "stability" is invariance under a certain range of counterfactual suppositions not itself defined by reference to the laws. It is then explained why science should be so interested in identifying a non-trivially "stable" set: because of stability's relation to the best set of "inductive strategies".
Suppose that unobtanium-346 is a rare radioactive isotope. Consider: (1) Every Un346 atom, at its creation, decays within 7 microseconds (µs). (50%) Every Un346 atom, at its creation, has a 50% chance of decaying within 7µs. (1) and (50%) can be true together, but (1) and (50%) cannot together be laws of nature. Indeed, (50%)'s mere (non-vacuous) truth logically precludes (1)'s lawhood. A satisfactory analysis of chance and lawhood should nicely account for this relation. I shall argue first that David Lewis's Humean picture accounts for this relation only by inserting this relation ‘by hand’. Next, I shall argue that this relation between law and chance also threatens a radically non-Humean picture of laws and chances. Finally, I shall offer an account of natural law that nicely explains the relation between chancy facts and deterministic laws. This explanation is not ad hoc because it derives the relation from the very same features of lawhood that account for the laws' special relation to counterfactuals and explain how the laws (unlike the accidents) possess a variety of necessity. The reason that a chancy fact such as (50%) keeps (1) from being a law, without keeping (1) from being true, is ultimately that a chancy fact constrains the subjunctive facts, and (1)'s lawhood, unlike (1)'s truth, depends upon the subjunctive facts.
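A schematic rendering of (1) and (50%) may make the relation at issue easier to see; the formalization below is our own shorthand, not the paper's.

```latex
% Schematic shorthand for (1) and (50%) (our notation).
% D(x): atom x, at its creation, decays within 7 microseconds.
\[
  (1)\colon\; \forall x\, D(x)
  \qquad\qquad
  (50\%)\colon\; \forall x\; \mathrm{ch}\bigl(D(x)\bigr) = 0.5
\]
% Both can be true together: every Un346 atom may in fact decay within 7 microseconds even
% though each had only a 50% chance of doing so. But if (50%) is non-vacuously true, then each
% atom had a positive chance of not decaying within 7 microseconds, and that is what precludes
% (1) from being a law, though not from being true.
```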
This chapter presents arguments positing that there is an important sense in which it takes more than all of the actual kinds to make a world, contrary to the popular saying that “it takes all kinds to make a world.” In a variety of ways, the various species of elementary particles are ideal cases of natural kinds, since each particle belongs to exactly one of these natural kinds and belongs to it essentially. There exist perfect uniformities within each species and sharp distinctions between the species with respect to certain properties. Some of these properties are essential and suffice to give necessary and sufficient conditions for species-membership, while others derive from the fundamental properties via exceptionless uniformities with no ceteris paribus escape clauses.
Myrvold (2003) has proposed an attractive Bayesian account of why theories that unify phenomena tend to derive greater epistemic support from those phenomena than do theories that fail to unify them. It is argued, however, that "unification" in Myrvold's sense is both too easy and too difficult for theories to achieve. Myrvold's account fails to capture what it is that makes unification sometimes count in a theory's favor.
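For orientation, one mutual-information-style way of measuring how much a theory unifies two phenomena is sketched below. This is an illustrative reconstruction of the kind of Bayesian measure at issue and should not be taken as Myrvold's (2003) exact formulation.

```latex
% Illustrative reconstruction only (not necessarily Myrvold's exact measure): a theory T
% unifies phenomena p_1 and p_2 to the extent that, given T, each phenomenon becomes more
% informative about the other than it is unconditionally.
\[
  U(p_1, p_2; T) \;=\;
  \log \frac{P(p_1 \mid p_2, T)}{P(p_1 \mid T)}
  \;-\;
  \log \frac{P(p_1 \mid p_2)}{P(p_1)}
\]
% On a measure of this kind, unifying phenomena is what earns a theory extra support from
% them; the worry pressed above is that unification in Myrvold's sense turns out to be both
% too easy and too difficult for theories to achieve.
```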
The first half of this book consists of Michael Friedman’s Kant Lectures in essentially the form in which they were delivered at Stanford University in 1999. In the second half, “Fruits of Discussion,” Friedman elaborates, refines, and defends the central ideas of these lectures. Taken together, these halves form an eminently readable, slim, yet rich and ambitious volume. It provides our fullest account to date not only of Friedman’s neo-Kantian, historicized, dynamical conception of relativized a priori principles of mathematics and physics, but also of the pivotal role that Friedman sees philosophy as playing in making scientific revolutions rational.