Several proponents of the interventionist theory of causation have recently argued for a neo-Russellian account of causation. This paper discusses two strategies for interventionists to be neo-Russellians. First, I argue that the open systems argument – the main argument for a neo-Russellian account advocated by interventionists – fails. Second, I explore and discuss an alternative for interventionists who wish to be neo-Russellians: the statistical mechanical account. Although the latter account is an attractive alternative, I argue that interventionists are not able to adopt it straightforwardly. Hence, being neo-Russellian remains a challenge for interventionists.
The systems studied in the special sciences are often said to be causally autonomous, in the sense that their higher-level properties have causal powers that are independent of those of their more basic physical properties. This view was espoused by the British emergentists, who claimed that systems achieving a certain level of organizational complexity have distinctive causal powers that emerge from their constituent elements but do not derive from them. More recently, non-reductive physicalists have espoused a similar view about the causal autonomy of special-science properties. They argue that since these properties can typically have multiple physical realizations, they are not identical to physical properties, and further that they possess causal powers that differ from those of their physical realizers. Despite the orthodoxy of this view, it is hard to find a clear exposition of its meaning or a defence of it in terms of a well-motivated account of causation. In this paper, we aim to address this gap in the literature by clarifying what is implied by the doctrine of the causal autonomy of special-science properties and by defending the doctrine using a prominent theory of causation from the philosophy of science. The theory of causation we employ is a simplified version of an "interventionist" theory advanced by James Woodward (2003, forthcoming a, b), according to which a cause makes a counterfactual difference to its effects. In terms of this theory, it is possible to show that a special-science property can make a difference to some effect while the physical property that realizes it does not. Although other philosophers have also used counterfactual analyses of causation to argue for the causal autonomy of special-science properties, the theory of causation we employ is able to establish this with an unprecedented level of precision.
According to James Woodward’s influential interventionist account of causation, X is a cause of Y iff, roughly, there is a possible intervention on X that changes Y. Woodward requires that interventions be merely logically possible. I will argue for two claims against this modal character of interventions. First, merely logically possible interventions are dispensable for the semantic project of providing an account of the meaning of causal statements. If interventions are indeed dispensable, the interventionist theory collapses into (some sort of) a counterfactual theory of causation. Thus, the interventionist theory is not tenable as a theory of causation in its own right. Second, if one maintains that merely logically possible interventions are indispensable, then interventions with this modal character lead to the fatal result that interventionist counterfactuals are evaluated inadequately. Consequently, interventionists offer an inadequate theory of causation. I suggest that if we are concerned with explicating causal concepts and stating the truth-conditions of causal claims, we had best get rid of Woodwardian interventions.
One part of the true theory of actual causation is a set of conditions responsible for eliminating all of the non-causes of an effect that can be discerned at the level of counterfactual structure. I defend a proposal for this part of the theory.
Hume thought that if you believed in powers, you believed in necessary connections in nature. He was then able to argue that there were none such, because anything could follow anything else. But Hume wrong-footed his opponents. A power does not necessitate its manifestations: rather, it disposes towards them in a way that is less than necessary but more than purely contingent.

In this paper a dispositional theory of causation is offered. Causes dispose towards their effects and often produce them. But a set of causes, even though they may succeed in producing an effect, cannot necessitate it, since the effect could have been counteracted by some additional power. This would require a separation of our concepts of causal production and causal necessitation. The most conspicuous cases of causation are those where powers accumulate and pass a requisite threshold for an effect to occur.

We develop a model for representing powers as constituent vectors within an n-dimensional quality space, where composition of causes appears as vector addition. Even our resultant vector, however, has to be understood as having dispositional force only. This model throws new light on causal modality and on cases of prevention, causation by absence, and probabilistic causation.
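The vector model of powers described above – powers as vectors in a quality space, composition as vector addition, effects disposed to occur once a threshold is passed – can be given a minimal computational sketch. The power names, magnitudes, and threshold below are invented for illustration; they are not taken from the paper.

```python
# Minimal sketch of the vector model of compositional powers.
# Each power is a vector in an n-dimensional quality space;
# composition of causes is vector addition, and the effect is
# disposed to occur once the resultant passes a threshold.
# All names and numbers here are illustrative assumptions.

def compose(powers):
    """Vector addition of component powers."""
    n = len(powers[0])
    return tuple(sum(p[i] for p in powers) for i in range(n))

def passes_threshold(resultant, dimension, threshold):
    """The effect is disposed to occur once the resultant
    reaches the requisite threshold on the relevant dimension."""
    return resultant[dimension] >= threshold

# Two powers disposing toward an effect, one counteracting power:
heating = (3.0, 0.0)
friction = (1.5, 0.0)
cooling = (-2.0, 0.0)

resultant = compose([heating, friction, cooling])
print(resultant)                            # (2.5, 0.0)
print(passes_threshold(resultant, 0, 2.0))  # True
```

Note that even when the threshold is passed, the resultant has dispositional force only: adding a further counteracting vector to the list could always prevent the effect, which is the paper's point that causal production falls short of necessitation.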
I propose a non-Humean theory of causation with “tendencies” as causal connections. Not, however, as “necessary connexions”: causes are not sufficient; they do not necessitate their effects. The theory is designed to be, not an analysis of the concept of causation, but a description of what is the case in typical cases of causation. I therefore call it a metaphysical theory of causation, as opposed to a semantic one.
This article presents Roman Ingarden’s theory of causation, as developed in volume III of The Controversy about the Existence of the World, and defends an alternative which uses some important insights of Ingarden. It rejects Ingarden’s claims that a cause is simultaneous with its effect and that a cause necessitates its effect. It uses Ingarden’s notion of ‘inclinations’ and accepts Ingarden’s claim that an event cannot necessitate a later event.
This critical notice highlights the important contributions that Eric Watkins's writings have made to our understanding of theories about causation developed in eighteenth-century German philosophy and by Kant in particular. Watkins provides a convincing argument that central to Kant's theory of causation is the notion of a real ground or causal power that is non-Humean (since it doesn't reduce to regularities or counterfactual dependencies among events or states) and non-Leibnizean (because it doesn't reduce to logical or conceptual relations). However, we raise questions about Watkins's more specific claims that Kant completely rejects a model on which the first relatum of a phenomenal causal relation is an event, and that he maintains that real grounds are metaphysically and not just epistemically indeterminate.
In this paper I argue that Kierkegaard's theory of change is motivated by a robust notion of contingency. His view of contingency is sharply juxtaposed with a strong notion of absolute necessity. I show how his understanding of these notions explains certain of his claims about causation. I end by suggesting a compatibilist interpretation of Kierkegaard's philosophy.
A theory of causation with ‘tendencies’ as causal connections is proposed. Not, however, as ‘necessary connections’: causes are not sufficient; they do not necessitate their effects. The theory is not an analysis of the concept of causation, but a description of what is the case in typical cases of causation. Therefore it does not strictly contradict any analysis of the concept of causation, not even reductive ones. It would even be supported by a counterfactual or a probabilistic analysis.
An actual cause of some token effect is itself a (distinct) token event (or fact, or state of affairs, …) that helped to bring about that effect. The notion of an actual cause is different from that of a potential cause – for example, a pre-empted backup – which had the capacity to bring about the effect, but which wasn't in fact operative on the occasion in question. Sometimes actual causes are also distinguished from mere background conditions: as when we judge that the struck match was a cause of the fire, while the presence of oxygen was merely part of the relevant background against which the struck match operated. Actual causation is also to be distinguished from type causation: actual causation holds between token events in a particular, concrete scenario; type causation, by contrast, holds between event kinds in scenario kinds.
The problem of freedom and determinism has vexed philosophers for several millennia, and continues to be a topic of lively debate today. One of the proposed solutions to the problem that has received a great deal of attention is the Theory of Agent Causation. While the theory has enjoyed its share of advocates, and perhaps more than its share of critics, the theory’s advocates and critics have always agreed on one thing: the Theory of Agent Causation is an incompatibilist theory. That is, both believers and nonbelievers in the theory have taken it for granted that the most plausible version of the Theory of Agent Causation is one according to which freedom and determinism are incompatible. In fact, so entrenched is this assumption that no one on either side of the debate has ever questioned it. Yet it turns out that this assumption is wrong – the most plausible version of the Theory of Agent Causation is a compatibilist one.
In The Secret Connexion, Galen Strawson argues against the traditional interpretation of Hume, according to which Hume’s theory of meaning leads him to a regularity theory of causation. In actual fact, says Strawson, ‘Hume believes firmly in some sort of natural necessity’ (p. 277). What Hume denied was that we are aware of causal connections outrunning regular succession, and that we have a ‘positively or descriptively contentful conception’ of such powers (p. 283); he did not deny that there are such powers, or that they are what we are talking about when we talk about causation. Strawson has four central lines of argument. His ‘most direct evidence’ (p. 2) against a regularity interpretation consists of (1) passages where Hume refers to hidden powers underlying the regularities of which we are aware. Strawson’s broader motivations for rejecting the traditional interpretation are (2) that the regularity theory is in itself quite absurd, and (3) that it is incompatible with Hume’s ‘non-committal scepticism’. And the method which he uses to defend his interpretation against pressure from the theory of ideas is (4) to develop some comments of Hume’s on ‘relative’ ideas into something like a further theory of content to supplement the theory of ideas. Strawson develops almost the strongest case I can imagine for his claims. I shall try to explain why he leaves me unconvinced.
Advocates of the conserved quantity (CQ) theory of causation have their own peculiar problem with conservation laws. Since they analyze causal process and interaction in terms of conserved quantities that are in turn defined as physical quantities governed by conservation laws, they must formulate conservation laws in a way that does not invoke causation, or else circularity threatens. In this paper I will propose an adequate formulation of a conservation law that serves CQ theorists' purpose.
Branching space-times (BST) theory permits a sound and rigorously definable notion of ‘originating cause’ or causa causans – a type of transition event – of an outcome event. Mackie has famously suggested that causes form a family of ‘inus’ conditions, where an inus condition is ‘an insufficient but non-redundant part of an unnecessary but sufficient condition’. In this essay the needed concepts of BST theory are developed in detail, and it is then proved that the causae causantes of a given outcome event have exactly the structure of a set of Mackie inus conditions. The proof requires the assumption that there is no EPR-like ‘funny business’. This seems enough to constitute a theory of ‘causation’ in at least one of its many senses. Contents: 1 Introduction; 2 The cement of the universe; 3 Preliminaries (3.1 First definitions and postulates; 3.2 Ontology: propositions; 3.3 Ontology: initial events; 3.4 Ontology: outcome events; 3.5 Ontology: transition events; 3.6 Propositional language applied to events); 4 Causae causantes (4.1 Causae causantes are basic primary transition events; 4.2 Causae causantes of an outcome chain; 4.3 No funny business); 5 Causae causantes and inns and inus conditions (5.1 Inns conditions of outcome chains: not quite; 5.2 Inns conditions of outcome chains; 5.3 Inns conditions of scattered outcome events; 5.4 Inus conditions for disjunctive outcome events; 5.5 Inns and inus conditions of transition events); 6 Counterfactual conditionals; Appendix: Tense and modal connectives in BST.
In this paper I offer an 'integrating account' of singular causation, where the term 'integrating' refers to the following program for analysing causation. There are two intuitions about causation, both of which face serious counterexamples when used as the basis for an analysis of causation. The 'process' intuition, which says that causes and effects are linked by concrete processes, runs into trouble with cases of 'misconnections', where an event which serves to prevent another fails to do so on a particular occasion and yet the two events are linked by causal processes. The 'chance-raising' intuition, according to which causes raise the chance of their effects, easily accounts for misconnections but faces the problem of chance-lowering causes, a problem easily accounted for by the process approach. The integrating program attempts to provide an analysis of singular causation by synthesising the two insights, so as to solve both problems. In this paper I show that extant versions of the integrating program due to Eells, Lewis, and Menzies fail to account for the chance-lowering counterexample. I offer a new diagnosis of the chance-lowering case, and use that as a basis for an integrating account of causation which does solve both cases. In doing so, I accept various assumptions of the integrating program, in particular that there are no other problems with these two approaches. As an example of the process account, I focus on the recent CQ theory of Wesley Salmon (1997).
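The chance-raising intuition discussed above is usually stated as: C raises the chance of E when P(E | C) > P(E | not-C). A minimal sketch of that comparison, using invented frequency data (none of the numbers come from the paper):

```python
# Illustration of the chance-raising condition for prima facie
# causation: C raises the chance of E when P(E|C) > P(E|~C).
# The observations below are invented for illustration only.

def p_e_given(observations, c_value):
    """Estimate P(E=1 | C=c_value) from (c, e) outcome pairs."""
    matching = [e for c, e in observations if c == c_value]
    return sum(matching) / len(matching)

# (C occurred?, E occurred?) pairs -- purely hypothetical data:
observations = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0)]

p_e_given_c = p_e_given(observations, 1)      # 2/3
p_e_given_not_c = p_e_given(observations, 0)  # 1/3
print(p_e_given_c > p_e_given_not_c)          # True
```

The chance-lowering counterexamples at issue in the paper are exactly cases where this inequality fails – P(E | C) < P(E | not-C) – and yet C intuitively causes E on the occasion in question, which is why the chance-raising condition alone cannot serve as an analysis.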
Several authors have recently attempted to provide a physicalist analysis of causation by appealing to terms from physics that characterise causal processes. Accounts based on forces, energy/momentum transfer and fundamental interactions have been suggested in the literature. In this paper, I wish to show that the former two are untenable when the effect of enclosed electromagnetic fluxes in quantum theory is considered (i.e. the Aharonov-Bohm effect). Furthermore, I suggest that even in the classical and non-relativistic limits, a theory of fundamental interactions should not be reduced to either a theory of forces or of energy/momentum transfer, but should be understood as a classical account of mutual interactions. Causal links are therefore correctly characterised by generalised potentials. This leads to some speculation regarding the fundamental ontology of interactions and, in particular, the role of the quantum mechanical phase.
In this paper, my central aim is to defend the Powers Theory of causation, according to which causation is the exercise of a power (or manifestation of a disposition). I will do so by, first, presenting a recent version of the Powers Theory, that of Mumford (Forthcoming). Second, I will raise an objection to Mumford’s account. Third, I will offer a revised version that avoids the objection. And, fourth, I will end by briefly comparing the proposed Powers Theory with the Neo-Humean, counterfactual theory.
It is commonplace to distinguish between propositional justification (having good reasons for believing p) and doxastic justification (believing p on the basis of those good reasons). One necessary requirement for bridging the gap between S’s merely having propositional justification that p and S’s having doxastic justification that p is that S base her belief that p on her reasons (propositional justification). A plausible suggestion for what it takes for S’s belief to be based on her reasons is that her reasons must contribute causally to S’s having that belief. Though this suggestion is plausible, causal accounts of the basing relation that have been proposed have not fared well. In particular, cases involving causal deviancy and cases involving over-determination have posed serious problems for causal accounts of the basing relation. Although previous causal accounts of the basing relation seem to fall before these problems, it is possible to construct an acceptable causal account of the basing relation – one that not only fits our intuitions about doxastic justification in general, but also is not susceptible to the problems posed by causal deviancy and causal over-determination. The interventionist account of causation provides the tools for constructing such an account. My aim is to make use of the insights of the interventionist account of causation to develop and defend an adequate causal account of the basing relation.
In this paper today, I would like to offer a new analysis of causation and of causal claims. It is an unorthodox one, as you will see, but I suspect that in the not too distant future it will be seen as intuitively, perhaps even trivially, true. I hardly need defend the urgency of my project. Ever since Hume, philosophers have wondered whether there are causes. This is a desperate situation. With no causes, it's hard to see how brushing my teeth is likely to prevent tooth decay. Indeed, it would not be unreasonable to read Hume as an advocate of rotten teeth, which might explain the sad state that many British mouths find themselves in today. The attentive listener will have noted that I said Hume's advocacy of rotten teeth might explain the abysmal state of British oral hygiene. Of course, if Hume is right about causation then nothing explains anything, and that explains why I have been tentative in my claim. The account I would like to propose is this. The claim ‘x causes y’ is to be understood in the following way: ‘x makes y happen’. That is, to say that x is the cause of y is just to say that x makes y happen. Or, to put it more succinctly, if x is the cause of y, then x makes y happen. This is no doubt a startling claim, and one in need of further clarification and defense. To begin, I should like to contrast my analysis with another that might, on its surface, appear similar. Suppose one were to claim that ‘x is the cause of y’ means that x brings y about. But ‘bringing about’ is hardly an informative verbal clause, and does little ampliative work. This way of putting it lacks the opaque transparency that we’ve come to expect of philosophical analyses of causation. Now this new account is not necessarily inconsistent with other, more traditional analyses, such as Lewis's and Hausman's analyses of causation in terms of counterfactuals or Eells' probabilistic theory of causation. Consider first counterfactual analyses of causation. These are efforts to account for the meaning of causal dependencies.
In this paper, I will first clarify Lewis’s influence theory of causation by relying on his theory of events. Then I will consider Michael Strevens’s charge against the sufficiency of Lewis’s theory. My claim is that it is legitimate but does not pose as serious a problem for Lewis’s theory as Strevens thinks, because Lewis can surmount it by limiting the scope of his theory to causation between concrete events. Strevens also raises an alleged counterexample to the necessity of Lewis’s theory that, if successful, would have a very important advantage over other alleged counterexamples. But I will assert that it is simply mistaken. My defense of Lewis’s theory will shed interesting light on the relationship between Lewis’s theory and Salmon’s mark theory.
A probabilistic theory of causation is a theory which holds that the central feature of causation is that causes (usually) raise the probability of their effects. In this dissertation, I defend Hans Reichenbach's original (1953) version of the probabilistic theory of causation, which analyses causal relations in terms of a three-place statistical betweenness relation. Unlike most discussions of this theory, I hold that the statistical relation should be taken as a sufficient, but not as a necessary, condition for causal betweenness. With this difference in interpretation, Reichenbach's theory is shown to be immune to all of the criticisms which have been raised against it in the last…
The key idea of the interventionist account of causation is that a variable A causes a variable B if and only if B would change if A were manipulated in an appropriate way. I argue that Woodward’s (Making Things Happen, Oxford University Press, Oxford, 2003) version of interventionism does not provide a sufficient condition for causation, insofar as it is not adequate for manipulations grounded on association laws. Such laws, which express relations of mutual dependence between variables, ground manipulative relationships which are not causal. I suggest that the interventionist analysis is sufficient for nomological dependence rather than for causation.
In the 17th Discussion of his Tahafut al-Falasifah (“Incoherence of the Philosophers”), Ghazali presents two theories of causation which, he claims, accommodate belief in the possibility of miracles. The first of these, which is usually taken to represent Ghazali’s own position, is a form of occasionalism. In this paper I argue that Ghazali fails to prove that this theory is compatible with belief in the possibility of miracles.
This article attempts to develop the abandoned occasionalist model of causation into a credible present-day theory. If objects can never exhaust one another through their relations, it is hard to know how they can ever interact at all. This article handles the problem by dividing objects into two kinds: the real objects that emerge from Heidegger’s tool-analysis and the intentional objects of Husserl’s phenomenology. Each of these objects turns out to be split by an additional rift between the object as an enduring unit and its plurality of traits. This explains Heidegger’s notorious ‘fourfold’ model of the thing. This article shows that Heidegger’s Geviert must be reinterpreted as a system of four tensions that can be identified as time, space, essence, and eidos. Time and space can no longer be left as peerless dimensions of the cosmos. Instead, they are shown to arise from the tensions between things and their qualities. And for this reason they are joined by essence (in the classical sense of the term) and eidos (in Husserl’s sense, not Plato’s) as two out of four basic features of the fabric of the world.
Larry Wright and others have advanced causal accounts of functional explanation, designed to alleviate fears about the legitimacy of such explanations. These analyses take functional explanations to describe second order causal relations. These second order relations are conceptually puzzling. I present an account of second order causation from within the framework of Eells' probabilistic theory of causation; the account makes use of the population-relativity of causation that is built into this theory.
I apply some of the lessons from quantum theory, in particular from Bell’s theorem, to a debate on the foundations of decision theory and causation. By tracing a formal analogy between the basic assumptions of causal decision theory (CDT) – which was developed partly in response to Newcomb’s problem – and those of a local hidden variable theory in the context of quantum mechanics, I show that an agent who acts according to CDT and gives any nonzero credence to some possible causal interpretations underlying quantum phenomena should bet against quantum mechanics in some feasible game scenarios involving entangled systems, no matter what evidence they acquire. As a consequence, either the most accepted version of decision theory is wrong, or it provides a practical distinction, in terms of the prescribed behaviour of rational agents, between some metaphysical hypotheses regarding the causal structure underlying quantum mechanics.
Advocates of the computational theory of mind claim that the mind is a computer whose operations can be implemented by various computational systems. According to these philosophers, the mind is multiply realisable because—as they claim—thinking involves the manipulation of syntactically structured mental representations. Since syntactically structured representations can be made of different kinds of material while performing the same calculation, mental processes can also be implemented by different kinds of material. From this perspective, consciousness plays a minor role in mental activity. However, contemporary neuroscience provides experimental evidence suggesting that mental representations necessarily involve consciousness. Consciousness does not only enable individuals to become aware of their own thoughts, it also constantly changes the causal properties of these thoughts. In light of these empirical studies, mental representations appear to be intrinsically dependent on consciousness. This discovery represents an obstacle to any attempt to construct an artificial mind.
This paper argues that, notwithstanding the remarkable popularity of Woodward's (2003) interventionist analysis of causation, the exact definitional details of that theory are surprisingly little understood. There exists a discrepancy in the literature between the clarity about the logical details of interventionism, on the one hand, and the enormous work interventionism is expected to do, on the other. The first part of the paper distinguishes three significantly different readings of the logical form of Woodward's (2003) interventionist theory and identifies the reading that best captures the basic intuitions behind interventionism. In the second part, I show that this preferable reading is far from doing all the work that friends of interventionism would like it to do.
A combination of process and counterfactual theories of causation is proposed with the aim of preserving the strengths of each of the approaches while avoiding their shortcomings. The basis for the combination, or hybrid, view is the need, common to both accounts, of imposing a stability requirement on the causal relation.
In Making Things Happen, James Woodward influentially combines a causal modeling analysis of actual causation with an interventionist semantics for the counterfactuals encoded in causal models. This leads to circularities, since interventions are defined in terms of both actual causation and interventionist counterfactuals. Circularity can be avoided by instead combining a causal modeling analysis with a semantics along the lines of that given by David Lewis, on which counterfactuals are to be evaluated with respect to worlds in which their antecedents are realized by miracles. I argue, pace Woodward, that causal modeling analyses perform just as well when combined with the Lewisian semantics as when combined with the interventionist semantics. Reductivity therefore remains a reasonable hope.
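The structural-equations machinery these interventionist and causal modeling abstracts rely on can be sketched in a few lines: a model is a set of equations, an intervention on X overrides X's equation with a set value (Lewis's "miracle" plays the same formal role), and X counts as a cause of Y when some intervention on X changes Y. The model and variable names below are illustrative assumptions, not an example from Woodward or Lewis.

```python
# Sketch of evaluating an acyclic causal model under interventions.
# An intervention on a variable replaces its structural equation
# with a fixed value; everything else is computed as usual.
# The three-variable chain X -> Y -> Z is an invented example.

def evaluate(equations, interventions=None):
    """Solve the model in causal (insertion) order, honoring
    any interventions, and return the values of all variables."""
    interventions = interventions or {}
    values = {}
    for var, eq in equations.items():
        values[var] = interventions.get(var, eq(values))
    return values

model = {
    "X": lambda v: 1,          # exogenously, X = 1
    "Y": lambda v: v["X"],     # Y inherits X's value
    "Z": lambda v: v["Y"],     # Z inherits Y's value
}

actual = evaluate(model)
counterfactual = evaluate(model, interventions={"X": 0})
# X counts as a cause of Z: intervening on X changes Z.
print(actual["Z"], counterfactual["Z"])  # 1 0
```

The circularity worry raised in the abstract does not show up at this level of sketch; it concerns how the notion of an "intervention" itself is defined (via actual causation and interventionist counterfactuals), not the mechanics of solving a model under a setting.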
Ehring shows the inadequacy of received theories of causation and, introducing conceptual devices of his own, provides a wholly new account of causation as the persistence over time of individual properties, or "tropes".
After briefly presenting Ronald Giere's (1979, 1980) recent counterfactual characterization of population-level causation, I present two counterexamples to the characterization. The difficulty discussed stems from nonaccidental correlations that can obtain between causally effective and causally neutral factors.
Sober (1984) presents an account of selection motivated by the view that one property can causally explain the occurrence of another only if the first plays a unique role in the causal production of the second. Sober holds that a causal property will play such a unique role if it is a population level cause of its effect, and on this basis argues that there is selection for a trait T only if T is a population level cause of survival and reproductive success. Sterelny and Kitcher (1988) claim against Sober that some traits directly subject to selection will not satisfy the probabilistic condition on population level causation. In this paper I show that Sober has the resources to resist the Sterelny-Kitcher complaint, but I argue that not all traits that satisfy the probabilistic condition play the required unique role in the production of their effects.
In recent papers, Lei Zhong argues that the autonomy solution to the causal exclusion problem is unavailable to anyone who endorses the counterfactual model of causation. The linchpin of his argument is that the counterfactual theory entails the downward causation principle, which conflicts with the autonomy solution. In this note I argue that the counterfactual theory does not entail the downward causation principle, so it is possible to advocate for the autonomy solution to the causal exclusion problem from within the counterfactual theory of causation.
The majority of the currently flourishing theories of actual (token-level) causation are located in a broadly counterfactual framework that draws on structural equations. In order to account for cases of symmetric overdetermination and preemption, these theories resort to rather intricate analytical tools, most of all to what Hitchcock (J Philos 98:273–299, 2001) has labeled explicitly nonforetracking counterfactuals. This paper introduces a regularity theoretic approach to actual causation that only employs material (non-modal) conditionals, standard Boolean minimization procedures, and a (non-modal) stability condition that regulates the behavior of causal models under model expansions. Notwithstanding its lightweight analytical toolbox, this regularity theory performs at least as well as the structural equations accounts with their heavy appliances.
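The Boolean minimization step mentioned in this abstract can be illustrated in miniature: given observations of factors and an effect, a sufficient condition is pared down until no conjunct is redundant, in the spirit of Mackie-style non-redundant parts of sufficient conditions. The factors and the truth table below are invented for illustration; this is not the paper's own procedure or data.

```python
# Toy Boolean minimization: shrink a sufficient condition for an
# effect E to a smallest sufficient subset of conjuncts, using
# only material conditionals over observed rows. Data is invented.
from itertools import combinations

def sufficient(conjuncts, rows):
    """A conjunction is sufficient for E if every observed row
    satisfying all conjuncts also has E = 1."""
    return all(row["E"] for row in rows
               if all(row[c] for c in conjuncts))

def minimize(conjuncts, rows):
    """Return a smallest subset of conjuncts that is still
    sufficient for E over the observed rows."""
    for size in range(1, len(conjuncts) + 1):
        for subset in combinations(sorted(conjuncts), size):
            if sufficient(subset, rows):
                return set(subset)
    return set(conjuncts)

rows = [  # invented observations of factors A, B, C and effect E
    {"A": 1, "B": 1, "C": 0, "E": 1},
    {"A": 1, "B": 0, "C": 1, "E": 1},
    {"A": 0, "B": 1, "C": 1, "E": 0},
    {"A": 0, "B": 0, "C": 0, "E": 0},
]

print(minimize({"A", "B", "C"}, rows))  # {'A'}
```

Note this is only the minimization half of a regularity account; the paper's stability condition, which checks how such minimal conditions behave when the model is expanded with further factors, is not represented here.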
The agency and the manipulability theories of causation, in spite of significant differences, share at least three claims: first, that manipulation – roughly, the idea that by manipulating causes we bring about effects – is a central notion for causation; second, that such a notion of manipulation allows a reductive – i.e. general and comprehensive – account of causation; third, that this view has its forefathers in the works of Collingwood, Gasking and von Wright. This paper mainly challenges the third claim and argues that the misreading of those authors leads to a more dangerous consequence: a confusion between epistemological, metaphysical and methodological issues about causation.
The basic idea of counterfactual theories of causation is that the meaning of causal claims can be explained in terms of counterfactual conditionals of the form “If A had not occurred, C would not have occurred”. While counterfactual analyses have been given of type-causal concepts, most counterfactual analyses have focused on singular causal or token-causal claims of the form “event c caused event e”. Analyses of token causation have become popular in the last thirty years, especially since the development in the 1970s of possible world semantics for counterfactuals. The best known counterfactual analysis of causation is David Lewis's (1973b) theory. However, intense discussion over thirty years has cast doubt on the adequacy of any simple analysis of singular causation in terms of counterfactuals. Recent years have seen a proliferation of different refinements of the basic idea to achieve a closer match with commonsense judgements about causation.
The need to find an intrinsic characterization of what makes a relation between events causal arises not only in local theories of causation like Salmon's process theory but also in global approaches like Lewis' counterfactual theory. According to the localist intuition, whether a process connecting two events is causal should depend only on what goes on between the events, not on conditions that hold elsewhere in the world. If such intrinsic characterizations could be found, an identification of the causal relation in the actual world (though not in other possible worlds) with physical processes may be feasible (the a posteriori identification). I consider recent proposals made for intrinsic characterizations of causality and conclude that none of them is able to deliver the intended result.
To the extent, then, that we set our face against admitting the truth of Humeanism in the theory of motivation, to that extent we are probably going to feel that there is no such thing as the theory of motivation, so conceived, at all. And that will be the position that this paper is trying to defend, though not only for this reason. It might seem miraculous that so much can be extracted from the little distinction with which we started, between the reasons why an action was right and the agent's reasons for doing it. It is not so much the distinction itself which is the culprit, however, as the account of it that sees motivating reasons as complexes of beliefs and desires, i.e. as complexes of psychological states of whatever sort, and sees justifying reasons as truths. It is this account, which puts into form the attempt to combine value realism with Humean philosophical psychology, that leads to the results I have outlined above.
The theory of mind (ToM) deficit associated with autism spectrum disorder has been a central topic in the debate about the modularity of the mind. In a series of papers, Philip Gerrans and Valerie Stone argue that positing a ToM module does not best explain the deficits exhibited by individuals with autism (Gerrans 2002; Stone & Gerrans 2006a, 2006b; Gerrans & Stone 2008). In this paper, I first criticize Gerrans and Stone's (2008) account. Second, I discuss various studies of individuals with autism and argue that they are best explained by positing a higher-level, domain-specific ToM module.
The function of a trait token is usually defined in terms of some properties of other (past, present, future) tokens of the same trait type. I argue that this strategy is problematic, as trait types are (at least partly) individuated by their functional properties, which would lead to circularity. In order to avoid this problem, I suggest a way to define the function of a trait token in terms of the properties of the very same trait token. To allow for the possibility of malfunctioning, some of these properties need to be modal ones: a function of a trait is to do F just in case its doing F would contribute to the inclusive fitness of the organism whose trait it is. Function attributions have modal force. Finally, I explore whether and how this theory of biological function could be modified to cover artifact function.
The pragmatic theory of truth (PTT) seeks to illuminate the concept of truth by focusing on concepts like usefulness or adaptivity. However, contrary to common opinion, PTT does not merely face a narrow band of (perhaps) rather artificial counterexamples (as in a case of empirically unfounded but life-extending optimism in a cancer patient); instead, PTT is faced with a vast psychological research literature which suggests that inaccurate beliefs are both (1) pervasive in human beings and, nonetheless, (2) fully adaptive in many cases. Call this the "pervasive adaptive illusions" (PAI) objection to PTT. According to PAI, the kind of connection drawn by PTT between the beliefs that we (intuitively or pretheoretically) regard as "true" and the beliefs we regard as useful is undercut by hard-nosed empirical work in psychology -- work that no empirically minded pragmatist can ignore. According to PAI, the connection drawn between truth and utility by PTT is subject to a simply overwhelming set of counterexamples (drawn from psychological research, and reviewed below). Thus, PTT is a theory any sensible theorist of truth must reject.
According to embodied cognition, the philosophical and empirical literature on theory of mind is misguided. Embodied cognition rejects the idea that social cognition requires theory of mind. It regards the intramural debate between the Theory Theory and the Simulation Theory as irrelevant, and it dismisses the empirical studies on theory of mind as ill conceived and misleading. Embodied cognition provides a novel deflationary account of social cognition that does not depend on theory of mind. In this chapter, I describe embodied cognition's alternative to theory of mind and discuss three challenges it faces.
Dealing with students who cheat can be one of the most stressful interactions that faculty encounter. This study focused on faculty responses to academic integrity violations and utilized the Theory of Planned Behaviour model to predict the target behaviour of whether faculty would speak face-to-face with a student suspected of cheating. After an elicitation phase to determine modal salient beliefs, a questionnaire was developed to measure the model's variables. The respondent database contained 206 tenured and non-tenured faculty from two large comprehensive universities. A stepwise multiple regression demonstrated the usefulness of the Theory of Planned Behaviour. Overall the model explained 43% of the variance in predicting faculty members' intention to speak face-to-face with a student suspected of cheating. The most significant contribution was made by subjective norms (β = 0.39), followed by attitude (β = 0.34), and perceived behavioural control (β = 0.24).
This paper sketches a Levinasian theory of action. It has often been pointed out that Levinas' ethics are incapable of providing principles of adjudication for guiding actions. However, a much more profound problem affects Levinas' metaphysical ethics and negates the possibility of adjudication, and that is a patent lack of freedom from the yoke of the ethical. If 'ethics is primordial' indeed, then no act can be unethical, in that there is no alternative possibility to the acceptance and performance of the law. In this paper, I will argue that it is from the totalization of the acceptance and performance of law (implicit in the subject's action) that alternative possibilities become visible. This is to say, it is through totalization that the subject demarcates the locus for the emergence of principles, which can permit adjudication among different acts without negating the radical primacy of ethics, which is probably Levinas' greatest contribution to the field.
Having entered the field of problem structuring methods, system dynamics (SD) is an approach, among systems methodologies, which claims to recognize the main structures of socio-economic behaviors. However, the concern for building or discovering strong philosophical underpinnings of SD, which undoubtedly play an important role in the modeling process, is a long-standing issue, and there is considerable debate about the assumptions and philosophical foundations of the approach. In this paper, with a new perspective, we have explored the theory of knowledge in SD models and found striking similarities between classic epistemological concepts, such as justification and truth, and the mechanism of obtaining knowledge in SD models. In this regard, we have discussed related theories of epistemology and, based on this analysis, have suggested some implications for moderating common problems in the modeling process of SD. Furthermore, this research could be considered a restatement of system dynamics modeling principles in terms of the theory of knowledge.
In this thesis I argue that the psychological study of concepts and categorisation, and the philosophical study of reference are deeply intertwined. I propose that semantic intuitions are a variety of categorisation judgements, determined by concepts, and that because of this, concepts determine reference. I defend a dual theory of natural kind concepts, according to which natural kind concepts have distinct semantic cores and non-semantic identification procedures. Drawing on psychological essentialism, I suggest that the cores consist of externalistic placeholder essence beliefs. The identification procedures, in turn, consist of prototypes, sets of exemplars, or possibly also theory-structured beliefs. I argue that the dual theory is motivated both by experimental data and theoretical considerations. The thesis consists of three interrelated articles. Article I examines philosophical causal and description theories of natural kind term reference, and argues that they involve, or need to involve, certain psychological elements. I propose a unified theory of natural kind term reference, built on the psychology of concepts. Article II presents two semantic adaptations of psychological essentialism, one of which is a strict externalistic Kripkean-Putnamian theory, while the other is a hybrid account, according to which natural kind terms are ambiguous between internalistic and externalistic senses. We present two experiments, the results of which support the strict externalistic theory. Article III examines Fodor's influential atomistic theory of concepts, according to which no psychological capacities associated with concepts constitute them, or are necessary for reference. I argue, contra Fodor, that the psychological mechanisms are necessary for reference.
The theory of mind (ToM) deficit associated with autism has been a central topic in the debate about the modularity of the mind. Most involved in the debate about the explanation of the ToM deficit have failed to notice that autism's status as a spectrum disorder has implications about which explanation is more plausible. In this paper, I argue that the shift from viewing autism as a unified syndrome to a spectrum disorder increases the plausibility of the explanation of the ToM deficit that appeals to a domain-specific, higher-level ToM module. First, I discuss what it means to consider autism as a spectrum rather than as a unified disorder. Second, I argue for the plausibility of the modular explanation on the basis that autism is better considered as a spectrum disorder. Third, I respond to a potential challenge to my account from Philip Gerrans and Valerie Stone's recent work (Gerrans, Biol Philos 17:305–321, 2002; Stone and Gerrans, Trends Cogn Sci 10:3–4, 2006a; Soc Neurosci 1:309–319, 2006b; Gerrans and Stone, Br J Philos Sci 59:121–141, 2008).
In this impressive second edition of Theory of Knowledge, Keith Lehrer introduces students to the major traditional and contemporary accounts of knowing. Beginning with the traditional definition of knowledge as justified true belief, Lehrer explores the truth, belief, and justification conditions on the way to a thorough examination of foundation theories of knowledge, the work of Plantinga, externalism and naturalized epistemologies, internalism and modern coherence theories, contextualism, and recent reliabilist and causal theories. Lehrer gives all views careful examination and concludes that external factors must be matched by appropriate internal factors to yield knowledge. This match of internal and external factors follows from Lehrer's new coherence theory of undefeated justification. In addition to doing justice to the living epistemological traditions, the text smoothly integrates several new lines that will interest scholars. Also, a feature of special interest is Lehrer's concept of a justification game. This second edition of Theory of Knowledge is a thoroughly revised and updated version that contains several completely new chapters. Written by a well-known scholar and contributor to modern epistemology, this text is distinguished by clarity of structure, accessible writing, and an elegant mix of traditional material, contemporary ideas, and well-motivated innovation.
The human ability to represent, conceptualize, and reason about mind and behavior is one of the greatest achievements of human evolution and is made possible by a “folk theory of mind” — a sophisticated conceptual framework that relates different mental states to each other and connects them to behavior. This chapter examines the nature and elements of this framework and its central functions for social cognition. As a conceptual framework, the folk theory of mind operates prior to any particular conscious or unconscious cognition and provides the “framing” or interpretation of that cognition. Central to this framing is the concept of intentionality, which distinguishes intentional action (caused by the agent’s intention and decision) from unintentional behavior (caused by internal or external events without the intervention of the agent’s decision). A second important distinction separates publicly observable from publicly unobservable (i.e., mental) events. Together, the two distinctions define the kinds of events in social interaction that people attend to, wonder about, and try to explain. A special focus of this chapter is the powerful tool of behavior explanation, which relies on the folk theory of mind but is also intimately tied to social demands and to the perceiver’s social goals. A full understanding of social cognition must consider the folk theory of mind as the conceptual underpinning of all (conscious and unconscious) perception and thinking about the social world.
Epistemology or the theory of knowledge is one of the cornerstones of analytic philosophy, and this book provides a clear and accessible introduction to the subject. It discusses some of the main theories of justification, including foundationalism, coherentism, reliabilism, and virtue epistemology. Other topics include the Gettier problem, internalism and externalism, skepticism, the problem of epistemic circularity, the problem of the criterion, a priori knowledge, and naturalized epistemology. Intended primarily for students taking a first class in epistemology, this lucid and well-written text would also provide an excellent introduction for anyone interested in knowing more about this important area of philosophy.
In an attempt to improve upon Alexander Pruss’s work (2006, pp. 240-248), I (Weaver, 2012) have argued that if all purely contingent events could be caused and something like a Lewisian analysis of causation is true (per Lewis, 2004), then all purely contingent events have causes. I dubbed the derivation of the universality of causation the “Lewisian argument”. The Lewisian argument assumed not a few controversial metaphysical theses, particularly essentialism, an incommunicable-property view of essences (per Plantinga 2003), and the idea that counterfactual dependence is necessary for causation. There are, of course, substantial objections to such theses. While I think a fight against objections to the Lewisian argument can be won, I develop, in what follows, a much more intuitive argument for the universality of causation which takes as its inspiration a result from Frederic Fitch’s work (1963) (with credit, we now know, to Alonzo Church (2009)) that if all truths are such that they are knowable, then (counter-intuitively) all truths are known. The resulting Church-Fitch proof for the universality of causation is preferable to the Lewisian argument since it rests upon far weaker formal and metaphysical assumptions than those of the Lewisian argument.
According to Uriah Kriegel’s self-representational theory of consciousness, mental state M is conscious just in case it is a complex with suitably integrated proper parts, M1 and M2, such that M1 is a higher-order representation of lower-order representation M2. Kriegel claims that M thereby “indirectly” represents itself, and he attempts to motivate this claim by appealing to what he regards as intuitive cases of indirect perceptual and pictorial representation. For example, Kriegel claims that it’s natural to say that in directly perceiving the front surface of an apple one thereby perceives the apple itself. Cases such as this are supposed to provide intuitive support for the principle that if X represents Y, and Y is highly integrated into complex object Z, then X indirectly represents Z. In this paper I provide counterexamples to Kriegel’s principle of indirect representation, before going on to argue that we can explain what is going on in those cases in which the subject seems to represent a complex whole by representing one of its parts without positing indirect representations anyway. I then argue that my alternative approach is superior to Kriegel’s in a number of ways, thereby rendering his theory of consciousness implausible.
This paper explores the idea that when dealing with certain kinds of narratives, ‘like it or not’, consumers of fiction will bring the same sorts of skills (or at least a subset of them) to bear that they use when dealing with actual minds. Let us call this the ‘Same Resources Thesis’. I believe the ‘Same Resources Thesis’ is true. But this is because I defend the view that engaging in narrative practices is the normal developmental route through which children acquire the capacity to make sense of what it is to act for a reason. If so, narratives are what provide crucial resources for dealing with actual minds – at least those of a certain sophisticated sort. I argue however that to the extent that we mindread at all, it is likely that we – i.e. those with the appropriate linguistically scaffolded abilities to make mental attributions – rely on our basic mind-minding capacities to do so. So theory only comes into play when we mind guess, but theory of mind doesn’t come into it at all, whether we are dealing with actual or fictional minds.
Empirical assessments of Cognitive Behavioral Therapy and theoretical considerations raise questions about its fundamental theoretical tenet that psychological disturbances are mediated by consciously accessible cognitive structures. This paper considers this situation in light of emotion theory in philosophy. We argue that the “perceptual theory” of emotions, which underlines the parallels between emotions and sensory perceptions, suggests a conception of cognitive mediation that can accommodate the observed empirical anomalies and one that is consistent with the dual-processing models dominant in cognitive psychology.
First published in 1984 as part of The Collected Papers of Bertrand Russell, Theory of Knowledge represents an important addition to our knowledge of Russell's thought. In this work Russell attempts to flesh out the sketch implicit in The Problems of Philosophy. It was conceived by Russell as his next major project after Principia Mathematica and was intended to provide the epistemological foundations for his work. Russell's subsequent difficulties in presenting his theory of knowledge, brought on by what he considered to be devastating criticisms from Wittgenstein, led to both his abandonment of this work and to a major transformation in his thought. Theory of Knowledge, now available for the first time in paperback, gives us a picture of one of the great minds of the twentieth century at work. It is possible to see the unsolved problems left without disguise or evasion. This second edition has retained the full scholarly introduction. The photographs of the manuscript, appendices, and notes on textual matters have been eliminated to provide a concise and accessible guide to understanding both Russell's own thought and his relationship with Wittgenstein.
An Introduction to the Theory of Knowledge guides the reader through the key issues and debates in contemporary epistemology. Lucid, comprehensive and accessible, it is an ideal textbook for students who are new to the subject and for university undergraduates. The book is divided into five parts. Part I discusses the concept of knowledge and distinguishes between different types of knowledge. Part II surveys the sources of knowledge, considering both a priori and a posteriori knowledge. Parts III and IV provide an in-depth discussion of justification and scepticism. The final part of the book examines our alleged knowledge of the past, other minds, morality and God. O'Brien uses engaging examples throughout the book, taking many from literature and the cinema. He explains complex issues, such as those concerning the private language argument, non-conceptual content, and the new riddle of induction, in a clear and accessible way. This textbook is an invaluable guide to contemporary epistemology.
Some argue that Candrakīrti is committed to rejecting all theories of perception in virtue of his rejection of the foundationalisms of the Nyāya and the Pramāṇika. Others argue that Candrakīrti endorses the Nyāya theory of perception. In this paper, I will propose an alternative non-foundationalist theory of perception for Candrakīrti. I will show that Candrakīrti's works provide us with sufficient evidence to defend a typical Prāsaṅgika's account of perception that, I argue, complements his core non-foundationalist ontology.
“Virtue jurisprudence” is a normative and explanatory theory of law that utilises the resources of virtue ethics to answer the central questions of legal theory. The main focus of this essay is the development of a virtue-centred theory of judging. The exposition of the theory begins with exploration of defects in judicial character, such as corruption and incompetence. Next, an account of judicial virtue is introduced. This includes judicial wisdom, a form of phronesis, or sound practical judgement. A virtue-centred account of justice is defended against the argument that theories of fairness are prior to theories of justice. The centrality of virtue as a character trait can be drawn out by analysing the virtue of justice into constituent elements. These include judicial impartiality (even-handed sympathy for those affected by adjudication) and judicial integrity (respect for the law and concern for its coherence). The essay argues that a virtue-centred theory accounts for the role that virtuous practical judgement plays in the application of rules to particular fact situations. Moreover, it contends that a virtue-centred theory of judging can best account for the phenomenon of lawful judicial disagreement. Finally, a virtue-centred approach best accounts for the practice of equity, departure from the rules based on the judge's appreciation of the particular characteristics of individual fact situations.