This essay attempts to develop a psychologically informed semantics of perception reports whose predictions match the linguistic data. As suggested by the quotation from Miller and Johnson-Laird, we take a hallmark of perception to be its fallible nature; the resulting semantics thus necessarily differs from situation semantics. On the psychological side, our main inspiration is Marr's (1982) theory of vision, which can easily accommodate fallible perception. In Marr's theory, vision is a multi-layered process. The different layers have filters of different gradation, which makes vision at each of them approximate. On the logical side, our task is therefore twofold: to formalise the layers and the ways in which they may refine each other, and to develop logical means to let description vary with such degrees of refinement. The first task is formalised by means of an inverse system of first-order models, with reality appearing as its inverse limit. The second task is formalised by means of so-called conditional quantifiers, a new form of generalised quantification which can best be described as resource-bounded quantification. We show that the logic provides for a semantics and pragmatics of direct perception reports. In particular, direct perception reports have a possibly nonveridical, approximative semantics, which becomes veridical only by virtue of our pragmatic expectation that what is perceived would continue to be the case, were we to perceive more accurately. It is a general feature of resource-bounded logics that the underlying logics are weak, but that stronger principles can be obtained pragmatically, by strengthening the resource. For the logic of vision this feature is clarified by showing how changes in the resource capture different notions of partiality, and by studying how the perception verb interacts with connectives and quantifiers in different visual contexts.
The Veridicality inference, now viewed as a nonmonotonic inference, is also studied in depth. We end with an attempt to buttress the proposed model by comparing it with suggestions put forward in Cognitive or Conceptual Semantics, in the literature on evidentials, and in Husserl's philosophy of perception.
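The inverse-system construction invoked for the first task can be sketched in standard terms. That the visual layers form a directed index set $I$, with coarser layers below finer ones, is an assumption of this sketch rather than a detail stated in the abstract:

```latex
% An inverse system of first-order models indexed by layers of vision,
% with reality as its inverse limit (standard definition; the indexing
% of layers by a directed poset (I, <=) is an assumption of this sketch).
\[
f_{ij}\colon M_j \to M_i \quad (i \le j), \qquad
f_{ii} = \mathrm{id}_{M_i}, \qquad
f_{ik} = f_{ij} \circ f_{jk} \quad (i \le j \le k),
\]
\[
\varprojlim_{i \in I} M_i \;=\;
\Bigl\{\, (x_i)_{i \in I} \in \prod_{i \in I} M_i \;\Bigm|\;
f_{ij}(x_j) = x_i \ \text{for all}\ i \le j \,\Bigr\}.
\]
```

On this picture each $M_i$ is the approximate model delivered at one layer, each map $f_{ij}$ projects a finer description onto a coarser one, and a point of the inverse limit is a description that remains coherent under every refinement, which is the sense in which reality appears as the limit of the system.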
After a brief biography of Jaap van Brakel we set out his appropriation and use of the distinction between the manifest image and the scientific image of the world. In a certain sense van Brakel gives priority to the manifest image as the ultimate source of meaning in chemical discourses. He does not take sides in the debate about nominal and real essences, twin earths and so on, but presents a compromise. As an active practitioner of the chemical arts he emphasises the indispensability of models as a main tool for chemical thinking. We then turn to van Brakel’s interest in forging an intercultural point of view in which philosophy of chemistry plays an important part.
How, and why, does Earth (the element) move to the centre of Aristotle's Universe? In this paper, I argue that we cannot understand why it does so by reference merely to the nature of Earth, or the attractive force of the Centre. Rather, we have to understand the role that Earth plays in the cosmic order. Thus, in Aristotle, the behaviour of the elements is explained as one explains the function of organs in a living organism.
This paper explores the level of obligation called for by Milton Friedman’s classic essay “The Social Responsibility of Business is to Increase Profits.” Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice that imposes a negative externality requiring another party to take a significant loss without consent or compensation can be seen as unethical. Within Friedman’s framework, we can see ethics as arising from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
Does epistemic justification aim at truth? The vast majority of epistemologists instinctively answer 'Yes'; it's the textbook response. Joseph Cruz and John Pollock surprisingly say no. In 'The Chimerical Appeal of Epistemic Externalism' they argue that justification bears no interesting connection to truth; justification does not even aim at truth. 'Truth is not a very interesting part of our best understanding' of justification (C&P 2004, 137); it has no 'connection to the truth.' A 'truth-aimed ... epistemology is not entitled to carry the day' (C&P 2004, 138, emphasis added). Pollock and Cruz's argument for this surprising conclusion is of general interest, for it is 'out of step with a very common view on the …'
Science provides us with the methodological key to wisdom. This idea goes back to the 18th century French Enlightenment. Unfortunately, in developing the idea, the philosophes of the Enlightenment made three fundamental blunders: they failed to characterize the progress-achieving methods of science properly, they failed to generalize these methods properly, and they failed to develop social inquiry as social methodology having, as its basic task, to get progress-achieving methods, generalized from science, into social life so that humanity might make progress towards an enlightened world. Instead, the philosophes developed social inquiry as social science. This botched version of the Enlightenment idea was further developed throughout the 19th century, and built into academia in the early 20th century with the creation of university departments of social science. As a result, academia today seeks knowledge but does not devote reason to the task of helping humanity make progress towards a better, wiser world. Our current and impending global crises are the outcome. We urgently need to bring about a revolution in universities throughout the world so that the blunders of the Enlightenment are corrected, and universities take up their proper task of helping humanity make progress towards a wiser world.
Some philosophers think that rationality consists in responding correctly to reasons, or alternatively in responding correctly to beliefs about reasons. This paper considers various possible interpretations of ‘responding correctly to reasons’ and of ‘responding correctly to beliefs about reasons’, and concludes that rationality consists in neither, under any interpretation. It recognizes that, under some interpretations, rationality does entail responding correctly to beliefs about reasons. That is: necessarily, if you are rational you respond correctly to your beliefs about reasons.
According to the moving spotlight theory of time, the property of being present moves from earlier times to later times, like a spotlight shone on spacetime by God. In more detail, the theory has three components. First, it is a version of eternalism: all times, past, present, and future, exist. (Here I use “exist” in its tenseless sense.) Second, it is a version of the A-theory of time: there are nonrelative facts about which times are past, which time is present, and which times are future. That is, it is not just that the year 1066 is past relative to 2007. The year 1066 is also past full-stop, not relative to any other time. (The A-theory is opposed to the B-theory of time, which says that facts about which times are past are relative to other times.) And third, on this view the passage of time is a real phenomenon. Which moment is present keeps changing. As I will sometimes put it, the NOW moves from the past toward the future. And this does not mean that relative to different times, different times are present. Even the B-theory can say that 1999 is present relative to 1999 but is not present relative to 2007. No, according to the moving spotlight theory, the claim that which moment is present keeps changing is supposed to be true even from a perspective outside time.
Joshua Greene has argued that several lines of empirical research, including his own fMRI studies of brain activity during moral decision-making, comprise strong evidence against the legitimacy of deontology as a moral theory. This is because, Greene maintains, the empirical studies establish that “characteristically deontological” moral thinking is driven by prepotent emotional reactions which are not a sound basis for morality in the contemporary world, while “characteristically consequentialist” thinking is a more reliable moral guide because it is characterized by greater cognitive command and control. In this essay, I argue that Greene does not succeed in drawing a strong statistical or causal connection between prepotent emotional reactions and deontological theory, and so does not undermine the legitimacy of deontological moral theories. The results that Greene relies on from neuroscience and social psychology do not establish his conclusion that consequentialism is superior to deontology.
According to the Constitution View of persons, a human person is wholly constituted by (but not identical to) a human organism. This view does justice both to our similarities to other animals and to our uniqueness. As a proponent of the Constitution View, I defend the thesis that the coming-into-existence of a human person is not simply a matter of the coming-into-existence of an organism, even if that organism ultimately comes to constitute a person. Marshalling some support from developmental psychology, I give a broadly materialistic account of the coming-into-existence of a human person. I argue for the metaphysical superiority of the Constitution View to Biological Animalism, Thomistic Animalism, and other forms of Substance Dualism. I conclude by discussing the single implication of the Constitution View for thinking about abortion. Footnote: Thanks to Gareth Matthews and Catherine E. Rudder for comments. I am also grateful to other contributors to this volume, especially Robert A. Wilson, Marya Schechtman, David Oderberg, Stephen Braude, and John Finnis.
In this paper I argue against Twin-Earth externalism. The mistake that Twin Earth arguments rest on is the failure to appreciate the force of the following dilemma. Some features of things around us do matter for the purposes of conceptual classification, and others do not. The most plausible way to draw this distinction is to see whether a certain feature enters the cognitive perspective of the experiencing subject in relation to the kind in question or not. If it does, we can trace conceptual differences to internal differences. If it doesn’t, we do not have a case of conceptual difference. Neither case supports Twin Earth externalism.
One of the principal lessons of The Concept of Law is that legal systems are not only comprised of rules, but founded on them as well. As Hart painstakingly showed, we cannot account for the way in which we talk and think about the law - that is, as an institution which persists over time despite turnover of officials, imposes duties and confers powers, enjoys supremacy over other kinds of practices, resolves doubts and disagreements about what is to be done in a community and so on - without supposing that it is at bottom regulated by what he called the secondary rules of recognition, change and adjudication. Given this incontrovertible demonstration that every legal system must contain rules constituting its foundation, it might seem puzzling that many philosophers have contested Hart's view. In particular, they have objected to his claim that every legal system contains a rule of recognition. More surprisingly, these critiques span different jurisprudential schools. Positivists such as Joseph Raz, as well as natural lawyers such as Ronald Dworkin and John Finnis, have been among Hart's most vocal critics. In this essay, I would like to examine the opposition to the rule of recognition. What is objectionable about Hart's doctrine? Why deny that every legal system necessarily contains a rule setting out the criteria of legal validity? And are these objections convincing? Does the rule of recognition actually exist? This essay has five parts. In Part One, I try to state Hart's doctrine of the rule of recognition with some precision. As we will see, this task is not simple, insofar as Hart's position on this crucial topic is often frustratingly unclear. I also explore in this part whether the United States Constitution, or any of its provisions, can be considered the Hartian rule of recognition for the United States legal system. In Part Two, I attempt to detail the many roles that the rule of recognition plays within Hart's theory of law.
In addition to the function that Hart explicitly assigned to it, namely, the resolution of normative uncertainty within a community, I argue that the rule of recognition, and the secondary rules more generally, also account for the law's dexterity, efficiency, normativity, continuity, persistence, supremacy, independence, identity, validity, content and existence. In Part Three, I examine three important challenges to Hart's doctrine of the rule of recognition. They are: 1) Hart's rule of recognition is under- and over-inclusive; 2) Hart cannot explain how social practices are capable of generating rules that confer powers and impose duties and hence cannot account for the normativity of law; 3) Hart cannot explain how disagreements about the criteria of legal validity that occur within actual legal systems, such as in American law, are possible. In Parts Four and Five, I address these various objections. I argue that although Hart's particular account of the rule of recognition is flawed and should be rejected, a related notion can be fashioned and should be substituted in its place. The idea, roughly, is to treat the rule of recognition as a shared plan which sets out the constitutional order of a legal system. As I try to show, understanding the rule of recognition in this new way allows the legal positivist to overcome the challenges lodged against Hart's version while still retaining the power of the original idea.
In this paper, I argue against Peter van Inwagen’s claim, in “Free Will Remains a Mystery”, that agent-causal views of free will could do nothing to solve the problem of free will (specifically, the problem of chanciness). After explaining van Inwagen’s argument, I argue that he does not consider all possible manifestations of the agent-causal position. More importantly, I claim that, in any case, van Inwagen appears to have mischaracterized the problem in some crucial ways. Once we are clear on the true nature of the problem of chanciness, agent-causal views do much to eradicate it.
Does synesthesia undermine representationalism? Gregg Rosenberg (2004) argues that it does. On his view, synesthesia illustrates how phenomenal properties can vary independently of representational properties. So, for example, he argues that sound/color synesthetic experiences show that visual experiences do not always represent spatial properties. I will argue that the representationalist can plausibly answer Rosenberg.
This article addresses the following problems: What is a mechanism, how can it be discovered, and what is the role of the knowledge of mechanisms in scientific explanation and technological control? The proposed answers are these. A mechanism is one of the processes in a concrete system that makes it what it is: for example, metabolism in cells, interneuronal connections in brains, work in factories and offices, research in laboratories, and litigation in courts of law. Because mechanisms are largely or totally imperceptible, they must be conjectured. Once hypothesized, they help explain, because a deep scientific explanation is an answer to a question of the form, "How does it work, that is, what makes it tick; what are its mechanisms?" Thus, by contrast with the subsumption of particulars under a generalization, an explanation proper consists in unveiling some lawful mechanism, as when political stability is explained by either coercion, public opinion manipulation, or democratic participation. Finding mechanisms satisfies not only the yearning for understanding, but also the need for control. Key Words: explanation, function, mechanism, process, system, systemism.
Composition as Identity is the view that, in some sense, a whole is numerically identical with its parts. Compositional universalism is the view that, whenever there are some things, there is a whole composed of those things. Despite the claims of many philosophers, these views are logically independent. Here, I will show that composition as identity does not entail compositional universalism.
Recent empirical research seems to show that emotions play a substantial role in moral judgment. Perhaps the most important line of support for this claim focuses on disgust. A number of philosophers and scientists argue that there is adequate evidence showing that disgust significantly influences various moral judgments. And this has been used to support or undermine a range of philosophical theories, such as sentimentalism and deontology. I argue that the existing evidence does not support such arguments. At best, it suggests something rather different: that moral judgment can have a minor emotive function, in addition to a substantially descriptive one.
In his recent book The Stream of Consciousness, Dainton provides what must surely count as one of the most comprehensive discussions of time-consciousness in analytical philosophy. In the course of doing so, he also challenges Husserl's classical account in a number of ways. In the following contribution, I will compare Dainton's and Husserl's respective accounts. Such a comparison will not only make it evident why an analysis of time-consciousness is so important, but will also provide a neat opportunity to appraise the contemporary relevance of Husserl's analysis. How does it measure up against one of the more recent analytical accounts?
Falsehood can preclude knowledge in many ways. A false proposition cannot be known. A false ground can prevent knowledge of a truth, or so we argue, but not every false ground deprives its subject of knowledge. A falsehood that is not a ground for belief can also prevent knowledge of a truth. This paper provides a systematic account of just when falsehood precludes knowledge, and hence when it does not. We present the paper as an approach to the Gettier problem and arrive at a relatively simple theory with virtues linked to several issues at the heart of contemporary epistemology.
How Does the Environment Affect the Person? Mark H. Bickhard. Invited chapter in Children's Development within Social Contexts: Metatheoretical, Theoretical and Methodological Issues, edited by L. T. Winegar and J. Valsiner. Erlbaum, in press.
Following hints in the writings of Isaiah Berlin, some political theorists hold that the thesis of value pluralism is true and that this truth provides support for political liberalism of a sort that prescribes wide guarantees of individual liberty. There are many different goods, and they are incommensurable. Hence, people should be left free to live their own lives as they choose so long as they don’t harm others in certain ways. In a free society there is a strong presumption in favor of letting individuals act as they choose without interference by others. William A. Galston has developed this argument with exemplary clarity. He is wrong. The idea that value incommensurability is a reason for toleration of diverse ways of life and protection of the individual’s freedom to choose among diverse ways of life is a mistake. In his paper for this volume, “What Value Pluralism Means for Legal Constitutional Orders”, Galston undertakes to resolve a further problem, namely, whether the presumption in favor of individual liberty that value pluralism establishes can be kept within bounds. In his words, “Within the pluralist framework, how is the basis for a viable political community to be secured?” On one construal of these words, Galston is seeking the solution to a non-problem. Value pluralism does not establish any normative presumption in favor of liberty, so the worry “does this presumption hold without limit?” or “are there good reasons that constrain it at some point?” is otiose. On another construal, Galston is addressing a different question: if most of the members of society came to believe that, given value pluralism, they ought to be left free to live according to their own conception of values, then would a “decently ordered public life” be impossible to sustain? On the second construal, the issue being posed is a genuine empirical question, which philosophical arguments cannot answer.
Metaethical expressivists claim that we can explain what moral words like ‘wrong’ mean without having to know what they are about – but rather by saying what it is to think that something is wrong – namely, to disapprove of it. Given the close connection between expressivists’ theory of the meaning of moral words and our attitudes of approval and disapproval, expressivists have had a hard time shaking the intuitive charge that theirs is an objectionably subjectivist or mind-dependent view of morality. Expressivism, critics have charged over and again, is committed to the view that what is wrong somehow depends on or at least correlates with the attitudes that we have toward it. Arguments to this effect are sometimes subtle, and sometimes rely on fancy machinery, but they all share a common flaw. They all fail to respect the fundamental idea of expressivism: that ‘stealing is wrong’ bears exactly the same relationship to disapproval of stealing as ‘grass is green’ bears to the belief that grass is green. In this paper I rehearse the motivations for the fundamental idea of expressivism and show how the arguments of Frank Jackson and Philip Pettit, Russ Shafer-Landau, Jussi Suikkanen, and Christopher Peacocke all fail on this same rock. In part 1 I’ll rehearse the motivation for expressivism – a motivation which directly explains why it does not have subjectivist consequences. Then in each of parts 2–5 I’ll illustrate how each of Jackson and Pettit’s, Peacocke’s, Shafer-Landau’s, and Suikkanen’s arguments works, respectively, and why each of them fails to respect the fundamental parity at the heart of expressivism. Though others have tried before me to explain why expressivism is not committed to any kind of subjectivism or mind-dependence – prominently including Blackburn, Horgan and Timmons, and, in response to Pettit and Jackson, Dreier and Smith and Stoljar – the explanation offered in this article is distinguished by its scope and generality.
The relations between rationality and optimization have been widely discussed in the wake of Herbert Simon's work, with the common conclusion that the rationality concept does not imply the optimization principle. The paper is partly concerned with adding evidence for this view, but its main, more challenging objective is to question the converse implication from optimization to rationality, which is accepted even by bounded rationality theorists. We discuss three topics in succession: (1) rationally defensible cyclical choices, (2) the revealed preference theory of optimization, and (3) the infinite regress of optimization. We conclude that (1) and (2) provide evidence only for the weak thesis that rationality does not imply optimization. But (3) is seen to deliver a significant argument for the strong thesis that optimization does not imply rationality.
Does quantum mechanics clash with the equivalence principle—and does it matter? Elias Okon and Craig Callender (Philosophy Department, UC San Diego). European Journal for Philosophy of Science 1(1): 133–145. DOI 10.1007/s13194-010-0009-z.
The last 15 years or so has seen the development of a fascinating new area of cognitive science: the cognitive science of religion (CSR). Scientists in this field aim to explain religious beliefs and various other religious human activities by appeal to basic cognitive structures that all humans possess. The CSR scientific theories raise an interesting philosophical question: do they somehow show that religious belief, more specifically belief in a god of some kind, is irrational? In this paper I investigate this question and argue that CSR does not show that belief in god is irrational.
Critics of functionalism about the mind often rely on the intuition that collectivities cannot be conscious in motivating their positions. In this paper, we consider the merits of appealing to the intuition that there is nothing that it’s like to be a collectivity. We demonstrate that collective mentality is not an affront to commonsense, and we report evidence that the intuition that there is nothing that it’s like to be a collectivity is, to some extent, culturally specific rather than universally held. This being the case, we argue that mere appeal to the intuitive implausibility of collective consciousness does not offer any genuine insight into the nature of mentality in general, nor into the nature of consciousness in particular.
Fundamental to Quine’s philosophy of logic is the thesis that substitutional quantification does not express existence. This paper considers the content of this claim and the reasons for thinking it is true.
In a recent paper S. McCall adds another link to a chain of attempts to enlist Gödel’s incompleteness result as an argument for the thesis that human reasoning cannot be construed as being carried out by a computer. McCall’s paper is undermined by a technical oversight. My concern however is not with the technical point. The argument from Gödel’s result to the no-computer thesis can be made without following McCall’s route; it is then straighter and more forceful. Yet the argument fails in an interesting and revealing way. And it leaves a remainder: if some computer does in fact simulate all our mathematical reasoning, then, in principle, we cannot fully grasp how it works. Gödel’s result also points out a certain essential limitation of self-reflection. The resulting picture parallels, not accidentally, Davidson’s view of psychology, as a science that in principle must remain “imprecise”, not fully spelt out. What is intended here by “fully grasp”, and how all this is related to self-reflection, will become clear at the end of this comment.
Quine claims that holism (i.e., the Quine-Duhem thesis) prevents us from defining synonymy and analyticity (section 2). In Word and Object, he dismisses a notion of synonymy which works well even if holism is true. The notion goes back to a proposal from Grice and Strawson and runs thus: R and S are synonymous iff for all sentences T we have that the logical conjunction of R and T is stimulus-synonymous to that of S and T. Whereas Grice and Strawson did not attempt to defend this definition, I try to show that it indeed gives us a satisfactory account of synonymy. Contrary to Quine, the notion is tighter than stimulus-synonymy – particularly when applied to sentences with less than critical semantic mass (section 3). Now according to Quine, analyticity could be defined in terms of synonymy, if synonymy were to make sense: A sentence is analytic iff synonymous to self-conditionals. This leads us to the following notion of analyticity: S is analytic iff, for all sentences T, the logical conjunction of S and T is stimulus-synonymous to T; an analytic sentence does not change the semantic mass of any theory to which it may be conjoined (section 4). This notion is tighter than Quine's stimulus-analyticity; unlike stimulus-analyticity, it does not apply to those sentences from the very center of our theories which can be assented to come what may, even though they are not synthetic in the intuitive sense (section 5). Conclusion: We can have well-defined notions of synonymy and analyticity even if we embrace Quine's holism, naturalism, behaviorism, and radical translation. Quine's meaning skepticism is to be repudiated on Quinean grounds.
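The two definitions rehearsed above can be stated compactly; the symbol $\approx_s$ for stimulus-synonymy is notation introduced here for readability, not the paper's own:

```latex
% Grice-Strawson synonymy and the derived notion of analyticity,
% with $\approx_s$ abbreviating stimulus-synonymy (notation assumed here).
\[
R \text{ and } S \text{ are synonymous} \;\iff\;
\forall T\colon\; (R \wedge T) \approx_s (S \wedge T),
\]
\[
S \text{ is analytic} \;\iff\;
\forall T\colon\; (S \wedge T) \approx_s T.
\]
```

The second definition is the first one specialized to self-conditionals: conjoining an analytic S to any sentence T leaves the semantic mass of T unchanged, which is why analyticity so defined is tighter than stimulus-analyticity.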
This paper considers the meaning and use of the English particle man. It is shown that the particle does quite different things when it appears in sentence-initial and sentence-final position; the first use involves expression of an emotional attitude as well as, on a particular intonation, intensification; this use is analyzed using a semantics for degree predicates along with a separate dimension for the expressive aspect. Further restrictions on modification with the sentence-initial particle involving monotonicity and evidence are introduced and analyzed. The sentence-final use can be viewed as strengthening the action performed by the sentence. A formal semantics is given by making use of dynamic techniques and, in a sense, dynamically simulating the modification of certain speech acts. Some empirical and theoretical extensions of the analyses are proposed and some consequences discussed.
The traditional approach to the abortion debate revolves around numerous issues, such as whether the fetus is a person, whether the fetus has rights, and more. Don Marquis suggests that this traditional approach leads to a standoff and that the abortion debate “requires a different strategy.” Hence his “future of value” strategy, which is summarized as follows: (1) A normal fetus has a future of value. (2) Depriving a normal fetus of a future of value imposes a misfortune on it. (3) Imposing a misfortune on a normal fetus is prima facie wrong. (4) Therefore, depriving a normal fetus of a future of value is prima facie wrong. (5) Killing a normal fetus deprives it of a future of value. (6) Therefore, killing a normal fetus is prima facie wrong. In this paper, I argue that Marquis’s strategy is not different since it involves the concept of person—a concept deeply rooted in the traditional approach. Specifically, I argue that futures are valuable insofar as they are not only dominated by goods of consciousness, but are experienced by psychologically continuous persons. Moreover, I argue that his strategy is not sound since premise (1) is false. Specifically, I argue that a normal fetus, at least during the first trimester, is not a person. Thus, during that stage of development it is not capable of experiencing its future as a psychologically continuous person and, hence, it does not have a future of value.
Abstract: “Consciousness” seems to be a polysemic, ambiguous, term. Because of this, theorists have sought to distinguish the different kinds of phenomena that “consciousness” denotes, leading to a proliferation of terms for different kinds of consciousness. However, some philosophers—univocalists about consciousness—argue that “consciousness” is not polysemic or ambiguous. By drawing upon the history of philosophy and psychology, and some resources from semantic theory, univocalism about consciousness is shown to be implausible. This finding is important, for if we accept the univocalist account then we are less likely to subject our thought and talk about the mind to the kind of critical analysis that it needs. The exploration of the semantics of “consciousness” offered here, by way of contrast, clarifies and fine-tunes our thought and talk about consciousness and conscious mentality and explains why “consciousness” means what it does, and why it means a number of different, but related, things.
Autism is a neurodevelopmental condition characterized by difficulties in social interaction (APA, 2000). Successful social interaction relies, in part, on determining the thoughts and feelings of others, an ability commonly attributed to our faculty of folk or common-sense psychology. Because the symptoms of autism should be present by around the second birthday, it follows that the study of autism should tell us something about the early emerging mechanisms necessary for the development of an intact faculty of folk psychology. Our aims in this chapter are threefold: (1) to examine the literature on “social understanding” mechanisms in autism, particularly those assumed to develop in the first years of life; (2) to examine the related literature on typically developing infants and toddlers; and (3) to examine the theoretical approaches that attempt to characterize the early stages and development of this impressive skill. In doing so, we hope to help resolve some of the disagreements and sticking points that riddle the topic. In particular we will attempt to shift the focus from whether children have this or that specific mental-state concept (which they use to predict the behavior of others) to a more developmentally friendly approach centered around the notion of reasons, recognizing that they may well exist before they are represented, and hence before they can be appreciated, or expressed. The peer commentary in Behavioral and Brain Sciences following Premack and Woodruff (1978)—“Does the chimpanzee have a theory of mind?”—not only introduced the “false-belief” task (Dennett, 1978; Wimmer & Perner, 1983), but addressed a host of issues surrounding the characterization of second-order intentional systems, systems that may (or must) be interpreted as having beliefs about beliefs (or desires or intentions ...).
Non-moral ignorance can exculpate: if Anne spoons cyanide into Bill's coffee, but thinks she is spooning sugar, then Anne may be blameless for poisoning Bill. Gideon Rosen argues that moral ignorance can also exculpate: if one does not believe that one's action is wrong, and one has not mismanaged one's beliefs, then one is blameless for acting wrongly. On his view, many apparently blameworthy actions are blameless. I discuss several objections to Rosen. I then propose an alternative view on which many agents who act wrongly are blameworthy despite believing they are acting morally permissibly, and despite not having mismanaged their moral beliefs.
What are the relationships between philosophy and the history of philosophy, the history of science and the philosophy of science? This selection of essays by Lorenz Krüger (1932-1994) presents exemplary studies on the philosophy of John Locke and Immanuel Kant, on the history of physics, on the scope and limitations of scientific explanation, and on a realistic understanding of science and truth. In his treatment of leading currents in 20th-century philosophy, Krüger presents new and original arguments for a deeper understanding of the continuity and dynamics of the development of scientific theory. These arguments have significant consequences for the sciences' claim to understand reality in a rational manner. The case studies are complemented by fundamental thoughts on the relationship between philosophy, science, and their common history.
Are we perhaps in the "matrix", or anyway, victims of perfect and permanent computer simulation? No. The most convincing—and shortest—version of Putnam's argument against the possibility of our eternal envattment is due to Crispin Wright (1994). It avoids most of the misunderstandings that have been elicited by Putnam's original presentation of the argument in "Reason, Truth and History" (1981). But it is still open to the charge of question-begging. True enough, the premisses of the argument (disquotation and externalism) can be formulated and defended without presupposing external objects whose existence appears doubtful in the light of the very skeptical scenario which Putnam wants to repudiate. However, the argument is only valid if we add an extra premiss as to the existence of some external objects. In order to avoid circularity, we should run the argument with external objects which must exist even if we are brains in a vat, e.g. with computers rather than with trees. As long as the skeptic is engaged in a discussion of the brain-in-a-vat scenario, she should neither deny the existence of computers nor the existence of causal relations; for if she does, she is in fact denying that we are brains in a vat.
In seeking to answer the question "How does that which is other become evil?" the author provides a discussion of four entwined aspects of the issue at stake: (1) difficulty in achieving clarity on the grammar of evil; (2) genocide as a striking illustration of otherness becoming evil; (3) the challenge of postnationalism as a resource for dealing with otherness in the socio-political arena; and (4) the ethico-religious dimension as it relates to the wider problem of evil.
In this paper I distinguish interpretations of the question “How fast does time pass?” that are important for the debate over the reality of objective becoming from interpretations that are not. Then I discuss how one theory that incorporates objective becoming—the moving spotlight theory of time—answers this question. It turns out that there are several ways to formulate the moving spotlight theory of time. One formulation says that time passes but it makes no sense to ask how fast; another formulation says that time passes at one second per supersecond; and a third says that time passes at one second per second. I defend the intelligibility of this final version of the theory.
Walter Sinnott-Armstrong argues that 'ought' does not entail 'can', but instead conversationally implicates it. I argue that Sinnott-Armstrong is actually committed to a hybrid view about the relation between 'ought' and 'can'. I then give a tensed formulation of the view that 'ought' entails 'can' that deals with Sinnott-Armstrong's argument and that is more unified than Sinnott-Armstrong's view.
Studies of normal individuals reveal an asymmetry in the folk concept of intentional action: an action is more likely to be thought of as intentional when it is morally bad than when it is morally good. One interpretation of these results comes from the hypothesis that emotion plays a critical mediating role in the relationship between an action’s moral status and its intentional status. According to this hypothesis, the negative emotional response triggered by a morally bad action drives the attribution of intent to the actor, or the judgment that the actor acted intentionally. We test this hypothesis by presenting cases of morally bad and morally good action to seven individuals with deficits in emotional processing resulting from damage to the ventromedial prefrontal cortex (VMPC). If normal emotional processing is necessary for the observed asymmetry, then individuals with VMPC lesions should show no asymmetry. Our results provide no support for this hypothesis: like normal individuals, those with VMPC lesions showed the same asymmetry, tending to judge that an action was intentional when it was morally bad but not when it was morally good. Based on this finding, we suggest that normal emotional processing is not responsible for the observed asymmetry of intentional attributions and thus does not mediate the relationship between an action’s moral status and its intentional status.
Forthcoming in Reisner and Steglich-Petersen, eds., Reasons for Belief. If I believe, for no good reason, that P and I infer (correctly) from this that Q, I don’t think we want to say that I ‘have’ P as evidence for Q. Only things that I believe (or could believe) rationally, or perhaps, with justification, count as part of the evidence that I have. It seems to me that this is a good reason to include an epistemic acceptability constraint on evidence possessed… It is a truism that adopting an unjustified belief does not put you in a better evidential position with respect to believing its consequences. This truism has led many philosophers to assume that there must, at a minimum, be a justification condition (and perhaps even a knowledge condition) on what it takes to count as having evidence. This is the best (or only) possible explanation of the truism, these philosophers have believed. This paper explores an alternative explanation for the truism. According to the alternative explanation that I will offer, unjustified beliefs do not put you in a better evidential position with respect to believing their consequences because any evidence you have in virtue of having an unjustified belief is guaranteed to be defeated. Since the lack of justification for a belief guarantees its defeat, I will suggest, we don't need to postulate a special justification condition (much less a knowledge condition) on what it takes to count as having evidence. Why is this important? It is important because the assumption that there must be a justification condition (or perhaps a knowledge condition) on what it takes to count as having evidence places a high bar on what it takes to have evidence - such a high bar that it is difficult to see how this bar could be met in the case of basic, perceptually justified beliefs.
As a result, this high bar plays a fundamental role, I will claim, in a core dialectic in the epistemology of basic perceptual belief, one central to the debates between internalism and externalism, foundationalism and coherentism, and rationalism and empiricism.