Alan Millar's paper (2011) involves two parts, which I address in order, first taking up the issues concerning the goal of inquiry, and then the issues surrounding the appeal to reflective knowledge. I argue that the upshot of the considerations Millar raises counts in favour of a more important role in value-driven epistemology for the notion of understanding and for the notion of epistemic justification, rather than for the notions of knowledge and reflective knowledge.
John Stuart Mill argued, in his Principles of Political Economy, that existing laws and customs of private property ought to be reformed to promote a far more egalitarian form of capitalism than hitherto observed anywhere. He went on to suggest that such an ideal capitalism might evolve spontaneously into a decentralized socialism involving a market system of competing worker co-operatives. That possibility of market socialism would emerge only as the working classes gradually developed the intellectual and moral qualities required for worker co-operatives to succeed against private firms. Workers would tend to reject the hierarchical wage relation as they developed the requisite personal qualities, he believed, and capitalists, facing escalating wages for skilled labour as a result of the diminishing supply of high-quality workers for hire, would tend to lend their capital to the worker co-operatives ‘at a diminishing rate of interest, and at last, perhaps, even to exchange their capital for terminable annuities. In this or some such mode’, he speculated, ‘the existing accumulations of capital might honestly, and by a kind of spontaneous process, become in the end the joint property of all who participate in their productive employment: a transformation which, thus effected, would be the nearest approach to social justice, and the most beneficial ordering of industrial affairs for the universal good, which it is possible at present to foresee’.
Arrhenius and Rabinowicz have argued that Millian qualitative superiorities are possible without assuming that any pleasure, or type of pleasure, is infinitely superior to another. But AR's analysis is fatally flawed in the context of ethical hedonism, where the assumption in question is necessary and sufficient for Millian qualitative superiorities. Marginalist analysis of the sort pressed by AR continues to have a valid role to play within any plausible version of hedonism, provided the fundamental incoherence that infects AR's use of such analysis is removed. But what AR call ‘Millian superiorities’ are never genuine qualitative superiorities in Mill's sense. Mill scholars need to appreciate this point and recognize that the interpretation of qualitative superiorities as infinite superiorities is the only interpretation which is compatible with the text of Mill's Utilitarianism. The continuing failure to appreciate the possibility of infinite superiorities has precluded any adequate understanding of the extraordinary structure of Mill's pluralistic hedonistic utilitarianism.
Jonathan Dancy works within almost all fields of philosophy but is best known as the leading proponent of moral particularism. Particularism challenges “traditional” moral theories, such as Contractualism, Kantianism and Utilitarianism, in that it denies that moral thought and judgement relies upon, or is made possible by, a set of more or less well-defined, hierarchical principles. During the summer of 2006, the Philosophy Departments of Lund University (Sweden) and the University of Reading (England) began a series of exchanges to take place every other year, alternating between the departments. Andreas Lind and Johan Brännmark arranged to meet Dancy during the first meeting in Lund to talk about questions regarding particularism, moral theory and the shape of the analytical tradition. The major part of the conversation is printed below.
I continue my argument that Millian qualitative superiorities are infinite superiorities: one pleasant feeling, or type of pleasant feeling, is qualitatively superior to another in Mill's sense if and only if even a bit of the superior is more pleasant than any finite quantity of the inferior, however large. This gives rise to a hierarchy of higher and lower pleasures such that a reasonable hedonist always refuses to sacrifice a higher for a lower irrespective of the finite amounts of each. Some indication of why this absolute refusal may be reasonable is provided in the course of outlining the content of the Millian hierarchy. It emerges that Mill's hedonistic utilitarianism has an extraordinary structure because it gives absolute priority over competing considerations to a code of justice that distributes equal rights and correlative duties for all. His utilitarianism also recognizes that certain aesthetic and spiritual pleasures may be qualitatively superior even to the pleasant feeling of security associated with the moral sentiment of justice. Thus, for instance, a noble individual may reasonably choose to waive his own rights so as to perform beautiful supererogatory actions that provide great benefits for others at the sacrifice of the right-holder's own vital interests.
The dominant approach to environmental policy endorsed by conservative and libertarian policy thinkers, so-called “free market environmentalism” (FME), is grounded in the recognition and protection of property rights in environmental resources. Despite this normative commitment to property rights, most self-described FME advocates adopt a utilitarian, welfare-maximization approach to climate change policy, arguing that the costs of mitigation measures could outweigh the costs of climate change itself. Yet even if anthropogenic climate change is decidedly less than catastrophic, human-induced climate change is likely to contribute to environmental changes that violate traditional conceptions of property rights. Viewed globally, the actions of some countries—primarily industrialized nations—are likely to increase environmental harms suffered by other countries—less developed nations that have not made any significant contribution to climate change. It may well be that aggregate human welfare would be maximized in a warmer, wealthier world, or that the gains from climate change will offset environmental losses. Yet such claims, even if demonstrated, would not address the normative concern that the consequences of anthropogenic global warming would infringe upon the rights of people in less-developed nations. As a consequence, this paper calls for a rethinking of FME approaches to climate change policy.
In my Practical Reality I argued that the reasons for which we act are not to be conceived of as psychological states of ourselves, but as real states of the world. The main reason for saying this was that only thus can we make sense of the idea that it is possible to act for a good reason. The good reasons we have for doing this action rather than that one consist mainly of features of the situations in which we find ourselves; they do not consist in our believing certain things about those situations. For instance, the reason for my helping that person is that she is in trouble and I am the only person around. It is not that I believe both that she is in trouble and that I am the only person around. Given that the reason to help is that she is in trouble etc., it must be possible for my reason for helping to be just that, if it is indeed possible for one to act for a good reason. In fact, this sort of thing must be the normal arrangement. The reasons why we act, therefore, that is, our reasons for doing what we do, are not standardly to be conceived as states of ourselves, but as features of our situations.
In 1970 Amartya Sen exposed an apparent antinomy that has come to be known as the Paradox of the Paretian Libertarian. Sen introduced his paradox by establishing a simple but startling theorem. Roughly put, what he proved was that if a mechanism for selecting social choice functions satisfies two standard adequacy conditions, there are possible situations in which it will violate either the very weak libertarian precept that every individual has at least some rights or the seemingly innocuous Paretian principle that an option should be judged unacceptable if there is an available alternative that everyone prefers to it. Many economists and philosophers have proposed solutions to Sen's problem, but there is no general consensus on what solution is correct. In the present paper I argue that Sen's original theorem fails to establish the existence of any conflict between libertarianism and Paretianism. Furthermore, I contend that Sen has misinterpreted certain other theorems which he has used to defend the existence of a paradoxical conflict between these two doctrines. In general, I try to show that whenever Sen posits a Paretian-libertarian conflict to explain an apparently troubling result in social choice theory, the difficulty can be better dealt with either by claiming that the theorem in question imposes overly strong background constraints on the form of social choice functions or by claiming that it relies on an unacceptable construal of individual rights.
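The conflict Sen's theorem establishes can be made concrete with the standard two-person illustration from his 1970 paper (the "Lady Chatterley's Lover" case). The sketch below is illustrative only; the rankings follow Sen's usual presentation, and the variable names are my own.

```python
from itertools import permutations

# Options: a = Prude reads the book, b = Lewd reads it, c = nobody reads it.
prude = ['c', 'a', 'b']  # Prude: best that nobody reads; if someone must, him.
lewd = ['a', 'b', 'c']   # Lewd: best that the Prude reads; worst that nobody does.

def prefers(ranking, x, y):
    """True if x is ranked strictly above y."""
    return ranking.index(x) < ranking.index(y)

social = set()  # set of (x, y) pairs: society strictly prefers x to y

# Minimal liberalism: each person is decisive over one "personal" pair.
if prefers(prude, 'c', 'a'):
    social.add(('c', 'a'))  # Prude decisive over {a, c} (his own reading)
if prefers(lewd, 'b', 'c'):
    social.add(('b', 'c'))  # Lewd decisive over {b, c} (his own reading)

# Weak Pareto: a unanimous strict preference becomes a social preference.
for x, y in permutations('abc', 2):
    if prefers(prude, x, y) and prefers(lewd, x, y):
        social.add((x, y))

# The result contains ('a','b'), ('b','c') and ('c','a'): a strict cycle,
# so no option is socially best -- Sen's conflict in miniature.
print(sorted(social))
```

The cycle arises because rights alone yield c over a and b over c, while Pareto alone yields a over b; nothing in the construction singles out which condition to relax, which is why the literature Sen spawned divides over exactly that question.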
Consider a circle and a pair of its semicircles. Which is prior, the whole or its parts? Are the semicircles dependent abstractions from their whole, or is the circle a derivative construction from its parts? Now in place of the circle consider the entire cosmos (the ultimate concrete whole), and in place of the pair of semicircles consider the myriad particles (the ultimate concrete parts). Which if either is ultimately prior, the one ultimate whole or its many ultimate parts?
On the now dominant Quinean view, metaphysics is about what there is. Metaphysics so conceived is concerned with such questions as whether properties exist, whether meanings exist, and whether numbers exist. I will argue for the revival of a more traditional Aristotelian view, on which metaphysics is about what grounds what. Metaphysics so revived does not bother asking whether properties, meanings, and numbers exist (of course they do!). The question is whether or not they are fundamental.
Grounding is often glossed as metaphysical causation, yet no current theory of grounding looks remotely like a plausible treatment of causation. I propose to take the analogy between grounding and causation seriously, by providing an account of grounding in the image of causation, on the template of structural equation models for causation.
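The structural-equations template can be sketched in a few lines. This is an illustrative toy only, not Schaffer's own formalism: in a causal model each endogenous variable is a function of its parents, and a grounding analogue would treat, say, the truth of a conjunction as structurally determined by its conjuncts, with "interventions" on a ground propagating to what it grounds.

```python
# Toy structural-equation model on the causal template: exogenous variables
# are simply set; endogenous ones are computed from their parents.
equations = {
    'P': None,                           # exogenous (a ground)
    'Q': None,                           # exogenous (a ground)
    'P&Q': lambda v: v['P'] and v['Q'],  # endogenous: grounded in P and Q
}

def evaluate(settings, interventions=None):
    """Solve the model for given exogenous settings, honouring interventions.

    An intervention fixes a variable's value directly, overriding its
    structural equation -- the analogue of 'wiggling' a variable in a
    causal model to test dependence.
    """
    values = dict(settings)
    values.update(interventions or {})
    for var, fn in equations.items():
        if fn is not None and var not in (interventions or {}):
            values[var] = fn(values)
    return values

print(evaluate({'P': True, 'Q': True}))                 # P&Q comes out True
print(evaluate({'P': True, 'Q': True}, {'Q': False}))   # P&Q comes out False
```

The point of the analogy is visible in the last two lines: the grounded variable counterfactually depends on its grounds in just the way an effect depends on its causes in a causal model.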
Presents an analysis of Jonathan Edwards' theological position. This book includes a study of his life and the intellectual issues in the America of his time, and examines the problem of free will in connection with Leibniz, Locke, and Hume.
Few stage plays have much to do with analytic philosophy: Tom Stoppard has written two of them—Rosencrantz and Guildenstern are Dead and Jumpers. The contrast between these, especially in how they involve philosophy, could hardly be greater. Rosencrantz does not parade its philosophical content; but the philosophy is there all the same, and it is solid, serious and functional. In contrast with this, the philosophy which is flaunted throughout Jumpers is thin and uninteresting, and it serves the play only in a decorative and marginal way. Its main effect has been to induce timidity in reviewers who could not see the relevance to the play of the large stretches of academic philosophy which it contains. Since the relevance doesn't exist, the timidity was misplaced, and so the kid gloves need not have been used. Without doubting that I would have enjoyed the work as performed on the London stage, aided by the talent of Michael Hordern and the charm of Diana Rigg, I don't doubt either that Jumpers is a poor effort which doesn't deserve its current success. I shan't argue for that, however. I want only to explain why Jumpers is not a significantly philosophical play, before turning to the more important and congenial task of showing that Rosencrantz and Guildenstern are Dead is one.
The law tends to think that there is no difficulty about identifying humans. When someone is born, her name is entered into a statutory register. She is ‘X’ in the eyes of the law. At some point, ‘X’ will die and her name will be recorded in another register. If anyone suggested that the second X was not the same as the first, the suggestion would be met with bewilderment. During X's lifetime, the civil law assumed that the X who entered into a contract was the same person who breached it. The criminal law assumed that X, at the age of 80, was liable for criminal offences ‘she’ committed at the age of 18. This accords with the way we talk. ‘She's not herself today’, we say; or ‘When he killed his wife he wasn't in his right mind’. The intuition has high authority: ‘To thine own self be true’, urged Polonius.1 It sounds as if we believe in souls—immutable, core essences that constitute our real selves. Medicine conspires in the belief. If you become mentally ill, a psychiatrist will seek to get you back to your right mind. The Mental Capacity Act 2005 states that when a patient loses capacity the only lawful interventions will be interventions which are in that patient's best interests,2 and that in determining what those interests are the decision-maker must have ….
In this paper we propose to argue for two claims. The first is that a sizeable group of epistemological projects – a group which includes much of what has been done in epistemology in the analytic tradition – would be seriously undermined if one or more of a cluster of empirical hypotheses about epistemic intuitions turns out to be true. The basis for this claim will be set out in Section 2. The second claim is that, while the jury is still out, there is now a substantial body of evidence suggesting that some of those empirical hypotheses are true. Much of this evidence derives from an ongoing series of experimental studies of epistemic intuitions that we have been conducting. A preliminary report on these studies will be presented in Section 3. In light of these studies, we think it is incumbent on those who pursue the epistemological projects in question to either explain why the truth of the hypotheses does not undermine their projects, or to say why, in light of the evidence we will present, they nonetheless assume that the hypotheses are false. In Section 4, which is devoted to Objections and Replies, we’ll consider some of the ways in which defenders of the projects we are criticizing might reply to our challenge. Our goal, in all of this, is not to offer a conclusive argument demonstrating that the epistemological projects we will be criticizing are untenable. Rather, our aim is to shift the burden of argument.
An argument that takes issue with the contemporary epistemological consensus that justification is distinct from knowledge, proposing instead that justified belief simply is knowledge, and arguing in detail that a belief is justified when ...
Jonathan Dancy aims to establish the possibility of reasoning to action, by showing how similar it is to reasoning to belief. He offers a general theory of reasoning, which smoothly admits the differences there may be between the two types, while also considering the possibility of reasoning to hope, to fear, to doubt, and to intention.
‘‘Thus I believe that there is no part of matter which is not—I do not say divisible—but actually divided; and consequently the least particle ought to be considered as a world full of an infinity of different creatures.’’ (Leibniz, letter to Foucher).
Recent experimental philosophy arguments have raised trouble for philosophers' reliance on armchair intuitions. One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there's no reason to think philosophers will make the same mistakes. But this deploys a substantive empirical claim, that philosophers' training indeed inculcates sufficient protection from such mistakes. We canvass the psychological literature on expertise, which indicates that people are not generally very good at reckoning who will develop expertise under what circumstances. We consider three promising hypotheses concerning what philosophical expertise might consist in: (i) better conceptual schemata; (ii) mastery of entrenched theories; and (iii) general practical know-how with the entertaining of hypotheticals. On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.
I argue that the one and only truthmaker is the world. This view can be seen as arising from (i) the view that truthmaking is a relation of grounding holding between true propositions and fundamental entities, together with (ii) the view that the world is the one and only fundamental entity. I argue that this view provides an elegant and economical account of the truthmakers, while solving the problem of negative existentials, in a way that proves ontologically revealing.
Suppose that Ann says, “Keith knows that the bank will be open tomorrow.” Her audience may well agree. Her knowledge ascription may seem true. But now suppose that Ben—in a different context—also says “Keith knows that the bank will be open tomorrow.” His audience may well disagree. His knowledge ascription may seem false. Indeed, a number of philosophers have claimed that people’s intuitions about knowledge ascriptions are context sensitive, in the sense that the very same knowledge ascription can seem true in one conversational context but false in another. This purported fact about people’s intuitions serves as one of the main pieces of evidence for epistemic contextualism.
Causation is widely assumed to be a binary relation: c causes e. I will argue that causation is a quaternary, contrastive relation: c rather than C* causes e rather than E*, where C* and E* are nonempty sets of contrast events. Or at least, I will argue that treating causation as contrastive helps resolve some paradoxes.
What does it mean to be disadvantaged? Is it possible to compare different disadvantages? What should governments do to move their societies in the direction of equality, where equality is to be understood both in distributional and social terms? Linking rigorous analytical philosophical theory with broad empirical studies, including interviews conducted for the purpose of this book, Wolff and de-Shalit show how taking theory and practice together is essential if the theory is to be rich enough to be applied to the real world, and policy systematic enough to have purpose and justification. The book is in three parts. Part 1 presents a pluralist analysis of disadvantage, modifying the capability theory of Sen and Nussbaum to produce the 'genuine opportunity for secure functioning' view. This emphasises risk and insecurity as a central component of disadvantage. Part 2 shows how to identify the least advantaged in society even on a pluralist view. The authors suggest that disadvantage 'clusters' in the sense that some people are disadvantaged in several different respects. Thus identifying the least advantaged is not as problematic as it appears to be. Conversely, a society which has 'declustered disadvantage' - in the sense that no group lacks secure functioning on a range of functionings - has made considerable progress in the direction of equality. Part 3 explores how to decluster disadvantage, by paying special attention to 'corrosive disadvantages' - those disadvantages which cause further disadvantages - and 'fertile functionings' - those which are likely to secure other functionings. In sum, this book presents a refreshing new analysis of disadvantage, and puts forward proposals to help governments improve the lives of the least advantaged in their societies, thereby moving in the direction of equality.
Jonathan Waskan challenges cognitive science's dominant model of mental representation and proposes a novel, well-devised alternative. The traditional view in the cognitive sciences uses a linguistic model of mental representation. That logic-based model of cognition informs and constrains both the classical tradition of artificial intelligence and modeling in the connectionist tradition. It falls short, however, when confronted by the frame problem---the lack of a principled way to determine which features of a representation must be updated when new information becomes available. So far, proposed alternatives, including the imagistic model, have not resolved the problem. Waskan proposes the Intrinsic Cognitive Models hypothesis, according to which representational states can be conceptualized as the cognitive equivalent of scale models. Waskan argues further that the proposal that humans harbor and manipulate cognitive counterparts to scale models offers the only viable explanation for what most clearly differentiates humans from other creatures: the capacity to engage in truth-preserving manipulation of representations. The ICM hypothesis, he claims, can be distinguished from sentence-based accounts of truth preservation in a way that is fully compatible with what is known about the brain.
Using empirical evidence to attack intuitions can be epistemically dangerous, because various of the complaints that one might raise against them (e.g., that they are fallible; that we possess no non-circular defense of their reliability) can be raised just as easily against perception itself. But the opponents of intuition wish to challenge intuitions without at the same time challenging the rest of our epistemic apparatus. How might this be done? Let us use the term “hopefulness” to refer to the extent to which we possess a good capacity for the detection and correction of the errors of any fallible source of evidence. I argue that we should not trust putative sources of evidence that are substantially lacking in hopefulness (even if they are basically reliable), and that we are indeed already operating under such a norm in our ordinary and scientific practices. I argue further that the philosophical practice of the appeal to intuitions is, in these terms, badly hopeless...
This book presents a comprehensive guide to interpretative phenomenological analysis (IPA) which is an increasingly popular approach to qualitative inquiry taught to undergraduate and postgraduate students today. The first chapter outlines the theoretical foundations for IPA. It discusses phenomenology, hermeneutics, and idiography and how they have been taken up by IPA. The next four chapters provide detailed, step by step guidelines to conducting IPA research: study design, data collection and interviewing, data analysis, and writing up. In the next section, the authors give extended worked examples from their own studies in health, sexuality, psychological distress, and identity to illustrate the breadth and depth of IPA research. The final section of the book considers how IPA connects with other contemporary qualitative approaches like discourse and narrative analysis and how it addresses issues to do with validity.
Every religion offers both hope and fear. They offer hope in virtue of the benefits promised to adherents, and fear in virtue of costs incurred by adversaries. In traditional Christianity, the costs incurred are expressed in terms of the doctrine of hell, according to which each person consigned to hell receives the same infinite punishment. This strong view of hell involves four distinct theses. First, it maintains that those in hell exist forever in that state (the Existence Thesis) and that at least some human persons will end up in hell (the Anti-Universalism Thesis). Once in hell, there is no possibility of escape (the No Escape Thesis), and the justification of and purpose for hell is to mete out punishment to those whose earthly lives and character deserve it (the Retribution Thesis).
What is the relation between material objects and spacetime regions? Supposing that spacetime regions are one sort of substance, there remains the question of whether or not material objects are a second sort of substance. This is the question of dualistic versus monistic substantivalism. I will defend the monistic view. In particular, I will maintain that material objects should be identified with spacetime regions. There is the spacetime manifold, and the fundamental properties are pinned directly to it.
How should one understand knowledge-wh ascriptions? That is, how should one understand claims such as ‘‘I know where the car is parked,’’ which feature an interrogative complement? The received view is that knowledge-wh reduces to knowledge that p, where p happens to be the answer to the question Q denoted by the wh-clause. I will argue that knowledge-wh includes the question—to know-wh is to know that p, as the answer to Q. I will then argue that knowledge-that includes a contextually implicit question. I will conclude that knowledge is a question-relative state. Knowing is knowing the answer, and whether one knows the answer depends (in part) on the question.
Train crashes cause, on average, a handful of deaths each year in the UK. Technologies exist that would save the lives of some of those who die. Yet these technical innovations would cost hundreds of millions of pounds. Should we spend the money? How can we decide how to trade off life against financial cost? Such dilemmas make public policy a battlefield of values, yet all too often we let technical experts decide the issues for us. Can philosophy help us make better decisions? Ethics and Public Policy: A Philosophical Inquiry is the first book to subject important and controversial areas of public policy to philosophical scrutiny. Jonathan Wolff, a renowned philosopher and veteran of many public committees, such as the Gambling Review Body, introduces and assesses core problems and controversies in public policy from a philosophical standpoint. Each chapter is centred on an important area of public policy where there is considerable moral and political disagreement. Topics discussed include: Can we defend inflicting suffering on animals in scientific experiments for human benefit? What limits to gambling can be achieved through legislation? What assumptions underlie drug policy? Can we justify punishing those who engage in actions that harm only themselves? What is so bad about crime? What is the point of punishment? Other chapters discuss health care, disability, safety and the free market. Throughout the book, fundamental questions for both philosopher and policy maker recur: what are the best methods for connecting philosophy and public policy? Should thinking about public policy be guided by ‘an ideal world’ or the world we live in now? If there are ‘knock down’ arguments in philosophy why are there none in public policy?
Each chapter concludes with ‘Lessons for Philosophy’, making this book not only an ideal introduction for those coming to philosophy, ethics or public policy for the first time, but also a vital resource for anyone grappling with the moral complexity underlying policy debates.
Contextualism treats ‘knows’ as an indexical that denotes different epistemic properties in different contexts. Contrastivism treats ‘knows’ as denoting a ternary relation with a slot for a contrast proposition. I will argue that contrastivism resolves the main philosophical problems of contextualism, by employing a better linguistic model. Contextualist insights are best understood by contrastivist theory.