Theories of economic justice are characteristically based on abstract ethical concerns often unrelated to practical distributive results. Two decades ago, Rawls's theory of justice began as a reaction against the alleged ‘sacrifices’ condoned by utilitarian theory. One variant of this objection is that utilitarianism permits gross inequalities, severe deprivations of individual liberty, or even the enslavement of society's least well-off individuals. There are, however, more subtle forms of the objection. In Rawls, the objection is often raised without any claim that utilitarianism does in fact imply such gross deprivations in actual real-world circumstances. A second variant hinges, rather, on the milder claim that utilitarianism could condone such deprivations or sacrifices in some possible world—the objection being that utilitarianism improperly makes justice contingent, or uncertain, in this way. A third, still more abstract, variant would be that utilitarianism is flawed—not because of any practical distributive result, actual or hypothetical, but in theory—due to the way it treats individuals' interests, or the ‘concept of persons’ it presupposes.
Perhaps the most salient feature of Rawls's theory of justice which at once attracts supporters and repels critics is its apparent egalitarian conclusion as to how economic goods are to be distributed. Indeed, many of Rawls's sympathizers may find this result intuitively appealing, and regard it as Rawls's enduring contribution to the topic of economic justice, despite technical deficiencies in Rawls's contractarian, decision-theoretic argument for it which occupy the bulk of the critical literature. Rawls himself, having proposed a “coherence” theory of justification in metaethics, must regard the claim that his distributive criterion “is a strongly egalitarian conception” as independently a part of the overarching moral argument. The alleged egalitarian impact of Rawls's theory is crucial again in normative ethics where Rawls is thought to have developed a major counter-theory to utilitarianism, one of the most popular criticisms of which has been its alleged inadequacy in handling questions of distributive justice. Utilitarians can argue, however, as Brandt recently has, that the diminishing marginal utility of money, along with ignorance of income-welfare curves, would require a utility-maximizing distribution to be substantially egalitarian. The challenge is therefore for Rawls to show that his theory yields an ethically preferable degree of equality.
Torin Alter (2013) attempts to rescue phenomenal concepts and the knowledge argument from the critique of Ball 2009 by appealing to conceptual mastery. I show that Alter’s appeal fails, and describe general features of conceptual mastery that suggest that no such appeal could succeed.
In this lively and entertaining book, Terence Ball maintains that 'classic' works in political theory continue to speak to us only if they are periodically re-read and reinterpreted from alternative perspectives. That, the author contends, is how these works became classics, and why they are regarded as such. Ball suggests a way of reading that is both 'pluralist' and 'problem-driven'--pluralist in that there is no one right way to read a text, and problem-driven in that the reinterpretation is motivated by problems that emerge while reading these texts. In addition, the subsequent readings and interpretations become more and more suffused with the interpretations of others. This tour de force, always entertaining and eclectic, focuses on the core problems surrounding many of the major thinkers. Was Machiavelli really amoral? Why did language matter so much to Hobbes--and why should it matter to us? Are the roots of the totalitarian state to be found in Rousseau? Were the utilitarians sexist in their view of the franchise? The author's aim is to show how a pluralist and problem-centered approach can shed new light on old and recent works in political theory, and on the controversies that continue over their meaning and significance. Written in a lively and accessible style, the book will provoke debate among students and scholars alike.
The too-often unhappy 'marriage' of political theory and political science has long been a source of anguish for both partners. Should this troubled partnership be dissolved? Or might this marriage yet be saved? Ball answers the former question negatively and the latter affirmatively. Playing the part of therapist instead of theorist, he selectively recounts a number of episodes which estranged the partners and strained the marriage. And yet, he concludes that the conflicts were in hindsight more constructive than destructive, benefiting both partners in heretofore unexpected ways and perhaps paving a path toward reconciliation and rapprochement.
Philip Ball explores the science of the shapes we see in nature, revealing how, from the stripes of a zebra to the development of a snowflake or even a human embryo, there is a pattern-forming tendency in the basic processes of nature, and from a few simple themes, and the repetition of simple rules, endless beautiful variations can arise.
Certain puzzling cases have been discussed in the literature recently which appear to support the thought that knowledge can be obtained by way of deduction from a falsehood; moreover, these cases put pressure, prima facie, on the thesis of counter closure for knowledge. We argue that the cases do not involve knowledge from falsehood; despite appearances, the false beliefs in the cases in question are causally, and therefore epistemologically, incidental, and knowledge is achieved despite falsehood. We also show that the principle of counter closure, and the concomitant denial of knowledge from falsehood, is well motivated by considerations in epistemological theory--in particular, by the view that knowledge is first in the epistemological order of things.
Kaplan (1989) famously claimed that monsters--operators that shift the context--do not exist in English and "could not be added to it". Several recent theorists have pointed out a range of data that seem to refute Kaplan's claim, but others (most explicitly Stalnaker 2014) have offered a principled argument that monsters are impossible. This paper interprets and resolves the dispute. Contra appearances, this is no dry, technical matter: it cuts to the heart of a deep disagreement about the fundamental structure of a semantic theory. We argue that: (i) the interesting notion of a monster is not an operator that shifts some formal parameter, but rather an operator that shifts parameters that play a certain theoretical role; (ii) one cannot determine whether a given semantic theory allows monsters simply by looking at the formal semantics; (iii) theories which forbid shifting the formal "context" parameter are perfectly compatible with the existence of monsters (in the interesting sense). We explain and defend these claims by contrasting two kinds of semantic theory--Kaplan's (1989) and Lewis's (1980).
It has long been widely agreed that some concepts can be possessed only by those who have undergone a certain type of phenomenal experience. Orthodoxy among contemporary philosophers of mind has it that these phenomenal concepts provide the key to understanding many disputes between physicalists and their opponents, and in particular offer an explanation of Mary’s predicament in the situation exploited by Frank Jackson's knowledge argument. I reject the orthodox view; I deny that there are phenomenal concepts. My arguments exploit the sort of considerations that are typically used to motivate externalism about mental content. Although physicalists often appeal to phenomenal concepts to defend their view against the knowledge argument, I argue that this is a mistake. The knowledge argument depends on phenomenal concepts; if there are no phenomenal concepts, then the knowledge argument fails.
Although widely studied in other domains, relatively little is known about the metacognitive processes that monitor and control behaviour during reasoning and decision-making. In this paper, we examined the conditions under which two fluency cues are used to monitor initial reasoning: answer fluency, or the speed with which the initial, intuitive answer is produced, and perceptual fluency, or the ease with which problems can be read. The first two experiments demonstrated that answer fluency reliably predicted Feeling of Rightness judgments to conditional inferences and base rate problems, which subsequently predicted the amount of deliberate processing as measured by thinking time and answer changes; answer fluency also predicted retrospective confidence judgments. Moreover, the effect of answer fluency on reasoning was independent from the effect of perceptual fluency, establishing that these are empirically independent constructs. In five experiments with a variety of reasoning problems similar to those of Alter et al., we found no effect of perceptual fluency on FOR, retrospective confidence or accuracy; however, we did observe that participants spent more time thinking about hard-to-read stimuli, although this additional time did not result in answer changes. In our final two experiments, we found that perceptual disfluency increased accuracy on the CRT, but only amongst participants of high cognitive ability. As Alter et al.’s samples were gathered from prestigious universities, collectively, the data to this point suggest that perceptual fluency prompts additional processing in general, but this processing may result in higher accuracy only for the most cognitively able.
Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such checklists are usually developed independently by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of single checklists may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and establishing where they overlap is far from straightforward. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult and inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations); a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.
This paper discusses some problems with the field of educational studies and considers the role of post-structuralist theory in shifting the study of education away from a 'technical rationalist' approach (as evidenced in the case of much research on educational management and school effectiveness) towards an 'intellectual intelligence' stance that stresses contingency, disidentification and risk-taking.
Stewart Cohen’s New Evil Demon argument raises familiar and widely discussed concerns for reliabilist accounts of epistemic justification. A now standard response to this argument, initiated by Alvin Goldman and Ernest Sosa, involves distinguishing different notions of justification. Juan Comesaña has recently and prominently claimed that his Indexical Reliabilism (IR) offers a novel solution in this tradition. We argue, however, that Comesaña’s proposal suffers serious difficulties from the perspective of the philosophy of language. More specifically, we show that the two readings of sentences involving the word ‘justified’ which are required for Comesaña’s solution to the problem are not recoverable within the two-dimensional framework of Robert Stalnaker to which he appeals. We then consider, and reject, an attempt to overcome this difficulty by appeal to a complication of the theory involving counterfactuals, and conclude the paper by sketching our own preferred solution to Cohen’s New Evil Demon.
In this article, I offer a new analysis of knowledge: knowledge, I claim, is normal belief. I begin with what I take to be the conceptual truth that knowledge is epistemically justified, or permissible, belief. I then argue that this in turn is simply doxastically normal belief, first clarifying what is meant by this claim, and then providing reasons to think that normal belief, so understood, must be true and safe from error, making it a good candidate for knowledge.
Physicalists about the mind are committed to claims about property identities. Following Kripke's well-known discussion, modal arguments have emerged as major threats to such claims. This paper argues that modal arguments can be resisted by adopting a counterpart theoretic account of modal claims, and in particular modal claims involving properties. Thus physicalists have a powerful motive to adopt non-Kripkean accounts of the metaphysics of modality and the semantics of modal expressions.
An experiment is reported examining dual-process models of belief bias in syllogistic reasoning using a problem complexity manipulation and an inspection-time method to monitor processing latencies for premises and conclusions. Endorsement rates indicated increased belief bias on complex problems, a finding that runs counter to the “belief-first” selective scrutiny model, but which is consistent with other theories, including “reasoning-first” and “parallel-process” models. Inspection-time data revealed a number of effects that, again, counted against the selective scrutiny model. The most striking inspection-time result was an interaction between logic and belief on premise-processing times, whereby belief-logic conflict problems promoted increased latencies relative to non-conflict problems. This finding challenges belief-first and reasoning-first models, but is directly predicted by parallel-process models, which assume that the outputs of simultaneous heuristic and analytic processing streams lead to an awareness of belief-logic conflicts that then require time-consuming resolution.
Frank Hindriks has attempted to derive a variant of Timothy Williamson’s knowledge rule for assertion on the basis of a more fundamental belief expression analysis of that speech act. I show that his attempted derivation involves a crucial equivocation between two senses of ‘must,’ and therefore fails. I suggest two possible repairs; but I argue that even if they are successful, we should prefer Williamson’s fully general knowledge rule to Hindriks’s restricted moral norm.
Millians about proper names typically claim that it is knowable apriori that Hesperus is Phosphorus. We argue that they should claim instead that it is knowable only aposteriori that Hesperus is Hesperus, since the Kripke-Putnam epistemic arguments against descriptivism are special cases of Quinean arguments that nothing is knowable apriori, and Millians have no resources to resist the more general Quinean arguments.
Matching bias in syllogistic reasoning: Evidence for a dual-process account from response times and confidence ratings (2013). Thinking & Reasoning, 19(1), 54-77. doi:10.1080/13546783.2012.735622.
We report an experiment investigating the “special-process” theory of insight problem solving, which claims that insight arises from non-conscious, non-reportable processes that enable problem re-structuring. We predicted that reducing opportunities for speech-based processing during insight problem solving should permit special processes to function more effectively and gain conscious awareness, thereby facilitating insight. We distracted speech-based processing by using either articulatory suppression or irrelevant speech, with findings for these conditions supporting the predicted insight facilitation effect relative to silent working or thinking aloud. The latter condition was included to investigate the currently contested effect of “verbal overshadowing” on insight, whereby thinking aloud is claimed to hinder the operation of special, non-reportable processes. Whilst verbal overshadowing was not evident in final solution rates, there was nevertheless support for verbal overshadowing up to and beyond..
It is widely believed that Twin-Earth-style thought experiments show that the contents of a person's thoughts fail to supervene on her intrinsic properties. Several recent philosophers have made the further claim that Twin-Earth-style thought experiments produce metaphysically necessary conditions for the possession of certain concepts. I argue that the latter view is false, and produce counterexamples to several proposed conditions. My thesis is of particular interest because it undermines some attempts to show that externalism is incompatible with privileged access.
We applaud many aspects of Elqayam & Evans' (E&E's) call for a descriptivist research programme in studying reasoning. Nevertheless, we contend that normative benchmarks are vital for understanding individual differences in performance. We argue that the presence of normative responses to particular problems by certain individuals should inspire researchers to look for converging evidence for analytic processing that may have a normative basis.
Since 1997 there has been a significant increase in the number and percentage of Kansas farmers who are women. Using Reskin and Roos’ model of “job queues and gender queues” I analyze changes in the agricultural industry in Kansas that resulted in more women becoming “principal farm operators” in the state. I find there are three changes largely responsible for women increasing their representation in the occupation: an increase in the demand for niche products, a decrease in the average farm size, and greater societal acceptance of women as farmers. This study adds to the growing literature on women principal farm operators in developed countries, and is among the first to explore why women are becoming a larger percentage of the occupation in the United States.
Contemporary bioethical theory relies upon the concept of informed consent to protect against abuses of patient autonomy. Due to the complexity of the informed consent process, however, many patients rely more on their trust in their health care providers than they do upon their own ability to decide whether or not to give informed consent. Reformation theologian John Calvin placed a strong emphasis on the decision-maker's duty to respect the trust that others repose in the decision-maker. In keeping with Calvin's concept of the duty to honor that trust, bioethics would do well to complement its current emphasis on informed consent with an emphasis on the protection of patient trust.
Two experiments are reported that employed think-aloud methods to test predictions concerning relevance effects and rationalisation processes derivable from Evans' (1996) heuristic-analytic theory of the selection task. Evans' account proposes that card selections are triggered by relevance-determining heuristics, with analytic processing serving merely to rationalise heuristically cued decisions. As such, selected cards should be associated with more references to both their facing and their hidden sides than rejected cards, which are not subjected to analytic rationalisation. Experiment 1 used a standard selection-task paradigm, with negative components permuted through abstract conditional rules. Support was found for all heuristic-analytic predictions. This evidence was shown to be robust in Experiment 2, where "select - don't select" decisions were enforced for all cards. Both experiments also clarify the role played by secondary heuristics in cueing the consideration of hidden card values during rationalisation. We suggest that whilst Evans' heuristic-analytic model and Oaksford and Chater's (e.g., 2003) optimal data selection model can provide compelling accounts of our protocol findings, the mental models theory fares less well as an explanation of our full dataset.
This paper is based on case-study research in four English secondary schools. It explores the pressure placed on English and mathematics departments because of their results being reported in annual performance tables. It examines how English and maths departments enact policies of achievement, the additional power and extra resources the pressure to achieve brings and the possibility of resistance.
Though modern non-cognitivists in ethics characteristically believe that values are irreducible to facts, they nevertheless believe that values are determined by facts, viz., those specified in functionalist, explanatory theories of the evolutionary origin of morality. The present paper probes the consistency of this position. The conventionalist theories of Hume and Harman are examined, and are seen not to establish a tight determinative reduction of values to facts. This result is illustrated by reference to recent theories of the sociobiological mechanisms involved in moral evolution. Though explanatory theories have linguistic implications, exaggerated in Harman's linguistic form of social relativism, they fail to establish the semantic reductionism which non-cognitivists reject under the rubric of ethical naturalism. It is concluded that explanatory forms of naturalism, the best of which is a functionalist-utilitarian account, are compatible with the fact/value distinction.