In this incisive new book one of Britain's most eminent philosophers explores the often overlooked tension between voluntariness and involuntariness in human cognition. He seeks to counter the widespread tendency for analytic epistemology to be dominated by the concept of belief. Is scientific knowledge properly conceived as being embodied, at its best, in a passive feeling of belief or in an active policy of acceptance? Should a jury's verdict declare what its members involuntarily believe or what they voluntarily accept? And should statements and assertions be presumed to express what their authors believe or what they accept? Does such a distinction between belief and acceptance help to resolve the paradoxes of self-deception and akrasia? Must people be taken to believe everything entailed by what they believe, or merely to accept everything entailed by what they accept? Through a systematic examination of these problems, the author sheds new light on issues of crucial importance in contemporary epistemology, philosophy of mind, and cognitive science.
The book was planned and written as a single, sustained argument, though earlier versions of a few parts of it have appeared separately. Its object is both to establish the existence of the paradoxes and to describe a non-Pascalian concept of probability in terms of which one can analyse the structure of forensic proof without giving rise to such typical signs of theoretical misfit. Neither the complementational principle for negation nor the multiplicative principle for conjunction applies to the central core of any forensic proof in the Anglo-American legal system. The book comprises four parts, which have been written in such a way that they may be read in different orders by different kinds of reader.
Jonathan Cohen's book provides a lucid and penetrating treatment of the fundamental issues of contemporary analytical philosophy. This field now spans a greater variety of topics and a wider divergence of opinion than it did fifty years ago, and Cohen's book addresses the presuppositions implicit in it and the patterns of reasoning on which it relies.
Two new philosophical problems surrounding the gradation of certainty began to emerge in the 17th century and are still very much alive today. One is concerned with the evaluation of inductive reasoning, whether in science, jurisprudence, or elsewhere; the other with the interpretation of the mathematical calculus of chance. This book, aimed at non-specialists, investigates both problems and the extent to which they are connected. Cohen demonstrates the diversity of logical structures that are available for judgements of probability, and explores the rationale for their appropriateness in different contexts of application. Thus his study deals with the complexity of the underlying philosophical issues without simply cataloging alternative conceptions or espousing a particular "favorite" theory.
The tapestry of Wilfrid Sellars’s writings is dauntingly rich in stimulus and suggestion. I shall take up here an intriguing strand of thought that was woven into one of his early papers ‘Language, Rules and Behavior’, and I shall discuss some of the issues to which it gives rise. Sellars was concerned in that paper with the procedures by which people evaluate actions as right or wrong, arguments as valid or invalid, and cognitive claims as well or ill grounded. He sought to map out a true via media, in his treatment of such procedures, between rationalistic apriorism and what, for want of a better term then, he called ‘descriptivism’, by which he understood the claim that all meaningful concepts and problems belong to the empirical or descriptive sciences, including the sciences of human behaviour. Sellars was thus led to ask not only ‘What sort of a thing is justification?’ but also ‘How do we come to accept the rules or laws that license justifications?’ And he saw these questions as arising in relation to any dialogue in which one party seeks to justify something to another. With regard to the justification of actions he held that the intuitionism of Ross, Prichard and Ewing was reasonably faithful to the phenomenology of moral thought and experience, though he did not agree with their belief in a nonnatural quality or relation that may belong to actions over and above their empirical characteristics. With regard to the justification of predictions he held that the relevant covering laws are ultimately rendered acceptable by an appeal to instances of their application. But he was reluctant to conceive this kind of appeal as an inductive procedure.
Instead he called it an application of Socratic method, since the purpose of this method is to make explicit the rules that we have implicitly adopted for thought and action, and Sellars interpreted our judgments to the effect that A causally necessitates B as the expression of a rule governing our use of the terms ‘A’ and ‘B’: science consists in the attempt, by remodeling human language, to develop a system of rule-governed behaviour which will adjust the human organism to the environment.
No one nowadays would deny the importance of conceptual innovation in the growth of scientific knowledge. But how is it possible? And by this I do not mean: what kinds of social, economic, or mental developments are causally responsible for promoting it? That is a question for historians, sociologists and psychologists of science to answer. Instead I shall concern myself with a more philosophical issue, namely: how can the possibility of conceptual innovation be compatible with the way in which we reason about language, meaning and understanding - i.e., what adjustments in, or constraints on, the framework of such reasoning are forced on us by acceptance of this possibility? In particular, does it fit in with the project of reconstructing scientific reasoning in artificial languages of the kind Leibniz proposed in the 17th century or Carnap in the 20th?
Recent interest in the problem of verisimilitude stemmed originally from Popper's desire to provide a non-inductive criterion of merit that will select between two false theories. But the problem has also been taken up by others who are not committed to Popper's anti-inductivism. Indeed Ilkka Niiniluoto has argued that the estimated degree of truthlikeness of a generalisation g which is compatible with evidence e can be equated with the inductive probability of g on e, whenever g is a constituent in Hintikka's sense. It might therefore be worthwhile to approach the problem afresh, in order to determine quite generally whether a measure of verisimilitude is what is principally needed for the evaluation of scientific progress. I shall argue that it is not.
In this original and incisive book, one of Britain's most eminent philosophers contends that those who analyse the concept of knowledge do not distinguish adequately between involuntary belief and voluntary acceptance. The distinction, elucidated by the author, turns out to be vital for understanding many important issues in epistemology, philosophy of mind, and cognitive science.
The classical analysis of relevance in probabilistic terms does not fit legal, moral or conversational relevance, and, though analysis in terms of a psychological model may fit conversational relevance, it certainly does not fit legal, moral or evidential relevance. It is important to notice here that some sentences are ambiguous between conversational and non-conversational relevance. But R is relevant to a question Q if and only if R is a reason, though not necessarily a complete or conclusive reason, for accepting or rejecting something as an answer to Q. Reasons of this kind are governed by appropriate covering laws or principled probabilities, and a number of questions thus arise about the relationship between relevance and certain formal-logical properties.