Abstract This paper explores the epistemological dimensions in the thinking of adolescent girls. Using two different kinds of data, (1) typical constructions of moral conflicts reported by adolescent girls that reveal either a justice or care (response) focus, and (2) girls' responses to a story completion exercise, this paper identifies epistemological perspectives in girls' thinking that link ideas of self, knowing, and morality. A hypothesized model of 'learner's interests and goals' and 'approaches to knowing' related to these conceptions of self and morality is presented, and implications for teaching are discussed.
Three years ago Robert Saltonstall, Jr., Associate Vice President for Operations at Harvard University, faced an increasingly common problem in business and institutions today when he severed 68 long-service wage employees to solve a problem of low productivity in a particular trade group. He did this using relatively conventional and creative techniques. But now, three years later, he asked Nona Lyons of the Harvard Graduate School of Education, who is researching the ethical dimensions of executives' decisions, to assist him in evaluating how these employees felt about the process. The employees' loyalty in spite of everything has caused Saltonstall to rethink the ethics of both his decision and its execution. In this article Saltonstall asks and answers many of the questions executives face when challenged to handle work reduction decisions in a more ethical way. And Lyons assists him with commentary on some of the current research on moral decision-making, which will help executives to understand why they find some of their decisions to be moral dilemmas. The article challenges executives to think about reorganization decisions in a participative way and suggests seven central issues executives should consider before commencing a participative approach. The article reaches no specific conclusion, but introduces some new ways to think about lay-off decisions and their ethical implications for those affected.
William Lyons presents an original thesis on introspection as self-interpretation in terms of a culturally influenced model. His work rests on a lucid, careful, and critical examination of the transformations that have occurred over the past century in the concepts and models of introspection in philosophy and psychology. He reviews the history of introspection in the work of Wundt, Boring, and William James, and reactions to it by behaviorists Watson, Lashley, Ryle, and Skinner.
In this study William Lyons presents a sustained and coherent theory of the emotions, one which draws extensively on the work of psychologists and physiologists in the area. Dr Lyons starts by giving a thorough and critical survey of other principal theories, before setting out his own 'causal-evaluative' account. In addition to giving an analysis of the nature of emotion - in which, Dr Lyons argues, evaluative attitudes play a crucial part - his theory throws light on the motivating role of emotions in our lives, our attitudes towards our emotions, and our responsibility for them.
Perception and Basic Beliefs brings together an important treatment of these major epistemological topics and provides a positive solution to the traditional problem of the external world.
Scientific realists have claimed that the posit that our theories are (approximately) true provides the best or the only explanation for their success. In response, I revive two non-realist explanations. I show that realists, in discarding them, have either misconstrued the phenomena to be explained or mischaracterized the relationship between these explanations and their own. I contend nonetheless that these non-realist competitors, as well as their realist counterparts, should be rejected; for none of them succeeds in explaining a significant list of successes. I propose a related non-realist explanation of success that appears to be the most suitable among those considered.
In response to historical challenges, advocates of a sophisticated variant of scientific realism emphasize that theoretical systems can be divided into numerous constituents. Setting aside any epistemic commitment to the systems themselves, they maintain that we can justifiably believe those specific constituents that are deployed in key successful predictions. Stathis Psillos articulates an explicit criterion for discerning exactly which theoretical constituents qualify. I critique Psillos's criterion in detail. I then test the more general deployment realist intuition against a set of well-known historical cases, whose significance has, I contend, been overlooked. I conclude that this sophisticated form of realism remains threatened by the historical argument that prompted it.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when, and only when, it increases the reliability of perception.
This volume collects David Lyons' well-known essays on Mill's moral theory and includes an introduction which relates the essays to prior and subsequent philosophical developments. Like the author's Forms and Limits of Utilitarianism (Oxford, 1965), the essays apply analytical methods to issues in normative ethics. The first essay defends a refined version of the beneficiary theory of rights against H.L.A. Hart's important criticisms. The central set of essays develops new interpretations of Mill's moral theory with the aim of determining how far rights can be incorporated in a utilitarian framework. These essays show that Mill's analysis of moral concepts promises to accommodate the argumentative force of rights, and they also provide a significant new reading of Mill's theory of liberty. The last essay argues that the promise of Mill's theory of justice cannot be fulfilled. Utilitarianism is unable to account for crucial features of moral rights, or even for the moral force of legal rights whose existence might be justified on utilitarian grounds.
David Lyons is one of the pre-eminent philosophers of law active in the United States. This volume comprises essays written over a period of twenty years in which Professor Lyons outlines his fundamental views about the nature of law and its relation to morality and justice. The underlying theme of the book is that a system of law has only a tenuous connection with morality and justice. Contrary to those legal theorists who maintain that no matter how bad the law of a community might be, strict conformity to existing law automatically dispenses 'formal' justice, Professor Lyons contends that the law must earn the respect that it demands. Moreover, we cannot, as some would suggest, interpret law in a value-neutral manner. Rather courts should interpret statutes, judicial precedents, and constitutional provisions in terms of values that would justify those laws. In this way officials can promote the justifiability of what they do to people in the name of law, and can help the law live up to its moral pretensions.
Raftopoulos’s most recent book argues, among other things, for the cognitive impenetrability of early vision. Before we can assess any such claims, we need to know what’s meant by “early vision” and by “cognitive penetration”. In this contribution to this book symposium, I explore several different things that one might mean – indeed, that Raftopoulos might mean – by these terms. I argue that whatever criterion we choose for delineating early vision, we need a single criterion, not a mishmash of distinct criteria. And I argue against defining cognitive penetration in partly epistemological terms, although it is fine to offer epistemological considerations in defending some definitions as capturing something of independent interest. Finally, I raise some questions about how we are to understand the “directness” of certain putative cognitive influences on perception and about whether there’s a decent rationale for restricting directness in the way that Raftopoulos apparently does.
Linguistic Semantics: An Introduction is the successor to Sir John Lyons's important textbook Language, Meaning and Context (1981). While preserving the general structure of the earlier book, the author has substantially expanded its scope to introduce several topics that were not previously discussed, and to take account of new developments in linguistic semantics over the past decade. The resulting work is an invaluable guide to the subject, offering clarifications of its specialised terms and explaining its relationship to formal and philosophical semantics and to contemporary pragmatics. With its clear and accessible style it will appeal to a wide student readership. Sir John Lyons is one of the most important and internationally renowned contributors to the study of linguistics. His many publications include Introduction to Theoretical Linguistics (1968) and Semantics (1977).
What is intentionality? Intentionality is a distinguishing characteristic of states of mind: that they are about things outside themselves. About this book: William Lyons explores various ways in which philosophers have tried to explain intentionality, and then suggests a new way. Part I of the book gives a critical account of the five most comprehensive and prominent current approaches to intentionality. These approaches can be summarised as the instrumentalist approach, derived from Carnap and Quine and culminating in the work of Daniel Dennett; the linguistic approach, derived from the work of Chomsky and exhibited most fully in the work of Jerry Fodor; the biological approach, developed by Ruth Garrett Millikan, Colin McGinn, and others; the information-processing approach, which has been given a definitive form in the work of Fred Dretske; and the functional role approach of Brian Loar. In Part II, Professor Lyons sets out a multi-level, developmental approach to intentionality. Drawing upon work in neurophysiology and psychology, the author argues that intentionality is to be found, in different forms, at the levels of brain functioning, prelinguistic consciousness, language, and at the holistic level of 'whole person performance' which is demarcated by our ordinary everyday talk about beliefs, desires, hopes, intentions, and the other 'propositional attitudes'. Written in a direct, clear, and lively style, the extended survey of contemporary debate in Part I will be invaluable to the student of philosophy of mind or cognitive science, as well as to the scholars and graduate students who will find an original new theory to contend with in Part II.
Utilitarian generalization: Sometimes an act is criticized just because the results of everyone's acting similarly would be bad. The generalization test ...
Some forms of ethical relativism seem to endorse strict contradictions. Various forms of relativism are distinguished, and their vulnerability to such charges compared. Means of avoiding incoherence are considered. Relativistic justification seems either innocuous but nonrelativistic or else unintelligible. Relativistic analyses of moral judgments are implausible and seem required for no other purpose than to avoid charges of incoherence.
The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of folk psychology, and from much of the epistemology literature. But it is principled and empirically grounded, and shows good prospects for yielding the desired epistemological verdicts. The paper articulates and elaborates the theory, drawing out some of its consequences. Toward the end, the fleshed-out theory is applied to two important case studies: hallucination and cognitive penetration of perception.
The New Evil Demon Problem is supposed to show that straightforward versions of reliabilism are false: reliability is not necessary for justification after all. I argue that it does no such thing. The reliabilist can count a number of beliefs as justified even in demon worlds, others as unjustified but having positive epistemic status nonetheless. The remaining beliefs---primarily perceptual beliefs---are not, on further reflection, intuitively justified after all. The reliabilist is right to count these beliefs as unjustified in demon worlds, and it is a challenge for the internalist to be able to do so as well.
The principle of utility is Bentham's basic test for morals and legislation. But there is room for doubting what that principle is supposed to say. I shall argue that one important element of modern utilitarian doctrines cannot be found in Bentham's. Some aspects of his views will not be questioned here. He holds, for example, that acts should be appraised by their consequences alone. The effects that count are ‘pleasures’ and ‘pains’, that is, the effects upon human happiness, interest or welfare.
An introduction to the philosophy of law, which offers a modern and critical appraisal of all the main issues and problems. This has become a very active area in the last ten years, and one on which philosophers, legal practitioners and theorists and social scientists have tended to converge. The more abstract questions about the nature of law and its relationship to social norms and moral standards are now seen to be directly relevant to more practical and indeed pressing questions about the justification of punishment, civil disobedience, the enforcement of morality, and problems about justice, rights, welfare, and freedom. David Lyons is a shrewd, clear and systematic guide through this tangled area. The book presupposes no formal training in law or philosophy and is intended to serve as a textbook in a range of introductory courses.
Broadly speaking, the contemporary scientific realist is concerned to justify belief in what we might call theoretical truth, which includes truth based on ampliative inference and truth about unobservables. Many, if not most, contemporary realists say scientific realism should be treated as ‘an overarching scientific hypothesis’ (Putnam 1978, p. 18). In its most basic form, the realist hypothesis states that theories enjoying general predictive success are true. This, in turn, becomes a hypothesis to be tested. To justify our belief in the realist hypothesis, realists commonly put forward an argument known as the ‘no-miracles argument’. With respect to the basic hypothesis this argument can be stated as follows: it would be a miracle were our theories as successful as they are, were they not true; the only possible explanation for the general predictive success of our scientific theories is that they are true.
Why is it so hard to learn critical thinking skills? Traditional textbooks focus almost exclusively on logic and fallacious reasoning, ignoring two crucial problems. First, as psychologists have recently demonstrated, many of our mistakes are not caused by formal reasoning gone awry, but by our bypassing it completely; we instead favor more comfortable, but often unreliable, intuitive methods. Second, the evaluation of premises is of fundamental importance, especially in this era of fake news and politicized science. This highly innovative text is psychologically informed, both in its diagnosis of inferential errors and in teaching students how to watch out for and work around their natural intellectual blind spots. It also incorporates insights from epistemology and philosophy of science that are indispensable for learning how to evaluate premises. The result is a hands-on primer for real-world critical thinking. The authors bring over four combined decades of classroom experience and a fresh approach to the traditional challenges of a critical thinking course: effectively explaining the nature of validity, assessing deductive arguments, reconstructing, identifying, and diagramming arguments, and causal and probabilistic inference. Additionally, they discuss in detail important, frequently neglected topics, including testimony, the nature and credibility of science, rhetoric, and dialectical argumentation. Key Features and Benefits: Uses contemporary psychological explanations of, and remedies for, pervasive errors in belief formation; no other critical thinking text applies this psychological approach so generally. Assesses premises, notably premises based on the testimony of others, and the evaluation of news and other information sources; no other critical thinking textbook gives detailed treatment of this crucial topic, typically providing only a few remarks about when to accept expert opinion or argument from authority.
Covers argument evaluation before argument identification; other textbooks generally reverse the order and focus on the use of premise and conclusion indicator words as the primary tool for argument identification. Carefully explains the concept of validity, paying particular attention to distinguishing logical possibility from other species of possibility, and demonstrates how we may mistakenly judge invalid arguments as valid because of belief bias. Instead of assessing an argument’s validity using formal/mathematical methods, provides one technique that is generally applicable: explicitly showing that it is impossible to make the conclusion false and the premises true together. Uses frequency trees and the frequency approach to probability more generally, a simple method for understanding and evaluating quite complex probabilistic information. Uses argument maps, which have been shown to significantly improve students’ reasoning and argument evaluation.
This article endeavors to identify the strongest versions of the two primary arguments against epistemic scientific realism: the historical argument—generally dubbed “the pessimistic meta-induction”—and the argument from underdetermination. It is shown that, contrary to the literature, both can be understood as historically informed but logically valid modus tollens arguments. After specifying the question relevant to underdetermination and showing why empirical equivalence is unnecessary, two types of competitors to contemporary scientific theories are identified, both of which are informed by science itself. With the content and structure of the two nonrealist arguments clarified, novel relations between them are uncovered, revealing the severity of their collective threat against epistemic realism and its “no-miracles” argument. The final section proposes, however, that the realist’s axiological tenet “science seeks truth” is not blocked. An attempt is made to indicate the promise for a nonepistemic, purely axiological scientific realism—here dubbed “Socratic scientific realism.”
Cognitive penetration of perception is the idea that what we see is influenced by such states as beliefs, expectations, and so on. A perceptual belief that results from cognitive penetration may be less justified than a nonpenetrated one. Inferentialism is a kind of internalist view that tries to account for this by claiming that some experiences are epistemically evaluable, on the basis of why the perceiver has that experience, and the familiar canons of good inference provide the appropriate standards by which experiences are evaluated. I examine recent defenses of inferentialism by Susanna Siegel, Peter Markie, and Matthew McGrath and argue that the prospects for inferentialism are dim.
The cognitive neuropsychological understanding of a cognitive system is roughly that of a ‘mental organ’, which is independent of other systems, specializes in some cognitive task, and exhibits a certain kind of internal cohesiveness. This is all quite vague, and I try to make it more precise. A more precise understanding of cognitive systems will make it possible to articulate in some detail an alternative to the Fodorian doctrine of modularity (since not all cognitive systems are modules), but it will also provide a better understanding of what a module is (since all modules are cognitive systems).
David Lyons challenges us to confront grave injustices committed in the United States, from the colonists' encroachments on Indian lands to slavery and the legacy of racism. He calls upon legal and political theorists to take these social wrongs seriously in their approaches to moral obligation under law and the justification of civil disobedience.
Can beliefs that are not consciously formulated serve as part of an agent's evidence for other beliefs? A common view says no, any belief that is psychologically immediate is also epistemically immediate. I argue that some unconscious beliefs can serve as evidence, but other unconscious beliefs cannot. Person-level beliefs can serve as evidence, but subpersonal beliefs cannot. I try to clarify the nature of the personal/subpersonal distinction and to show how my proposal illuminates various epistemological problems and provides a principled framework for solving other problems.
How things look (or sound, taste, smell, etc.) plays two important roles in the epistemology of perception. First, our perceptual beliefs are epistemically justified, at least in part, in virtue of how things look. Second, whether a given belief is a perceptual belief, as opposed to, say, an inferential belief, is also at least partly a matter of how things look. Together, these yield an epistemically significant sense of looks. A standard view is that how things look, in this epistemically significant sense, is a matter of one's present perceptual phenomenology, of what nondoxastic experiential state one is in. On this standard view, these experiential states (a) determine which of my beliefs are perceptual beliefs and (b) are centrally involved in justifying these beliefs.
The scientific realism debate has now reached an entirely new level of sophistication. Faced with increasingly focused challenges, epistemic scientific realists have appropriately revised their basic meta-hypothesis that successful scientific theories are approximately true: they have emphasized criteria that render realism far more selective and, so, plausible. As a framework for discussion, I use what I take to be the most influential current variant of selective epistemic realism, deployment realism. Toward the identification of new case studies that challenge this form of realism, I break away from the standard list and look to the history of celestial mechanics, with an emphasis on twentieth-century advances. I then articulate two purely deductive arguments that, I argue, properly capture the historical threat to realism. I contend that both the content and form of these novel challenges seriously threaten selective epistemic realism. I conclude on a positive note, however, arguing for selective realism at a higher level. Even in the face of threats to its epistemic tenet, scientific realism need not be rejected outright: concern with belief can be bracketed while nonetheless advocating core realist tenets. I show that, in contrast with epistemic deployment realism, a purely axiological scientific realism can account for key scientific practices made salient in my twentieth-century case studies. And embracing the realist's favored account of inference, inference to the best explanation, while pointing to a set of the most promising alternative selective realist meta-hypotheses, I show how testing the latter can be immensely valuable to our understanding of science.
The axiological tenet of scientific realism, “science seeks true theories,” is generally taken to rest on a corollary epistemological tenet, “we can justifiably believe that our successful theories achieve (or approximate) that aim.” While important debates have centered on, and have led to the refinement of, the epistemological tenet, the axiological tenet has suffered from neglect. I offer what I consider to be needed refinements to the axiological postulate. After showing an intimate relation between the refined postulate and ten theoretical desiderata, I argue that the axiological postulate does not depend on its epistemological counterpart; epistemic humility can accompany us in the quest for truth. Upon contrasting my axiological postulate against the two dominant non-realist alternatives and the standard realist postulate, I contend that its explanatory and justificatory virtues render it, among the axiologies considered, the richest account of the scientific enterprise.
Outside of philosophy, ‘intuition’ means something like ‘knowing without knowing how you know’. Intuition in this broad sense is an important epistemological category. I distinguish intuition from perception and perception from perceptual experience, in order to discuss the distinctive psychological and epistemological status of evaluative property attributions. Although it is doubtful that we perceptually experience many evaluative properties and also somewhat unlikely that we perceive many evaluative properties, it is highly plausible that we intuit many instances of evaluative properties as such. The resulting epistemological status of evaluative property attributions is very much like it would be if we literally perceived such properties.
Much of the intuitive appeal of evidentialism results from conflating two importantly different conceptions of evidence. This is most clear in the case of perceptual justification, where experience is able to provide evidence in one sense of the term, although not in the sense that the evidentialist requires. I argue this, in part, by relying on a reading of the Sellarsian dilemma that differs from the version standardly encountered in contemporary epistemology, one that is aimed initially at the epistemology of introspection but which generalizes to theories of perceptual justification as well.
In this paper I challenge and adjudicate between the two positions that have come to prominence in the scientific realism debate: deployment realism and structural realism. I discuss a set of cases from the history of celestial mechanics, including some of the most important successes in the history of science. To the surprise of the deployment realist, these are novel predictive successes toward which theoretical constituents that are now seen to be patently false were genuinely deployed. Exploring the implications for structural realism, I show that the need to accommodate these cases forces our notion of “structure” toward a dramatic depletion of logical content, threatening to render it explanatorily vacuous: the better structuralism fares against these historical examples, in terms of retention, the worse it fares in content and explanatory strength. I conclude by considering recent restrictions that serve to make “structure” more specific. I show however that these refinements will not suffice: the better structuralism fares in specificity and explanatory strength, the worse it fares against history. In light of these case studies, both deployment realism and structural realism are significantly threatened by the very historical challenge they were introduced to answer.
There are two primary arguments against scientific realism, one pertaining to underdetermination, the other to the history of science. While these arguments are usually treated as altogether distinct, P. Kyle Stanford's ‘problem of unconceived alternatives’ constitutes one kind of synthesis: I propose that Stanford's argument is best understood as a broad modus ponens underdetermination argument, into which he has inserted a unique variant of the historical pessimistic induction. After articulating three criticisms against Stanford's argument and the evidence that he offers, I contend that, as it stands, Stanford's argument poses no threat to contemporary scientific realism. Nonetheless, upon identifying two useful insights present in Stanford's general strategy, I offer an alternative variant of the modus ponens underdetermination argument, one that, although historically informed by science, requires no inductive premises. I contend that this non-inductive but historically informed variant of the modus ponens clarifies and considerably strengthens the case against scientific realism.
The most promising contemporary form of epistemic scientific realism is based on the following intuition: belief should be directed, not toward theories as wholes, but toward particular theoretical constituents that are responsible for, or deployed in, key successes. While the debate on deployment realism is quite fresh, a significant degree of confusion has already entered into it. Here I identify five criteria that have sidetracked that debate. Setting these distractions aside, I endeavor to redirect the attention of both realists and non-realists to the fundamental intuition above. In more detail: I show that Stathis Psillos (1999) has offered an explicit criterion for picking out particular constituents, which, contrary to Kyle Stanford’s (2006a) criticisms, neither assumes the truth of theories nor requires hindsight. I contend, however, that in his case studies Psillos has not successfully applied his explicit criterion. After clarifying the various alternative criteria at work (in those case studies and in a second line of criticism offered by Stanford), I argue that, irrespective of Stanford’s criticisms, the explicit criterion Psillos does offer is not an acceptable one. Nonetheless, the deployment realist’s fundamental intuition withstands all of these challenges. In closing, I point in a direction toward which I’ve elsewhere focused, suggesting that, despite the legitimacy and applicability of the deployment realist’s intuition, the historical threat that prompted it remains.
Although known as the founder of modern utilitarianism and the source of analytical jurisprudence, Bentham today is infrequently read but often caricatured. The present book offers a reinterpretation of Bentham's main philosophical doctrines, his principle of utility and his analysis of law, as they are developed in Bentham's most important works. A new reading is also given to his theory of law, which suggests Bentham's insight, originality, and continued interest for philosophers and legal theorists. First published in 1973, this revised edition contains a new Preface, a revised Bibliography, and two new Indexes, one of Names and one of Subjects, which together replace the original index.