Contemporary accounts of what it is for an agent to be justified in holding a given belief commonly carry substantive commitments concerning what cognitive processes can and should be like. In this paper, we argue that concern for the plausibility of such psychological commitments leads to significant epistemological results. In particular, it leads to a multi-faceted epistemology in which elements of traditionally conflicting epistemologies are vindicated within a single epistemological account. We suggest thinking of the epistemologically relevant cognitive processes in terms of the metaphor of an iceberg--the accessible and articulable states that have been the exclusive focus of much epistemology must, for reasons that we explain, comprise only a proper subset of epistemologically relevant processing, even as only a part of an iceberg is exposed to view. When one focuses on the interaction of accessible states and articulable information, the structure of epistemic justification looks rather like what has been called structural contextualism (Timmons 1993, Henderson 1994b). It might also be called quasi-foundationalist. Yet, given the sort of creatures we are, in attending to our epistemological tasks we must rely on processing that is sensitive to information that we could not articulate, that is not accessible in the standard internalist sense. When one focuses on the full range of epistemologically important processes, the structure of what makes for justification may be rather more like that envisioned by some coherentists.
Actions are done for reasons. The reasons are beliefs and desires, which are physical states that causally interact in a rather special way. Their interaction exhibits a characteristic pattern: it is rational, at least in certain important respects.
Contents/Links
I. The Referentialist's Objection and the Issues it Raises
II. From Uses of Descriptions to Aspects of Concepts
III. A Straightforward Understanding
IV. A More Sophisticated Understanding
V. What is Attributively Associated with "Justification"?
In What Philosophers Know, Gary Gutting provides an epistemology of philosophical reflection. This paper focuses on the roles that various intuitive inputs are said to play in philosophical thought. Gutting argues that philosophers are defeasibly entitled to believe some of these, prior to the outcome of the philosophical reflection, and that they then rightly serve as significant (again defeasible) anchors on reflection. This paper develops a view of epistemic entitlement and applies it to argue that many prephilosophical convictions of the kind Gutting discusses would be just the sort of belief for which entitlement would plausibly be defeated from the start. They then could not properly play the role in philosophical reflection that Gutting envisions for them.
Our true home is wilderness, even the world of everyday.—Henry G. Bugbee, Jr. Henry Bugbee was born in New York City in 1915. This may not seem the most fortuitous birthplace for an interpreter of the wild rivers of Montana, but we might also remember that John Muir, interpreter of the High Sierras, was born in Scotland. Perhaps the movement west is an important prelude for such a vocation. Bugbee studied philosophy at Princeton and then at Berkeley, but before he could finish his graduate work, he was called for naval service in the Pacific. The time at sea was a formative wilderness experience, on which his writing draws heavily. Returning from sea, he finished his PhD and took a teaching position at Harvard. Not ..
We respond to the central concerns raised by our commentators to our book, The Epistemological Spectrum. Casullo believes that our account of what we term "low-grade a priori" justification provides important clarification of a kind of philosophical reflection. However, he objects to calling such reflection a priori. We explain what we think is at stake. Along the way, we comment on his idea that there may be an epistemic payoff to making a distinction between assumptions and presumptions. In the book, we argued that an epistemically important form of nonaccidental reliability can be understood as a matter of processes being "transglobally reliable under modulational control." Graham recommends another form of nonaccidental reliability, one rooted in evolutionary etiology. We explain why we think that the reliability of perceptual processes is best understood as turning on the kinds of modulational control that we highlight. We clarify how this approach represents a kind of reasonable epistemic patience--modulational control takes time, as it must turn on agents generating information about their own capacities and foibles. Lyons raises interesting questions regarding how (what we term) morphological content possessed by the agent can do the work that we set for it. We argue that it is necessary in order for agents to accommodate the background information that is relevant to many central problems of belief formation. We clarify how it can be expected to work.
Naturalized epistemology is not a recent invention, nor is it a philosophical invention. Rather, it is a cognitive phenomenon that is pervasive and desirable in human epistemic engagement with the world. It is a matter of the way that one's cognitive processes can be modulated by information gotten from those same or wider cognitive processes. Such modulational control enhances the reliability of one's cognitive processes in many ways--and judgments about objective epistemic justification consistently evince a reasonable demand for it. However, with suitable modulational control in place within an agent or a community of agents, the fitting cognitive processes take time to generate information that then engenders changes in processes and norms. Further, as there are significant historical and biographical contingencies involving trajectories through one's environment, there are contingencies in the information and modifications that will be engendered by suitable modulational control. As a result, what makes for objectively justified belief at a time will vary--as the fruits of suitable modulational control accrue over time. This is a moderate form of historicism about epistemic justification.
This paper explores a position that combines contextualism regarding knowledge with the idea that the central point or purpose of the concept of knowledge is to feature in attributions that keep epistemic gate for contextually salient communities. After highlighting the main outlines and virtues of the suggested gate-keeping contextualism, two issues are pursued. First, the motivation for the view is clarified in a discussion of the relation between evaluative concepts and the purposes they serve. This clarifies why one's sense for the point of an evaluative concept ought to constrain and inform one's understanding of the concept. Second, the paper explores ways of avoiding a problem in the author's earlier development of gate-keeping contextualism. The initial development of the view opened the door to a form of skepticism that would hobble an important facet of our social-epistemic lives.
This paper explores the role and limits of cognitive simulation in understanding or explaining others. In simulation, one puts one's own cognitive processes to work on pretend input similar to what one supposes the other plausibly had. Such a process is highly useful. However, it is also limited in important ways. Several limitations fall out from the various forms of cognitive diversity. Some of this diversity results from cultural differences, or from differences in individuals' cognitive biographies. Such diversity is clearly important in history. Some sorts of such diversity are discussed, with attention to the results of contemporary cognitive science. It is argued that one must sometimes employ mixed (simulation-based/theory-based) strategies, and that sometimes what is done will be neither purely simulation nor purely theory-based.
David Henderson and Terence Horgan set out a broad new approach to epistemology, which they see as a mixed discipline, having both a priori and empirical elements. They defend the roles of a priori reflection and conceptual analysis in philosophy, but their revisionary account of these philosophical methods allows them a subtle but essential empirical dimension. They espouse a dual-perspective position which they call iceberg epistemology, respecting the important differences between epistemic processes that are consciously accessible and those that are not. Reflecting on epistemic justification, they introduce the notion of transglobal reliability as the mark of the cognitive processes that are suitable for humans. Which cognitive processes these are depends on contingent facts about human cognitive capacities, and these cannot be known a priori.
Familiar accounts have it that one explains thoughts or actions by showing them to be rational. It is common to find that the standards of rationality presupposed in these accounts are drawn from what would be thought to be aprioristic sources. I advance an argument to show this must be mistaken. But recent work in epistemology and on rationality takes a less aprioristic approach to such standards. Does the new (psychological or cognitive scientific) realism in accounts of rationality itself significantly improve the prospects for unproblematic forms of rationalizing explanation? Do earlier misgivings about rationalizing explanation ring hollow when the rationality to be attributed is "naturalized"? Answer: while explanation in terms of naturalized rationality would be free of one fatal flaw possessed by explanation in terms of rationality understood in the traditional fashion, it would yet have parallel flaws.
The night sky has been radically altered by light pollution, artificially produced light that obscures the stars. The effects and costs of this are diverse and poorly appreciated. A survey of the economically quantifiable aspects of this problem demonstrates that the value of the starry sky is immense, and yet it remains stubbornly beyond the ken of the market. The attempts to quantify this value and the ultimate impossibility of the task give the lie to the economic pretense that the dollar can commensurate all value. The case of light pollution exemplifies the importance of regulation to the protection of environmental value.
The concept of knowledge is used to certify epistemic agents as good sources (on a certain point or subject matter) for an understood audience. Attributions of knowledge and denials of knowledge are used in a kind of epistemic gate keeping for (epistemic or practical) communities with which the attributor and interlocutors are associated. When combined with reflection on kinds of practical and epistemic communities, and their situated epistemic needs for gate keeping, this simple observation regarding the point and purpose of the concept of knowledge has rich implications. First, it gives one general reason to prefer contextualism over various forms of sensitive invariantism. Second, when gate keeping for a select community of experts or authorities, with an associated body of results on which folk generally might then draw (when gate keeping for a general source community), the contextual demands approximate those with which insensitive invariantists would be comfortable.
Wilderness is often understood as land untouched by people. On this reading, wilderness management seems to be a simple contradiction, but it is in fact a thriving and functional practice. Wilderness is not simply an absence of human influence, but the presence of something else. Wilderness is land characterized by the flourishing of natural purpose. When this is understood, wilderness management becomes intelligible and several recent criticisms of wilderness preservation are defused.
We propose an approach to epistemic justification that incorporates elements of both reliabilism and evidentialism, while also transforming these elements in significant ways. After briefly describing and motivating the non-standard version of reliabilism that Henderson and Horgan call "transglobal" reliabilism, we harness some of Henderson and Horgan's conceptual machinery to provide a non-reliabilist account of propositional justification (i.e., evidential support). We then invoke this account, together with the notion of a transglobally reliable belief-forming process, to give an account of doxastic justification.
One of the central points of contention in the epistemology of testimony concerns the uniqueness (or not) of the justification of beliefs formed through testimony--whether such justification can be accounted for in terms of, or 'reduced to,' other familiar sorts of justification, e.g. without relying on any epistemic principles unique to testimony. One influential argument for the reductionist position, found in the work of Elizabeth Fricker, argues by appeal to the need for the hearer to monitor the testimony for credibility. Fricker (1994) argues, first, that some monitoring for trustworthiness is required if the hearer is to avoid being gullible, and second, that reductionism but not anti-reductionism is compatible with ascribing an important role to the process of monitoring in the course of justifiably accepting observed testimony. In this paper we argue that such an argument fails.
We here propose an account of what it is for an agent to be objectively justified in holding some belief. We present in outline this approach, which we call transglobal reliabilism, and we discuss how it is motivated by various thought experiments. While transglobal reliabilism is an externalist epistemology, we think that it accommodates traditional internalist concerns and objections in a uniquely natural and respectful way.
In recent years the literature on bioethics has begun to pose the sociological challenge of how to explore organisational processes that facilitate a systemic response to ethical concerns. The present discussion seeks to make a contribution to this important new direction in ethical research by presenting findings from an Australian pilot study. The research was initiated by the Clinical Ethics Committee of Redland Hospital at Bayside Health Service District in Queensland, Australia, and explores health professionals' understanding of the nature of ethics and their experience with ethical decision-making within an acute medical ward. This study focuses on the actual experience, understanding and attitudes of clinical professionals in a general medical ward. In particular, the discussion explores the specific findings from the study concerned with how a multi-disciplinary team of health professionals define and operationalise the notion of ethics in an acute ward hospital setting. The key issue reported is that health professionals are not only able to clearly articulate notions of ethics, but that the notions expressed by a multi-disciplinary diversity of participants share a common definitional concept of ethics as patient-centred care. The central finding is that all professional groups indicated that there is a guiding principle to address their ethical sense of the 'good' or the 'ought' and that is to act in a way that furthered the interests of patients and their families. The findings affirm the importance of a sociological perspective as a productive new direction in bioethical research.
Descriptions of social norms can be explanatory. The erotetic approach to explanation provides a useful framework. I describe one very broad kind of explanation-seeking why-question, a genus that is common to the special sciences, and argue that descriptions of norms can serve as an answer to such why-questions. I draw upon Woodward's recent discussion of the explanatory role of generalizations with a significant degree of invariance. Descriptions of norms provide what is, in effect, a generalization regarding the kind of historically contingent system that a group or society is, a generalization with a significant degree of invariance. Key Words: explanation, invariance, norms, social sciences, erotetic, laws.
Eliminative materialism, as William Lycan (this volume) tells us, is materialism plus the claim that no creature has ever had a belief, desire, intention, hope, wish, or other "folk-psychological" state. Some contemporary philosophers claim that eliminative materialism is very likely true. They sketch certain potential scenarios, for the way theory might develop in cognitive science and neuroscience, that they claim are fairly likely; and they maintain that if such scenarios turned out to be the truth about humans, then eliminative materialism would be true. Broadly speaking, there are two ways to reply to such arguments, for those who maintain that eliminative materialism is false (or that the likelihood of its being true is very low). One way is to argue that the scenarios the eliminativists envision are themselves extremely unlikely--that we can be very confident, given what we now know (including nontendentious scientific knowledge), that those scenarios will not come to pass. The other is to argue that even if they did come to pass, this would not undermine common-sense psychology anyway. People would still have beliefs, etc. The two strategies are not incompatible; one could pursue them both. But the second strategy attacks eliminativism at a more fundamental level. And if it can be successfully carried out, then the dialectical state of play will be strikingly secure for folk psychology. For, then it will turn out that folk psychology simply is not hostage to the kinds of potential empirical-theoretical developments that the eliminativists envision. It doesn't matter, as far as the integrity of folk psychology is concerned, whether or not such scenarios are likely to come to pass.
Eliminativist arguments inevitably rely, often only implicitly, on certain assumptions about what it takes for a creature to have beliefs, desires, and other folk-psychological states--assumptions about some alleged necessary condition(s) for being a true believer (to adapt this colorful usage from Dennett 1987).
It seems that hope springs eternal for the cherished idea that norms (or normative principles) explain actions or regularities in actions. But it also seems that there are many ways of going wrong when taking norms and normative principles as explanatory. The author argues that neither norms nor normative principles--insofar as they are the sort of things with normative force--is explanatory of what is done. He considers the matter using both erotetic and ontic models of explanation. He further considers various understandings of norms. Key Words: explanation, norms, social science, rationality.
Reliabilists have argued that the important evaluative epistemic concept of being justified in holding a belief, at least to the extent that that concept is associated with knowledge, is best understood as concerned with the objective appropriateness of the processes by which a given belief is generated and sustained. In particular, they hold that a belief is justified only when it is fostered by processes that are reliable (at least minimally so) in the believer's actual world. Of course, reliabilists typically recognize other concepts of justification--typically subjective notions--which are given a noncompeting sort of epistemic legitimacy. However, they have tended to focus on the epistemically central notion of "strong justification," and have come to settle on this familiar reliabilist analysis, supposing that it pretty much exhausts what there is to say about "objective justification."
Alvin Goldman's contributions to contemporary epistemology are impressive—few epistemologists have provided others so many occasions for reflecting on the fundamental character of their discipline and its concepts. His work has informed the way epistemological questions have changed (and remained consistent) over the last two decades. We (the authors of this paper) can perhaps best suggest our indebtedness by noting that there is probably no paper on epistemology that either of us individually or jointly has produced that does not in its notes and references bear clear testimony to the influence of Professor Goldman's arguments. The present paper is no exception (and this would be a particularly inapt place to break with our tradition of indebtedness). Professor Goldman has produced a series of discussions that we find particularly important for coming to terms with the venerable idea that there may be truths that can be known a priori (Goldman 1992a, 1992b, 1999). We do not altogether follow his lead: while he draws on the idea that a priori justification has something to do with innateness or processes, we prefer to accentuate the idea that a priori justification turns on conceptually grounded truths and access via acquired conceptual competence (at least in many significant philosophical cases). Still, in developing our understanding we have been aided by much that Professor Goldman says regarding concepts, conceptual competence, and related psychological processes. The influences should become progressively clear, particularly in the later sections of this paper. What would it take for there to be a priori knowledge or justification? We can begin by reflecting on a widely agreed on answer to this question—one that purports to identify something that would at least be adequate for a priori justification.
The answer will then serve as one anchor for the present investigation, a bit of shared ground on which empiricists and rationalists can, and typically do, agree.
The doctrine is familiar. In a sentence, a priori truths are those that are knowable on the basis of reflection alone (independent of experience) by anyone who has acquired the relevant concepts. This expresses the classical conception of the a priori. Of course, there are those who despair of finding any truths that fully meet these demands. Some of the doubters are convinced, however, that the demands are somewhat inflated by an epistemological tradition that was nevertheless on to something of importance. These thinkers would then seek to reconceive the a priori somewhat--accommodating some of the classical demands within a "retentive analysis." Ultimately, we will urge a place for both the classical conception and a complementary revisionary but retentive conception as well.
Epistemology has recently come more and more to take the articulate form of an investigation into how we do, and perhaps might better, manage the cognitive chores of producing, modifying, and generally maintaining belief-sets with a view to having a true and systematic understanding of the world. While this approach has continuities with earlier philosophy, it admittedly makes a departure from the tradition of epistemology as first philosophy.
Three prominent economists born early in the twentieth century--James Buchanan, Jack Hirshleifer, and Simon Rottenberg--switched from a belief in socialism in their twenties or thirties to strong support for free markets. Interviews show that for all three, and especially for Buchanan and Rottenberg, what changed them is what they learned in their economics classes. For Hirshleifer, another major influence was the pact between Hitler and Stalin, which caused him to be more skeptical about leftist ideas and made him more open to intellectual criticisms of socialism.
The argument I present here is an example of the manner in which naturalizing epistemology can help address fairly traditional epistemological issues. I develop one argument against coherentist epistemologies of empirical knowledge. In doing so, I draw on BonJour (1985), for that account seems to me to indicate the direction in which any plausible coherentist account would need to be developed, at least insofar as such accounts are to conceive of justification in terms of an agent (minimally) possessing articulable reasons and arguments, as is standard. I end by indicating important elements of coherentist epistemology that can be salvaged in the face of my argument, provided we are willing to drop the traditional commitment to characterizing justification in terms of the structure of articulable argument.
By a macro-level feature, I understand any feature that supervenes on, and is thus realized in, lower-level features. Recent discussions by Kim have suggested that such features cannot be causally relevant insofar as they are not classically reducible to lower-level features. This seems to render macro-level features causally irrelevant. I defend the causal relevance of some such features. Such features have been thought causally relevant in many examples that have underpinned philosophical work on causality. Additionally, in certain typical biological cases, we conceive of causally relevant features at various compatible levels of analysis. When elaborated, these points make a strong prima facie case for macro-level causal relevance. However, we might abandon both the philosophical guideposts and the corresponding explanatory practice in the special sciences were we convinced that no reflective philosophical account could provide for the causal relevance there supposed. I show that such drastic measures are not necessary, for we can make sense of macro-level causal relevance by drawing on Paul Humphreys' recent work in ways suggested by the concrete examples considered here.
Rosenberg argues that intentional generalizations in the human sciences cannot be law-like because they are not amenable to significant empirical refinement. This irrefinability is said to result from the principle that supposedly controls in intentional explanation also serving as the standard for successful interpretation. The only credible evidence bearing on such a principle would then need to conform to it. I argue that psychological generalizations are refinable and can be nomic. I show how empirical refinement of psychological generalizations is possible by considering concrete cases. A sufficiently detailed view of the role of psychological generalizations in interpretation allows us to find in psychological investigations instances of bootstrap testing.