Two of the most influential theories about scientific inference are inference to the best explanation (IBE) and Bayesianism. How are they related? Bas van Fraassen has claimed that IBE and Bayesianism are incompatible rival theories, as any probabilistic version of IBE would violate Bayesian conditionalization. In response, several authors have defended the view that IBE is compatible with Bayesian updating. They claim that the explanatory considerations in IBE are taken into account by the Bayesian because the Bayesian either does or should make use of them in assigning probabilities (priors and/or likelihoods) to hypotheses. I argue that van Fraassen has not succeeded in establishing that IBE and Bayesianism are incompatible, but that the existing compatibilist response is also not satisfactory. I suggest that a more promising approach to the problem is to investigate whether explanatory considerations are taken into account by a Bayesian who assigns priors and likelihoods on his or her own terms. In this case, IBE would emerge from the Bayesian account, rather than being used to constrain priors and likelihoods. I provide a detailed discussion of the case of how the Copernican and Ptolemaic theories explain retrograde motion, and suggest that one of the key explanatory considerations is the extent to which the explanation a theory provides depends on its core elements rather than on auxiliary hypotheses. I then suggest that this type of consideration is reflected in the Bayesian likelihood, given priors that a Bayesian might be inclined to adopt even without explicit guidance by IBE. The aim is to show that IBE and Bayesianism may be compatible, not because they can be amalgamated, but rather because they capture substantially similar epistemic considerations.
1 Introduction
2 Preliminaries
3 Inference to the Best Explanation
4 Bayesianism
5 The Incompatibilist View: Inference to the Best Explanation Contradicts Bayesianism
5.1 Criticism of the incompatibilist view
6 Constraint-Based Compatibilism
6.1 Criticism of constraint-based compatibilism
7 Emergent Compatibilism
7.1 Analysis of Inference to the Best Explanation
7.1.1 Inference to the best explanation on specific hypotheses
7.1.2 Inference to the best explanation on general theories
7.1.3 Copernicus versus Ptolemy
7.1.4 Explanatory virtues
7.1.5 Summary
7.2 Bayesian account
8 Conclusion
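The core Bayesian machinery at issue in the abstract above can be illustrated numerically. The following is a minimal sketch with purely hypothetical toy numbers (the hypothesis names and probability values are illustrative assumptions, not drawn from the paper): conditionalization multiplies each prior by the likelihood of the evidence and renormalizes, so a theory whose explanation flows from its core elements (modeled here as a higher likelihood) gains posterior probability.

```python
def conditionalize(priors, likelihoods):
    """Bayesian conditionalization: posterior(H) is proportional to
    prior(H) * P(E | H), normalized over all hypotheses.

    priors and likelihoods are dicts keyed by hypothesis name.
    """
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical toy numbers: T1's explanation of the evidence depends only on
# its core elements (high likelihood); T2 leans on auxiliary hypotheses
# (lower likelihood). Equal priors, so the likelihoods drive the update.
posterior = conditionalize(
    priors={"T1": 0.5, "T2": 0.5},
    likelihoods={"T1": 0.9, "T2": 0.3},
)
# posterior: T1 = 0.75, T2 = 0.25
```

This is just the standard update rule; the paper's claim is that explanatory considerations of this kind can already be reflected in likelihoods a Bayesian would assign on independent grounds.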
Shenker has claimed that von Neumann's argument for identifying the quantum mechanical entropy with the von Neumann entropy, S(ρ) = -k tr(ρ log ρ), is invalid. Her claim rests on a misunderstanding of the idea of a quantum mechanical pure state. I demonstrate this, and provide a further explanation of von Neumann's argument.
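The formula S(ρ) = -k tr(ρ log ρ) can be computed directly from the eigenvalues of the density matrix ρ. A minimal numerical sketch (assuming k = 1 and 2×2 density matrices for illustration) shows the point relevant to pure states: a pure state has exactly one nonzero eigenvalue, equal to 1, so its entropy is zero, while the maximally mixed state has entropy log 2.

```python
import numpy as np

def von_neumann_entropy(rho, k=1.0):
    """S(rho) = -k * tr(rho log rho), computed via the eigenvalues of rho.

    Zero eigenvalues contribute nothing, since p log p -> 0 as p -> 0,
    so they are dropped before taking logarithms.
    """
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # discard numerical zeros
    return -k * float(np.sum(eigvals * np.log(eigvals)))

# A pure state |0><0|: one eigenvalue is 1, the rest are 0, so S = 0.
pure = np.array([[1.0, 0.0], [0.0, 0.0]])

# The maximally mixed state I/2: eigenvalues 1/2, 1/2, so S = k * log 2.
mixed = np.eye(2) / 2.0
```

The vanishing of S on every pure state is exactly the feature that makes the distinction between pure and mixed states central to arguments like von Neumann's.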
We propose an approach to epistemic justification that incorporates elements of both reliabilism and evidentialism, while also transforming these elements in significant ways. After briefly describing and motivating the non-standard version of reliabilism that Henderson and Horgan call “transglobal” reliabilism, we harness some of Henderson and Horgan’s conceptual machinery to provide a non-reliabilist account of propositional justification (i.e., evidential support). We then invoke this account, together with the notion of a transglobally reliable belief-forming process, to give an account of doxastic justification.
David Henderson and Terence Horgan set out a broad new approach to epistemology, which they see as a mixed discipline, having both a priori and empirical elements. They defend the roles of a priori reflection and conceptual analysis in philosophy, but their revisionary account of these philosophical methods allows them a subtle but essential empirical dimension. They espouse a dual-perspective position which they call iceberg epistemology, respecting the important differences between epistemic processes that are consciously accessible and those that are not. Reflecting on epistemic justification, they introduce the notion of transglobal reliability as the mark of the cognitive processes that are suitable for humans. Which cognitive processes these are depends on contingent facts about human cognitive capacities, and these cannot be known a priori.
Franck L. B. Meijboom: Problems of Trust: A Question of Trustworthiness. Journal article, DOI 10.1007/s10806-010-9300-4. Author: Martha L. Henderson, Master of Environmental Studies Program, The Evergreen State College, Olympia, WA 98505, USA. Journal of Agricultural and Environmental Ethics. Online ISSN 1573-322X; Print ISSN 1187-7863.
Contemporary accounts of what it is for an agent to be justified in holding a given belief commonly carry substantive commitments concerning what cognitive processes can and should be like. In this paper, we argue that concern for the plausibility of such psychological commitments leads to significant epistemological results. In particular, it leads to a multi-faceted epistemology in which elements of traditionally conflicting epistemologies are vindicated within a single epistemological account. We suggest thinking of the epistemologically relevant cognitive processes in terms of the metaphor of an iceberg--the accessible and articulable states that have been the exclusive focus of much epistemology must, for reasons that we explain, comprise only a proper subset of epistemologically relevant processing, even as only a part of an iceberg is exposed to view. When one focuses on the interaction of accessible states and articulable information, the structure of epistemic justification looks rather like what has been called structural contextualism (Timmons 1993, Henderson 1994b). It might also be called quasi-foundationalist. Yet, given the sort of creatures we are, in attending to our epistemological tasks we must rely on processing that is sensitive to information that we could not articulate, that is not accessible in the standard internalist sense. When one focuses on the full range of epistemologically important processes, the structure of what makes for justification may be rather more like that envisioned by some coherentists.
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher‐level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher‐level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories. *Received July 2009; revised October 2009. †To contact the authors, please write to: Leah Henderson, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 32D‐808, Cambridge, MA 02139; e‐mail: firstname.lastname@example.org.
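The two-level structure described above can be sketched numerically. The following is a hypothetical toy model (the paradigm and hypothesis names and all probability values are illustrative assumptions, not from the paper): evidence bears directly on lower-level hypotheses via their likelihoods, and the posterior over paradigms is obtained by marginalizing out those hypotheses, so lower-level learning drives higher-level change.

```python
def hierarchical_update(paradigm_priors, hypo_priors, likelihoods):
    """Two-level Bayesian update over (paradigm, hypothesis) pairs.

    paradigm_priors: P(paradigm)
    hypo_priors:     P(hypothesis | paradigm), a nested dict
    likelihoods:     P(E | hypothesis), applied at the lower level only
    Returns the posterior over paradigms, marginalizing out hypotheses.
    """
    joint = {}
    for para, p_para in paradigm_priors.items():
        for hypo, p_hypo in hypo_priors[para].items():
            joint[(para, hypo)] = p_para * p_hypo * likelihoods[hypo]
    total = sum(joint.values())
    posterior_para = {}
    for (para, _), p in joint.items():
        posterior_para[para] = posterior_para.get(para, 0.0) + p / total
    return posterior_para

# Hypothetical toy numbers: evidence favouring h2 raises the posterior of
# paradigm P2, under which h2 is the probable hypothesis.
post = hierarchical_update(
    paradigm_priors={"P1": 0.5, "P2": 0.5},
    hypo_priors={"P1": {"h1": 0.8, "h2": 0.2},
                 "P2": {"h1": 0.2, "h2": 0.8}},
    likelihoods={"h1": 0.1, "h2": 0.9},
)
# post: P1 = 0.26, P2 = 0.74
```

The paradigm itself never appears in a likelihood; it gains or loses credence only through the hypotheses it makes probable, which is the sense in which higher-level change is driven by evidence at lower levels.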
Alvin Goldman’s contributions to contemporary epistemology are impressive—few epistemologists have provided others so many occasions for reflecting on the fundamental character of their discipline and its concepts. His work has informed the way epistemological questions have changed (and remained consistent) over the last two decades. We (the authors of this paper) can perhaps best suggest our indebtedness by noting that there is probably no paper on epistemology that either of us individually or jointly have produced that does not in its notes and references bear clear testimony to the influence of Professor Goldman’s arguments. The present paper is no exception (and this would be a particularly inapt place to break with our tradition of indebtedness). Professor Goldman has produced a series of discussions that we find particularly important for coming to terms with the venerable idea that there may be truths that can be known a priori (Goldman 1992a, 1992b, 1999). We do not altogether follow his lead: while he draws on the idea that a priori justification has something to do with innateness or processes, we prefer to accentuate the idea that a priori justification turns on conceptually grounded truths and access via acquired conceptual competence (at least in many significant philosophical cases). Still, in developing our understanding we have been aided by much that Professor Goldman says regarding concepts, conceptual competence, and related psychological processes. The influences should become progressively clear, particularly in the later sections of this paper. What would it take for there to be a priori knowledge or justification? We can begin by reflecting on a widely agreed on answer to this question—one that purports to identify something that would at least be adequate for a priori justification.
The answer will then serve as one anchor for the present investigation, a bit of shared ground on which empiricists and rationalists can, and typically do, agree.
Familiar accounts have it that one explains thoughts or actions by showing them to be rational. It is common to find that the standards of rationality presupposed in these accounts are drawn from what would be thought to be aprioristic sources. I advance an argument to show this must be mistaken. But, recent work in epistemology and on rationality takes a less aprioristic approach to such standards. Does the new (psychological or cognitive scientific) realism in accounts of rationality itself significantly improve the prospects for unproblematic forms of rationalizing explanation? Do earlier misgivings about rationalizing explanation ring hollow when the rationality to be attributed is "naturalized"? Answer: while explanation in terms of naturalized rationality would be free of one fatal flaw possessed by explanation in terms of rationality understood in the traditional fashion, it would yet have parallel flaws.
The doctrine is familiar. In a sentence, a priori truths are those that are knowable on the basis of reflection alone (independent of experience) by anyone who has acquired the relevant concepts. This expresses the classical conception of the a priori. Of course, there are those who despair of finding any truths that fully meet these demands. Some of the doubters are convinced, however, that the demands are somewhat inflated by an epistemological tradition that was nevertheless on to something of importance. These thinkers would then seek to reconceive the a priori somewhat--accommodating some of the classical demands within a "retentive analysis." Ultimately, we will urge a place for both the classical conception and a complementary revisionary but retentive conception as well.
Eliminative materialism, as William Lycan (this volume) tells us, is materialism plus the claim that no creature has ever had a belief, desire, intention, hope, wish, or other "folk-psychological" state. Some contemporary philosophers claim that eliminative materialism is very likely true. They sketch certain potential scenarios, for the way theory might develop in cognitive science and neuroscience, that they claim are fairly likely; and they maintain that if such scenarios turned out to be the truth about humans, then eliminative materialism would be true. Broadly speaking, there are two ways to reply to such arguments, for those who maintain that eliminative materialism is false (or that the likelihood of its being true is very low). One way is to argue that the scenarios the eliminativists envision are themselves extremely unlikely--that we can be very confident, given what we now know (including nontendentious scientific knowledge), that those scenarios will not come to pass. The other is to argue that even if they did come to pass, this would not undermine common-sense psychology anyway. People would still have beliefs, etc. The two strategies are not incompatible; one could pursue them both. But the second strategy attacks eliminativism at a more fundamental level. And if it can be successfully carried out, then the dialectical state of play will be strikingly secure for folk psychology. For, then it will turn out that folk psychology simply is not hostage to the kinds of potential empirical-theoretical developments that the eliminativists envision. It doesn't matter, as far as the integrity of folk psychology is concerned, whether or not such scenarios are likely to come to pass.
Eliminativist arguments inevitably rely, often only implicitly, on certain assumptions about what it takes for a creature to have beliefs, desires, and other folk-psychological states--assumptions about some alleged necessary condition(s) for being a true believer (to adapt this colorful usage from Dennett 1987).
Common formulations of the principle of charity in translation seem to undermine attributions of irrationality in social scientific accounts that are otherwise unexceptionable. This I call the problem of irrationality. Here I resolve the problem of irrationality by developing two complementary views of the principle of charity. First, I develop the view (ill-developed in the literature at present) that the principle of charity is preparatory, being needed in the construction of provisional first-approximation translation manuals. These serve as the basis for explanatory accounts and associated refinements in the translation manual. In developing such explanatory accounts, the principle of charity is no longer constraining. Thus, the principle of charity applies only in the early stages of constructing translation manuals, and there is no problem of irrationality in the later stages of constructing translation manuals. Second, I reduce the principle of charity, where it does apply, to a special case of what I call the principle of explicability: so translate as to attribute explicable beliefs and practices to the speakers of the source-language. I show that the appropriate formulation of the principle of charity counsels just what the principle of explicability requires in the early stages of social scientific investigation.
This paper applies Plato’s cave allegory to Enron’s success and downfall. Plato’s famous tale of cave dwellers illustrates the different levels of truth and understanding. These levels include images, the sources of images, and the ultimate reality behind both. The paper first describes these levels of perception as they apply to Plato’s cave dwellers and then provides a brief history of the rise of Enron. Then we apply Plato’s levels of understanding to Enron, showing how the company created its image and presented information to support that image, and how the public eventually emerged from the cave to realize the truth about Enron’s actual accounting practices and financial state, which led to the corporation’s downfall. We find Plato’s allegory both useful in analyzing the relationship between Enron and the public and instructive about the power and moral responsibility of Enron’s executives.
An interpretation is offered of Thrasymachus' account of the nature of justice and just action in Book I of the 'Republic' which is internally consistent throughout on all important points. Just action is not defined in terms of its practical consequences, as many commentators assume, but rather in terms of its logical consequences vis-à-vis just agents. When one man acts justly towards another, the performance of the just act renders the just agent vulnerable to unfair or unjust exploitation by those with whom he deals. The "strong man", in Thrasymachus' sense, would thus never be a just man. It is argued that Socrates addresses himself to this position, but that, while the textual Thrasymachus is silenced, Socrates' best arguments are, in fact, inadequate to refute Thrasymachus.
It seems that hope springs eternal for the cherished idea that norms (or normative principles) explain actions or regularities in actions. But it also seems that there are many ways of going wrong when taking norms and normative principles as explanatory. The author argues that neither norms nor normative principles--insofar as they are the sort of things with normative force--is explanatory of what is done. He considers the matter using both erotetic and ontic models of explanation. He further considers various understandings of norms. Key Words: explanation, norms, social science, rationality.
Almost a hundred years ago, John Dewey clarified the relationship between democracy and education. However, the enactment of a 'deeply democratic' educational practice has proven elusive throughout the ensuing century, overridden by managerial approaches to schooling young people and to the standardized, technical preparation and professional development of teachers and educational leaders. A powerful counter-narrative to this 'standardized management paradigm' exists in the field of curriculum studies, but is largely ignored by mainstream approaches to the professional development of educators. This paper argues for a reconceptualized, differentiated, and 'disciplined' approach to the professional development of educators in democratic societies that builds capacity for curriculum leadership. In support of this proposal, we amplify the tenets of Dewey's pragmatic social and educational philosophy, which have long been at the heart of democratic educational thought, with Badiou's more contemporary thinking about the important relationships between truth as inspirational awakening, subjectification as existential commitment, and ethical fidelity as 'for all' action.
One of the central points of contention in the epistemology of testimony concerns the uniqueness (or not) of the justification of beliefs formed through testimony--whether such justification can be accounted for in terms of, or 'reduced to,' other familiar sorts of justification, e.g. without relying on any epistemic principles unique to testimony. One influential argument for the reductionist position, found in the work of Elizabeth Fricker, argues by appeal to the need for the hearer to monitor the testimony for credibility. Fricker (1994) argues, first, that some monitoring for trustworthiness is required if the hearer is to avoid being gullible, and second, that reductionism but not anti-reductionism is compatible with ascribing an important role to the process of monitoring in the course of justifiably accepting observed testimony. In this paper we argue that such an argument fails.
The concept of knowledge is used to certify epistemic agents as good sources (on a certain point or subject matter) for an understood audience. Attributions of knowledge and denials of knowledge are used in a kind of epistemic gate keeping for (epistemic or practical) communities with which the attributor and interlocutors are associated. When combined with reflection on kinds of practical and epistemic communities, and their situated epistemic needs for gate keeping, this simple observation regarding the point and purpose of the concept of knowledge has rich implications. First, it gives one general reason to prefer contextualism over various forms of sensitive invariantism. Second, when gate keeping for a select community of experts or authorities, with an associated body of results on which folk generally might then draw (when gate keeping for a general source community), the contextual demands approximate those with which insensitive invariantists would be comfortable.
Reliabilists have argued that the important evaluative epistemic concept of being justified in holding a belief, at least to the extent that that concept is associated with knowledge, is best understood as concerned with the objective appropriateness of the processes by which a given belief is generated and sustained. In particular, they hold that a belief is justified only when it is fostered by processes that are reliable (at least minimally so) in the believer’s actual world. Of course, reliabilists typically recognize other concepts of justification--typically subjective notions--which are given a noncompeting sort of epistemic legitimacy. However, they have tended to focus on the epistemically central notion of "strong justification," and have come to settle on this familiar reliabilist analysis, supposing that it pretty much exhausts what there is to say about "objective justification."
This article is an attempt to develop a measure of ethical sensitivity to racial and gender intolerance that occurs in schools. Acts of intolerance that indicate ethically insensitive behaviors in American schools were identified and tied to existing professional ethical codes developed by school-based professional organizations. The Racial Ethical Sensitivity Test (REST) consists of 5 scenarios that portray acts of racial intolerance and ethical insensitivity. Participants viewed 2 videotaped scenarios and then responded to a semistructured interview protocol adapted from Bebeau and Rest (1982). After a 2-week interval, this procedure was repeated. Stability of the REST across time was determined by using the overall test-retest coefficient. Internal as well as interrater consistency was also calculated for each scenario. Overall findings indicate promise for the REST as a reliable measure to assess racial and ethnic sensitivity.
In codifying the methods of translation, several writers have formulated maxims that would constrain interpreters to construe their subjects as (more or less) rational speakers of the truth. Such maxims have come to be known as versions of the principle of charity. W. V. O. Quine suggests an empirical, not purely methodological, basis for his version of that principle. Recently, Stephen Stich has criticized Quine's attempt to found the principle of charity in translation on information about the probabilities of various sorts of mistakes. Here I defend Quine's approach. These issues have important implications for the supposed a priori status of human rationality.
By a macro-level feature, I understand any feature that supervenes on, and is thus realized in, lower-level features. Recent discussions by Kim have suggested that such features cannot be causally relevant insofar as they are not classically reducible to lower-level features. This seems to render macro-level features causally irrelevant. I defend the causal relevance of some such features. Such features have been thought causally relevant in many examples that have underpinned philosophical work on causality. Additionally, in certain typical biological cases, we conceive of causally relevant features at various compatible levels of analysis. When elaborated, these points make a strong prima facie case for macro-level causal relevance. However, we might abandon both the philosophical guideposts and the corresponding explanatory practice in the special sciences were we convinced that no reflective philosophical account could provide for the causal relevance there supposed. I show that such drastic measures are not necessary, for we can make sense of macro-level causal relevance by drawing on Paul Humphreys' recent work in ways suggested by the concrete examples considered here.
Accounts of what it is for an agent to be justified in holding a belief commonly carry commitments concerning what cognitive processes can and should be like. A concern for the plausibility of such commitments leads to a multi-faceted epistemology in which elements of traditionally conflicting epistemologies are vindicated within a single epistemological account. The accessible and articulable states that have been the exclusive focus of much epistemology must constitute only a proper subset of epistemologically relevant processing. The interaction of such states looks rather contextualist. It might also be called quasi-foundationalist. However, in attending to our epistemological tasks we must rely on processing that is sensitive to information that we could not articulate, that is not accessible in the standard internalist sense. When focusing on the full range of epistemologically important processes, the structure of what makes for justification is rather more like that envisioned by some coherentists.
This paper explores the role and limits of cognitive simulation in understanding or explaining others. In simulation, one puts one's own cognitive processes to work on pretend input similar to that which one supposes the other plausibly had. Such a process is highly useful. However, it is also limited in important ways. Several limitations fall out from the various forms of cognitive diversity. Some of this diversity results from cultural differences, or from differences in individuals' cognitive biographies. Such diversity is clearly important in history. Some sorts of such diversity are discussed, with attention to the results of contemporary cognitive science. It is argued that one must sometimes employ mixed (simulation-based/theory-based) strategies, and that sometimes what is done will be neither purely simulation nor purely theory-based.
Business ethics is the continuing process of re-defining the goals and rules of business activity. In times of rapid change, spurred equally by technological innovation within the business community and by societal expectations in the larger community, participants who share in that process of re-defining goals and rules should be sensitive to professional differences. Lawyers and executives, for instance, while seeking a common societal good, will utilize measurably different goals and methods based on differences in leadership style, accountability to constituents and client relationship generally. Because of these differences, definitions of what is ethical will vary as well, spread across a spectrum of ethicality.
The argument I present here is an example of the manner in which naturalizing epistemology can help address fairly traditional epistemological issues. I develop one argument against coherentist epistemologies of empirical knowledge. In doing so, I draw on BonJour (1985), for that account seems to me to indicate the direction in which any plausible coherentist account would need to be developed, at least insofar as such accounts are to conceive of justification in terms of an agent (minimally) possessing articulable reasons and arguments, as is standard. I end by indicating important elements of coherentist epistemology that can be salvaged in the face of my argument, provided we are willing to drop the traditional commitment to characterizing justification in terms of the structure of articulable argument.
Descriptions of social norms can be explanatory. The erotetic approach to explanation provides a useful framework. I describe one very broad kind of explanation-seeking why-question, a genus that is common to the special sciences, and argue that descriptions of norms can serve as an answer to such why-questions. I draw upon Woodward's recent discussion of the explanatory role of generalizations with a significant degree of invariance. Descriptions of norms provide what is, in effect, a generalization regarding the kind of historically contingent system a group or society is, a generalization with a significant degree of invariance. Key Words: explanation, invariance, norms, social sciences, erotetic, laws.
Rosenberg argues that intentional generalizations in the human sciences cannot be law-like because they are not amenable to significant empirical refinement. This irrefinability is said to result from the principle that supposedly controls in intentional explanation also serving as the standard for successful interpretation. The only credible evidence bearing on such a principle would then need to conform to it. I argue that psychological generalizations are refinable and can be nomic. I show how empirical refinement of psychological generalizations is possible by considering concrete cases. A sufficiently detailed view of the role of psychological generalizations in interpretation allows us to find in psychological investigations instances of bootstrap testing.
To mark the 30th anniversary of Richard Dawkins’s book, OUP is to issue a collection of essays about his work. Here, a professor of psychology at Harvard University wonders whether Dawkins’s big idea has not gone far enough.
I want to explore the possibility of an a posteriori approach to the elucidation of certain moral notions. These are: (a) the notion of a duty, some specific thing which it is incumbent on me to do, and (b) the notion of something that is a good thing for me to do. I want to consider these notions, so far as I can, independently of rules. There is a certain sense in which having a duty to do this or that is a function of circumstances, and in which this or that's being a good thing to do is likewise a function of circumstances. I shall suggest specific examples in which this is a conspicuous feature of ‘my duty’ or of what I can, beneficially, do. In these examples what I ought to do, and what it is good to do, can be represented as special ways in which what I am to do presents itself.
Actions are done for reasons. The reasons are beliefs and desires, which are physical states that causally interact in a rather special way. Their interaction exhibits a characteristic pattern: it is rational, at least in certain important respects.
We here propose an account of what it is for an agent to be objectively justified in holding some belief. We present in outline this approach, which we call transglobal reliabilism, and we discuss how it is motivated by various thought experiments. While transglobal reliabilism is an externalist epistemology, we think that it accommodates traditional internalist concerns and objections in a uniquely natural and respectful way.