The influence of Greek sources on the Arab philosophers is both obvious and important. What is less clear is how the quality of the translations from which the philosophers worked affected their understanding of the points that the Greek writers were making. This article investigates one small but self-contained topic from within the field of translation literature: the translations of the poetic quotations in the Arabic version of Aristotle's Rhetoric, together with an analysis of the types of mistakes to be found there. In itself this is of no more than curiosity value, but an application of the lessons to be learnt here to a linguistic study of Arabic philosophical commentaries, and, by extension, to philosophical theory, will be of clear importance.
I give a brief précis of Lyons' book. I discuss the problem of delineating basic from non-basic beliefs. I argue that one of Lyons' possible solutions doesn't work: his definition of a perceptual module does not allow us to decide which beliefs are basic. And I argue that another possible solution undermines some of Lyons' motivation. The intuitive understanding of belief may not generate the clairvoyance troubles he fears.
There is growing interest in the use of technology to enhance the tracking and quality of clinical information available for patients in disaster settings. This paper describes the design and evaluation of the Wireless Internet Information System for Medical Response in Disasters (WIISARD).
This book offers solutions to two persistent and, I believe, closely related problems in epistemology. The first problem is that of drawing a principled distinction between perception and inference: what is the difference between seeing that something is the case and merely believing it on the basis of what we do see? The second problem is that of specifying which beliefs are epistemologically basic (i.e., directly, or noninferentially, justified) and which are not. I argue that what makes a belief a perceptual belief, or a basic belief, is not any introspectible feature of the belief but rather the nature of the cognitive system, or "module", that is causally responsible for the belief. Thus, even zombies, who in the philosophical literature lack conscious experiences altogether, can have basic, justified, perceptual beliefs.

The theories of perceptual and basic beliefs developed in the monograph are embedded in a larger reliabilist epistemology. I use this theory of basic beliefs to develop a detailed reliabilist theory, Inferentialist Reliabilism, which offers a reliabilist account of inferential justification and which solves some longstanding problems for other reliabilist views by demanding inferential support for some—but not all—beliefs. The book is an instance of a thoroughgoing naturalistic approach to epistemology. Many of my arguments have an empirical basis, and the view that I endorse reserves a central role for the cognitive sciences—in particular, cognitive neuroscience—in filling in the details of an applied epistemological theory.
The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is determined by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of folk psychology, and from much of the epistemology literature. But it is principled and empirically grounded, and shows good prospects for yielding the desired epistemological verdicts. The paper articulates and elaborates the theory, drawing out some of its consequences. Toward the end, the fleshed-out theory is applied to two important case studies: hallucination and cognitive penetration of perception.
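As a rough sketch of how such a typing scheme could be represented (the Python rendering and all names here are mine, not the paper's; the abstract specifies the scheme only at the level of prose above):

# A minimal, hypothetical sketch of the "algorithm and parameters" typing
# scheme: a process type is a complete algorithm plus the values of every
# causally relevant parameter.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessType:
    """A cognitive process type."""
    algorithm: str            # the full input-output procedure the token runs
    parameters: frozenset     # (parameter, value) pairs that systematically affect processing

def type_of_token(algorithm: str, parameter_values: dict) -> ProcessType:
    """Two tokens share a type iff they run the same algorithm with the
    same values for all causally relevant parameters."""
    return ProcessType(algorithm, frozenset(parameter_values.items()))

# Same algorithm, different attention level: distinct process types,
# so reliability can be assessed separately for each.
t1 = type_of_token("face-recognition", {"attention": "high", "lighting": "daylight"})
t2 = type_of_token("face-recognition", {"attention": "low", "lighting": "daylight"})
assert t1 != t2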
The New Evil Demon Problem is supposed to show that straightforward versions of reliabilism are false: reliability is not necessary for justification after all. I argue that it does no such thing. The reliabilist can count a number of beliefs as justified even in demon worlds, others as unjustified but having positive epistemic status nonetheless. The remaining beliefs---primarily perceptual beliefs---are not, on further reflection, intuitively justified after all. The reliabilist is right to count these beliefs as unjustified in demon worlds, and it is a challenge for the internalist to be able to do so as well.
Epistemic defeat is standardly understood in either evidentialist or responsibilist terms. The seminal treatment of defeat is an evidentialist one, due to John Pollock, who famously distinguishes between undercutting and rebutting defeaters. More recently, an orthogonal distinction due to Jennifer Lackey has become widely endorsed, between so-called doxastic (or psychological) and normative defeaters. We think that neither doxastic nor normative defeaters, as Lackey understands them, exist. Both of Lackey’s categories of defeat derive from implausible assumptions about epistemic responsibility. Although Pollock’s evidentialist view is superior, the evidentialism per se can be purged from it, leaving a general structure of defeat that can be incorporated in a reliabilist theory that is neither evidentialist nor responsibilist in any way.
Cognitive penetration of perception is the idea that what we see is influenced by such states as beliefs, expectations, and so on. A perceptual belief that results from cognitive penetration may be less justified than a nonpenetrated one. Inferentialism is a kind of internalist view that tries to account for this by claiming that some experiences are epistemically evaluable, on the basis of why the perceiver has that experience, and the familiar canons of good inference provide the appropriate standards by which experiences are evaluated. I examine recent defenses of inferentialism by Susanna Siegel, Peter Markie, and Matthew McGrath and argue that the prospects for inferentialism are dim.
Much of the intuitive appeal of evidentialism results from conflating two importantly different conceptions of evidence. This is most clear in the case of perceptual justification, where experience is able to provide evidence in one sense of the term, although not in the sense that the evidentialist requires. I argue this, in part, by relying on a reading of the Sellarsian dilemma that differs from the version standardly encountered in contemporary epistemology, one that is aimed initially at the epistemology of introspection but which generalizes to theories of perceptual justification as well.
Outside of philosophy, ‘intuition’ means something like ‘knowing without knowing how you know’. Intuition in this broad sense is an important epistemological category. I distinguish intuition from perception and perception from perceptual experience, in order to discuss the distinctive psychological and epistemological status of evaluative property attributions. Although it is doubtful that we perceptually experience many evaluative properties and also somewhat unlikely that we perceive many evaluative properties, it is highly plausible that we intuit many instances of evaluative properties as such. The resulting epistemological status of evaluative property attributions is very much what it would be if we literally perceived such properties.
To what extent are cognitive capacities, especially perceptual capacities, informationally encapsulated and to what extent are they cognitively penetrable? And why does this matter? Two reasons we care about encapsulation/penetrability are: (a) encapsulation is sometimes held to be definitional of modularity, and (b) penetrability has epistemological implications independent of modularity. I argue that modularity does not require encapsulation; that modularity may have epistemological implications independently of encapsulation; and that the epistemological implications of the cognitive penetrability of perception are messier than is sometimes thought.
Goldman, though still a reliabilist, has made some recent concessions to evidentialist epistemologies. I agree that reliabilism is most plausible when it incorporates certain evidentialist elements, but I try to minimize the evidentialist component. I argue that fewer beliefs require evidence than Goldman thinks, that Goldman should construe evidential fit in process reliabilist terms, rather than the way he does, and that this process reliabilist understanding of evidence illuminates such important epistemological concepts as propositional justification, ex ante justification, and defeat.
The Sellarsian dilemma is a famous argument that attempts to show that nondoxastic experiential states cannot confer justification on basic beliefs. The usual conclusion of the Sellarsian dilemma is a coherentist epistemology, and the usual response to the dilemma is to find it quite unconvincing. By distinguishing between two importantly different justification relations (evidential and nonevidential), I hope to show that the Sellarsian dilemma, or something like it, does offer a powerful argument against standard nondoxastic foundationalist theories. But this reconceived version of the argument does not support coherentism. Instead, I use it to argue for a strongly externalist epistemology.
Stewart Cohen argues that much contemporary epistemological theorizing is hampered by the fact that ‘epistemic justification’ is a term of art and one that is never given any serious explication in a non-tendentious, theory-neutral way. He suggests that epistemologists are therefore better off theorizing in terms of rationality, rather than in terms of ‘epistemic justification’. Against this, I argue that even if the term ‘epistemic justification’ is not broadly known, the concept it picks out is quite familiar, and partly because it’s a term of art, justification talk is a better vehicle for philosophical theorizing. ‘Rational’ is too unclear for our philosophical purposes, and the fact that ‘epistemic justification’ gets fleshed out by appeal to substantive, controversial theses is no obstacle to its playing the needed role in epistemological theorizing.
The cognitive neuropsychological understanding of a cognitive system is roughly that of a ‘mental organ’, which is independent of other systems, specializes in some cognitive task, and exhibits a certain kind of internal cohesiveness. This is all quite vague, and I try to make it more precise. A more precise understanding of cognitive systems will make it possible to articulate in some detail an alternative to the Fodorian doctrine of modularity (since not all cognitive systems are modules), but it will also provide a better understanding of what a module is (since all modules are cognitive systems).
Recent worries about possible epiphenomenalist consequences of nonreductive materialism are misplaced, not, as many have argued, because nonreductive materialism does not have epiphenomenalist implications but because the epiphenomenalist implications are actually virtues of the theory, rather than vices. It is only by showing how certain kinds of mental properties are causally impotent that cognitive scientific explanations of mentality as we know them are possible.
The project of “public reason” claims to offer an epistemological resolution to the civic dilemma created by the clash of incompatible options for the rational exercise of freedom adopted by citizens in a diverse community. The present Article proposes, via consideration of a contrast between two classical accounts of dialectical reasoning, that the employment of “public reason,” in substantive due process analysis, is unworkable in theory and contrary to more reflective Supreme Court precedent. Although logical commonalities might be available to pick out from the multitude of particularized accounts of what constitutes “civic order,” no “public reason” so derived could adequately capture - and thus be able to secure in a practical sense - any single determinate civic order, much less one that would be consistent with all citizens' conceptions of public order.

Part I of this Article raises a number of issues for consideration relating to the epistemology of law and focuses especially on the concept of public reason and its critique. Part II addresses alternative approaches to legal reasoning suggested by classical accounts of practical reasoning and virtue theory and considers the operation of such legal analysis outside the area of substantive due process; Part III analyzes post-Lawrence case law confirming the dilemma created by the Supreme Court's ambiguous approaches to substantive due process and concludes that only one interpretation - that articulated fully in Washington v. Glucksberg and given lip service in Lawrence v. Texas - provides a method for resolving novel substantive due process challenges that is philosophically sound as well as historically coherent.

Rather than perpetuating a fiction that denies the propriety of lawmaking unless based on principles that all citizens can rationally agree upon, an appropriate model of substantive due process analysis recognizes that law must inevitably be based upon principles that cannot be agreed upon by all citizens in virtue of rationality alone.

Keywords: substantive due process, practical reason, public reason, Rawls, Casey, Lawrence, Glucksberg, Plato, Aristotle, Kant, Hegel, dialectic, autonomy, freedom
Morphological content (MC) is content that is implicit in the standing structure of the cognitive system. Henderson and Horgan claim that MC plays a distinctive epistemological role unrecognized by traditional epistemic theories. I consider the possibility that MC plays this role either in central cognition or in peripheral modules. I argue that peripheral MC does not play an interesting epistemological role and that the role played by central MC is already recognized by traditional theories.
The traditional understanding of analyticity in terms of concept containment is revisited, but with a concept explicitly understood as a certain kind of mental representation and containment being read correspondingly literally. The resulting conception of analyticity avoids much of the vagueness associated with attempts to explicate analyticity in terms of synonymy by moving the locus of discussion from the philosophy of language to the philosophy of mind. The account provided here illustrates some interesting features of representations and explains, at least in part, the special epistemic status of analytic judgments.
An examination of the role played by general rules in Hume's positive (nonskeptical) epistemology. General rules for Hume are roughly just general beliefs. The difference between justified and unjustified belief is a matter of the influence of good versus bad general rules, the good general rules being the "extensive" and "constant" ones.
Formalised knowledge systems, including universities and research institutes, are important for contemporary societies. They are, however, also arguably failing humanity when their impact is measured against the level of progress being made in stimulating the societal changes needed to address challenges like climate change. In this research we used a novel futures-oriented and participatory approach that asked what envisioned future knowledge systems might need to look like and how we might get there. Findings suggest that envisioned future systems will need to be much more collaborative, open, diverse, egalitarian, and able to work with values and systemic issues. They will also need to go beyond producing knowledge about our world to generating wisdom about how to act within it. To get to envisioned systems we will need to rapidly scale methodological innovations, connect innovators, and creatively accelerate learning about working with intractable challenges. We will also need to create new funding schemes and a global knowledge commons, and challenge deeply held assumptions. To genuinely be a creative force in supporting the longevity of human and non-human life on our planet, the shift in knowledge systems will probably need to be at the scale of the Enlightenment and the speed of the scientific and technological revolution that accompanied the Second World War. This will require bold and strategic action from governments, scientists, and civil society, and sustained transformational intent.
A short discussion piece arguing that the neuropsychological phenomenon of double dissociation is most revealing of underlying cognitive architecture because of the capacities that are spared, more than the capacities that are lost.
There are good reasons to think there is a universal, fundamental length, specifically, at the order of the Planck length. If this holds, we then have an empirical answer for Zeno’s paradox of Achilles and the tortoise, a potential impasse in the second premise of the kalam cosmological argument, and creation ex nihilo. In this paper, I establish metaphysical, empirical, and epistemic reasons suggesting there is a universal, fundamental length. Along the way, I propose a “contingent necessity” for such a notion. I then detail how a universal, fundamental length is a preferred solution for these issues.
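For reference, the Planck length itself is a standard quantity fixed by fundamental constants (a well-known physical fact, not quoted from the paper):

\ell_P = \sqrt{\hbar G / c^{3}} \approx 1.616 \times 10^{-35}\ \text{m},

where \hbar is the reduced Planck constant, G Newton's gravitational constant, and c the speed of light.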
Many epistemologists endorse a view I call “evidence essentialism:” if e is evidence of h, for some agent at some time, then necessarily, e is evidence of h, for any agent at any time. I argue that such a view is only plausible if we ignore cognitive diversity among epistemic agents, i.e., the fact that different agents have different—sometimes radically different—cognitive skills, abilities, and proclivities. Instead, cognitive diversity shows that evidential relations are contingent and relative to cognizers. This is especially obvious in extreme cases and in connection with epistemic defeat, but it is also very plausibly true of ordinary agents, and regarding prima facie justification.
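Regimented in modal notation (my reconstruction of the prose statement above, with E(e, h, a, t) read as “e is evidence of h for agent a at time t”), the essentialist thesis is:

E(e, h, a, t) \rightarrow \Box\, \forall a' \forall t'\, E(e, h, a', t').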
In some recent work, Ernest Sosa rejects the “perceptual model” of rational intuition, according to which intuitions (beliefs formed by intuition) are justified by standing in the appropriate relation to a nondoxastic intellectual experience (a seeming-true, or the like), in much the way that perceptual beliefs are often held to be justified by an appropriate relation to nondoxastic sense experiential states. By extending some of Sosa’s arguments and adding a few of my own, I argue that Sosa is right to reject the perceptual model of intuition, and that we should reject the “perceptual model” of perception as well. Rational intuition and perception should both receive a virtue theoretic (e.g., reliabilist) account, rather than an evidentialist one. To this end, I explicitly argue against the Grounds Principle, which holds that all justified beliefs must be based on some adequate reason, or ground.
In this article, responding to assertions that the principle of double effect has no place in legal analysis, I explore the overlap between double effect and negligence analysis. In both, questions of culpability arise in situations where a person acts with no intent to cause harm but where reasonable foreseeability of unintended harm exists. Under both analyses, the determination of whether such conduct is permissible involves a reasonability test that balances that foreseeable harm against the good intended by the actor's conduct. In both, absent a finding that the foreseeable harm is unreasonable in light of that intended good, no liability will be imposed upon the actor. Even conceding, however, such general similarity between double effect and negligence analysis - disagreement over the proper interpretation of the reasonability criterion at play in negligence poses an additional challenge for the attempt to correlate negligence with double effect. Economic efficiency interpretations of negligence, for example, purportedly based on the Learned Hand Formula and the RESTATEMENT (SECOND) OF THE LAW OF TORTS, propose that culpability depends upon a utilitarian balancing of good effects of conduct (utility) versus its harmful foreseeable consequences (magnitude of risk of injury). Based on such an interpretation of negligence, however, contrasts between actors' states of mind, and normative differences between kinds of goods and harms, ultimately fade into the background and become irrelevant as essential conditions for properly assessing liability. This article elaborates and defends the view that double effect analysis lies at the heart of negligence theory. Part I elucidates in more detail the principle of double effect and describes its prima facie operation in negligence analysis. Part II considers and rejects the economic efficiency interpretation that has been offered as a theory of negligence, overcoming the challenge that such an interpretation presents for the effort to locate double effect analysis in the law. Part III illustrates and confirms the overlap between double effect and negligence by consideration of a series of case applications. The Article proposes that the weighing of conflicting values in double effect analysis and negligence is not achieved - as proposed by law and economics theory with respect to negligence - by imposing a consequentialist-utilitarian reduction of all value to a single concept of good and eliminating the relevance of traditional state of mind distinctions between intention and foreseeability. Instead, each mode of analysis recognizes that distinct culpability determinations flow naturally and plausibly from an appreciation of the traditional legal distinctions made between various types of goods and harms, and upon whether such goods and harms come about as a result of an actor's intention or mere foreseeability.

Keywords: Double effect, negligence, intention, foreseeability, choice, law and economics, utilitarianism, consequentialism, weighing of values.
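For reference, the Learned Hand Formula mentioned above is standardly rendered (after United States v. Carroll Towing; the formula is not quoted in this abstract) as the condition for negligence:

B < P \cdot L,

where B is the burden of adequate precautions, P the probability of harm, and L the magnitude of the resulting injury.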
Nearly everyone agrees that perception gives us justification and knowledge, and a great number of epistemologists endorse a particular two-part view about how this happens. The view is that perceptual beliefs get their justification from perceptual experiences, and that they do so by being based on them. Despite the ubiquity of these two views, I think that neither has very much going for it; on the contrary, there’s good reason not to believe either one of them.
In Vacco v. Quill, 521 U.S. 793 (1997), the Supreme Court for the first time in American case law explicitly applied the principle of double effect to reject an equal protection claim to physician-assisted suicide. Double effect, traced historically to Thomas Aquinas, proposes that under certain circumstances it is permissible unintentionally to cause foreseen evil effects that would not be permissible to cause intentionally. The Court rejected the constitutional claim on the basis of a distinction marked out by the principle, i.e., between directly intending the death of a terminally ill patient as opposed to merely foreseeing that death as a consequence of medical treatment. The Court held that the distinction comports with fundamental legal principles of causation and intent. Id. at 802.

Critics allege that the principle itself is intrinsically flawed and that, in any event, its employment in Vacco is without legal precedent. I argue in response to contemporary objections that double effect is a valid principle of ethical reflection (Part II); claims to the contrary notwithstanding, double effect analysis is a pervasive, albeit generally unacknowledged, principle employed regularly in American case law (Part III); and, drawing on the preceding two sections, Vacco's application of the principle of double effect is appropriate (Part IV).

My conclusion is that [o]peration of some form of the principle, by whatever name, is inevitable. In an imperfect world where duties and interests collide, the possibility of choices of action foreseen to have both good and evil consequences cannot be avoided. In rare circumstances, ethics and the law require that a person refrain from acting altogether. More often, however, they provide that a determination of whether an actor may pursue a good effect although knowing it will or may unintentionally cause a harmful effect requires a more complex analysis - a double effect analysis.

Keywords: Equal protection, double effect, intention, physician-assisted suicide, Constitutional Law, Bioethics.
Reflections on free choice and determinism constitute a recurring, if rarified, sphere of legal reasoning. Controversy, of course, swirls around the perennially vexing question of the propriety of punishing human persons for conduct that they are unable to avoid. Drawing upon conditions similar, if not identical, to those traditionally associated with attribution of moral fault, persons subject to such necessitating causal constraints generally are not considered responsible in the requisite sense for their conduct; and, thus, they are not held culpable for its consequences. The standard argument against free choice asserts that free choice cannot exist because determinism, as a property of laws governing the cosmos, excludes such a possibility. This contingent factual claim, however, has always proven problematic. Contemporary discussions - no doubt aware of this disputed factual premise - draw upon a more novel, and arguably more devastating critique: free will must be rejected because its very conception is incoherent. Rather than assuming the existence of determinism and attempting to show its incompatibility with free will, this argument begins with consideration of the idea of free choice and concludes that, if it is to have any sense at all, it must be compatible with determinism. Obviously, no single treatment of the free will problem could address all its nuances. This Article more modestly offers one possible approach to the question. Part I elaborates in more detail the view that the traditional conception of free choice is incoherent and, thus, inevitably undermines the very responsibility it is asserted to constitute; Part II considers the resulting effort to develop a model of human freedom compatible with determinism; and Part III, drawing upon the prior discussions, describes - in terms of classical action theory - a conception of free choice justifying personal moral and legal responsibility that avoids both the incoherence of "uncaused freedom" as well as the shortcomings of determinism.