This paper asks (a) how new scientific objects of research are conceptualized at a point in time when little is known about them, and (b) how those conceptualizations, in turn, figure in the process of investigating the phenomena in question. Contrasting my approach with existing notions of concepts and situating it in relation to existing discussions about the epistemology of experimentation, I propose to think of concepts as research tools. I elaborate on the conception of a tool that informs my account. Narrowing my focus to phenomena in cognitive neuropsychology, I then illustrate my thesis with the example of the concept of implicit memory. This account is based on an original reconstruction of the nature and function of operationism in psychology.
The question of rationality and of its role in human agency has been at the core of pragmatist concerns since the beginning of this movement. While Peirce framed the horizon of a new understanding of human reason through the idea of inquiry as aiming at belief-fixation and James stressed the individualistic drives that move individuals to action, it is in Dewey’s writing that we find the deepest understanding of the naturalistic and normative traits of rationality considered as the qualifying attribute of human agency. Recent developments in moral and political philosophy as well as in general pragmatist scholarship have shown a renewal of interest in the role of human reason in agency, both with respect to control of conduct (decisions about how to act) and with respect to normative attitudes (considerations of what is good and right). In this article I will examine some features of Dewey’s epistemology which are particularly promising for the elaboration of a theory of practical rationality based on pragmatist sources. In particular, I will focus on Dewey’s notion of “judgment of practice” in order to frame a distinctively Deweyan approach to practical rationality. In order to point out the specificity of Dewey’s epistemological framework, I will refer to it as an “epistemology of practice.” The aim of this article is to clarify the epistemological meaning of the concepts of articulation and transformation that Dewey places at the heart of his theory of inquiry. Part of my argument consists in showing that through these notions Dewey aimed at broadening the conception of rationality, bringing it beyond the reach of the standard notions of analysis and synthesis and of induction, deduction, and abduction. Once the specificity of Dewey’s conception of rationality has been demonstrated, I will proceed to show some of its implications for the explanation of the rationality of human agency with reference to practical reasoning and value assessment. I will then conclude the article by drawing some implications of Dewey’s theory of judgment for a broader epistemology based upon the acknowledgment of the primacy of practice.
Philip Kitcher has advanced an epistemology of science that purports to be naturalistic. For Kitcher, this entails that his epistemology of science must explain the correctness of belief-regulating norms while endorsing a realist notion of truth. This paper concerns whether or not Kitcher's epistemology of science is naturalistic on these terms. I find that it is not but that by supplementing the account we can secure its naturalistic standing.
A central theme in the Christian contemplative tradition is that knowing God is much more like ‘unknowing’ than it is like possessing rationally acceptable beliefs. Knowledge of God is expressed, in this tradition, in metaphors of woundedness, darkness, silence, suffering, and desire. Philosophers of religion, on the other hand, tend to explore the possibilities of knowing God in terms of rational acceptability, epistemic rights, cognitive responsibility, and propositional belief. These languages seem to point to very different accounts of how it is that we come to know God, and a very different range of critical concepts by which the truth of such knowledge can be assessed. In this paper, I begin to explore what might be at stake in these different languages of knowing God, drawing particularly on Alvin Plantinga’s epistemology of Christian belief. I will argue that his is a distorted account of the epistemology of Christian belief, and that this has implications for his project of demonstrating the rational acceptability of Christian faith for the 21st century.
Peter Graham has recently given a dilemma purportedly showing the compatibility of libertarianism about free will and the anti-skeptical epistemology of testimony. In the first part of this paper I criticize his dilemma: the first horn either involves a false premise or makes the dilemma invalid. The second horn relies without argument on an implausible assumption about testimonial knowledge, and even if granted, nothing on this horn shows libertarianism does not entail skepticism about testimonial justification. I then argue for the incompatibility of (i) a view entailed by Open Theism, viz., that there are no true counterfactuals of freedom, (ii) a popular form of process reliabilism about justification and knowledge, and (iii) a weak anti-skepticism about testimonial justification and knowledge. I conclude that there is a costly tension between certain views about testimony and about free will.
We present a family of counter-examples to David Christensen's Independence Criterion, which is central to the epistemology of disagreement. Roughly, independence requires that, when you assess whether to revise your credence in P upon discovering that someone disagrees with you, you shouldn't rely on the reasoning that led you to your initial credence in P. To do so would beg the question against your interlocutor. Our counter-examples involve questions where, in the course of your reasoning, you almost fall for an easy-to-miss trick. We argue that you can use the step in your reasoning where you (barely) caught the trick as evidence that someone of your general competence level (your interlocutor) likely fell for it. Our cases show that it's permissible to use your reasoning about disputed matters to disregard an interlocutor's disagreement, so long as that reasoning is embedded in the right sort of explanation of why she finds the disputed conclusion plausible, even though it's false.
The paper approaches the topic of what a general philosophy of science could mean today from the perspective of a historical epistemology. Consequently, in a first step, the paper looks at the notion of generality in the sciences, and how it evolved over time, using the example of the life sciences. In the second part of the paper, the urgency of a general philosophy of science is located in the history of philosophy of science. Two attempts at the beginning of the twentieth century are particularly highlighted: that of Karl Popper and that of Martin Heidegger. Both of them concentrate, albeit in widely different forms, on the phenomenon of research as an open-ended process. This trend is even more pronounced in Gaston Bachelard’s version of a historical epistemology, whose work is taken as a point of reference for a general historical epistemology of research. The paper concludes with a plea to look, with Georges Canguilhem, at the history of the sciences as a laboratory for epistemology.
It is proposed that psychologists need a working theory of knowledge for conceptual and discourse purposes. Arguments are made from a pragmatist view of science for a conception of inquiry practice that may resolve current paradigm conflicts and support a viable methodological pluralism. The suggestion is made that a naturalized approach to research practice, such as historical-descriptive case study, may illuminate the judgments and intentions constitutive of our applied epistemology and methodological choices. Implications of such meta-methodological understanding for research training and a contingent theory of knowledge for psychological science are discussed.
In this paper I outline my conception of the epistemology of science, by reference to my published papers, showing how the ideas presented there fit together. In particular I discuss the aim of science, scientific progress, the nature of scientific evidence, the failings of empiricism, inference to the best (or only) explanation, and Kuhnian psychology of discovery. Throughout, I emphasize the significance of the concept of scientific knowledge.
This paper assesses the comparative reliability of two belief-revision rules relevant to the epistemology of disagreement, the Equal Weight and Stay the Course rules. I use two measures of reliability for probabilistic belief-revision rules, calibration and Brier Scoring, to give a precise account of epistemic peerhood and epistemic reliability. On the calibration measure of reliability, epistemic peerhood is easy to come by, and employing the Equal Weight rule generally renders you less reliable than Staying the Course. On the Brier-Score measure of reliability, epistemic peerhood is much more difficult to come by, but employing the Equal Weight rule always renders you more reliable than Staying the Course. I conclude with some normative lessons we can draw from these formal results.
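To make the two rules concrete, here is a minimal sketch with purely hypothetical numbers of my own (the paper's formal treatment is more elaborate): Equal Weight splits the difference with a peer, Stay the Course keeps the original credence, and the Brier score penalizes squared distance from the truth.

```python
# Minimal sketch, not the author's model: one revision under each rule,
# scored by the Brier rule. All numbers are illustrative assumptions.

def equal_weight(mine, peers):
    """Revise by splitting the difference with an epistemic peer."""
    return (mine + peers) / 2

def stay_the_course(mine, peers):
    """Retain the original credence despite peer disagreement."""
    return mine

def brier(credence, truth):
    """Squared distance from the truth value (1 if P is true, else 0)."""
    return (credence - truth) ** 2

mine, peers, truth = 0.8, 0.4, 1  # hypothetical case: P is in fact true
for rule in (equal_weight, stay_the_course):
    revised = rule(mine, peers)
    print(f"{rule.__name__}: credence {revised:.2f}, Brier {brier(revised, truth):.3f}")
```

In this single toy case Equal Weight scores worse because the peer happens to be further from the truth; the paper's comparison concerns reliability across many cases under each measure, not one instance.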
This position paper advocates combining formal epistemology and the new paradigm psychology of reasoning in the studies of conditionals and reasoning with uncertainty. The new paradigm psychology of reasoning is characterized by the use of probability theory as a rationality framework instead of classical logic, used by more traditional approaches to the psychology of reasoning. This paper presents a new interdisciplinary research program which involves both formal and experimental work. To illustrate the program, the paper discusses recent work on (...) the paradoxes of the material conditional, nonmonotonic reasoning, and Adams’ Thesis. It also identifies the issue of updating on conditionals as an area which seems to call for a combined formal and empirical approach. (shrink)
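For orientation, Adams' Thesis, one of the topics the paper discusses, is standardly formulated as the identification of the probability of an indicative conditional with the corresponding conditional probability (a textbook statement, not a quotation from the paper):

\[
P(\text{if } A \text{ then } C) \;=\; P(C \mid A) \;=\; \frac{P(A \wedge C)}{P(A)}, \qquad \text{provided } P(A) > 0.
\]

The material conditional, by contrast, satisfies \(P(A \supset C) = P(\neg A \vee C) \geq P(C \mid A)\), and this gap is one source of the paradoxes mentioned above.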
In place of the traditional epistemological view of knowledge as justified true belief we argue that artificial intelligence and law needs an evidence-based epistemology according to which scientific knowledge is based on critical analysis of evidence using argumentation. This new epistemology of scientific evidence (ESE) models scientific knowledge as achieved through a process of marshaling evidence in a scientific inquiry that results in a convergence of scientific theories and research results. We show how a dialogue interface of argument from expert opinion, along with its set of critical questions, provides the argumentation component of the ESE. It enables internal scientific knowledge to be translated over into a wider arena in which individual nonexpert citizens and groups can make use of it. The external component shows how evidence is presented and used in a legal procedural setting that includes fact-finding, weighing the credibility of expert witnesses, and critical questioning of arguments. The paper critically reviews the standards of admissibility of scientific evidence using the ESE.
Artificial intelligence has often been seen as an attempt to reduce the natural mind to informational processes and, consequently, to naturalize philosophy. The many criticisms that were addressed to the so-called “old-fashioned AI” do not concern this attempt itself, but the methods it used, especially the reduction of the mind to a symbolic level of abstraction, which has often appeared to be inadequate to capture the richness of our mental activity. As a consequence, there were many efforts to evacuate the semantical models in favor of elementary physiological mechanisms simulated by information processes. However, these views, and the subsequent criticisms against artificial intelligence that they contain, miss the very nature of artificial intelligence, which is not reducible to a “science of nature” but which directly impacts our culture. More precisely, they lead to evacuating the role of semantic information. In other words, they tend to throw the baby out with the bathwater. This paper tries to revisit the epistemology of artificial intelligence in the light of the opposition between the “sciences of nature” and the “sciences of culture” that was introduced by German neo-Kantian philosophers. It then shows how this epistemological view opens onto the many contemporary applications of artificial intelligence that have already transformed—and will continue to transform—all our cultural activities and our world. Lastly, it places those perspectives in the context of the philosophy of information, and more particularly it emphasizes the role played by the notions of context and level of abstraction in artificial intelligence.
Discussions of technoscience are bringing to light that scientific journals feature very different knowledge claims. At one end of the spectrum, there is the scientific claim that a hypothesis needs to be reevaluated in light of new evidence. At the other end of the spectrum, there is the technoscientific claim that some new measure of control has been achieved in a laboratory. The latter claim has not received sufficient attention as of yet. In what sense is the achievement of control genuine knowledge in its own right, and how is this knowledge acquired and publicly validated? Notions of tacit or embodied knowledge, of knowledge by acquaintance, of engineering or thing knowledge, and reconstructions of ability or skill take us only part of the way towards answering such questions. The epistemology of technoscience needs to account for the acquisition and demonstration of a public knowledge of control that does not consist in the holding of propositions, even though it is usually communicated in writing: Technoscientific knowledge is, firstly, objective and public insofar as it is exhibited and documented. Secondly, it presupposes a specific context of technology and expertise. Thirdly, it is communicable, even where the achieved capability itself is not. Knowledge of control entails, fourthly, a knowledge of causal relationships, and it sediments itself, fifthly, as a habit of action in the sense proposed by Charles Sanders Peirce.
Three contrasting approaches to the epistemology of argument are presented. Each one is naturalistic, drawing upon successful practices as the basis for epistemological virtue. But each looks at very different sorts of practices and they differ greatly as to the manner with which relevant practices may be described. My own contribution relies on a metamathematical reconstruction of mature science and, as such, is a radical break with the usual approaches within the theory of argument.
In this paper I attempt to defuse a set of epistemic worries commonly raised against ideal observer theories. The worries arise because of the omniscience often attributed to ideal observers -- how can we, as finite humans, ever have access to the moral judgements or reactions of omniscient beings? I argue that many of the same concerns arise with respect to other moral theories (and that these concerns do not in fact reveal genuine flaws in any of these theories), and further, that we can and often do have knowledge of the reactions of ideal observers (according to standard, prominent theories in the domain of epistemology).
This paper examines how experimental scientists choose theoretical frameworks as well as their experimental systems for doing research. I start out with Kuhn's claim that there are no (single) algorithms that could determine the choices made by individual scientists. Samir Okasha has recently provided an argument for this claim in terms of social choice theory, which I briefly discuss. Then, I show why this problem is not relevant in an experimental science. There are social mechanisms in place that make sure the community chooses the best framework and a matching experimental system. As historical evidence for this claim, I present the case of classical genetics.
Although various statistical measures may have other valid uses, the single purpose served by statistical significance testing in the epistemology of experimental science is as a peremptory rebuttal of one potential alternative interpretation of the data.
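As a toy illustration of this rebuttal role (my example, not the author's): a significance test licenses exactly one conclusion, that the data would be surprising if chance alone were at work.

```python
# Illustrative sketch only: a one-sided binomial test used to rebut the
# single alternative interpretation 'the data arose by chance alone'.
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(at least k successes in n trials) under the chance hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical experiment: 58 successes in 80 trials.
p_value = binomial_tail(80, 58)
print(f"p = {p_value:.5f}")  # a small p rebuts 'chance alone' and nothing more
```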
This paper examines the question whether foundational epistemology (“FE”) can be replaced by naturalized epistemology (“NE”). First, it argues that Quine's defense of NE is inadequate since it is only based on arguments showing the impossibility of the logical empiricist version of FE rather than on arguments for the impossibility of FE as such. Second, it proposes that a more promising argument for the impossibility of FE can be found in the Münchhausen trilemma, which aims at showing that ultimate foundations (and, hence, FE) are unattainable. However, Karl-Otto Apel has shown that this trilemma is inconclusive since it uncritically presupposes the premise that all argumentation is deductive in nature. Apel's argument implies that FE is possible if and only if it is possible to devise a non-deductive foundation (“NDF”). It is argued, however, that the possibility of NDF cannot be demonstrated. This leads to a situation called the Multatuli dilemma: we cannot prove the possibility of ultimate foundations, nor can we prove the impossibility of ultimate foundations. This dilemma shows that the discussion about the possibility of FE is pointless. Thus, it suggests that it is legitimate to replace FE by NE. Barry Stroud and Henri Lauener, however, argue that this replacement is not feasible since NE is not capable of refuting scepticism (Stroud) or justifying methodological rules (Lauener). But these objections are shown to be mistaken: first, epistemological scepticism is practically impossible and, hence, does not pose a serious threat to NE; second, NE is capable of justifying methodological norms if and only if it makes use of so-called internal justifications. Thus, the final conclusion of this paper is that FE can be replaced by NE.
Wittgenstein accepts the linguistic hypothesis about science according to which science is the corpus of significant propositions. The epistemological problem can be divided into the problem of demarcation and the problem of justification. The answer to the demarcation problem consists in a criterion for significant propositions. Wittgenstein proposes a syntactical criterion: a proposition has sense if it is composed of elementary propositions and logical operators. The domains that contain senseless propositions must be excluded from the scientific field. Wittgenstein’s solution to the justification problem consists in the hypothesis of identity between tautology and necessary truth. In this way, the logical decision methods may be extended to the epistemological decision. Wittgenstein’s epistemological conclusion is that only mathematics and logic (but not physics) are justified, because their propositions are tautologies.
We conducted five experiments that reveal some main contours of the folk epistemology of lotteries. The folk tend to think that you don't know that your lottery ticket lost, based on the long odds ("statistical cases"); by contrast, the folk tend to think that you do know that your lottery ticket lost, based on a news report ("testimonial cases"). We evaluate three previous explanations for why people deny knowledge in statistical cases: the justification account, the chance account, and the statistical account. None of them seems to work. We then propose a new explanation of our own, the formulaic account, according to which some people deny knowledge in statistical cases due to formulaic expression.
This essay aims to sharpen debates on the pros and cons of historical epistemology, which is now understood as a novel approach to the study of knowledge, by comparing it with the history of epistemology as traditionally pursued by philosophers. The many versions of both approaches are not always easily discernible. Yet, a reasoned comparison of certain versions can and should be made. In the first section of this article, I argue that the most interesting difference involves neither the subject matter nor the goal, but the methods used by the two approaches. In the second section, I ask which of the two approaches or methods is more promising given that both historical epistemologists and historians of epistemology claim to contribute to epistemology simpliciter. Using traditional problems concerning the epistemic role of perception, I argue that the historical epistemologies of Wartofsky and Daston and Galison fail to show that studying practices of perception is philosophically significant. Standard methods from the history of epistemology are more promising, as I show by means of reconstructing arguments in a debate about the relation between perception and judgment in psychological research on the famous moon illusion.
This paper concerns Jean Piaget's (1896–1980) philosophy of science and, in particular, the picture of scientific development suggested by his theory of genetic epistemology. The aims of the paper are threefold: (1) to examine genetic epistemology as a theory concerning the growth of knowledge both in the individual and in science; (2) to explicate Piaget's view of ‘scientific progress’, which is grounded in his theory of equilibration; and (3) to juxtapose Piaget's notion of progress with Thomas Kuhn's (1922–1996). Issues of scientific continuity, scientific realism and scientific rationality are discussed. It is argued that Piaget's view highlights weaknesses in Kuhn's ‘discontinuous’ picture of scientific change.
True contradictions are taken increasingly seriously by philosophers and logicians. Yet the belief that contradictions are always false remains deeply intuitive. This paper confronts this belief head-on by explaining in detail how one specific contradiction is true. The contradiction in question derives from Priest's reworking of Berkeley's argument for idealism. However, technical aspects of the explanation offered here differ considerably from Priest's derivation. The explanation uses novel formal and epistemological tools to guide the reader through a valid argument, with not just true but eminently acceptable premises, to an admittedly unusual conclusion: a true contradiction. The novel formal and epistemological tools concern points of view and changes in points of view. The result is an understanding of why the contradiction is true.
This paper evaluates the claim that it is possible to use nature’s variation in conjunction with retention and selection on the one hand, and the absence of ultimate groundedness of hypotheses generated by the human mind as it knows on the other hand, to discard the ascription of ultimate certainty to the rationality of human conjectures in the cognitive realm. This leads to an evaluation of the further assumption that successful hypotheses with specific applications, in other words heuristics, seem to have a firm footing because they were useful in another context. I argue that usefulness evaluated through adaptation misconstrues the search for truth, and that it is possible to generate talk of randomness by neglecting aspects of a system’s insertion into a larger situation. The framing of the problem in terms of the elimination of unfit hypotheses is found to be unsatisfying. It is suggested that theories exist in a dimension where they can be kept alive rather than dying as phenotypes do. The proposal that the subconscious could suggest random variations is found to be a category mistake. A final appeal to phenomenology shows that this proposal is an orphan in the history of epistemology, not in virtue of its being a remarkable find, but rather because it is ill-conceived.
Several major breakthroughs in the history of physics have been prompted not by new empirical data but by thought experiments. James Robert Brown and John Norton have developed accounts of how thought experiments can yield such advances. Brown argues that knowledge gained via thought experiments demands a Platonic explanation; thought experiments for Brown are a window into the Platonic realm of the laws of nature. Norton argues that thought experiments are just cleverly disguised inductive or deductive arguments, so no new account of their epistemology is needed. In this paper, I argue that although we do not need to invoke any Platonic insight to explain thought experimentation, Norton’s eliminativist account fails to capture the unique epistemological importance of thought experiments qua thought experiments. I then present my own account, according to which thought experiments are a particular type of inductive inference that is uniquely suited to generate new breakthroughs.
Fuller's program of social epistemology engages a rhetoric of inquiry that can be usefully compared and contrasted with other discursive theories of knowledge, such as that of Richard Rorty. Resisting the model of “conversation,” Fuller strikes an activist posture and lays the groundwork for normative “knowledge policy,” in which persuasion and credibility play key roles. The image of investigation is one that overtly rejects the “storehouse” conception of knowledge and invokes the metaphors of distributive economics. Productive questions arise as to how notions of creation and distribution might guide this rhetoric.
This essay argues that the really useful character of reflexivity is that it enables a radical critique of representation and its conventional material and rhetorical practices. It is uniquely able to produce paradox and thus disrupt discourses by undermining authorial privilege. Because Fuller's social epistemology is insensitive to its own reflexive implications, and limits itself to normative questions about knowledge policy, it is too limited — and limiting — to provide a context that can nurture reflexivity.
I offer an analysis of operationism in psychology, which is rooted in an historical study of the investigative practices of two of its early proponents (S. S. Stevens and E. C. Tolman). According to this analysis, early psychological operationists emphasized the importance of experimental operations and called for scientists to specify what kinds of operations were to count as empirical indicators for the referents of their concepts. While such specifications were referred to as “definitions,” I show that such definitions were not taken to constitute a priori knowledge or to be analytically true. Rather, they served the pragmatic function of enabling scientists to do research on a purported phenomenon. I argue that historical and philosophical discussions of problems with operationism have conflated it, both conceptually and historically, with positivism, and I ask what the “real” issues behind the debate about operationism are.
Despite their divergent metaphysical assumptions, Reformed and evolutionary epistemologists have converged on the notion of proper basicality. Where Reformed epistemologists appeal to God, who has designed the mind in such a way that it successfully aims at the truth, evolutionary epistemologists appeal to natural selection as a mechanism that favors truth-preserving cognitive capacities. This paper investigates whether Reformed and evolutionary epistemological accounts of theistic belief are compatible. We will argue that their chief incompatibility lies in the noetic effects of sin and what may be termed the noetic effects of evolution, systematic tendencies wherein human cognitive faculties go awry. We propose a reconceptualization of the noetic effects of sin to mitigate this tension.
We begin with a subsidiary question: Is reasonable disagreement ever possible? Opposing answers to one and the same question can both be reasonable, of course, if at least one of them is based on evidence that is persuasive but misleading. This much is uncontroversial. In a more interesting case, Pro and Con share all their evidence. Can they still assess the shared evidence differently? Can one affirm what the other denies, though each proceeds reasonably enough? For each to be reasonable, each needs positive justification. Unlike ethics, epistemology repels arbitrariness. Facing a choice between bringing it about that p and bringing it about that not-p, you may have no sufficient reason to prefer either over the other, in which case you might well be free to take your pick. That’s how it is for practical choices or actions. By contrast, with no more reason for believing either a proposition or its negation in preference to the other, you are definitely not free to proceed either way. Here you must withhold, if you are to proceed reasonably at all, epistemically. If two opponents are both to be reasonable, then, each needs a balance of reason favoring his side. But is this compatible with their sharing all of their evidence? Not if any reason they may have, for or against believing, would have to be found in the evidence that they share. We are supposing they share all their evidence. Since the evidence cannot point in two opposite directions at once, Pro and Con cannot each have substantial positive reason for affirming what the other denies. Based on such reasoning, you may well conclude that reasonable disagreement with full disclosure is just impossible. But others will no doubt disagree. Suppose you all pool your evidence, and they remain unimpressed. On one view with substantial support in the literature, if you encounter opposition from an apparent peer, then, absent independent reason to downgrade him, you must lower your confidence, perhaps below the threshold of belief.
The last two decades have seen a rising interest in (a) the notion of a scientific phenomenon as distinct from theories and data, and (b) the intricacies of experimentally producing and stabilizing phenomena. This paper develops an analysis of the stabilization of phenomena that integrates two aspects that have largely been treated separately in the literature: one concerns the skills required for empirical work; the other concerns the strategies by which claims about phenomena are validated. I argue that in order to make sense of the process of stabilization, we need to distinguish between two types of phenomena: phenomena as patterns in the data (surface regularities) and phenomena as underlying (or hidden) regularities. I show that the epistemic relationships that data bear to each of these types of phenomena are different: data patterns are instantiated by individual data, whereas underlying regularities are indicated by individual data, insofar as they instantiate a data pattern. Drawing on an example from memory research, I argue that neither of these two kinds of phenomenon can be stabilized in isolation. I conclude that what is stabilized when phenomena are stabilized is the fit between surface regularities and hidden regularities.
An overview of the epistemology of perception, covering the nature of justification, immediate justification, the relationship between the metaphysics of perceptual experience and its rational role, the rational role of attention, and cognitive penetrability. The published version will contain a smaller bibliography, due to space constraints in the volume.
Although American philosophers and physicians are generally familiar with the writings of Claude Bernard (1813–1878), especially his Introduction to the Study of Experimental Medicine (1865), the medical epistemology of Georges Canguilhem, born in 1904, is virtually unknown in English-speaking nations. Although indebted to Bernard for his conception of the methods to be employed in the acquisition of medical knowledge, Canguilhem radically reformulates Bernard's concepts of ‘disease’, ‘health’, ‘illness’, and ‘pathology’. Contemporary exhortations to medical professionals and medical students that they “pay more attention to the whole patient” take on significance in working through the writings of Canguilhem; of crucial importance is the relation that obtains between a patient's unique symptomatology and the proper drug regimen that is required.
This article explores Michael Faraday’s “Historical Sketch of Electro-Magnetism” as a fruitful source for understanding the epistemic significance of experimentation. In this work Faraday provides a catalog of the numerous experimental and theoretical developments in the early history of electromagnetism. He also describes methods that enable experimentalists to dissociate experimental results from the theoretical commitments generating their research. An analysis of the methods articulated in this sketch is instructive for confronting epistemological worries about the theory-dependence of experimentation.
We show how an epistemology informed by cognitive science promises to shed light on an ancient problem in the philosophy of mathematics: the problem of exactness. The problem of exactness arises because geometrical knowledge is thought to concern perfect geometrical forms, whereas the embodiment of such forms in the natural world may be imperfect. There thus arises an apparent mismatch between mathematical concepts and physical reality. We propose that the problem can be solved by emphasizing the ways in which the brain can transform and organize its perceptual intake. It is not necessary for a geometrical form to be perfectly instantiated in order for perception of such a form to be the basis of a geometrical concept.
In this paper I argue that virtue ethics should be understood as a form of ethics which integrates various domains of the practical, in relation to which virtues are excellences. To argue this it is necessary to distinguish two senses of the “moral”: the broad sense, which integrates the domains of the practical, and a narrow classificatory sense. Virtue ethics, so understood, holds that all genuine virtues should be understood as what I call virtues proper. To possess a virtue proper (such as an excellent disposition of open-mindedness, an epistemic virtue) is to possess a disposition of overall excellence in relation to the sphere or field of the virtue (being open to the opinions of others). Overall excellence in turn involves excellence in integrating, to a sufficient degree, standards of excellence in all relevant practical domains. Epistemic virtues, sporting virtues, moral virtues, and so on are all virtues proper. In particular, it is impossible for an epistemic virtue to be a moral (narrow sense) vice.
Preparative and analytical methods developed by separation scientists have played an important role in the history of molecular biology. One such early method is gel electrophoresis, a technique that uses various types of gel as its supporting medium to separate charged molecules based on size and other properties. Historians of science, however, have only recently begun to pay closer attention to this material epistemological dimension of biomolecular science. This paper substantiates the historiographical thread that explores the relationship between modern laboratory practice and the production of scientific knowledge. It traces the historical development of gel electrophoresis from the mid-1940s to the mid-1960s, with careful attention to the interplay between technical developments and disciplinary shifts, especially the rise of molecular biology in this time-frame. Claiming that the early 1950s marked a decisive shift in the evolution of electrophoretic methods from moving boundary to zone electrophoresis, I reconstruct various trajectories in which scientists such as Oliver Smithies sought out the most desirable solid supporting medium for electrophoretic instrumentation. Biomolecular knowledge, I argue, emerged in part from this process of seeking the most appropriate supporting medium that allowed for discrete molecular separation and visualization. The early 1950s, therefore, marked not only an important turning point in the history of separation science, but also a transformative moment in the history of the life sciences as the growth of molecular biology depended in part on the epistemological access to the molecular realm available through these evolving technologies.
The expression ‘the culture of the artificial’ results from the confusion between nature and culture, when nature mingles with culture to produce the ‘artificial’ and science becomes ‘the science of the artificial’. Artificial intelligence can thus be defined as the ultimate expression of the crisis affecting the very foundation of the system of legitimacy in Western society, i.e. Reason, and more precisely, Scientific Reason. The discussion focuses on the emergence of the culture of the artificial and the radical forms of pragmatism, sophism and marketing from a French philosophical perspective. The paper suggests that in the postmodern age of ‘the crisis of the systems of legitimacy’, the question of the social acceptability of any action, especially actions arising out of the application of AI, cannot be avoided.
Intelligent design creationism (ID) is a religious belief requiring a supernatural creator's interventions in the natural order. ID thus brings with it, as does supernatural theism by its nature, intractable epistemological difficulties. Despite these difficulties and despite ID's defeat in Kitzmiller v. Dover Area School District (2005), ID creationists' continuing efforts to promote the teaching of ID in public school science classrooms threaten both science education and the separation of church and state guaranteed by the U. S. Constitution. I examine the ID movement's failure to provide either a methodology or a functional epistemology to support their supernaturalism, a deficiency that consequently leaves them without epistemic support for their creationist claims. My examination focuses primarily on ID supporter Francis Beckwith, whose published defenses of teaching ID, as well as his other relevant publications concerning education, law, and public policy, have been largely exempt from critical scrutiny. Beckwith's work exhibits the epistemological deficiencies of the supernaturally grounded views of his ID associates and of supernaturalists in general. I preface my examination of Beckwith's arguments with (1) philosopher of science Susan Haack's clarification of the established naturalistic methodology and epistemology of science and (2) discussions of the views of Beckwith's ID associates Phillip Johnson and William Dembski. Finally, I critique the religious exclusionism that Beckwith shares with his ID associates and the implications of his exclusionism for public policy.
The paper provides an explanation of our knowledge of metaphysical modality, or modal knowledge, from our ability to evaluate counterfactual conditionals. The latter ability lends itself to an evolutionary explanation since it enables us to learn from mistakes. Different logical principles linking counterfactuals to metaphysical modality can be employed to extend this explanation to the epistemology of modality. While the epistemological use of some of these principles is either philosophically implausible or empirically inadequate, the equivalence of ‘Necessarily p’ with ‘For all q, if q were the case, p would be the case’ is a suitable starting-point for an explanation of modal knowledge.
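The equivalence stated at the end can be rendered compactly in standard notation (with \(\boxright\), e.g. from LaTeX's stmaryrd package, for the counterfactual conditional):

\[
\Box p \;\equiv\; \forall q \,(q \boxright p)
\]

Read: p is necessary just in case p would still hold whatever supposition q were counterfactually entertained.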
In modern, Western societies the purpose of schooling is to ensure that school-goers acquire knowledge of pre-existing practices, events, entities and so on. The knowledge that is learned is then tested to see if the learner has acquired a correct or adequate understanding of it. For this reason, it can be argued that schooling is organised around a representational epistemology: one which holds that knowledge is an accurate representation of something that is separate from knowledge itself. Since the object of knowledge is assumed to exist separately from the knowledge itself, this epistemology can also be considered ‘spatial.’ In this paper we show how ideas from complexity have challenged the ‘spatial’ epistemology of representation and we explore possibilities for an alternative ‘temporal’ understanding of knowledge in its relationship to reality. In addition to complexity, our alternative takes its inspiration from Deweyan ‘transactional realism’ and deconstruction. We suggest that ‘knowledge’ and ‘reality’ should not be understood as separate systems which somehow have to be brought into alignment with each other, but that they are part of the same emerging complex system which is never fully ‘present’ in any (discrete) moment in time. This not only introduces the notion of time into our understanding of the relationship between knowledge and reality, but also points to the importance of acknowledging the role of the ‘unrepresentable’ or ‘incalculable’. With this understanding, knowledge reaches us not as something we receive but as a response, which brings forth new worlds because it necessarily adds something (which was not present anywhere before it appeared) to what came before. This understanding of knowledge suggests that the acquisition of curricular content should not be considered an end in itself. Rather, curricular content should be used to bring forth that which is incalculable from the perspective of the present. The epistemology of emergence therefore calls for a switch in focus for curricular thinking, away from questions about presentation and representation and towards questions about engagement and response.
Reflective practice is one of the most popular theories of professional knowledge in the last 20 years and has been widely adopted by nursing, health, and social care professions. The term was coined by Donald Schön in his influential books The Reflective Practitioner and Educating the Reflective Practitioner, and has garnered the unprecedented attention of theorists and practitioners of professional education and practice. Reflective practice has been integrated into professional preparatory programmes and continuing education programmes, and has been adopted by the regulatory bodies of a wide range of health and social care professions. Yet, despite its popularity and widespread adoption, a problem frequently raised in the literature concerns the lack of conceptual clarity surrounding the term reflective practice. This paper seeks to respond to this problem by offering an analysis of the epistemology of reflective practice as revealed through a critical examination of philosophical influences within the theory. The aim is to discern the philosophical underpinnings of reflective practice in order to advance increasingly coherent interpretations, and to consider the implications for conceptions of professional knowledge in professional life. The paper briefly examines major philosophical underpinnings in reflective practice to explicate central themes that inform the epistemological assumptions of the theory. The study draws on the work of Donald Schön, and on texts from four philosophers: John Dewey, Nelson Goodman, Michael Polanyi, and Gilbert Ryle. Five central epistemological themes in reflective practice are illuminated: (1) a broad critique of technical rationality; (2) professional practice knowledge as artistry; (3) constructivist assumptions in the theory; (4) the significance of tacit knowledge for professional practice knowledge; and (5) overcoming mind-body dualism to recognize the knowledge revealed in intelligent action. The paper reveals that the theory of reflective practice is concerned with deep epistemological questions of significance to conceptions of knowledge in health and social care professions.
This work develops an epistemology of measurement, that is, an account of the conditions under which measurement and standardization methods produce knowledge as well as the nature, scope, and limits of this knowledge. I focus on three questions: (i) how is it possible to tell whether an instrument measures the quantity it is intended to? (ii) what do claims to measurement accuracy amount to, and how might such claims be justified? (iii) when is disagreement among instruments a sign of error, and when does it imply that instruments measure different quantities? Based on a series of case studies conducted in collaboration with the US National Institute of Standards and Technology (NIST), I argue for a model-based approach to the epistemology of physical measurement. To measure a physical quantity, I argue, is to estimate the value of a parameter in an idealized model of a physical process. Such estimation involves inference from the final state (‘indication’) of a process to the value range of a parameter (‘outcome’) in light of theoretical and statistical assumptions. Contrary to contemporary philosophical views, measurement outcomes cannot be obtained by mapping the structure of indications. Instead, measurement outcomes as well as claims to accuracy, error and quantity individuation can only be adjudicated relative to a choice of idealized modelling assumptions.
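To illustrate the model-based picture with a deliberately simplified example of my own (not drawn from the NIST case studies): suppose a platinum resistance thermometer is idealized by the linear model R = R0(1 + aT). The instrument's final state, a resistance reading, is the indication; the measurement outcome is a value range for the model parameter T, obtained by inverting the model and propagating the statistical uncertainty.

```python
# Minimal sketch of model-based measurement: inferring a parameter value
# range (the 'outcome') from an instrument's final state (the 'indication')
# under an idealized model. All constants are hypothetical assumptions.
R0, a = 100.0, 3.85e-3          # assumed calibration constants of the model
indication, u_R = 119.4, 0.05   # resistance reading and its std. uncertainty

T = (indication / R0 - 1) / a   # point estimate of the model parameter
u_T = u_R / (R0 * a)            # uncertainty propagated through the linear model
print(f"T = {T:.2f} +/- {u_T:.2f} degC")  # outcome: a value range, not a raw reading
```

The point the sketch makes is the abstract's: the number reported depends on the idealized model and its statistical assumptions, not on the indication alone.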
In this paper I argue, first, that the most influential (and perhaps only acceptable) account of the epistemology of self-knowledge, developed and defended at great length in Wright (1989b) and (1989c) (among other places), leaves unanswered a question about the psychology of self-knowledge; second, that without an answer to this question about the psychology of self-knowledge, the epistemic account cannot be considered acceptable; and third, that neither Wright's own answer, nor an interpretation-based answer (based on a proposal from Jacobsen (1997)), will suffice as an acceptable answer to the psychological question. My general ambition is thus to establish that more work is needed if we are to have a full account of self-knowledge in both its epistemological and psychological aspects. I conclude by suggesting how my thesis bears on those who aim to provide an empirical account of the cognition involved in self-knowledge.