Frank Jackson champions the cause of conceptual analysis as central to philosophical inquiry. In recent years conceptual analysis has been undervalued and widely misunderstood, suggests Jackson. He argues that such analysis is mistakenly clouded in mystery, preventing a whole range of important questions from being productively addressed. He anchors his argument in discussions of specific philosophical issues, starting with the metaphysical doctrine of physicalism and moving on, via free will, meaning, personal identity, motion, and change, to ethics and the philosophy of color. In this way the book not only offers a methodological program for philosophy, but also casts new light on some much-debated problems and their interrelations.
The explanatory gap. Consciousness is a mystery. No one has ever given an account, even a highly speculative, hypothetical, and incomplete account, of how a physical thing could have phenomenal states. Suppose that consciousness is identical to a property of the brain, say activity in the pyramidal cells of layer 5 of the cortex involving reverberatory circuits from cortical layer 6 to the thalamus and back to layers 4 and 6, as Crick and Koch have suggested for visual consciousness. Still, that identity itself calls out for explanation! Proponents of an explanatory gap disagree about whether the gap is permanent. Some say that we are like the scientifically naive person who is told that matter = energy, but does not have the concepts required to make sense of the idea. If we can acquire these concepts, the gap is closable. Others say the gap is unclosable because of our cognitive limitations. Still others say that the gap is a consequence of the fundamental nature of consciousness.
That a philosophical thesis entails a vicious regress is commonly taken to be decisive evidence that the thesis is false. In this paper, I argue that the existence of a vicious regress is insufficient to reject a proposed analysis provided that certain constraints on the analysis are met. When a vicious regress is present, some further consequence of the thesis must be established that, together with the presence of the vicious regress, shows the thesis to be false. The argument is provided largely through the examination of Michael Bergmann's vicious regress argument against strong awareness internalism and a partial defense of that thesis against Bergmann.
Philosophical conceptual analysis is an experimental method. Focusing on this helps to justify it against the skepticism of experimental philosophers who follow Weinberg, Nichols, and Stich. To explore the experimental aspect of philosophical conceptual analysis, I consider a simpler instance of the same activity: everyday linguistic interpretation. I argue that this, too, is experimental in nature. And in both conceptual analysis and linguistic interpretation, the intuitions considered problematic by experimental philosophers are necessary but epistemically irrelevant. They are like variables introduced into mathematical proofs which drop out before the solution. Or better, they are like the hypotheses that drive science, which do not themselves need to be true. In other words, it does not matter whether or not intuitions are accurate as descriptions of the natural kinds that undergird philosophical concepts; the aims of conceptual analysis can still be met.
What is the rationale for the methodological innovations of experimental philosophy? This paper starts from the contention that common answers to this question are implausible. It then develops a framework within which experimental philosophy fulfills a specific function in an otherwise traditionalist picture of philosophical inquiry. The framework rests on two principal ideas. The first is Frank Jackson’s claim that conceptual analysis is unavoidable in ‘serious metaphysics’. The second is that the psychological structure of concepts is extremely intricate, much more so than early practitioners of conceptual analysis had realized. This intricacy has implications for the activity of analyzing concepts: while the central, coarser, more prominent contours of a concept may be identified from the armchair, the finer details of the concept’s structure require experimental methods to detect.
Conceptual analysis of health and disease is portrayed as consisting in the confrontation of a set of criteria—a “definition”—with a set of cases, called instances of either “health” or “disease.” Apart from logical counter-arguments, there is no other way to refute an opponent’s definition than by providing counter-cases. As resorting to intensional stipulation is not forbidden, several contenders can therefore be deemed to have succeeded. This implies that conceptual analysis alone is not likely to decide between naturalism and normativism. An alternative to this approach would be to examine whether the concept of disease can be naturalized.
Conceptual analysis is undergoing a revival in philosophy, and much of the credit goes to Frank Jackson. Jackson argues that conceptual analysis is needed as an integral component of so-called serious metaphysics and that it also does explanatory work in accounting for such phenomena as categorization, meaning change, communication, and linguistic understanding. He even goes so far as to argue that opponents of conceptual analysis are implicitly committed to it in practice. We show that he is wrong on all of these points and that his case for conceptual analysis does not succeed.
This essay concerns the question of how we make genuine epistemic progress through conceptual analysis. Our way into this issue will be through consideration of the paradox of analysis. The paradox challenges us to explain how a given statement can make a substantive contribution to our knowledge, even while it purports merely to make explicit what one’s grasp of the concept under scrutiny consists in. The paradox is often treated primarily as a semantic puzzle. However, in “Sect. 1” I argue that the paradox raises a more fundamental epistemic problem, and in “Sects. 1 and 2” I argue that semantic proposals—even ones designed to capture the Fregean link between meaning and epistemic significance—fail to resolve that problem. Seeing our way towards a real solution to the paradox requires more than semantics; we also need to understand how the process of analysis can yield justification for accepting a candidate conceptual analysis. I present an account of this process, and explain how it resolves the paradox, in “Sect. 3”. I conclude in “Sect. 4” by considering the implications for the present account concerning the goal of conceptual analysis, and by arguing that the apparent scarcity of short and finite illuminating analyses in philosophically interesting cases provides no grounds for pessimism concerning the possibility of philosophical progress through conceptual analysis.
In this paper I discuss the claim that the successful reduction of qualitative to physical states requires some sort of intelligible connection between our qualitative and physical concepts, which in turn requires a conceptual analysis of our qualitative concepts in causal-functional terms. While I defend this claim against some of its recent critics, I ultimately dispute it, and propose a different way to get the requisite intelligible connection between qualitative and physical concepts.
There is a line of reasoning in metaepistemology that is congenial to naturalism and hard to resist, yet ultimately misguided: that knowledge might be a natural kind, and that this would undermine the use of conceptual analysis in the theory of knowledge. In this paper, I first bring out various problems with Hilary Kornblith’s argument from the causal–explanatory indispensability of knowledge to the natural kindhood of knowledge. I then criticize the argument from the natural kindhood of knowledge against the method of conceptual analysis in the theory of knowledge. A natural motivation for this argument is the following seemingly plausible principle: if knowledge is a natural kind, then the concept of knowledge is a natural kind concept. Since this principle lacks adequate support, the crucial semantic claim that the concept of knowledge is a natural kind concept must be defended in some more direct way. However, there are two striking epistemic disanalogies between the concept of knowledge and paradigmatic natural kind concepts that militate against this semantic claim. Conceptual analyses of knowledge are not affected by total error, and the proponents of such analyses are not subject to a priori conceptual obliviousness. I conclude that the argument from natural kindhood does not succeed in undermining the use of conceptual analysis in the theory of knowledge.
It would be nice if good old a priori conceptual analysis were possible. For many years conceptual analysis was out of fashion, in large part because of the excessive ambitions of verificationist theories of meaning. However, those days are over. A priori conceptual analysis is once again part of the philosophical mainstream. This renewed popularity, moreover, is well-founded. Modern philosophical analysts have exploited developments in philosophical semantics to formulate analyses which avoid the counterintuitive consequences of verificationism, while vindicating our ability to know a priori precisely what it is our words and thoughts represent. Despite its apparent promise, however, I…
One strategy for blocking Chalmers's overall case against physicalism has been to deny his claim that showing that phenomenal properties are in some sense physical requires an a priori entailment of the phenomenal truths from the physical ones. Here I avoid this well-trodden ground and argue instead that an a priori entailment of the phenomenal truths from the physical ones does not require an analysis in the Jackson/Chalmers sense. This is to sever the dualist's link between conceptual analysis and a priori entailment by showing that the lack of the former does not imply the absence of the latter. Moreover, given the role of the argument from conceptual analysis in Chalmers's overall case for dualism, undermining that argument effectively undermines that case as a whole in a way that, I'll argue, undermining the conceivability arguments as stand-alone arguments does not.
In this paper, I explore the implications of recent empirical research on concept representation for the philosophical enterprise of conceptual analysis. I argue that conceptual analysis, as it is commonly practiced, is committed to certain assumptions about the nature of our intuitive categorization judgments. I then try to show how these assumptions clash with contemporary accounts of concept representation in cognitive psychology. After entertaining an objection to my argument, I close by considering ways in which conceptual analysis might be altered to accord better with the empirical work.
Is conceptual analysis required for reductive explanation? If there is no a priori entailment from microphysical truths to phenomenal truths, does reductive explanation of the phenomenal fail? We say yes. Ned Block and Robert Stalnaker say no.
The main purpose of this article is to undertake a conceptual investigation of the Berlin Wisdom Paradigm: a psychological project initiated by Paul Baltes and intended to study the complex phenomenon of wisdom. Firstly, in order to provide a wider perspective for the subsequent analyses, a short historical sketch is given. Secondly, a meta-theoretical issue of the degree to which the subject matter of the Baltesian study can be identified with the traditional philosophical wisdom is addressed. The main result yielded by a careful conceptual analysis is that the philosophical and psychological concepts of wisdom, though not entirely the same, are at least parallel. Finally, one of the revealed aspects of the Berlin Wisdom Paradigm, i.e. its relative neglect of the non-cognitive and personal aspects of wisdom, is brought to the fore. This deficiency, it is suggested, can be remedied by the application of the virtue ethics' conceptual framework.
Conceptual analysis, like any exclusively theoretical activity, is far from being overrated in current psychology. Such a situation can be related both to the contingent influences of contextual and historical character and to the more essential metatheoretical reasons. After a short discussion of the latter it is argued that even within a strictly empirical psychology there are non-trivial tasks that can be attached to well-defined and methodologically reliable conceptual work. This kind of method, inspired by the ideas of Ludwig Wittgenstein, Peter Strawson (conceptual grammar), and Gilbert Ryle (conceptual geography), is proposed and formally depicted as being holistic, descriptive, and connective. Finally, the newly presented framework of connective conceptual analysis is defended against the “Charge from Psychology,” in a version developed by William Ramsey, claiming that conceptual analysis is based on psychological assumptions that have already been refuted by empirical psychology.
This article argues, against contemporary experimentalist criticism, that conceptual analysis has epistemic value, with a structure that encourages the development of interesting hypotheses which are of the right form to be valuable in diverse areas of philosophy. The article shows, by analysis of the Gettier programme, that conceptual analysis shares the proofs-and-refutations form Lakatos identified in mathematics. Upon discovery of a counterexample, this structure aids the search for a replacement hypothesis. The search is guided by heuristics. The heuristics of conceptual analysis are similar to those in other interesting areas of scholarship, and so hypotheses generated by it are of the right form to be applicable to diverse areas. The article shows that the explanationist criterion in epistemology was developed and applied in this way. The epistemic value of conceptual analysis is oblique because it contributes not towards the main purpose of conceptual analysis but towards the reliable development of epistemically valuable hypotheses in philosophy and scholarship.
In this article the standard philosophical method involving intuition-driven conceptual analysis is challenged in a new way. This orthodox approach to philosophy takes analysanda to be the specifications of the content of concepts in the form of sets of necessary and sufficient conditions. Here it is argued that there is no adequate account of what necessary and sufficient conditions are. So, the targets of applications of the standard philosophical method so understood are not sufficiently well understood for this method to be dependable.
In my paper I identify both the conceptual tools needed to establish claims for the existence of conceptual ties and the principles governing the use of those tools, and present a model of conceptual analysis. I identify and justify those principles in light of the conditions for the meaningfulness of expressions in language, which I extract from an analysis of the concept of meaning. The conclusions of this analysis are organized into a schematic model of the workings of a language. According to this model, the meaning of every word in any language is determined by its role in the systematic mapping of all possible states of affairs included in its conceptual scheme.
The philosophical method of conceptual analysis has been criticised on the grounds that empirical psychological research has cast severe doubt on whether concepts exist in the form traditionally assumed, and that conceptual analysis therefore is doomed. This objection may be termed the Charge from Psychology. After a brief characterisation of conceptual analysis, I discuss the Charge from Psychology and argue that it is misdirected.
This paper does two things. First, it argues for a metaphilosophical view of conceptual-analysis questions; in particular, it argues that the facts that settle conceptual-analysis questions are facts about the linguistic intentions of ordinary folk. The second thing this paper does is argue that if this metaphilosophical view is correct, then experimental philosophy is a legitimate methodology to use in trying to answer conceptual-analysis questions.
Semantic externalism about a class of expressions is often thought to make conceptual analysis about members of that class impossible. In particular, since externalism about natural kind terms makes the essences of natural kinds empirically discoverable, it seems that mere reflection on one's natural kind concept will not be able to tell one anything substantial about what it is for something to fall under one's natural kind concepts. Many hold the further view that one cannot even know anything substantial about the reference-fixers of one's natural kind concepts by armchair reflection. In this paper I want to question this latter view and claim that, because of the way our standard methodology of doing theories of reference relies on semantic intuitions, typical externalists in fact presuppose that one can know the reference-fixers of one's natural kind concepts by mere armchair reflection. The more interesting question is how substantial such knowledge can be. I also take some steps toward answering this question.
This essay attempts to provide an accessible introduction to the topic area of conceptual analysis of legal concepts and its methodology. I attempt to explain, at a fairly foundational level, what conceptual analysis is, how it is done, and why it is important in theorizing about the law. I also attempt to explain how conceptual analysis is related to other areas in philosophy, such as metaphysics and epistemology. Next, I explain the enterprise of conceptual jurisprudence, as concerned to provide an account of those properties that distinguish things that are law from things that are not law and that constitute the former as law, illustrating this explanation with what I hope are intuitive examples. Three different methodological approaches are also explained and evaluated. Finally, the practical importance of conceptual jurisprudence is discussed.
In this paper I argue, contra Fraser MacBride, that conceptual analysis, and in particular the distinction between numerical and qualitative identity, can solve the Problem of Universals, whether understood as the One over Many or as the Many over One. I show why the solutions needed to solve either version of the problem must be in terms of truthmakers, and that the distinction between numerical and qualitative identity is not sufficient to solve them.
Three proponents of the Canberra Plan, namely Jackson, Pettit, and Smith, have developed a collective functionalist program—Canberra Functionalism—spanning from philosophical psychology to ethics. They argue that conceptual analysis is an indispensable tool for research on cognitive processes since it reveals that there are some folk concepts, like belief and desire, whose functional roles must be preserved rather than eliminated by future scientific explanations. Some naturalists have recently challenged this indispensability argument, though the point of that challenge has been blunted by a mutual conflation of metaphysical and methodological strands of naturalism. I argue that the naturalist’s challenge to the indispensability argument, like naturalism itself, ought to be reformulated as a strictly methodological thesis. So understood, the challenge succeeds by showing (1) that we cannot know a priori on the basis of conceptual analysis of folk platitudes that something must occupy the functional roles specified for beliefs and desires, and (2) that proponents of Canberra Functionalism sometimes tacitly concede this point by treating substantive psychological theories as the deliverances of a priori platitudes analysis.
The late scholastics, from the fourteenth to the seventeenth centuries, contributed to many fields of knowledge other than philosophy. They developed a method of conceptual analysis that was very productive in those disciplines in which theory is relatively more important than empirical results. That includes mathematics, where the scholastics developed the analysis of continuous motion, which fed into the calculus, and the theory of risk and probability. The method came to the fore especially in the social sciences. In legal theory they developed, for example, the ethical analyses of the conditions of validity of contracts, and natural rights theory. In political theory, they introduced constitutionalism and the thought experiment of a “state of nature”. Their contributions to economics included concepts still regarded as basic, such as demand, capital, labour, and scarcity. Faculty psychology and semiotics are other areas of significance. In such disciplines, later developments rely crucially on scholastic concepts and vocabulary.
David Henderson and Terry Horgan offer a detailed account of the structure of conceptual analysis that is embedded within a more general account of a priori justification. Their account highlights an important feature of conceptual analysis that has been overlooked in the recent debate. Although it is generally recognized that conceptual analysis involves an inference from premises to the effect that some concept does (or does not) apply to a range of particular cases to a general conclusion about the nature of the concept itself, the details of that inference have not been fully articulated. Henderson and Horgan offer an articulation of the inferential connection which, taken in conjunction with their account of a priori justification, yields a startling conclusion about the epistemic status of the results of conceptual analysis. My view is that their account of a priori justification misses the mark and, as a consequence, they arrive at a conclusion about the epistemic status of the results of conceptual analysis that many will find implausible. Moreover, I fear that because many will find their overall conclusion implausible, they will either overlook or dismiss the important feature of the inferential connection that Henderson and Horgan correctly highlight. Hence, I have two goals. First, I want to track down the source of their mischaracterization of the a priori and to show why it is a mischaracterization. Second, I want to highlight the important feature of the inferential connection involved in conceptual analysis that they identify, articulate its bearing on the debate over the existence of a priori knowledge, and assess its implications.
In order to understand why analytic aesthetics has lost a lot of its former intellectual stature it is necessary to combine historical reconstruction with systematic consideration. In the middle of the twentieth century analytic philosophers came to the conclusion that essentialist theories of the “nature” of art are no longer tenable. As a consequence they felt compelled to move to the meta-level of conceptual analysis. Then they tried to show how a purely classificatory concept of art is used. The presupposition, however, that there actually is such a concept can only appear plausible at first sight. Upon closer inspection it turns out to be utterly misguided.
My aim here is threefold: to show that conceptual facts play a more significant role in justifying explanatory reductions than most of the contributors to the current debate realize; to furnish an account of that role; and to trace the consequences of this account for conceivability arguments about the mind.
Philosophers expend considerable effort on the analysis of concepts, but the value of such work is not widely appreciated. This paper principally analyses some arguments, beliefs, and presuppositions about the nature of design and the relations between design and science common in the literature to illustrate this point, and to contribute to the foundations of design theory.
When is there no fact of the matter about a metaphysical question? When multiple candidate meanings are equally eligible, in David Lewis's sense, and fit equally well with ordinary usage. Thus given certain ontological schemes, there is no fact of the matter whether the criterion of personal identity over time is physical or psychological. But given other ontological schemes there is a fact of the matter; and there is a fact of the matter about which ontological scheme is correct.
The following is a transcript of the interview I (Yasuko Kitano) conducted with Neil Levy (The Centre for Applied Philosophy and Public Ethics, CAPPE) on 23 July 2009, while he was in Tokyo to give a series of lectures on neuroethics at The University of Tokyo Center for Philosophy. I edited his words for publication with his approval.
While philosophers of language have traditionally relied upon their intuitions about cases when developing theories of reference, this methodology has recently been attacked on the grounds that intuitions about reference, far from being universal, show significant cultural variation, thus undermining their relevance for semantic theory. I’ll attempt to demonstrate that (1) such criticisms do not, in fact, undermine the traditional philosophical methodology, and (2) our underlying intuitions about the nature of reference may be more universal than the authors suppose.
Gödel argued that intuition has an important role to play in mathematical epistemology, and despite the infamy of his own position, this opinion still has much to recommend it. Intuitions and folk platitudes play a central role in philosophical enquiry too, and have recently been elevated to a central position in one project for understanding philosophical methodology: the so-called ‘Canberra Plan’. This philosophical role for intuitions suggests an analogous epistemology for some fundamental parts of mathematics, which casts a number of themes in recent philosophy of mathematics (concerning apriority and fictionalism, for example) in revealing new light.
Ordinary Language Philosophy has largely fallen out of favour, and with it the belief in the primary importance of analyses of ordinary language for philosophical purposes. Still, in their various endeavours, philosophers not only from analytic but also from other backgrounds refer to the use and meaning of terms of interest in ordinary parlance. In doing so, they most commonly appeal to their own linguistic intuitions. Often, the appeal to individual intuitions is supplemented by reference to dictionaries. In recent times, Internet search engine queries for expressions of interest have become quite popular. Apparently, philosophers attempt to surpass the limits of their own linguistic intuitions by appealing to experts or to factual uses of language. I argue that this attempt is commendable but that its execution is wanting. Instead of appealing to dictionaries or Internet queries, philosophers should employ computer-based linguistic corpora in order to confirm or falsify hypotheses about the factual use of language. This approach also has some advantages over methods employed by experimental philosophers. If the importance of ordinary language is stressed, the use of linguistic corpora is hardly avoidable.
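The kind of corpus query this proposal recommends can be sketched in a few lines. The snippet below is my own toy illustration, not an example from the paper: it tests a usage hypothesis by counting constructions in a stand-in corpus (a real study would query a balanced corpus such as the BNC or COCA) rather than consulting intuition, a dictionary, or a search engine.

```python
# Toy corpus-query sketch (illustrative only; the corpus text and the
# hypothesis are invented for this example).
import re
from collections import Counter

# Stand-in corpus; in practice, a large balanced corpus would be used.
corpus = (
    "She knows that the train is late. He knows how to swim. "
    "They know that it rains. We know how to cope. I know that."
)

# Hypothesis: propositional 'know that' is more frequent than 'know how'.
counts = Counter(
    m.group(1)
    for m in re.finditer(r"\bknow\w*\s+(that|how)\b", corpus.lower())
)
print(counts)  # frequencies of the two constructions in the toy corpus
```

The point of the method is that the resulting counts confirm or falsify the hypothesis about factual usage independently of any one speaker's intuitions.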
An important tradition in metaphysics takes its job to be finding a limited number of ingredients with which we can tell the complete story of the world (or some subject matter). Physicalism, for example, claims that the list of ingredients sufficient to tell the complete story about the very small, or about the non-sentient, is sufficient to tell the complete story about all of the world. Some people take the moral of this kind of metaphysics to be eliminativist: that we can tell the complete story of the world without meanings, or inflations, shows that meaning and inflation do not exist. Most people are not so blasé about rejecting commonsense opinions. Inflations, wars, rivers and beliefs all exist, but there is nothing but atoms in the void, so we must find a way of showing that the arrangement of atoms in the void makes true the stories about inflations and so on.
This paper argues against both conceptual and linguistic analysis as sources of a priori knowledge. The key claim is that none of the main views about what concepts are can underwrite the possibility of such knowledge.
Many philosophical naturalists eschew analysis in favor of discovering metaphysical truths from the a posteriori, contending that analysis does not lead to philosophical insight. A countercurrent to this approach seeks to reconcile a certain account of conceptual analysis with philosophical naturalism; prominent and influential proponents of this methodology include the late David Lewis, Frank Jackson, Michael Smith, Philip Pettit, and David Armstrong. Naturalistic analysis is a tool for locating in the scientifically given world objects and properties we quantify over in everyday discourse. This collection gathers work from a range of prominent philosophers who are working within this tradition, offering important new work as well as critical evaluations of the methodology. Its centerpiece is an important posthumous paper by David Lewis, "Ramseyan Humility," published here for the first time. The contributors first address issues of philosophy of mind, semantics, and the new methodology's a priori character, then turn to matters of metaphysics, and finally consider problems regarding normativity. Conceptual Analysis and Philosophical Naturalism is one of the first efforts to apply this approach to such a wide range of philosophical issues. Contributors: David Braddon-Mitchell, Mark Colyvan, Frank Jackson, Justine Kingsbury, Fred Kroon, David Lewis, Dustin Locke, Kelby Mason, Jonathan McKeown-Green, Peter Menzies, Robert Nola, Daniel Nolan, Philip Pettit, Huw Price, Denis Robinson, Steve Stich, Daniel Stoljar.
Henderson and Horgan set out a broad new approach to epistemology. They defend the roles of the a priori and conceptual analysis, but with an essential empirical dimension. 'Transglobal reliability' is the key to epistemic justification. The question of which cognitive processes are reliable depends on contingent facts about human capacities.
During the past three decades, there has been an ongoing debate on the quality of health care. Defining quality is an important part of it. This paper offers a review of definitions and a conceptual analysis in order to understand and explain the differences between them. The analysis results in a semantic rule, expressing the meaning of quality as an optimal balance between possibilities realised and a framework of norms and values. This rule is postulated as a formal criterion of meaning, i.e., when correctly applied, people understand each other. The rule suits the abstract nature of the term quality. Quality doesn't exist as such. It is constructed in an interaction between people. This interaction is guided by rules in order to transfer information, e.g. to communicate on quality. The rule improves our ability to discuss the debate on quality and to develop a theory grounding actions such as quality assurance or quality improvement.
Gottlob Frege famously rejects the methodology for consistency and independence proofs offered by David Hilbert in the latter's Foundations of Geometry. The present essay defends against recent criticism the view that this rejection turns on Frege's understanding of logical entailment, on which the entailment relation is sensitive to the contents of non-logical terminology. The goals are (a) to clarify further Frege's understanding of logic and of the role of conceptual analysis in logical investigation, and (b) to point out the extent to which his understanding of logic differs importantly from that of the model-theoretic tradition that grows out of Hilbert's work.
Church's thesis asserts that a number-theoretic function is intuitively computable if and only if it is recursive. A related thesis asserts that Turing's work yields a conceptual analysis of the intuitive notion of numerical computability. I endorse Church's thesis, but I argue against the related thesis. I argue that purported conceptual analyses based upon Turing's work involve a subtle but persistent circularity. Turing machines manipulate syntactic entities. To specify which number-theoretic function a Turing machine computes, we must correlate these syntactic entities with numbers. I argue that, in providing this correlation, we must demand that the correlation itself be computable. Otherwise, the Turing machine will compute uncomputable functions. But if we presuppose the intuitive notion of a computable relation between syntactic entities and numbers, then our analysis of computability is circular.
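The correlation point in this abstract can be made concrete with a minimal sketch (not from the paper itself; the unary notation and the trivial rewrite rule are illustrative assumptions): a "machine" that manipulates only strings counts as computing the successor function only relative to a chosen encoding and decoding of numbers as strings.

```python
# Illustrative sketch: a purely syntactic "machine" that appends one
# stroke to a unary string. By itself it computes nothing numerical.
def machine(tape: str) -> str:
    return tape + "|"

# The correlation between syntax and numbers is supplied separately.
# Under the standard unary correlation below, the machine computes
# the successor function n -> n + 1.
def encode(n: int) -> str:
    return "|" * n        # 3 -> "|||"

def decode(tape: str) -> int:
    return len(tape)      # "||||" -> 4

# f(n) = decode(machine(encode(n)))
assert decode(machine(encode(3))) == 4
```

A non-standard, uncomputable correlation would make the very same syntactic behaviour "compute" a different (possibly uncomputable) function, which is why the analysis must require that the correlation itself be computable, and why the abstract charges such analyses with circularity.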
It is argued here that the question of whether compatibilism is true is irrelevant to metaphysical questions about the nature of human decision-making processes—for example, the question of whether or not humans have free will—except in a very trivial and metaphysically uninteresting way. In addition, it is argued that two other questions—namely, the conceptual-analysis question of what free will is and the question that asks which kinds of freedom are required for moral responsibility—are also essentially irrelevant to metaphysical questions about the nature of human beings.
One way to do socially relevant investigations of science is through conceptual analysis of scientific terms used in special-interest science (SIS). SIS is science having welfare-related consequences and funded by special interests, e.g., tobacco companies, in order to establish predetermined conclusions. For instance, because the chemical industry seeks deregulation of toxic emissions and avoidance of costly cleanups, it funds SIS that supports the concept of "hormesis" (according to which low doses of toxins/carcinogens have beneficial effects). Analyzing the hormesis concept of its main defender, chemical-industry-funded Edward Calabrese, the paper shows that Calabrese and others fail to distinguish three different hormesis concepts, H, HG, and HD. H requires toxin-induced, short-term beneficial effects for only one biological endpoint, while HG requires toxin-induced, net-beneficial effects for all endpoints/responses/subjects/ages/conditions. HD requires using the risk-assessment/regulatory default rule that all low-dose toxic exposures are net-beneficial, thus allowable. Clarifying these concepts, the paper argues for five main claims. (1) Claims positing H are trivially true but irrelevant to regulations. (2) Claims positing HG are relevant to regulation but scientifically false. (3) Claims positing HD are relevant to regulation but ethically/scientifically questionable. (4) Although no hormesis concept (H, HG, or HD) has both scientific validity and regulatory relevance, Calabrese and others obscure this fact through repeated equivocation, begging the question, and data-trimming. Consequently (5) their errors provide some undeserved rhetorical plausibility for deregulating low-dose toxins.
The influence of Michael Polanyi on William H. Poteat’s teaching from 1967 to 1976 was apparent but not paramount. Cultural conceptual analysis as taught and practiced by Poteat during this period included Polanyian texts, themes, and concepts, but drew extensively from other major conceptual innovators who provided radical alternatives to key cultural conceptual commitments of modernity. This was the period roughly between the completion of Intellect and Hope and the writing of Polanyian Meditations.