Conceptual analysis is undergoing a revival in philosophy, and much of the credit goes to Frank Jackson. Jackson argues that conceptual analysis is needed as an integral component of so-called serious metaphysics and that it also does explanatory work in accounting for such phenomena as categorization, meaning change, communication, and linguistic understanding. He even goes so far as to argue that opponents of conceptual analysis are implicitly committed to it in practice. We show that he is wrong on all of these points and that his case for conceptual analysis does not succeed.
It would be nice if good old a priori conceptual analysis were possible. For many years conceptual analysis was out of fashion, in large part because of the excessive ambitions of verificationist theories of meaning. However, those days are over. A priori conceptual analysis is once again part of the philosophical mainstream. This renewed popularity, moreover, is well-founded. Modern philosophical analysts have exploited developments in philosophical semantics to formulate analyses which avoid the counterintuitive consequences of verificationism, while vindicating our ability to know a priori precisely what it is our words and thoughts represent. Despite its apparent promise, however, I argue that a priori conceptual analysis is not in fact possible.
It is easy to become battle-weary in metaphysics. In the face of seemingly unresolvable disputes and unanswerable questions, it is tempting to cast aside one’s sword, proclaiming: “there is no fact of the matter who is right!” Sometimes that is the right thing to do. As a case study, consider the search for the criterion of personal identity over time. I say there is no fact of the matter whether the correct criterion is bodily or psychological continuity. There exist two candidate meanings for talk of persisting persons, one corresponding to each criterion, and there is simply no fact of the matter which candidate we mean. An argument schema for this sort of “no fact of the matter” thesis will be constructed. An instance of the schema will be defended in the case of personal identity. But scrutiny of this instance will reveal limits of the schema. Questions not settled by conceptual analysis—in particular, some very difficult questions of fundamental ontology—have answers. So do certain questions that can be settled by conceptual analysis, namely those that would be answered definitively by ideal philosophical inquiry. Whether there is a fact of the matter is not easily ascertained merely by looking to see whether disputes seem unresolvable or questions unanswerable: sometimes the truth is out there, however hard (or even impossible) it may be to discover.
In this paper, I explore the implications of recent empirical research on concept representation for the philosophical enterprise of conceptual analysis. I argue that conceptual analysis, as it is commonly practiced, is committed to certain assumptions about the nature of our intuitive categorization judgments. I then try to show how these assumptions clash with contemporary accounts of concept representation in cognitive psychology. After entertaining an objection to my argument, I close by considering ways in which conceptual analysis might be altered to accord better with the empirical work.
Semantic externalism about a class of expressions is often thought to make conceptual analysis about members of that class impossible. In particular, since externalism about natural kind terms makes the essences of natural kinds empirically discoverable, it seems that mere reflection on one's natural kind concept will not be able to tell one anything substantial about what it is for something to fall under one's natural kind concepts. Many hold the further view that one cannot even know anything substantial about the reference-fixers of one's natural kind concepts by armchair reflection. In this paper I want to question this latter view and claim that, because of the way our standard methodology of doing theories of reference relies on semantic intuitions, typical externalists in fact presuppose that one can know the reference-fixers of one's natural kind concepts by mere armchair reflection. The more interesting question is how substantial such knowledge can be. I also take some steps toward answering this question.
Philosophers often hold that the aim of conceptual analysis is to discover the representational content of a given concept such as free will, belief, or law. In From Metaphysics to Ethics and other recent work, Frank Jackson has developed a theory of conceptual analysis that is one of the most advanced systematizations of this widespread idea. I argue that this influential way of characterizing conceptual analysis is too narrow. I argue that it is possible that an expressivist account could turn out to be correct as a genuine conceptual analysis of a genuine concept. I claim that since an expressivist analysis does not aim to discover the representational content of a given concept—and, indeed, might itself be based on the idea that the concept in question is not even representational in nature—the possibility of expressivist conceptual analysis shows that Jackson’s theory of conceptual analysis is incomplete as it currently stands. I conclude that Jackson needs to either shift his basic understanding of the nature of conceptual analysis or commit to a particular normative reinterpretation of his project.
The main purpose of this article is to undertake a conceptual investigation of the Berlin Wisdom Paradigm: a psychological project initiated by Paul Baltes and intended to study the complex phenomenon of wisdom. Firstly, in order to provide a wider perspective for the subsequent analyses, a short historical sketch is given. Secondly, a meta-theoretical issue is addressed: the degree to which the subject matter of the Baltesian study can be identified with traditional philosophical wisdom. The main result yielded by a careful conceptual analysis is that the philosophical and psychological concepts of wisdom, though not entirely the same, are at least parallel. Finally, one of the revealed aspects of the Berlin Wisdom Paradigm, i.e. its relative neglect of the non-cognitive and personal aspects of wisdom, is brought to the fore. This deficiency, it is suggested, can be remedied by applying the conceptual framework of virtue ethics.
This essay concerns the question of how we make genuine epistemic progress through conceptual analysis. Our way into this issue will be through consideration of the paradox of analysis. The paradox challenges us to explain how a given statement can make a substantive contribution to our knowledge, even while it purports merely to make explicit what one’s grasp of the concept under scrutiny consists in. The paradox is often treated primarily as a semantic puzzle. However, in “Sect. 1” I argue that the paradox raises a more fundamental epistemic problem, and in “Sects. 1 and 2” I argue that semantic proposals—even ones designed to capture the Fregean link between meaning and epistemic significance—fail to resolve that problem. Seeing our way towards a real solution to the paradox requires more than semantics; we also need to understand how the process of analysis can yield justification for accepting a candidate conceptual analysis. I present an account of this process, and explain how it resolves the paradox, in “Sect. 3”. I conclude in “Sect. 4” by considering the implications of the present account for the goal of conceptual analysis, and by arguing that the apparent scarcity of short and finite illuminating analyses in philosophically interesting cases provides no grounds for pessimism concerning the possibility of philosophical progress through conceptual analysis.
The philosophical method of conceptual analysis has been criticised on the grounds that empirical psychological research has cast severe doubt on whether concepts exist in the form traditionally assumed, and that conceptual analysis therefore is doomed. This objection may be termed the Charge from Psychology. After a brief characterisation of conceptual analysis, I discuss the Charge from Psychology and argue that it is misdirected.
Conceptual analysis, like any exclusively theoretical activity, is far from being overrated in current psychology. This situation can be traced both to contingent contextual and historical influences and to more essential metatheoretical reasons. After a short discussion of the latter, it is argued that even within a strictly empirical psychology there are non-trivial tasks that can be attached to well-defined and methodologically reliable conceptual work. This kind of method, inspired by the ideas of Ludwig Wittgenstein, Peter Strawson (conceptual grammar), and Gilbert Ryle (conceptual geography), is proposed and formally depicted as being holistic, descriptive, and connective. Finally, the newly presented framework of connective conceptual analysis is defended against the “Charge from Psychology,” in a version developed by William Ramsey, claiming that conceptual analysis is based on psychological assumptions that have already been refuted by empirical psychology.
This article argues, against contemporary experimentalist criticism, that conceptual analysis has epistemic value, with a structure that encourages the development of interesting hypotheses which are of the right form to be valuable in diverse areas of philosophy. The article shows, by analysis of the Gettier programme, that conceptual analysis shares the proofs-and-refutations form that Lakatos identified in mathematics. Upon discovery of a counterexample, this structure aids the search for a replacement hypothesis. The search is guided by heuristics. The heuristics of conceptual analysis are similar to those in other interesting areas of scholarship, and so hypotheses generated by it are of the right form to be applicable to diverse areas. The article shows that the explanationist criterion in epistemology was developed and applied in this way. The epistemic value of conceptual analysis is oblique because it contributes not towards the main purpose of conceptual analysis but towards the reliable development of epistemically valuable hypotheses in philosophy and scholarship.
Three proponents of the Canberra Plan, namely Jackson, Pettit, and Smith, have developed a collective functionalist program—Canberra Functionalism—spanning from philosophical psychology to ethics. They argue that conceptual analysis is an indispensable tool for research on cognitive processes since it reveals that there are some folk concepts, like belief and desire, whose functional roles must be preserved rather than eliminated by future scientific explanations. Some naturalists have recently challenged this indispensability argument, though the point of that challenge has been blunted by a mutual conflation of metaphysical and methodological strands of naturalism. I argue that the naturalist’s challenge to the indispensability argument, like naturalism itself, ought to be reformulated as a strictly methodological thesis. So understood, the challenge succeeds by showing (1) that we cannot know a priori on the basis of conceptual analysis of folk platitudes that something must occupy the functional roles specified for beliefs and desires, and (2) that proponents of Canberra Functionalism sometimes tacitly concede this point by treating substantive psychological theories as the deliverances of a priori platitude analysis.
In this paper I argue, contra Fraser MacBride, that conceptual analysis, and in particular the distinction between numerical and qualitative identity, can solve the Problem of Universals, whether understood as the One over Many or as the Many over One. I also show why the solutions needed to solve either version of the problem must be in terms of truthmakers, and why the distinction between numerical and qualitative identity is not by itself sufficient to solve them.
In order to understand why analytic aesthetics has lost a lot of its former intellectual stature it is necessary to combine historical reconstruction with systematic consideration. In the middle of the twentieth century analytic philosophers came to the conclusion that essentialist theories of the “nature” of art are no longer tenable. As a consequence they felt compelled to move to the meta-level of conceptual analysis. Then they tried to show how a purely classificatory concept of art is used. The presupposition, however, that there actually is such a concept can only appear plausible at first sight. Upon closer inspection it turns out to be utterly misguided.
The late scholastics, from the fourteenth to the seventeenth centuries, contributed to many fields of knowledge other than philosophy. They developed a method of conceptual analysis that was very productive in those disciplines in which theory is relatively more important than empirical results. That includes mathematics, where the scholastics developed the analysis of continuous motion, which fed into the calculus, and the theory of risk and probability. The method came to the fore especially in the social sciences. In legal theory they developed, for example, the ethical analyses of the conditions of validity of contracts, and natural rights theory. In political theory, they introduced constitutionalism and the thought experiment of a “state of nature”. Their contributions to economics included concepts still regarded as basic, such as demand, capital, labour, and scarcity. Faculty psychology and semiotics are other areas of significance. In such disciplines, later developments rely crucially on scholastic concepts and vocabulary.
Conceptual analysis of health and disease is portrayed as consisting in the confrontation of a set of criteria—a “definition”—with a set of cases, called instances of either “health” or “disease.” Apart from logical counter-arguments, there is no other way to refute an opponent’s definition than by providing counter-cases. As resorting to intensional stipulation (stipulation of meaning) is not forbidden, several contenders can therefore be deemed to have succeeded. This implies that conceptual analysis alone is not likely to decide between naturalism and normativism. An alternative to this approach would be to examine whether the concept of disease can be naturalized.
David Henderson and Terry Horgan offer a detailed account of the structure of conceptual analysis that is embedded within a more general account of a priori justification. Their account highlights an important feature of conceptual analysis that has been overlooked in the recent debate. Although it is generally recognized that conceptual analysis involves an inference from premises stating that some concept does (or does not) apply to a range of particular cases to a general conclusion about the nature of the concept itself, the details of that inference have not been fully articulated. Henderson and Horgan offer an articulation of the inferential connection which, taken in conjunction with their account of a priori justification, yields a startling conclusion about the epistemic status of the results of conceptual analysis. My view is that their account of a priori justification misses the mark and, as a consequence, they arrive at a conclusion about the epistemic status of the results of conceptual analysis that many will find implausible. Moreover, I fear that because many will find their overall conclusion implausible, they will either overlook or dismiss the important feature of the inferential connection that Henderson and Horgan correctly highlight. Hence, I have two goals. First, I want to track down the source of their mischaracterization of the a priori and to show why it is a mischaracterization. Second, I want to highlight the important feature of the inferential connection involved in conceptual analysis that they identify, articulate its bearing on the debate over the existence of a priori knowledge, and assess its implications.
My aim here is threefold: (a) to show that conceptual facts play a more significant role in justifying explanatory reductions than most of the contributors to the current debate realize; (b) to furnish an account of that role, and (c) to trace the consequences of this account for conceivability arguments about the mind.
Philosophers expend considerable effort on the analysis of concepts, but the value of such work is not widely appreciated. This paper principally analyses some arguments, beliefs, and presuppositions about the nature of design and the relations between design and science common in the literature to illustrate this point, and to contribute to the foundations of design theory.
The explanatory gap. Consciousness is a mystery. No one has ever given an account, even a highly speculative, hypothetical, and incomplete account, of how a physical thing could have phenomenal states (Nagel, 1974; Levine, 1983). Suppose that consciousness is identical to a property of the brain, say activity in the pyramidal cells of layer 5 of the cortex involving reverberatory circuits from cortical layer 6 to the thalamus and back to layers 4 and 6, as Crick and Koch have suggested for visual consciousness (see Crick, 1994). Still, that identity itself calls out for explanation! Proponents of an explanatory gap disagree about whether the gap is permanent. Some (e.g. Nagel, 1974) say that we are like the scientifically naive person who is told that matter = energy, but does not have the concepts required to make sense of the idea. If we can acquire these concepts, the gap is closable. Others say the gap is unclosable because of our cognitive limitations (McGinn, 1991). Still others say that the gap is a consequence of the fundamental nature of consciousness.
The following is a transcript of the interview I (Yasuko Kitano) conducted with Neil Levy (The Centre for Applied Philosophy and Public Ethics, CAPPE) on 23 July 2009, while he was in Tokyo to give a series of lectures on neuroethics at The University of Tokyo Center for Philosophy. I edited his words for publication with his approval.
While philosophers of language have traditionally relied upon their intuitions about cases when developing theories of reference, this methodology has recently been attacked on the grounds that intuitions about reference, far from being universal, show significant cultural variation, thus undermining their relevance for semantic theory. I’ll attempt to demonstrate that (1) such criticisms do not, in fact, undermine the traditional philosophical methodology, and (2) our underlying intuitions about the nature of reference may be more universal than the authors suppose.
The multiplicity of definitions and conceptions of self-regulation that typifies contemporary research on self-regulation in psychology and educational psychology is examined. This examination is followed by critical analyses of theory and research in educational psychology that reveal not only conceptual confusions, but misunderstandings of conceptual versus empirical issues, individualistic biases to the detriment of an adequate consideration of social and cultural contexts, and a tendency to reify psychological states and processes as ontologically foundational to self-regulation. The essay concludes with a consideration of educational research and intervention in the area of students’ self-regulated learning in terms of the scientific and professional interests of psychologists and educators, and the disguised manipulation of student self-surveillance in the service of the institutional mandates of schools.
Is conceptual analysis required for reductive explanation? If there is no a priori entailment from microphysical truths to phenomenal truths, does reductive explanation of the phenomenal fail? We say yes (Chalmers 1996; Jackson 1994, 1998). Ned Block and Robert Stalnaker say no (Block and Stalnaker 1999).
During the past three decades, there has been an ongoing debate on the quality of health care, and defining quality is an important part of it. This paper offers a review of definitions and a conceptual analysis in order to understand and explain the differences between them. The analysis results in a semantic rule expressing the meaning of quality as an optimal balance between possibilities realised and a framework of norms and values. This rule is postulated as a formal criterion of meaning: when it is (correctly) applied, people understand each other. The rule suits the abstract nature of the term quality. Quality doesn't exist as such; it is constructed in an interaction between people. This interaction is guided by rules in order to transfer information, e.g. to communicate on quality. The rule improves our ability to discuss the debate on quality and to develop a theory grounding actions such as quality assurance or quality improvement.
It is argued here that the question of whether compatibilism is true is irrelevant to metaphysical questions about the nature of human decision-making processes—for example, the question of whether or not humans have free will—except in a very trivial and metaphysically uninteresting way. In addition, it is argued that two other questions—namely, the conceptual-analysis question of what free will is and the question that asks which kinds of freedom are required for moral responsibility—are also essentially irrelevant to metaphysical questions about the nature of human beings.
One strategy for blocking Chalmers's overall case against physicalism has been to deny his claim that showing that phenomenal properties are in some sense physical requires an a priori entailment of the phenomenal truths from the physical ones. Here I avoid this well-trodden ground and argue instead that an a priori entailment of the phenomenal truths from the physical ones does not require an analysis in the Jackson/Chalmers sense. This is to sever the dualist's link between conceptual analysis and a priori entailment by showing that the lack of the former does not imply the absence of the latter. Moreover, given the role of the argument from conceptual analysis in Chalmers's overall case for dualism, undermining that argument effectively undermines that case as a whole in a way that, I'll argue, undermining the conceivability arguments as stand-alone arguments does not.
Church's thesis asserts that a number-theoretic function is intuitively computable if and only if it is recursive. A related thesis asserts that Turing's work yields a conceptual analysis of the intuitive notion of numerical computability. I endorse Church's thesis, but I argue against the related thesis. I argue that purported conceptual analyses based upon Turing's work involve a subtle but persistent circularity. Turing machines manipulate syntactic entities. To specify which number-theoretic function a Turing machine computes, we must correlate these syntactic entities with numbers. I argue that, in providing this correlation, we must demand that the correlation itself be computable. Otherwise, the Turing machine will compute uncomputable functions. But if we presuppose the intuitive notion of a computable relation between syntactic entities and numbers, then our analysis of computability is circular.
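The circularity worry can be made vivid with a small sketch (my illustration, not from the paper; the function names are hypothetical). A machine acts purely on strings; only a chosen correlation between strings and numbers makes it count as computing a number-theoretic function, and if that correlation were allowed to be arbitrary, even a trivial machine could be paired with an uncomputable function:

```python
# Illustrative sketch: why the string-to-number correlation matters.

def encode(n: int) -> str:
    """A standard, itself-computable correlation: n <-> a block of n+1 '1's."""
    return "1" * (n + 1)

def decode(s: str) -> int:
    """Inverse of the standard correlation."""
    return len(s) - 1

def successor_machine(tape: str) -> str:
    """A trivial 'Turing machine' acting purely on syntax: append one '1'."""
    return tape + "1"

# Under the standard correlation, the machine computes n |-> n + 1.
for n in range(5):
    assert decode(successor_machine(encode(n))) == n + 1

# The circularity: if the correlation could be ANY pairing, e.g. matching
# '1' * (n + 1) with g(n) for some uncomputable g, this same syntactic
# machine would count as "computing" an uncomputable function.  Demanding
# that the correlation be *computable* blocks this, but it presupposes
# the very notion of computability being analyzed.
```

The point is not that the encoding above is special, but that ruling out deviant encodings already invokes computability.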
This paper defends Pragmatic Conceptual Analysis, a proposed empirical methodology for explicating philosophical concepts. This methodology attributes to our shared concepts whatever application conditions they would need to have in order best to continue delivering benefits in the ways they have regularly delivered benefits in the past. In the first stage of my argument I argue that Pragmatic Conceptual Analysis has what I call normative authority: we have practical and epistemic reason to adopt the explications that it delivers even if we think doing so requires stipulative revisions in the meanings of our concepts. I then use this normative authority to argue that Pragmatic Conceptual Analysis also has what I call descriptive authority: when we understand concept-meaning in the way we ought to understand it (in the way licensed by the normative authority of Pragmatic Conceptual Analysis), we see that, rather than being revisionary, Pragmatic Conceptual Analysis is a semantically conservative tool that uncovers (what we should think of as being) the meanings our concepts already have.
Gottlob Frege famously rejects the methodology for consistency and independence proofs offered by David Hilbert in the latter's Foundations of Geometry. The present essay defends against recent criticism the view that this rejection turns on Frege's understanding of logical entailment, on which the entailment relation is sensitive to the contents of non-logical terminology. The goals are (a) to clarify further Frege's understanding of logic and of the role of conceptual analysis in logical investigation, and (b) to point out the extent to which his understanding of logic differs importantly from that of the model-theoretic tradition that grows out of Hilbert's work.
Tarski is famous for his widely accepted conceptual analysis (or, in his terms, “explication”) of the notion of truth for formal languages and the allied notions of satisfaction, definability, and logical consequence. From an historical point of view, two questions are of interest. First, what motivated Tarski to make these analyses, and second, what led to their particular form? The latter question is easy to answer at one level: Tarski was heavily influenced by the visible success of conceptual analysis in set-theoretic topology as practiced by the leading mathematicians at the University of Warsaw in the 1920s, and so formulated his analyses of semantical concepts in general set-theoretical terms. But the actual forms which his definitions took are puzzling in a number of respects. The question of motivation is also difficult because there was no prima facie compelling reason for dealing in precise terms with the semantical notions. These had been used quite confidently, without any such explication, by a number of Tarski’s contemporaries, including Skolem and Gödel. The aim of this paper is to throw greater light on both the “why” and “how” questions concerning Tarski’s conceptual analysis of semantical notions, especially that of truth.
This is a paper on Georges Rey’s views of conceptual analysis (as presented in two versions of his paper on philosophical analysis, the second bearing a telling title, “Philosophical Analysis as Cognitive Psychology: Thinking About Nothing”), and on his views on the a priori. Let me first mention that I am very happy to comment on these views, and to discuss them with Georges at a conference.[i] I have personally learned a lot from him; in particular, his computationalist view of a priori knowledge has greatly influenced my own thinking on the subject.
Lewis considers (Postscript B to 'Causation') the objection that what he calls a plain case of probabilistic causation is really a probable case of plain causation. He replies that the objection rests on the false metaphysical assumption that counterfactuals whose consequents are about events (rather than chances) can be true under indeterminism. The present note argues that this is the wrong kind of reply, because metaphysics is never relevant to conceptual analysis.
One way to do socially relevant investigations of science is through conceptual analysis of scientific terms used in special-interest science (SIS). SIS is science having welfare-related consequences and funded by special interests, e.g., tobacco companies, in order to establish predetermined conclusions. For instance, because the chemical industry seeks deregulation of toxic emissions and avoiding costly cleanups, it funds SIS that supports the concept of "hormesis" (according to which low doses of toxins/carcinogens have beneficial effects). Analyzing the hormesis concept of its main defender, chemical-industry-funded Edward Calabrese, the paper shows that Calabrese and others fail to distinguish three different hormesis concepts, H, HG, and HD. H requires toxin-induced, short-term beneficial effects for only one biological endpoint, while HG requires toxin-induced, net-beneficial effects for all endpoints/responses/subjects/ages/conditions. HD requires using the risk-assessment/regulatory default rule that all low-dose toxic exposures are net-beneficial, thus allowable. Clarifying these concepts, the paper argues for five main claims. (1) Claims positing H are trivially true but irrelevant to regulations. (2) Claims positing HG are relevant to regulation but scientifically false. (3) Claims positing HD are relevant to regulation but ethically/scientifically questionable. (4) Although no hormesis concept (H, HG, or HD) has both scientific validity and regulatory relevance, Calabrese and others obscure this fact through repeated equivocation, begging the question, and data-trimming. Consequently (5) their errors provide some undeserved rhetorical plausibility for deregulating low-dose toxins.
David Henderson and Terence Horgan set out a broad new approach to epistemology, which they see as a mixed discipline, having both a priori and empirical elements. They defend the roles of a priori reflection and conceptual analysis in philosophy, but their revisionary account of these philosophical methods allows them a subtle but essential empirical dimension. They espouse a dual-perspective position which they call iceberg epistemology, respecting the important differences between epistemic processes that are consciously accessible and those that are not. Reflecting on epistemic justification, they introduce the notion of transglobal reliability as the mark of the cognitive processes that are suitable for humans. Which cognitive processes these are depends on contingent facts about human cognitive capacities, and these cannot be known a priori.
It is argued that conceptual analysis, as practiced by the philosophers of ordinary language, is an empirical procedure that relies on a version of Garfinkel's ethnomethodological experiment. The ethnomethodological experiment is presented as a procedure in which the existence and nature of a social norm is demonstrated by flouting the putative convention and observing what reaction that produces in the social group within which the convention is assumed to operate. Examples are given of the use of ethnomethodological experiments, both in vivo and as thought experiments, in order to demonstrate the existence of otherwise invisible conventions governing human social behavior. Comparable examples are cited from the writings of ordinary language philosophers of ethnomethodological thought experiments designed to demonstrate the existence of linguistic conventions.
Rey’s project of rescuing conceptual analysis within a naturalistic computationalist framework, equipped with a Putnamian account of reference, is an interesting and valuable project. However, his extreme pessimism about fundamental philosophical concepts, according to which they have mostly tended to be empty, amounts to sacrificing philosophical analysis after having rescued it from the Quineans. An alternative is proposed which accepts most of the naturalistic computationalist Putnamian framework, rejects the traditional view of analyticity, but secures more space for a constructive, as opposed to merely destructive, philosophical analysis.
Conceptual analysis is currently out of favour, especially in North America. This is partly through misunderstanding of its nature. Properly understood, conceptual analysis is not a mysterious activity discredited by Quine that seeks after the a priori in some hard‐to‐understand sense. It is, rather, something familiar to everyone, philosophers and non‐philosophers alike—or so I argue. Another reason for its unpopularity is a failure to appreciate the need for conceptual analysis. The cost of repudiating it has not been sufficiently appreciated; without it, we cannot address a whole raft of important questions.
It is a contested question in contemporary theories of religion whether the concept of religion can be defined in a sound way. Many theorists maintain that a universal but delimiting definition is impossible. In this study, by contrast, it is argued that a conceptual analysis of religion that holds universally is perfectly possible, because the following thesis can be seen as a necessary and sufficient conceptual condition of what religion is: (R) X is a religion if and only if X is a collection of artifacts which has the proper function of representing a supraphysical world. On this thesis, it is argued that artifacts such as pictorial and verbal representations, rituals, symbols, and various tools constitute religion as a cultural object, which, as a collection of artifacts, has the proper function of representing a conceived world that is not entirely physical, and which, allegedly, is a prerequisite for existential welfare in relation to observance. It is important here to understand what is constitutive of these kinds of conceived worlds. A supraphysical world is defined as follows. Given that the actual world is a physical world, a conception S is a construction of a supraphysical world if and only if both of the following conditions apply to S: (1) Metaphysical component: S is a duplicate of the actual world with the addition of an anti-physical substance. (2) Existential-normative component: S is an alleged prerequisite for existential welfare in relation to observance. The core argument of the study is that (R) holds a priori for the concept of religion and as an a posteriori necessity for every instance of a religion. Apart from discussing the methodological problems of defining religion, the study introduces a new theory of religion in terms of (R). It addresses issues in the theory of artifacts, in the theory of representations, and in the theory of conceptual analysis.
We illustrate the application of the conceptual analysis (CA) method outlined in Part I by the example of quantum mechanics. In the present part the Hilbert-space structure of conventional quantum mechanics is deduced as a consequence of postulates specifying further idealized concepts. A critical discussion of the idealizations of quantum mechanics is proposed. Quantum mechanics is characterized as a “statistically complete” theory, and a simple and elegant formal recipe for the construction of the fundamental mathematical apparatus of quantum mechanics is formulated. Our analysis may also lead to a criticism of quantum mechanics as a “strongly idealized” theory. A critical analysis of the fundamental structure of quantum mechanics seems an indispensable and natural starting point for the construction of new theories. A major technical problem in a more general application of the CA method is the lack of mathematical representation theorems for more general algebraic structures.
The application of the conceptual analysis (CA) method outlined in Part I is illustrated by the example of quantum mechanics. In Part II, we deduce the complete-lattice structure of quantum mechanics from postulates specifying the idealizations that are accepted in the theory. The idealized abstract concepts are introduced by means of a topological extension of the basic structure (obtained in Part I) in accord with the “approximation principle”; the relevant topologies are not arbitrarily chosen; they are fixed by the choice of the idealizations. There is a typical topological asymmetry in the mathematical scheme. Convexity and linear structures do not play any role in the mathematical methods of this approach. The essential concept in Part II is the idealization of “perfect measurement” suggested by our conceptual analysis in Part I. The Hilbert-space representation will be deduced in Part III. In our papers, we keep to the tenet: the mathematical scheme of a physical theory must be rigorously formulated. However, for physics, mathematics is only a nice and useful tool; it is not an end in itself.
A method is proposed that should facilitate the construction of theories of “submicroscopic particles” (denoted as “theories of microchannels”) in a way similar to the use of group-theoretical methods. The “conceptual analysis” (CA) method is based on the analysis of the basic concepts of a theory; it permits a determination of the necessary conditions imposed on the mathematical apparatus of the theory, which then appear as a mathematical representation of the structures obtained in a formal scheme of the theory. A pertinent conceptual analysis leads to a new definition (“relativization”) of the concept “empirical implication.” The approach may be characterized as “realistic” and “operational.” The application of the CA method is illustrated by the example of quantum theory. In Part I the algebraic structure of a partially ordered, upward-directed, bounded set is deduced from the rudimentary concepts. In Parts II and III, we shall deduce the Hilbert-space structure (well established in quantum mechanics) from postulates on some essential idealizations accepted in the theory. Whereas Part II is concerned with the idealizations of existing quantum theories based on the Hilbert-space formalism, Part I may be considered a general basis for a wider class of theories.
A conceptual analysis of basic notions of addictiology, i.e., euphoria, ecstasy, inebriation, abuse, dependence, and addiction, is presented. Three different forms of dependence are distinguished: purely psychic, psycho-physiological, and purely somatic dependence. Two kinds of addiction are differentiated, i.e., appetitive and deprivative addiction. The conceptual requirements of addiction are discussed. Keeping these in mind, some ethical problems of drug therapy and psychotherapy are explained. Criteria for the assessment of therapeutic approaches are suggested: effectiveness, side effects, and economic, ethical, and esthetic valuation.