One of the central problems in the philosophy of psychology is an updated version of the old mind-body problem: how levels of theories in the behavioral and brain sciences relate to one another. Many contemporary philosophers of mind believe that cognitive-psychological theories are not reducible to neurological theories. However, this antireductionism has not spawned a revival of dualism. Instead, most nonreductive physicalists prefer the idea of a one-way dependence of the mental on the physical. In Psychoneural Reduction, John Bickle presents a new type of reductionism, one that is stronger than one-way dependency yet sidesteps the arguments that sank classical reductionism. Although he makes some concessions to classical antireductionism, he argues for a relationship between psychology and neurobiology that shares some of the key aims, features, and consequences of classical reductionism. Parts of Bickle's "new wave" reductionism have emerged piecemeal over the past two decades; this is his first comprehensive statement and defense of it to appear.
Through careful analysis of phenomenological texts by Husserl and Heidegger, Marion argues for the necessity of a third phenomenological reduction that concerns what is fully implied but left largely unthought by the phenomenologies of both ...
This volume investigates the notion of reduction. Building on the idea that philosophers employ the term ‘reduction’ to reconcile diversity and directionality with unity, without relying on elimination, the book offers a powerful explication of an “ontological” notion of reduction the extension of which is (primarily) formed by properties, kinds, individuals, or processes. It argues that related notions of reduction, such as theory-reduction and functional reduction, should be defined in terms of this explication. Thereby, the book offers a coherent framework, which sheds light on the history of the various reduction debates in the philosophy of science and in the philosophy of mind, and on related topics such as reduction and unification, the notion of a scientific level, and physicalism.
In this paper I examine Chalmers and Jackson’s defence of the a priori entailment thesis, that is, the claim that microphysical truths a priori entail ordinary non-phenomenal truths such as ‘water covers 60% of the Earth surface’, which they use as a premise for an argument against the possibility of a reductive explanation of consciousness. Their argument relies on a certain view about the possession conditions of macroscopic concepts such as WATER, known as ascriptivism. In the paper I distinguish two versions of ascriptivism: reductive versus non-reductive ascriptivism. According to reductive ascriptivism, competent users of a concept have the ability to infer truths involving that concept from lower-level truths, whereas according to non-reductive ascriptivism, all that is required in order to be a competent user of a concept is to be able to infer truths involving that concept from other truths, which need not be lower-level truths. I argue, first, that the a priori entailment thesis is committed to reductive ascriptivism, and secondly, that reductive ascriptivism is problematic because it trivializes the notion of a priori knowledge. Therefore, I conclude that Chalmers and Jackson have not presented a convincing case for the claim that microphysical truths entail ordinary non-phenomenal truths a priori, especially when we understand this claim in the sense that is relevant for their argument against the possibility of a reductive explanation of consciousness.
This is one of two papers about emergence, reduction and supervenience. It expounds these notions and analyses the general relations between them. The companion paper analyses the situation in physics, especially limiting relations between physical theories. I shall take emergence as behaviour that is novel and robust relative to some comparison class. I shall take reduction as deduction using appropriate auxiliary definitions. And I shall take supervenience as a weakening of reduction, viz. to allow infinitely long definitions. The overall claim of this paper will be that emergence is logically independent both of reduction and of supervenience. In particular, one can have emergence with reduction, as well as without it; and emergence without supervenience, as well as with it. Of the subsidiary claims, the four main ones are: (1) I defend the traditional Nagelian conception of reduction; (2) I deny that the multiple realizability argument causes trouble for reductions, or "reductionism"; (3) I stress the collapse of supervenience into deduction via Beth's theorem; (4) I adapt some examples already in the literature to show supervenience without emergence and vice versa.
I argue that an adequate account of non-reductive realization must guarantee satisfaction of a certain condition on the token causal powers associated with (instances of) realized and realizing entities---namely, what I call the 'Subset Condition on Causal Powers' (first introduced in Wilson 1999). In terms of states, the condition requires that the token powers had by a realized state on a given occasion be a proper subset of the token powers had by the state that realizes it on that occasion. Accounts of non-reductive realization conforming to this condition are implementing what I call 'the powers-based subset strategy'. I focus on the crucial case involving mental and brain states; the results may be generalized, as appropriate. I first situate and motivate the strategy by attention to the problem of mental causation; I make the case, in schematic terms, that implementation of the strategy makes room (contra Kim 1989, 1993, 1998, and elsewhere) for mental states to be ontologically and causally autonomous from their realizing physical states, without inducing problematic causal overdetermination, and compatible with both Physicalism and Non-reduction; and I show that several contemporary accounts of non-reductive realization (in terms of functional realization, parthood, and the determinable/determinate relation) are plausibly seen as implementing the strategy. As I also show, implementation of the powers-based strategy does not require endorsement of any particular accounts of either properties or causation---indeed, a categoricalist contingentist Humean can implement the strategy. The schematic location of the strategy in the space of available responses to the problem of mental (more generally, higher-level) causation, as well as the fact that the schema may be metaphysically instantiated, strongly suggests that the strategy is, appropriately generalized and instantiated, sufficient and moreover necessary for non-reductive realization.
I go on to defend the sufficiency and necessity claims against a variety of objections, considering, along the way, how the powers-based subset strategy fares against competing accounts of purportedly non-reductive realization in terms of supervenience, token identity, and constitution.
Object-Oriented Ontology is a contemporary form of realism concerned with the investigation of “objects” broadly construed. It may be characterised in terms of a metaphysical pluralism to the extent that it recognises infinitely many different kinds of emergent entities, and this fact in turn leads to a number of questions concerning the nature of objects and emergence in OOO: what is the precise meaning of an emergent entity in OOO? How has emergence been denied throughout the history of Western thought? Is there a specific object-oriented account of emergence? What is the causal mechanism which provides the conditions of possibility for the generation of emergent entities? In this article, I aim to answer all these questions by constructing the first extensive account of real emergence in the context of Object-Oriented Ontology, and I also seek to tie this analysis to the notion of “vicarious” or indirect causation.
Philosophers of neuroscience have traditionally described interfield integration using reduction models. Such models describe formal inferential relations between theories at different levels. I argue against reduction and for a mechanistic model of interfield integration. According to the mechanistic model, different fields integrate their research by adding constraints on a multilevel description of a mechanism. Mechanistic integration may occur at a given level or in the effort to build a theory that oscillates among several levels. I develop this alternative model using a putative exemplar of reduction in contemporary neuroscience: the relationship between the psychological phenomena of learning and memory and the electrophysiological phenomenon known as Long-Term Potentiation. A new look at this historical episode reveals the relative virtues of the mechanistic model over reduction as an account of interfield integration.
Some claim that Non-reductive Physicalism is an unstable position, on grounds that NRP either collapses into reductive physicalism, or expands into emergentism of a robust or ‘strong’ variety. I argue that this claim is unfounded, by attention to the notion of a degree of freedom—roughly, an independent parameter needed to characterize an entity as being in a state functionally relevant to its law-governed properties and behavior. I start by distinguishing three relations that may hold between the degrees of freedom needed to characterize certain special science entities, and those needed to characterize their composing physical entities; these correspond to what I call ‘reductions’, ‘restrictions’, and ‘eliminations’ in degrees of freedom. I then argue that eliminations in degrees of freedom, in particular—when strictly fewer degrees of freedom are required to characterize certain special science entities than are required to characterize their composing physical entities—provide a basis for making sense of how certain special science entities can be both physically acceptable and ontologically irreducible to physical entities.
Most philosophical accounts of emergence are incompatible with reduction. Most scientists regard a system property as emergent relative to properties of the system's parts if it depends upon their mode of organization--a view consistent with reduction. Emergence can be analyzed as a failure of aggregativity--a state in which "the whole is nothing more than the sum of its parts." Aggregativity requires four conditions, giving tools for analyzing modes of organization. Differently met for different decompositions of the system, and in different degrees, these conditions provide powerful evaluation criteria for choosing decompositions, and heuristics for detecting biases of vulgar reductionisms. This analysis of emergence is compatible with reduction.
Grand debates over reduction and emergence are playing out across the sciences, but these debates have reached a stalemate, with both sides declaring victory on empirical grounds. In this book, Carl Gillett provides new theoretical frameworks with which to understand these debates, illuminating both the novel positions of scientific reductionists and emergentists and the recent empirical advances that drive these new views. Gillett also highlights the flaws in existing philosophical frameworks and reorients the discussion to reflect the new scientific advances and issues, including the nature of 'parts' and 'wholes', the character of aggregation, and thus the continuity of nature itself. Most importantly, Gillett shows how disputes about concrete scientific cases are empirically resolvable and hence how we can break the scientific stalemate. Including a detailed glossary of key terms, this volume will be valuable for researchers and advanced students of the philosophy of science and metaphysics, and scientific researchers working in the area.
Is conceptual analysis required for reductive explanation? If there is no a priori entailment from microphysical truths to phenomenal truths, does reductive explanation of the phenomenal fail? We say yes. Ned Block and Robert Stalnaker say no.
Ethicists struggle to take reductive views seriously. They also have trouble conceiving of some supervenience failures. Understanding why provides further evidence for a kind of hybrid view of normative concept use.
Back cover: This book develops a philosophical account that reveals the major characteristics that make an explanation in the life sciences reductive and distinguish them from non-reductive explanations. Understanding what reductive explanations are enables one to assess the conditions under which reductive explanations are adequate and thus enhances debates about explanatory reductionism. The account of reductive explanation presented in this book has three major characteristics. First, it emerges from a critical reconstruction of the explanatory practice of the life sciences itself. Second, the account is monistic since it specifies one set of criteria that apply to explanations in the life sciences in general. Finally, the account is ontic in that it traces the reductivity of an explanation back to certain relations that exist between objects in the world (such as part-whole relations and level relations), rather than to the logical relations between sentences. Beginning with a disclosure of the meta-philosophical assumptions that underlie the author’s analysis of reductive explanation, the book leads into the debate about reduction(ism) in the philosophy of biology and continues with a discussion on the two perspectives on explanatory reduction that have been proposed in the philosophy of biology so far. The author scrutinizes how the issue of reduction becomes entangled with explanation and analyzes two concepts, the concept of a biological part and the concept of a level of organization. The results of these five chapters constitute the ground on which the author bases her final chapter, developing her ontic account of reductive explanation.
Though most contemporary philosophers and scientists accept a physicalist view of mind, the recent surge of interest in the problem of consciousness has put the mind/body problem back into play. The physicalists' lack of success in dispelling the air of residual mystery that surrounds the question of how consciousness might be physically explained has led to a proliferation of options. Some offer alternative formulations of physicalism, but others forgo physicalism in favour of views that are more dualistic or that bring in mentalistic features at the ground-floor level of reality as in pan-proto-psychism. My aim here is to give an overview of the recent philosophic discussion to serve as a map in locating issues and options. I will not offer a comprehensive survey of the debate or mark every important variant to be found in the recent literature. I will mark the principal features of the philosophic landscape that one might use as general orientation points in navigating the terrain. I will focus in particular on three central and interrelated ideas: those of emergence, reduction, and nonreductive physicalism. The third of these, which has emerged as more or less the majority view among current philosophers of mind, combines a pluralist view about the diversity of what needs to be explained by science with an underlying metaphysical commitment to the physical as the ultimate basis of all that is real. The view has been challenged from both left and right, on one side from dualists and on the other from hard-core reductive materialists. Despite their differences, those critics agree in finding nonreductive physicalism an unacceptable and perhaps even incoherent position. They agree as well in treating reducibility as the essential criterion for physicality; they differ only about whether the criterion can be met. Reductive physicalists argue that it can, and dualists deny it.
A conventional wisdom about the progress of physics holds that successive theories wholly encompass the domains of their predecessors through a process that is often called reduction. While certain influential accounts of inter-theory reduction in physics take reduction to require a single "global" derivation of one theory's laws from those of another, I show that global reductions are not available in all cases where the conventional wisdom requires reduction to hold. However, I argue that a weaker "local" form of reduction, which defines reduction between theories in terms of a more fundamental notion of reduction between models of a single fixed system, is available in such cases and moreover suffices to uphold the conventional wisdom. To illustrate the sort of fixed-system, inter-model reduction that grounds inter-theoretic reduction on this picture, I specialize to a particular class of cases in which both models are dynamical systems. I show that reduction in these cases is underwritten by a mathematical relationship that follows the broad prescriptions of Nagel/Schaffner reduction, and support this claim with several examples. Moreover, I show that this broadly Nagelian analysis of inter-model reduction encompasses several cases that are sometimes cited as instances of the "physicist's" limit-based notion of reduction.
It is a truth universally acknowledged that a claim of metaphysical modality, in possession of good alethic standing, must be in want of an essentialist foundation. Or at least so say the advocates of the reductive-essence-first view, according to which all modality is to be reductively defined in terms of essence. Here, I contest this bit of current wisdom. In particular, I offer two puzzles—one concerning the essences of non-compossible, complementary entities, and a second involving entities whose essences are modally ‘loaded’—that together strongly call into question the possibility of reducing modality to essence.
Philosophy and Neuroscience: A Ruthlessly Reductive Account is the first book-length treatment of philosophical issues and implications in current cellular and molecular neuroscience. John Bickle articulates a philosophical justification for investigating "lower level" neuroscientific research and describes a set of experimental details that have recently yielded the reduction of memory consolidation to the molecular mechanisms of long-term potentiation (LTP). These empirical details suggest answers to recent philosophical disputes over the nature and possibility of psycho-neural scientific reduction, including the multiple realization challenge, mental causation, and relations across explanatory levels. Bickle concludes by examining recent work in cellular neuroscience pertaining to features of conscious experience, including the cellular basis of working memory, the effects of explicit selective attention on single-cell activity in visual cortex, and sensory experiences induced by cortical microstimulation.
Nagel’s official model of theory-reduction and the way it is represented in the literature are shown to be incompatible with the careful remarks on the notion of reduction Nagel gave while developing his model. Based on these remarks, an alternative model is outlined which does not face some of the problems the official model faces. Taking the context in which Nagel developed his model into account, it is shown that the way Nagel shaped his model and, thus, its well-known deficiencies, are best conceived of as a mere by-product of his philosophical background.
In a recent critique of the doctrine of emergentism championed by its classic advocates up to C. D. Broad, Jaegwon Kim (Philosophical Studies 63:31–47, 1999) challenges their view about its applicability to the sciences and proposes a new account of how the opposing notion of reduction should be understood. Kim is critical of the classic conception advanced by Nagel and uses his new account in his criticism of emergentism. I question his claims about the successful reduction achieved in the sciences and argue that his new account has not improved on Nagel’s and that the critique of emergentism he bases on it is question-begging in important respects.
The claim that reduction entails grounding (but not vice versa) – called ‘the grounding-reduction link’ – is potentially very important but not clearly correct. After working through a fruitful debate between Gideon Rosen (who maintains the link) and Paul Audi (who maintains its impossibility), I distinguish between what I call ‘strict’ and ‘broad’ reduction. Strict reduction is incompatible with grounding, but broad reduction is not. Thus the link is possible, at least for broad reduction. However, neither strict nor broad reduction entails grounding. Ultimately, there may be a link between grounding and some highly qualified form of reduction. However, the philosophical traction that one might hope to gain for grounding via such a link is considerably diminished if not outright lost.
Reduction between theories in physics is often approached as an a priori relation in the sense that reduction is often taken to depend only on a comparison of the mathematical structures of two theories. I argue that such approaches fail to capture one crucial sense of “reduction,” whereby one theory encompasses the set of real behaviors that are well-modeled by the other. Reduction in this sense depends not only on the mathematical structures of the theories, but also on empirical facts about where our theories succeed at describing real systems, and is therefore an a posteriori relation.
My aim here is threefold: to show that conceptual facts play a more significant role in justifying explanatory reductions than most of the contributors to the current debate realize; to furnish an account of that role; and to trace the consequences of this account for conceivability arguments about the mind.
Taking reduction in the traditional deductive sense, the programmatic claim that most of genetics can be reduced by molecular genetics is defended as feasible and significant. Arguments by Ruse and Hull that either the relationship is replacement or at best a weaker form of reduction are shown to rest on a mixture of historical and logical confusions about the nature of the theories involved.
In this paper, I propose two theses, and then examine what the consequences of those theses are for discussions of reduction and emergence. The first thesis is that what have traditionally been seen as robust reductions of one theory or one branch of science by another more fundamental one are largely a myth. Although there are such reductions in the physical sciences, they are quite rare, and depend on special requirements. In the biological sciences, these prima facie sweeping reductions fade away, like the body of the famous Cheshire cat, leaving only a smile.... The second thesis is that the "smiles" are fragmentary patchy explanations, and though patchy and fragmentary, they are very important, potentially Nobel-prize winning advances. To get the best grasp of these "smiles," I want to argue that we need to return to the roots of discussions and analyses of scientific explanation more generally, and not focus mainly on reduction models, though three conditions based on earlier reduction models are retained in the present analysis. I briefly review the scientific explanation literature as it relates to reduction, and then offer my account of explanation. The account of scientific explanation I present is one I have discussed before, but in this paper I try to simplify it, and characterize it as involving field elements and a preferred causal model system abbreviated as FE and PCMS. In an important sense, this FE and PCMS analysis locates an "explanation" in a typical scientific research article. This FE and PCMS account is illustrated using a recent set of neurogenetic papers on two kinds of worm foraging behaviors: solitary and social feeding. One of the preferred model systems from a 2002 Nature article in this set is used to exemplify the FE and PCMS analysis, which is shown to have both reductive and nonreductive aspects.
The paper closes with a brief discussion of how this FE and PCMS approach differs from and is congruent with Bickle's "ruthless reductionism" and the recently revived mechanistic philosophy of science of Machamer, Darden, and Craver.
Reduction and reductionism have been central philosophical topics in analytic philosophy of science for more than six decades. Together they encompass a diversity of issues from metaphysics and epistemology. This article provides an introduction to the topic that illuminates how contemporary epistemological discussions took their shape historically and limns the contours of concrete cases of reduction in specific natural sciences. The unity of science and the impulse to accomplish compositional reduction in accord with a layer-cake vision of the sciences, the seminal contributions of Ernest Nagel on theory reduction and how they strongly conditioned subsequent philosophical discussions, and the detailed issues pertaining to different accounts of reduction that arise in both physical and biological science (e.g., limit-case and part-whole reduction in physics, the difference-making principle in genetics, and mechanisms in molecular biology) are explored. The conclusion argues that the epistemological heterogeneity and patchwork organization of the natural sciences encourages a pluralist stance about reduction.
Approaches to the naturalization of phenomenology usually understand naturalization as a matter of rendering continuous the methods, epistemologies, and ontologies of phenomenological and natural scientific inquiry. Presupposed in this statement of the problematic, however, is that there is an original discontinuity, a rupture between phenomenology and the natural sciences that must be remedied. I propose that this way of thinking about the issue is rooted in a simplistic understanding of the phenomenological reduction that entails certain assumptions about the subject matter of phenomenology and its relationship to the natural sciences. By contrast, Merleau‐Ponty's first work, The Structure of Behavior, presents a radically different approach to the phenomenological reduction, one that traverses the natural sciences and integrates them into phenomenology from the outset. I outline the argument for this position in The Structure of Behavior and then discuss consequences for current methodological issues surrounding the naturalization of phenomenology, focusing on the relationship between empirical sciences of mind, phenomenological psychology, and transcendental phenomenology. This novel exegesis of Merleau‐Ponty's view on the reduction offers new insight into his oft‐quoted remark that the phenomenological reduction is impossible to complete.
Safety principles in epistemology are often hailed as providing us with an explanation of why we fail to have knowledge in Gettier cases and lottery examples, while at the same time allowing for the fact that we know the negations of sceptical hypotheses. In a recent paper, Sinhababu and Williams have produced an example—the Backward Clock—that is meant to spell trouble for safety accounts of knowledge. I argue that the Backward Clock case is, in fact, unproblematic for the more sophisticated formulations of safety in the literature. However, I then proceed to construct two novel examples that turn out problematic for those formulations—one that provides us with a lottery-style case of safe ignorance and one that is a straightforward case of unsafe knowledge. If these examples succeed, then safety as it is usually conceived in the current debate cannot account for ignorance in all Gettier and lottery-style cases, and neither is it a necessary condition for knowledge. I conclude from these troublesome examples that modal epistemologists ought to embrace a much more simple and non-reductive version of safety, according to which the notion of similarity between possible worlds that determines in which worlds the subject must believe truly is an epistemic notion that cannot be defined or reduced to notions independent of knowledge. The resulting view is shown to also lead to desirable results with respect to lottery cases, certain quantum phenomena, and a puzzling case involving a cautious brain-in-a-vat.
The paper works towards an account of explanatory integration in biology, using as a case study explanations of the evolutionary origin of novelties, a problem requiring the integration of several biological fields and approaches. In contrast to the idea that fields studying lower level phenomena are always more fundamental in explanations, I argue that the particular combination of disciplines and theoretical approaches needed to address a complex biological problem and which among them is explanatorily more fundamental varies with the problem pursued. Solving a complex problem need not require theoretical unification or the stable synthesis of different biological fields, as items of knowledge from traditional disciplines can be related solely for the purposes of a specific problem. Apart from the development of genuine interfield theories, successful integration can be effected by smaller epistemic units (concepts, methods, explanations) being linked. Unification or integration is not an aim in itself, but needed for the aim of solving a particular scientific problem, where the problem's nature determines the kind of intellectual integration required.
This paper contributes to the recently renewed debate over methodological individualism (MI) by carefully sorting out various individualist claims and by making use of recent work on reduction and explanation outside the social sciences. My major focus is on individualist claims about reduction and explanation. I argue that reductionist versions of MI fail for much the same reasons that mental predicates cannot be reduced to physical predicates and that attempts to establish reducibility by weakening the requirements for reduction also fail. I consider and reject a number of explanatory theses, among them the claims that any adequate theory must refer only to individuals and that individualist theory suffices to explain fully. The latter claim, I argue, is not entailed by the supervenience of social facts on individual facts. Lastly, I argue that there is one individualist restriction on explanation which is far more plausible and significant than one would initially suspect.
One of the leading approaches to the nature of sensory pleasure reduces it to desire: roughly, a sensation qualifies as a sensation of pleasure just in case its subject wants to be feeling it. This approach is, in my view, correct, but it has never been formulated quite right; and it needs to be defended against some compelling arguments. Thus the purpose of this paper is to discover the most defensible formulation of this rough idea, and to defend it against the most interesting objections.
In `Essence and Modality', Kit Fine proposes that for a proposition to be metaphysically necessary is for it to be true in virtue of the nature of all objects whatsoever. Call this view Fine's Thesis. This paper is a study of Fine's Thesis in the context of Fine's logic of essence (LE). Fine himself has offered his most elaborate defense of the thesis in the context of LE. His defense rests on the widely shared assumption that metaphysical necessity obeys the laws of the modal logic S5. In order to get S5 for metaphysical necessity, he assumes a controversial principle about the nature of all objects. I will show that the addition of this principle to his original system E5 leads to inconsistency with an independently plausible principle about essence. In response, I develop a theory that avoids this inconsistency while allowing us to maintain S5 for metaphysical necessity. However, I conclude that our investigation of Fine's Thesis in the context of LE motivates the revisionary conclusion that metaphysical necessity obeys the principles of the modal logic S4, but not those of S5. I argue that this constitutes a distinctively essentialist challenge to the received view that the logic of metaphysical necessity is S5.
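As background for the modal systems named in this abstract (not part of Fine's own system LE), the difference between S4 and S5 comes down to which iteration axiom is added on top of the basic system K plus the reflexivity axiom T; a standard axiomatic presentation is:

```latex
\begin{align*}
\text{(K)} &\quad \Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q) \\
\text{(T)} &\quad \Box p \rightarrow p \\
\text{(4)} &\quad \Box p \rightarrow \Box\Box p
  && \text{S4} = \text{K} + \text{T} + \text{(4)} \\
\text{(5)} &\quad \Diamond p \rightarrow \Box\Diamond p
  && \text{S5} = \text{K} + \text{T} + \text{(5)}
\end{align*}
```

Since S5 proves (4), the abstract's conclusion that metaphysical necessity obeys S4 but not S5 amounts to accepting the iteration principle (4) while rejecting (5).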
Four current accounts of theory reduction are presented, first informally and then formally: (1) an account of direct theory reduction that is based on the contributions of Nagel, Woodger, and Quine, (2) an indirect reduction paradigm due to Kemeny and Oppenheim, (3) an "isomorphic model" schema traceable to Suppes, and (4) a theory of reduction that is based on the work of Popper, Feyerabend, and Kuhn. Reference is made, in an attempt to choose between these schemas, to the explanation of physical optics by Maxwell's electromagnetic theory, and to the revisions of genetics necessitated by partial biochemical reductions of genetics. A more general reduction schema is proposed which: (1) yields as special cases the four reduction paradigms considered above, (2) seems to be in better accord with both the canons of logic and actual scientific practice, and (3) clarifies the problems of meaning variance and ontological reduction.
Contemporary philosophers of mind tend to assume that the world of nature can be reduced to basic physics. Yet there are features of the mind (consciousness, intentionality, normativity) that do not seem to be reducible to physics or neuroscience. This explanatory gap between mind and brain has thus been a major cause of concern in recent philosophy of mind. Reductionists hold that, despite all appearances, the mind can be reduced to the brain. Eliminativists hold that it cannot, and that this implies that there is something illegitimate about the mentalistic vocabulary. Dualists hold that the mental is irreducible, and that this implies either a substance or a property dualism. Mysterian non-reductive physicalists hold that the mind is uniquely irreducible, perhaps due to some limitation of our self-understanding. In this book, Steven Horst argues that this whole conversation is based on assumptions left over from an outdated philosophy of science. While reductionism was part of the philosophical orthodoxy fifty years ago, it has been decisively rejected by philosophers of science over the past thirty years, and for good reason. True reductions are in fact exceedingly rare in the sciences, and the conviction that they were there to be found was an artifact of armchair assumptions of 17th-century Rationalists and 20th-century Logical Empiricists. The explanatory gaps between mind and brain are far from unique. In fact, in the sciences it is gaps all the way down. And if reductions are rare even in the physical sciences, there is little reason to expect them in the case of psychology. Horst argues that this calls for a complete re-thinking of the contemporary problematic in philosophy of mind. Reductionism, dualism, eliminativism, and non-reductive materialism are each severely compromised by post-reductionist philosophy of science, and philosophy of mind is in need of a new paradigm.
Horst suggests that such a paradigm might be found in Cognitive Pluralism: the view that human cognitive architecture constrains us to understand the world through a plurality of partial, idealized, and pragmatically constrained models, each employing a particular representational system optimized for its own problem domain. Such an architecture can explain the disunities of knowledge, and is plausible on evolutionary grounds.
In previous work, I described several examples combining reduction and emergence, where reduction is understood à la Ernest Nagel and emergence is understood as behaviour that is novel. Here, my aim is again to reconcile reduction and emergence, for a case which is apparently more problematic than those I treated before: renormalization. My main point is that renormalizability being a generic feature at accessible energies gives us a conceptually unified family of Nagelian reductions. That is worth saying since philosophers tend to think of scientific explanation as only explaining an individual event, or perhaps a single law, or at most deducing one theory as a special case of another. Here we see a framework in which there is a space of theories endowed with enough structure that it provides a family of reductions.
Philosophers have sought to improve upon the logical empiricists' model of scientific reduction. While opportunities for integration between the cognitive and the neural sciences have increased, most philosophers, appealing to the multiple realizability of mental states and the irreducibility of consciousness, object to psychoneural reduction. New Wave reductionists offer a continuum of comparative goodness of intertheoretic mapping for assessing reductions. Their insistence on a unified view of intertheoretic relations obscures epistemically significant cross-scientific relations and engenders dismissive conclusions about psychology. Richer, more sensitive accounts of explanatory pluralism and mechanistic explanation in science advocate multi-level approaches in cross-scientific settings and criticize the distance of the standard philosophical objections from working scientists' practices and discoveries. The Heuristic Identity Theory, a new, scientifically informed version of the psycho-physical identity theory, incorporates these insights, showing how multiple realizability is an argument for (not against) identities in science and why, therefore, consciousness is not irreducible.
Unlike the overall framework of Ernest Nagel's work on reduction, his theory of intertheoretic connection still has life in it. It handles aptly cases where reduction requires complex representation of a target domain. Abandoning his formulation as too liberal was a mistake. Arguments that it is too liberal at best touch only Nagel's deductivist theory of explanation, not his condition of connectability. Taking this condition seriously gives a powerful view of reduction, but one which requires us to index explanatory power to sciences as they are formulated at particular times. While we may thereby reduce more than philosophers have supposed, we must abandon hope (as Nagel did) of saying anything useful about reductionism.
The entropy-reduction hypothesis claims that the cognitive processing difficulty on a word in sentence context is determined by the word's effect on the uncertainty about the sentence. Here, this hypothesis is tested more thoroughly than has been done before, using a recurrent neural network for estimating entropy and self-paced reading for obtaining measures of cognitive processing load. Results show a positive relation between reading time on a word and the reduction in entropy due to processing that word, supporting the entropy-reduction hypothesis. Although this effect is independent of the effect of word surprisal, we find no evidence that these two measures correspond to cognitively distinct processes.
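The quantity at issue can be sketched numerically: uncertainty about how a sentence will continue is measured as Shannon entropy over a distribution of continuations, and entropy reduction is the drop in that entropy after a word is processed. The distribution below is purely hypothetical; in the study itself such probabilities are estimated by a recurrent neural network from corpus data.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distribution over four possible sentence
# continuations before the next word is read.
prior = [0.4, 0.3, 0.2, 0.1]

# The word read rules out the last two continuations;
# the surviving probabilities are renormalized.
surviving = [0.4, 0.3]
total = sum(surviving)
posterior = [p / total for p in surviving]

# Entropy reduction: the drop in uncertainty caused by the word.
reduction = entropy(prior) - entropy(posterior)
print(f"{reduction:.3f} bits")  # about 0.861 bits
```

On the hypothesis tested in the paper, a larger value of `reduction` for a word predicts a longer self-paced reading time on that word.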