Ken Warmbrod thinks Quine agrees that translation is determinate if it is determinate what speakers would say in all possible circumstances; that what things would do in merely possible circumstances is determined by what their subvisible constituent mechanisms would dispose them to do on the evidence of what alike actual mechanisms make alike actual things do actually; and that what speakers say is determined by their neural mechanisms. Warmbrod infers that people's neural mechanisms make translation of what people say determinate. I argue that the evidence of what alike actual mechanisms make alike actual things do actually underdetermines what our neural mechanisms would make us say in merely possible circumstances. So translation is indeterminate. And so too are the dispositions of physical mechanisms.
This paper explores a remarkable convergence of ideas and evidence, previously presented in separate places by its authors. That convergence has now become so persuasive that we believe we are working within substantially the same broad framework. Taylor's mathematical papers on neuronal systems involved in consciousness dovetail well with work by Newman and Baars on the thalamocortical system, suggesting a brain mechanism much like the global workspace architecture developed by Baars (see references below). This architecture is relational, in the sense that it continuously mediates the interaction of input with memory. While our approaches overlap in a number of ways, each of us tends to focus on different areas of detail. What is most striking, and we believe significant, is the extent of consensus, which we believe to be consistent with other contemporary approaches by Weiskrantz, Gray, Crick and Koch, Edelman, Gazzaniga, Newell and colleagues, Posner, Baddeley, and a number of others. We suggest that cognitive neuroscience is moving toward a shared understanding of consciousness in the brain.
Auditory verbal hallucinations (AVHs) are a subjective experience of "hearing voices" in the absence of corresponding physical stimulation in the environment. The most remarkable feature of AVHs is their perceptual quality; that is, the experience is subjectively often as vivid as hearing an actual voice, as opposed to mental imagery or auditory memories. This has led to proposals that dysregulation of the primary auditory cortex (PAC) is a crucial component of the neural mechanism of AVHs. One possible mechanism by which the PAC could give rise to the experience of hallucinations is aberrant patterns of neuronal activity whereby the PAC is overly sensitive to activation arising from internal processing, while being less responsive to external stimulation. In this paper, we review recent research relevant to the role of the PAC in the generation of AVHs. We present new data from a functional magnetic resonance imaging (fMRI) study, examining the responsivity of the left and right PAC to parametric modulation of the intensity of auditory verbal stimulation, and corresponding attentional top-down control in non-clinical participants with AVHs, and non-clinical participants with no AVHs. Non-clinical hallucinators showed reduced activation to speech sounds but intact attentional modulation in the right PAC. Additionally, we present data from a group of schizophrenia patients with AVHs, who do not show attentional modulation of left or right PAC. The context-appropriate modulation of the PAC may be a protective factor in non-clinical hallucinations.
The concept of mechanism in biology has three distinct meanings. It may refer to a philosophical thesis about the nature of life and biology (‘mechanicism’), to the internal workings of a machine-like structure (‘machine mechanism’), or to the causal explanation of a particular phenomenon (‘causal mechanism’). In this paper I trace the conceptual evolution of ‘mechanism’ in the history of biology, and I examine how the three meanings of this term have come to be featured in the philosophy of biology, situating the new ‘mechanismic program’ in this context. I argue that the leading advocates of the mechanismic program (i.e., Craver, Darden, Bechtel, etc.) inadvertently conflate the different senses of ‘mechanism’. Specifically, they all inappropriately endow causal mechanisms with the ontic status of machine mechanisms, and this invariably results in problematic accounts of the role played by mechanism-talk in scientific practice. I suggest that for effective analyses of the concept of mechanism, causal mechanisms need to be distinguished from machine mechanisms, and the new mechanismic program in the philosophy of biology needs to be demarcated from the traditional concerns of mechanistic biology.
The aim of this paper is to examine the usefulness of the Machamer, Darden, and Craver (2000) mechanism approach to gaining an understanding of explanation in cognitive neuroscience. We argue that although the mechanism approach can capture many aspects of explanation in cognitive neuroscience, it cannot capture everything. In particular, it cannot completely capture all aspects of the content and significance of mental representations or the evaluative features constitutive of psychopathology.
Sections 3.16 and 3.23 of Roger Penrose's Shadows of the Mind (Oxford: Oxford University Press, 1994) contain a subtle and intriguing new argument against mechanism, the thesis that the human mind can be accurately modeled by a Turing machine. The argument, based on the incompleteness theorem, is designed to meet standard objections to the original Lucas-Penrose formulations. The new argument, however, seems to invoke an unrestricted truth predicate (and an unrestricted knowability predicate). If so, its premises are inconsistent. The usual ways of restricting the predicates either invalidate Penrose's reasoning or require presuppositions that the mechanist can reject.
Long-Term Potentiation (LTP) is a kind of synaptic plasticity that many contemporary neuroscientists believe is a component in mechanisms of memory. This essay describes the discovery of LTP and the development of the LTP research program. The story begins in the 1950s with the discovery of synaptic plasticity in the hippocampus (a medial temporal lobe structure now associated with memory), and it ends in 1973 with the publication of three papers sketching the future course of the LTP research program. The making of LTP was a protracted affair. Hippocampal synaptic plasticity was initially encountered as an experimental tool, then reported as a curiosity, and finally included in the ontic store of the neurosciences. Early researchers were not investigating the hippocampus in search of a memory mechanism; rather, they saw the hippocampus as a useful experimental model or as a structure implicated in the etiology of epilepsy. The link between hippocampal synaptic plasticity and learning or memory was a separate conceptual achievement. That link was formulated in at least three different ways at different times: reductively (claiming that plasticity is identical to learning), analogically (claiming that plasticity is an example or model of learning), and mechanistically (claiming that plasticity is a component in learning or memory mechanisms). The hypothesized link with learning or memory, coupled with developments in experimental techniques and preparations, shaped how researchers understood LTP itself. By 1973, the mechanistic formulation of the link between LTP and memory provided an abstract framework around which findings from multiple perspectives could be integrated into a multifield research program.
The sixteenth and seventeenth centuries mark a period of transition between the vitalistic ontology that had dominated Renaissance natural philosophy and the Early Modern mechanistic paradigm endorsed by, among others, the Cartesians and Newtonians. This paper will focus on how the tensions between vitalism and mechanism played themselves out in the context of sixteenth and seventeenth century chemistry and chemical philosophy, particularly in the works of Paracelsus, Jan Baptista Van Helmont, Robert Fludd, and Robert Boyle. Rather than argue that these natural philosophers each embraced either fully vitalistic or fully mechanistic ontologies, I hope to demonstrate that these thinkers adhered to complicated and nuanced ontologies that cannot be described in either purely vitalistic or purely mechanistic terms. A central feature of my argument is the claim that a corpuscularian theory of matter does not entail a strictly mechanistic and reductionistic account of chemical properties. I also argue that what marks the shift from pre-modern vitalistic chemical philosophy to the modern chemical philosophy that marked the Chemical Revolution is not the victory of mechanism and reductionism in chemistry but, rather, the shift to a physicalistic and naturalistic account of chemical properties and vital spirits.
What is the relationship between pain and the body? I claim that pain is best explained as a type of personal experience and the bodily response during pain is best explained in terms of a type of mechanical neurophysiologic operation. I apply the radical philosophy of identity theory from philosophy of mind to the relationship between the personal experience of pain and specific neurophysiologic mechanism and argue that the relationship between them is best explained as one of type identity. Specifically, pain is a specific type of personal experience identical to a specific type of allostatic stress response comprised of interdependent nervous, endocrine and immune mechanical operations.
Professor Lewis and Professor Coder criticize my use of Gödel's theorem to refute Mechanism. Their criticisms are valuable. In order to meet them I need to show more clearly both what the tactic of my argument is at one crucial point and the general aim of the whole manoeuvre.
The critique of mechanism in the political philosophy of Herder and German romanticism -- The political function of machine metaphors in Hegel's early writings -- Mechanism in religious practice -- The mechanization of labor and the birth of modern ethicality in Hegel's Jena political writings -- Mechanism and the problem of self-determination in Hegel's logic -- The modern state as absolute mechanism : Hegel's logical insight into the relation of civil society and the state.
Accounts of ontic explanation have often been devised so as to provide an understanding of mechanism and of causation. Ontic accounts differ quite radically in their ontologies, and one of the latest additions to this tradition proposed by Peter Machamer, Lindley Darden and Carl Craver reintroduces the concept of activity. In this paper I ask whether this influential and activity-based account of mechanisms is viable as an ontic account. I focus on polygenic scenarios—scenarios in which the causal truths depend on more than one cause. The importance of polygenic causation was noticed early on by Mill (1893). It has since been shown to be a problem for both causal-law approaches to causation (Cartwright 1983) and accounts of causation cast in terms of capacities (Dupré 1993; Glennan 1997, pp. 605-626). However, whereas mechanistic accounts seem to be attractive precisely because they promise to handle complicated causal scenarios, polygenic causation needs to be examined more thoroughly in the emerging literature on activity-based mechanisms. The activity-based account proposed in Machamer et al. (2000, pp. 1-25) is problematic as an ontic account, I will argue. It seems necessary to ask, of any ontic account, how well it performs in causal situations where—at the explanandum level of mechanism—no activity occurs. In addition, it should be asked how well the activity-based account performs in situations where there are too few activities around to match the polygenic causal origin of the explanandum. The first situation presents an explanandum-problem and the second situation presents an explanans-problem—I will argue—both of which threaten activity-based frameworks.
Embodied cognition has attracted significant attention within cognitive science and related fields in recent years. It is most noteworthy for its emphasis on the inextricable connection between mental functioning and embodied activity and thus for its departure from standard cognitive science's implicit commitment to the unembodied mind. This article offers a review of embodied cognition's recent empirical and theoretical contributions and suggests how this movement has moved beyond standard cognitive science. The article then clarifies important respects in which embodied cognition has not departed fundamentally from the standard view. A shared commitment to representationalism, and ultimately mechanism, suggests that the standard and embodied cognition movements are more closely related than is commonly acknowledged. Arguments against representationalism and mechanism are reviewed and an alternative position that does not entail these conceptual undergirdings is offered.
Abduction and metaphor are two significant concepts in cognitive science. Both mental processes are found to rest on a certain kind of similarity. This similarity prompts us to seek answers to two questions: (1) is there a common cognitive mechanism behind abduction and metaphor? and (2) if there is, can this common mechanism be interpreted within the unified frame of modern intelligence theory? Centering on these two issues, the paper attempts to characterize and interpret the generation and evolution of scientific metaphors from the perspective of the cognitive mechanism of abductive inference. It then interprets the common cognitive mechanism behind abduction and metaphor within Hawkins’ frame of intelligence theory. The commonality between abduction and metaphor indicates the potential to further explore human intelligence.
In this note, I briefly review Lyre’s (2008) analysis and interpretation of the Higgs mechanism. Contrary to Lyre, I maintain that, on the proper understanding of the term, the Higgs mechanism refers to a physical process in the course of which gauge bosons acquire a mass. Since Lyre’s worries about imaginary masses can also be dismissed, a realistic interpretation of the Higgs mechanism seems viable. While it may remain an open empirical question whether the Higgs mechanism did actually occur in the early history of the universe, and what the details of the mechanism are, I claim that the term can certainly refer to a physical process.
Confronted with problems or situations that do not yield to known theories and world views, scientists and students are alike. They are rarely able to directly build a model or a theory thereof. Rather, they must find ways to make sense of the circumstances using their current knowledge and adjusting what is recognized in the process. This way of thinking, using past ways of perceiving the physical world to build new ones, does not follow a logical path and cannot be described as theory revision. Likewise, in many situations it is awkward, indeed often impossible, to resort to analogical reasoning to account for it. This paper presents a new mechanism, called 'tunnel effect', that may explain, in part, how scientists and students reason while constructing a new conceptual domain. 'Tunnel effect' is also contrasted with analogical reasoning.
How do human infants learn the causal dependencies between events? Evidence suggests that this remarkable feat can be achieved by observation of only a handful of examples. Many computational models have been produced to explain how infants perform causal inference without explicit teaching about statistics or the scientific method. Here, we propose a spiking neuronal network implementation that can be entrained to form a dynamical model of the temporal and causal relationships between events that it observes. The network uses spike-time dependent plasticity, long-term depression, and heterosynaptic competition rules to implement Rescorla–Wagner-like learning. Transmission delays between neurons allow the network to learn a forward model of the temporal relationships between events. Within this framework, biologically realistic synaptic plasticity rules account for well-known behavioral data regarding cognitive causal assumptions such as backwards blocking and screening-off. These models can then be run as emulators for state inference. Furthermore, this mechanism is capable of copying synaptic connectivity patterns between neuronal networks by observing the spontaneous spike activity from the neuronal circuit that is to be copied, and it thereby provides a powerful method for transmission of circuit functionality between brain regions.
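The Rescorla–Wagner learning rule that the network approximates can be illustrated with a minimal sketch. This is an illustration of the abstract rule only, not of the authors' spiking implementation; the function name, learning rate, and trial sequence are all hypothetical:

```python
def rescorla_wagner(trials, n_cues, alpha=0.1, lam=1.0):
    """Minimal Rescorla-Wagner update: on each trial, the associative
    strength V of every present cue is adjusted in proportion to the
    shared prediction error (lam * outcome - summed prediction).
    `trials` is a list of (present_cue_indices, outcome) pairs,
    with outcome 1.0 when the event occurs and 0.0 otherwise."""
    V = [0.0] * n_cues
    for cues, outcome in trials:
        prediction = sum(V[c] for c in cues)   # summed prediction from all present cues
        error = (lam * outcome) - prediction   # prediction error shared by the cues
        for c in cues:
            V[c] += alpha * error              # each present cue absorbs part of the error
    return V

# Forward-blocking demo (hypothetical trial sequence): cue 0 is
# pretrained alone, then cues 0 and 1 are paired with the same
# outcome; cue 1 acquires almost no associative strength.
trials = [([0], 1.0)] * 50 + [([0, 1], 1.0)] * 50
V = rescorla_wagner(trials, n_cues=2)
```

Because cue 0 already predicts the outcome by the time the compound trials begin, the shared error is near zero and cue 1 is "blocked". (Backwards blocking, which the abstract also mentions, requires extensions beyond this basic rule.)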
Karni and Safra prove that the Becker-DeGroot-Marschak mechanism reveals a decision maker's true certainty equivalent of a lottery if and only if he satisfies the independence axiom. Segal claims that this mechanism may reveal a violation of the reduction of compound lotteries axiom. This paper empirically tests these two interpretations. Our results show that the second interpretation fits better with the collected data. Moreover, we show by means of some nonexpected utility examples that these results are consistent with a wide range of functionals.
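The incentive logic of the Becker-DeGroot-Marschak mechanism can be sketched with a minimal simulation. All parameters here are illustrative assumptions (a uniform price draw on [0, 10], a risk-neutral subject, a 50/50 lottery), not the paper's experimental design:

```python
import random

def bdm_payoff(stated_ce, price, lottery, rng):
    """One round of the Becker-DeGroot-Marschak mechanism: if the
    randomly drawn price is at least the stated certainty equivalent,
    the subject sells the lottery at that price; otherwise the
    lottery itself is played out."""
    if price >= stated_ce:
        return price
    outcomes, probs = lottery
    return rng.choices(outcomes, weights=probs)[0]

def expected_payoff(stated_ce, lottery, n=100_000, seed=0):
    """Monte Carlo estimate of the subject's expected payoff from
    reporting `stated_ce`, with the price drawn uniformly on [0, 10]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        price = rng.uniform(0.0, 10.0)
        total += bdm_payoff(stated_ce, price, lottery, rng)
    return total / n

# 50/50 chance of 0 or 10; expected value 5, so a risk-neutral
# expected-utility maximizer does best by reporting exactly 5.
lottery = ([0.0, 10.0], [0.5, 0.5])
```

Under these assumptions, reporting the true certainty equivalent (5) yields a strictly higher expected payoff than under- or over-reporting, which is the truth-telling property Karni and Safra tie to the independence axiom.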
Primary and methyl aliphatic halides and tosylates undergo substitution reactions with nucleophiles in one step by the classic SN2 mechanism, which is characterized by second-order kinetics and inversion of configuration at the reaction center. Tertiary aliphatic halides and tosylates undergo substitution reactions with nucleophiles in two (or more) steps by the classic SN1 mechanism, which is characterized by first-order kinetics and incomplete inversion of configuration at the reaction center due to the presence of ion pairs. When the nucleophile is also the solvent, the substitution reaction is called a solvolysis, and both the SN2 and SN1 reactions now obey first-order kinetics. Schleyer and Bentley have provided solid, but not conclusive, evidence that secondary substrates undergo solvolysis by a merged mechanism, one that blends characteristics of both the SN2 and SN1 mechanisms. The following paper presents the history of their sustained pursuit of a merged mechanism and subsequent rebuttals to this claim. Several issues related to the philosophy and sociology of science are also discussed.
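The kinetic distinction between the two mechanisms can be expressed with the standard integrated rate laws, sketched below with made-up rate constants (these are textbook formulas for illustration, not data from the paper):

```python
import math

def first_order(c0, k, t):
    """SN1-type rate law: rate = k[RX], so substrate concentration
    decays exponentially, c(t) = c0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def second_order(rx0, nu0, k, t):
    """SN2-type rate law: rate = k[RX][Nu]. For the special case of
    equal initial concentrations the closed form is
    c(t) = c0 / (1 + k * c0 * t)."""
    assert rx0 == nu0, "this sketch covers the equal-concentration case only"
    return rx0 / (1 + k * rx0 * t)
```

In a solvolysis the nucleophile is the solvent and therefore present in vast, effectively constant excess, which is why the second-order SN2 rate law collapses to observed (pseudo-)first-order kinetics, as the abstract notes.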
The consensus view in philosophy of science is that reductionism is dead. One reason for this is that the deductive nomological (DN) model of explanation, on which classical reductionism depends, is widely regarded as indefensible. I argue that the DN model is inessential to the reductionist program, and that mechanism provides a better framework for thinking about reductionism. But this runs counter to the contemporary mechanists’ claim that their view provides a distinct alternative to reductionism. I demonstrate that this view is mistaken. Mechanists are committed to reductionism, as evidenced by the historical roots of the contemporary mechanist program: namely, in the mechanical philosophy of Descartes, Boyle, and others. This view shares certain core commitments with classical and contemporary reductionists. I argue that it is these shared commitments, not a direct commitment to the DN model, that constitute the essential elements of the reductionist program.
This paper discusses the relationship between coalitional stability and the robustness of bargaining outcomes to the bargaining procedure. We consider a class of bargaining procedures described by extensive form games, where payoff opportunities are given by a characteristic function (cooperative) game. The extensive form games differ on the probability distribution assigned to chance moves which determine the order in which players take actions. One way to define mechanism robustness is in terms of the property of ‘no first mover advantage’. An equilibrium is mechanism robust if for each member the expected payoff before and after being called to propose is the same. Alternatively one can define mechanism robustness as a property of equilibrium outcomes. An outcome is said to be mechanism robust if it is supported by some equilibrium in all the extensive form games (mechanisms) within our class. We show that both definitions of mechanism robustness provide an interesting characterization of the core of the underlying cooperative game.
We sketch a framework for building a unified science of cognition. This unification is achieved by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems. The core idea is that functional analyses are sketches of mechanisms, in which some structural aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are seamlessly integrated with multilevel mechanistic explanations.
In this paper, I examine metaphysical aspects in the neuroeconomics debate. I propose that part of the debate can be better understood by supposing two metaphysical stances, mechanistic and functional. I characterize the two stances, and discuss their relations. I consider two models of framing, in order to illustrate how the features of mechanistic and functional stances figure in the practice of the sciences of individual decision making.
Two seemingly contradictory tendencies have accompanied the development of the natural sciences in the past 150 years. On the one hand, the natural sciences have been instrumental in effecting a thoroughgoing transformation of social structures and have made a permanent impact on the conceptual world of human beings. This historical period has, on the other hand, also brought to light the merely hypothetical validity of scientific knowledge. As late as the middle of the 19th century the truth-pathos in the natural sciences was still unbroken. Yet in the succeeding years these claims to certain knowledge underwent a fundamental crisis. For scientists today, of course, the fact that their knowledge can possess only relative validity is a matter of self-evidence. The present analysis investigates the early phase of this fundamental change in the concept of science through an examination of Hermann von Helmholtz's conception of science and his mechanistic interpretation of nature. Helmholtz (1821-1894) was one of the most important natural scientists in Germany. The development of his thought offers an impressive but, until now, relatively little considered report from the field of the experimental sciences chronicling the erosion of certainty.
After a decade of intense debate about mechanisms, there is still no consensus characterization. In this paper we argue for a characterization that applies widely to mechanisms across the sciences. We examine and defend our disagreements with the major current contenders for characterizations of mechanisms. Ultimately, we indicate that the major contenders can all sign up to our characterization.
A widely accepted theory holds that emotional experiences occur mainly in a part of the human brain called the amygdala. A different theory asserts that color sensation is located in a small subpart of the visual cortex called V4. If these theories are correct, or even approximately correct, then they are remarkable advances toward a scientific explanation of human conscious experience. Yet even understanding the claims of such theories—much less evaluating them—raises some puzzles. Conscious experience does not present itself as a brain process. Indeed experience seems entirely unlike neural activity. For example, to some people it seems that an exact physical duplicate of you could have different sensations than you do, or could have no sensations at all. If so, then how is it even possible that sensations could turn out to be brain processes?
The Human Genome Project (HGP) is regarded by many as one of the major scientific achievements in recent science history, a large-scale endeavour that is changing the way in which biomedical research is done and expected, moreover, to yield considerable benefit for society. Thus, since the completion of the human genome sequencing effort, a debate has emerged over the question whether this effort merits a Nobel Prize and, if so, who should be the one(s) to receive it, as (according to current procedures) no more than three individuals can be selected. In this article, the HGP is taken as a case study to consider the ethical question to what extent it is still possible, in an era of big science, of large-scale consortia and global team work, to acknowledge and reward individual contributions to important breakthroughs in biomedical fields. Is it still viable to single out individuals for their decisive contributions in order to reward them in a fair and convincing way? Whereas the concept of the Nobel Prize as such seems to reflect an archetypical view of scientists as solitary researchers who, at a certain point in their careers, make their one decisive discovery, this vision has proven to be problematic from the very outset. Already during the first decade of the Nobel era, Ivan Pavlov was denied the Prize several times before finally receiving it, on the basis of the argument that he had been active as a research manager (a designer and supervisor of research projects) rather than as a researcher himself. The question, then, is whether, in the case of the HGP, a research effort that involved the contributions of hundreds or even thousands of researchers worldwide, it is still possible to individualise the Prize. The HGP Nobel Prize problem is regarded as an exemplary issue in current research ethics, highlighting a number of quandaries and trends involved in contemporary life science research practices more broadly.