William Uttal's The new phrenology is a broad attack on localization in cognitive neuroscience. He argues that even though the brain is a highly differentiated organ, "high level cognitive functions" should not be localized in specific brain regions. First, he argues that psychological processes are not well-defined. Second, he criticizes the methods used to localize psychological processes, including imaging technology: he argues that variation among individuals compromises localization, and that the statistical methods used to construct activation maps are flawed. Neither criticism is compelling. First, as we illustrate, there are behavioral measures which offer at least weak constraints on psychological attribution. Second, though imaging does face methodological difficulties associated with variation among individuals, these are broadly acknowledged; moreover, his specific criticisms of the imaging work, and in particular of fMRI, misrepresent the methodology. In concluding, we suggest a way of framing the issues that might allow us to resolve differences between localizationist models and more distributed models empirically.
A spate of recent anti-localizationist publications has re-ignited the old debate about the localization of function. Many of the recent attacks on localization, however, are directed at what I will argue to be a narrow and outmoded view of localization, and thus have little conceptual or empirical impact. What I hope to present here is an analysis of functional localization that more adequately reflects the sophistication and complexity of its use in neuroscientific research, both historically and recently. Proceeding first by way of contrast, I examine the anti-localizationist positions of holism and equipotentiationism. Then, I present a four-fold analysis of localization according to physical scope, physical kind, functional scope, and functional kind. Next, I turn to a discussion of the heuristic value of localization in deciphering structure-function relationships. Finally, I hope to show that the overall view of functional localization that emerges from these considerations constitutes a much more elusive target than its critics assume. It serves to mitigate, and in some instances even defeat, some forms of anti-localizationist criticism.
The progression of theories suggested for our world, from ego- to geo- to helio-centric models to universe and multiverse theories and beyond, shows one tendency: the size of the described worlds increases, with humans being expelled from their center to ever more remote and random locations. If pushed too far, a potential theory of everything (TOE) is actually more a theory of nothing (TON). Indeed such theories have already been developed. I show that including observer localization into such theories is necessary and sufficient to avoid this problem. I develop a quantitative recipe to identify TOEs and distinguish them from TONs and theories in-between. This shows precisely what the problem is with some recently suggested universal TOEs.
The main aim of this work is to relate integrability in QFT with a complete particle interpretation directly to the principle of causal localization, circumventing the standard method of finding sufficiently many conservation laws. Its precise conceptual-mathematical formulation as "modular localization" within the setting of local operator algebras also suggests novel ways of looking at general (non-integrable) QFTs which are not based on quantizing classical field theories. Conformal QFT, which is known to admit no particle interpretation, suggests the presence of a "partial" integrability, referred to as "conformal integrability". This manifests itself in a "braid-permutation" group structure which contains, in particular, information about the anomalous dimension spectrum. For chiral conformal models this reduces to the braid group as it is represented in Hecke or Birman-Wenzl algebras associated to chiral models. Another application of modular localization mentioned in this work is an alternative to the BRST formulation of gauge theories in terms of string-like vector potentials within a Hilbert space setting.
This paper describes one style of functional analysis commonly used in the neurosciences called task-bound functional analysis. The concept of function invoked by this style of analysis is distinctive in virtue of the dependence relations it bears to transient environmental properties. It is argued that task-bound functional analysis cannot explain the presence of structural properties in nervous systems. An alternative concept of neural function is introduced that draws on the theoretical neuroscience literature, and an argument is given to show that this alternative concept may help to overcome the explanatory limitations of task-bound functional analysis.
Pictures let us see what is not there. Or rather, since what pictures depict is not really there, we do not really see the things they are pictures of. Ever since Richard Wollheim introduced the notion of seeing-in into philosophical aesthetics, as part of his theory of depiction, there has been a lively debate about how, precisely, to understand this experience. However, one (alleged) feature of seeing-in that Wollheim pointed to has been almost completely absent in the subsequent discussion, namely that seeing-in allows for non-localization. When looking at a picture, Wollheim says, there is not always an answer to the question of where one sees a certain thing in a picture. If Wollheim is right in this, pictures indeed let us see what is not there: we see things in pictures, but there is no ‘there’ where we see those things. In this paper I argue against Wollheim's claim that object-seeing-in allows for non-localization. But there is, I argue, a pictorial experience, which is closely tied to seeing-in and which is non-localized, namely (what I call) pictorial perceptual presence.
Many of the "counterintuitive" features of relativistic quantum field theory have their formal root in the Reeh-Schlieder theorem, which in particular entails that local operations applied to the vacuum state can produce any state of the entire field. It is of great interest then that I.E. Segal and, more recently, G. Fleming (in a paper entitled "Reeh-Schlieder meets Newton-Wigner") have proposed an alternative "Newton-Wigner" localization scheme that avoids the Reeh-Schlieder theorem. In this paper, I reconstruct the Newton-Wigner localization scheme and clarify the limited extent to which it avoids the counterintuitive consequences of the Reeh-Schlieder theorem. I also argue that there is no coherent interpretation of the Newton-Wigner localization scheme that renders it free from act-outcome correlations at spacelike separation.
Material objects, such as tables and chairs, have an intimate relationship with space. They have to be somewhere. They must possess an address at which they are found. Under this aspect, they are in good company. Events, too, such as Caesar’s death and John’s buttering of the toast, and more elusive entities, such as the surface of the table, have an address, difficult as it may be to specify. A stronger notion presents itself, though. Some entities may not only be located at an address; they may also own (as it were) the place at which they are located, so as to exclude other entities from being located at the same address. Thus, for certain kinds of entities, no two tokens of the same kind can be located at the same place at the same time. This is typically the case with material objects. Likewise, no two particularized properties of the same level or degree of determinacy can be located at the same place at the same time (although particularized properties of different degree, such as the red of this table and the color of this table, can). Other entities seem to evade the restriction. Two events can be perfectly co-located without competing for their address. Or, to use a different terminology, events do not occupy the spatial region at which they are located, and can therefore share it with other events. The rotation of the Earth and the cooling down of the Earth take place at exactly the same region. Some of these facts and hypotheses have important bearings as to matters of identity. For instance, co-localization seems to be a sufficient condition for identity in the case of material objects, but not in the case of…
What are the relationships between an entity and the space at which it is located? And between a region of space and the events that take place there? What is the metaphysical structure of localization? What is its modal status? This paper addresses some of these questions in an attempt to work out at least the main coordinates of the logical structure of localization. Our task is mostly taxonomic. But we also highlight some of the underlying structural features and we single out the interactions between the notion of localization and nearby notions, such as the notions of part and whole, or of necessity and possibility. A theory of localization--we argue--is needed in order to account for the basic relations between objects and space, and runs afoul of a pure part-whole theory. We also provide an axiomatization of the relation of localization and examine cases of localization involving entities different from material objects.
This paper contrasts three different positions taken by 18th century British scholars on how sensations, particularly sensations of colour and touch, come to be localized in space: Berkeley's view (initiated, though not fully executed) that we learn to localize ideas of colour by associating certain purely qualitative features of those ideas with ideas of touch and motion, Hume's view that visual and tangible impressions are originally disposed in space, and Reid's view (inspired by Porterfield) that we are innately disposed to refer appearances of colour to the end of a line passing through the centre of the eye and originating from the spot on the back of the retina where the material impression causing that appearance was received. Reid's reasons for rejecting the Berkeleian and Humean views are examined. It is argued that Reid's position on visual localization is ultimately driven by his dualistic metaphysical commitments rather than by an empirically grounded investigation of the phenomena of vision. To this extent, his position sits uncomfortably with his own methodological commitments.
Apoptosis proteins play an essential role in regulating the balance between cell proliferation and death. Successful prediction of the subcellular localization of apoptosis proteins directly from primary sequence greatly benefits the understanding of programmed cell death and drug discovery. In this paper, by use of Chou’s pseudo amino acid composition (PseAAC), a total of 317 apoptosis proteins are predicted by support vector machine (SVM). Jackknife cross-validation is applied to test the predictive capability of the proposed method. The predictive results show that the overall prediction accuracy is 91.1%, which is higher than that of previous methods. Furthermore, another dataset containing 98 apoptosis proteins is examined by the proposed method. The overall prediction success rate is 92.9%.
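The pipeline described in this abstract (PseAAC features, a classifier, jackknife cross-validation) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method or data: the sequences and localization labels are invented, the per-residue property is a toy stand-in for the real physicochemical scales (hydrophobicity, side-chain mass, etc.) used in PseAAC, and a nearest-neighbour rule stands in for the SVM so the sketch needs no external libraries.

```python
from math import dist  # Euclidean distance, Python 3.8+

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def pseaac(seq, lam=2, w=0.05):
    """Chou-style pseudo amino acid composition: 20 composition terms
    plus `lam` sequence-order correlation factors, computed here from a
    single made-up per-residue property scale."""
    prop = {a: i / 19.0 for i, a in enumerate(AMINO_ACIDS)}  # toy scale
    comp = [seq.count(a) / len(seq) for a in AMINO_ACIDS]
    thetas = []
    for k in range(1, lam + 1):
        diffs = [(prop[seq[i]] - prop[seq[i + k]]) ** 2
                 for i in range(len(seq) - k)]
        thetas.append(sum(diffs) / len(diffs))
    denom = 1.0 + w * sum(thetas)
    return [c / denom for c in comp] + [w * t / denom for t in thetas]

def jackknife_accuracy(feats, labels):
    """Jackknife (leave-one-out) test: hold each protein out once and
    classify it by its nearest neighbour among the rest (an SVM would
    be trained on the held-in proteins in the real method)."""
    hits = 0
    for i in range(len(feats)):
        rest = [(feats[j], labels[j]) for j in range(len(feats)) if j != i]
        pred = min(rest, key=lambda fl: dist(fl[0], feats[i]))[1]
        hits += pred == labels[i]
    return hits / len(feats)

# hypothetical toy sequences with made-up localization labels
data = [("MKKLLLAAGG", "membrane"), ("MKRLLIAAGG", "membrane"),
        ("DEEDSSNNQQ", "nuclear"), ("DEEDTTNNQQ", "nuclear")]
X = [pseaac(s) for s, _ in data]
y = [lbl for _, lbl in data]
print(len(X[0]))  # 22 features: 20 composition + lam=2 order factors
print(jackknife_accuracy(X, y))
```

The point of the jackknife loop is that every protein serves exactly once as an unseen test case, which is why the paper treats it as a stringent estimate of predictive capability.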
In his target article, Pulvermüller addresses the issue of word localization in the brain. It is not clear, however, how cell assemblies are localized in the case of sensory deprivation. Pulvermüller's claim is that words learned via other modalities (i.e., sign languages) should be localized differently. It is argued, however, on experimental and theoretical grounds, that they should be found in a similar place.
In this paper I will present conceptions of state reduction and particle and/or system localization which render these subjects fully compatible with the general requirements of a relativistic, i.e. Lorentz invariant, quantum theory. The approach consists of a systematic generalization of the concepts of initial data assignment at definite times, initiation and completion of measurements at definite times, and dynamical evolution as time dependence, to the concepts of initial data assignment on arbitrary space-like hyperplanes, initiation and completion of measurements on arbitrary space-like hyperplanes, and dynamical evolution as space-like hyperplane dependence, respectively. I also briefly discuss the superluminal propagation which emerges from the localization study and the manner in which causal anomalies are nevertheless avoided.
The problem of finding a covariant expression for the distribution and conservation of gravitational energy-momentum dates to the 1910s. A suitably covariant infinite-component localization is displayed, reflecting Bergmann's realization that there are infinitely many gravitational energy-momenta. Initially use is made of a flat background metric (or rather, all of them) or connection, because the desired gauge invariance properties are obvious. Partial gauge-fixing then yields an appropriate covariant quantity without any background metric or connection; one version is the collection of pseudotensors of a given type, such as the Einstein pseudotensor, in _every_ coordinate system. This solution to the gauge covariance problem is easily adapted to any pseudotensorial expression (Landau-Lifshitz, Goldberg, Papapetrou or the like) or to any tensorial expression built with a background metric or connection. Thus the specific functional form can be chosen on technical grounds such as relating to Noether's theorem and yielding expected values of conserved quantities in certain contexts and then rendered covariant using the procedure described here. The application to angular momentum localization is straightforward. Traditional objections to pseudotensors are based largely on the false assumption that there is only one gravitational energy rather than infinitely many.
From the “epistemologically different worlds” perspective, I analyze the status of cognitive neuroscience today. I investigate the main current topics in cognitive neuroscience: localization and brain imaging, the binding problem (Treisman’s feature integration theory and the synchronized oscillations approach), differentiation and integration, optimism versus skepticism, perception and object recognition, space and the mind, crossmodal interactions, and the holistic view against localization. I want to show that these problems are pseudo-problems and that this “science” has “no ontology landscape”.
This paper defends cognitive neuroscience’s project of developing mechanistic explanations of cognitive processes through decomposition and localization against objections raised by William Uttal in The New Phrenology. The key issue between Uttal and researchers pursuing cognitive neuroscience is that Uttal bets against the possibility of decomposing mental operations into component elementary operations which are localized in distinct brain regions. The paper argues that it is through advancing and revising what are likely to be overly simplistic and incorrect decompositions that the goals of cognitive neuroscience are likely to be achieved.
Increasingly encompassing models have been suggested for our world. Theories range from generally accepted to increasingly speculative to apparently bogus. The progression of theories from ego- to geo- to helio-centric models to universe and multiverse theories and beyond was accompanied by a dramatic increase in the sizes of the postulated worlds, with humans being expelled from their center to ever more remote and random locations. Rather than leading to a true theory of everything, this trend faces a turning point after which the predictive power of such theories decreases (actually to zero). Incorporating the location and other capacities of the observer into such theories avoids this problem and allows one to distinguish meaningful from predictively meaningless theories. This also leads to a truly complete theory of everything consisting of a (conventional objective) theory of everything plus a (novel subjective) observer process. The observer localization is neither based on the controversial anthropic principle, nor has it anything to do with the quantum-mechanical observation process. The suggested principle is extended to more practical (partial, approximate, probabilistic, parametric) world models (rather than theories of everything). Finally, I provide a justification of Ockham's razor, and criticize the anthropic principle, the doomsday argument, the no free lunch theorem, and the falsifiability dogma.
This dissertation reconsiders some traditional issues in the foundations of quantum mechanics in the context of relativistic quantum field theory (RQFT); and it considers some novel foundational issues that arise first in the context of RQFT. The first part of the dissertation considers quantum nonlocality in RQFT. Here I show that the generic state of RQFT displays Bell correlations relative to measurements performed in any pair of spacelike separated regions, no matter how distant. I also show that local systems in RQFT are "open" to influence from their environment, in the sense that it is generally impossible to perform local operations that would remove the entanglement between a local system and any other spacelike separated system. The second part of the dissertation argues that RQFT does not support a particle ontology -- at least if particles are understood to be localizable objects. In particular, while RQFT permits us to describe situations in which a determinate number of particles are present, it does not permit us to speak of the location of any individual particle, nor of the number of particles in any particular region of space. Nonetheless, the absence of localizable particles in RQFT does not threaten the integrity of our commonsense concept of a localized object. Indeed, RQFT itself predicts that descriptions in terms of localized objects can be quite accurate on the macroscopic level. The third part of the dissertation examines the so-called observer-dependence of the particle concept in RQFT -- that is, whether there are any particles present must be relativized to an observer's state of motion. Now, it is not uncommon for modern physical theories to subsume observer-dependent descriptions under a more general observer-independent description of some underlying state of affairs. However, I show that the conflicting accounts concerning the particle content of the field cannot be reconciled in this way.
In fact, I argue that these conflicting accounts should be thought of as "complementary" in the same sense that position and momentum descriptions are complementary in elementary quantum mechanics.
This is the second of two papers responding (somewhat belatedly) to ‘recent’ commentary on various aspects of hyperplane dependence (HD) by several authors. In this paper I focus on the issues of the general need for HD dynamical variables, the identification of physically meaningful localizable properties, the basis vectors representing such properties and the relationship between the concepts of ‘localizable within’ and ‘measurable within’. The authors responded to here are de Koning, Halvorson, Clifton and Wallace. In the first paper of this set (Fleming 2003b) I focused on the issues of the relations of HD to state reduction and unitary evolution and addressed comments of Maudlin and Myrvold. The central conclusion argued for in this second paper (§§ 5, 7) is the non-existence of strictly localizable objects or measurement processes and the consequent undermining of the principle of universal microcausality. This contrasts with the existence of strictly localizable properties and results in the consequent priority of the concept of ‘localizable within’ over ‘measurable within’. The paper opens with discussions of the need for and status of HD dynamical variables, which are responses to anonymous queries.
Several recent findings support the notion that changes in the environment can be implicitly represented by the visual system. S. R. Mitroff, D. J. Simons, and S. L. Franconeri (2002) challenged this view and proposed alternative interpretations based on explicit strategies. Across 4 experiments, the current study finds no empirical support for such alternative proposals. Experiment 1 shows that subjects do not rely on unchanged items when locating an unaware change. Experiments 2 and 3 show that unaware changes affect performance even when they occur at an unpredictable location. Experiment 4 shows that the unaware congruency effect does not depend simply on the pattern of the final display. The authors point to converging evidence from other methodologies and highlight several weaknesses in Mitroff et al.’s theoretical arguments. It is concluded here that implicit representation of change provides the most parsimonious explanation for both past and present findings.
Book review of Bechtel and Richardson, Discovering Complexity (1993). Review suggests that one theme of the book -- that scientific reason is "constituted" in part by a cognitive strategy of finding complexity -- is not fully supported.
Conceptualism, like any other philosophical doctrine of comparable scope, has both ontological and epistemological aspects. Ontologically, however, conceptualism does not differ significantly from certain forms of nominalism. At its root lies an epistemological thesis: All objects of sensory intuition are localized in space and time. In this paper, I wish to explore some of the consequences of this thesis.
Two cross-modal experiments provide partial support for O'Regan & Noë's (O&N's) claim that sensorimotor contingencies mediate perception. Differences in locating a target sound accompanied by a spatially disparate neutral light correlate with whether the two stimuli were perceived as spatially unified. This correlation suggests that internal representations are necessary for conscious perception, which may also mediate sensorimotor contingencies.
The aim of this contribution is to analyze the difficulties and possible inconsistencies one may encounter when attempting to integrate substance philosophy and process philosophy. I argue that it is impossible to avoid these problems, and offer a typology that helps to understand the tensions involved in attempts to integrate these two worldviews.
Socio-cultural pluralism and its fabric in our society have been under the threat of religious fundamentalism and ideological extremism, which firmly believe that there can be only one way of true expression and that all other forms are either substandard or false. This attitude has torn the world into different isolated islands of human settlements. This state of affairs is the outcome of multidimensional causes, and one among them, no doubt, is related to one of the central issues of philosophy, i.e., the relation between ‘one and many’. The ontological dimension of the problem of ‘one and many’ cannot be solved without a proper understanding of the epistemological solutions. The epistemic tools of Advaita Vedanta are powerful enough to suggest pragmatic solutions to the good old problem of ‘one and many’, which attains new relevance in the era of globalization. The logic of identity in difference and the logic of difference are the two types of logical systems which are instrumental for the formulation of various types of knowledge claims in the Western tradition. The logic of identity in difference has been clearly designed and defined by classical, or Aristotelian, logic. Symbolic logic captures the various nuances of the logic of difference. The Advaita system has envisaged a new logic, i.e., the logic of identity, which is implicit in the Upanishadic mahavakya ‘Ayam Ātmā Brahma’. The paper intends to explore the possibility of applying the logic of identity to socio-cultural issues of the present-day world.
It is argued that some elusive “entropic” characteristics of chemical bonds, e.g., bond multiplicities (orders), which connect the bonded atoms in molecules, can be probed using quantities and techniques of Information Theory (IT). This complementary perspective increases our insight into and understanding of the molecular electronic structure. The specific IT tools for detecting effects of chemical bonds and predicting their entropic multiplicities in molecules are summarized. Alternative information densities, including measures of the local entropy deficiency or its displacement relative to the system's atomic promolecule, and the nonadditive Fisher information in the atomic orbital resolution (called contragradience), are used to diagnose the bonding patterns in illustrative diatomic and polyatomic molecules. The elements of the orbital communication theory of the chemical bond are briefly summarized and illustrated for the simplest case of the two-orbital model. The information-cascade perspective also suggests a novel, indirect mechanism of the orbital interactions in molecular systems, through “bridges” (orbital intermediates), in addition to the familiar direct chemical bonds realized through “space”, as a result of the orbital constructive interference in the subspace of the occupied molecular orbitals. Some implications of these two sources of chemical bonds in propellanes, π-electron systems and polymers are examined. The current-density concept associated with the wave-function phase is introduced and the relevant phase-continuity equation is discussed. For the first time, the quantum generalizations of the classical measures of the information content, functionals of the probability distribution alone, are introduced to distinguish systems with the same electron density but differing in their current (phase) composition. The corresponding information/entropy sources are identified in the associated continuity equations.
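The "entropy deficiency" this abstract refers to is a Kullback-Leibler-type missing-information functional: the molecular electron density compared against the promolecule reference density. A minimal sketch of that comparison on a discrete toy distribution follows; the numbers are invented for illustration and are not real electron densities.

```python
from math import log

def entropy_deficiency(rho, rho0, dx=1.0):
    """Kullback-Leibler missing information (in nats):
    Delta S = sum_i rho_i * ln(rho_i / rho0_i) * dx,
    a discretized stand-in for the continuous density functional."""
    return sum(p * log(p / q) for p, q in zip(rho, rho0) if p > 0) * dx

uniform = [0.25, 0.25, 0.25, 0.25]  # "promolecule" reference density
bonded = [0.40, 0.30, 0.20, 0.10]   # polarized "molecular" density
print(round(entropy_deficiency(bonded, uniform), 4))
print(entropy_deficiency(uniform, uniform))  # 0.0: identical densities
```

The measure is non-negative and vanishes only when the two densities coincide, which is what makes it usable as a bond-detection probe: regions where the molecular density departs from the promolecule reference accumulate positive entropy deficiency.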
The course on nature coincides with the re-working of Merleau-Ponty's breakthrough towards an ontology and therefore plays a primordial role. The appearance of an interrogation of nature is inscribed in the movement of thought that comes after the Phenomenology of Perception. What is at issue is to show that the ontological mode of the perceived object - not the unity of a positive sense but the unity of a style that shows through in filigree in the sensible aspects - has a universal meaning, that the description of the perceived world can give way to a philosophy of perception and therefore to a theory of truth. The analysis of linguistic expression to which the philosophy of perception leads opens out onto a definition of meaning as institution, understood as what inaugurates an open series of expressive appropriations. It is this theory of institution that turns the analysis of the perceived in the direction of a reflection on nature: the perceived is no longer the originary in its difference from the derived but the natural in its difference from the instituted. Nature is the "non-constructed, non-instituted," and thereby, the source of expression: "nature is what has a sense without this sense having been posited by thought." The first part of the course, which consists in a historical overview, must not be considered as a mere introduction. In fact, the problem of nature is brought out into the open by means of the history of Western metaphysics, in which Descartes is the emblematic figure. The problem consists in the duality - at once unsatisfactory and unsurpassable - between two approaches to nature: the one which accentuates its determinability and therefore its transparency to the understanding; the other which emphasizes the irreducible facticity of nature and tends therefore to valorize the view-point of the senses.
To conceive nature is to constitute a concept of it that allows us to "take possession" of this duality, that is, to found the duality. The second part of the course attempts to develop this concept of nature by drawing upon the results of contemporary science. Thus a philosophy of nature is sketched that can be summarized in four propositions: 1) the totality is no less real than the parts; 2) there is a reality of the negative and therefore no alternative between being and nothingness; 3) a natural event is not assigned to a unique spatio-temporal localization; and 4) there is generality only as generativity.
Most philosophical accounts of emergence are incompatible with reduction. Most scientists regard a system property as emergent relative to properties of its parts if it depends upon their mode of organization--a view consistent with reduction. Emergence is a failure of aggregativity, the condition in which "the whole is nothing more than the sum of its parts". Aggregativity requires four conditions, giving powerful tools for analyzing modes of organization. Differently met for different decompositions of the system, and in different degrees, the structural conditions can provide evaluation criteria for choosing decompositions and "natural kinds", and for detecting functional localization fallacies, approximations, and various biases of vulgar reductionisms. This analysis of emergence and the use of these conditions as heuristics is consistent with a broader reductionistic methodology.
Abstract: The massive redeployment hypothesis (MRH) is a theory about the functional topography of the human brain, offering a middle course between strict localization on the one hand and holism on the other. Central to MRH is the claim that cognitive evolution proceeded in a way analogous to component reuse in software engineering, whereby existing components, originally developed to serve some specific purpose, were used for new purposes and combined to support new capacities, without disrupting their participation in existing programs. If the evolution of cognition was indeed driven by such exaptation, then we should be able to make some specific empirical predictions regarding the resulting functional topography of the brain. This essay discusses three such predictions, and some of the evidence supporting them. Then, using this account as a background, the essay considers the implications of these findings for an account of the functional integration of cognitive operations. For instance, MRH suggests that in order to determine the functional role of a given brain area it is necessary to consider its participation across multiple task categories, and not just focus on one, as has been the typical practice in cognitive neuroscience. This change of methodology will motivate (perhaps even necessitate) the development of a new, domain-neutral vocabulary for characterizing the contribution of individual brain areas to larger functional complexes, and direct particular attention to the question of how these various area roles are integrated and coordinated to result in the observed cognitive effect. Finally, the details of the mix of cognitive functions a given area supports should tell us something interesting not just about the likely computational role of that area, but about the nature of and relations between the cognitive functions themselves.
For instance, growing evidence of the role of “motor” areas like M1, SMA and PMC in language processing, and of “language” areas like Broca’s area in motor control, offers the possibility for significantly reconceptualizing the nature both of language and of motor control.
Multiple realization was once taken to be a challenge to reductionist visions, especially within cognitive science, and a foundation of the “antireductionist consensus.” More recently, multiple realization has come to be challenged on naturalistic grounds, as well as on more “metaphysical” grounds. Within cognitive science, one focal issue concerns the role of neural plasticity for addressing these issues. If reorganization maintains the same cognitive functions, that supports claims for multiple realization. I take up the reorganization involved in language dysfunctions to deal with questions concerned with multiple realization and neural plasticity. Beginning with Broca’s case for localization and the nineteenth century discussion of “reorganization,” and returning to more recent evidence for neural plasticity, I argue that, in the end, there is substantial support for multiple realization in cognitive systems; I further argue that this is wholly consistent with a recognition of methodological pluralism in cognitive science.
Rob Clifton was one of the most brilliant and productive researchers in the foundations and philosophy of quantum theory, who died tragically at the age of 38. Jeremy Butterfield and Hans Halvorson collect fourteen of his finest papers here, drawn from the latter part of his career (1995-2002), all of which combine exciting philosophical discussion with rigorous mathematical results. Many of these papers break wholly new ground, either conceptually or technically. Others resolve a vague controversy into a precise technical problem, which is then solved; still others solve an open problem that had been in the air for some time. All of them show scientific and philosophical creativity of a high order, genuinely among the very best work in the field. The papers are grouped into four Parts. First come four papers about the modal interpretation of quantum mechanics. Part II comprises three papers on the foundations of algebraic quantum field theory, with an emphasis on entanglement and nonlocality. The two papers in Part III concern the concept of a particle in relativistic quantum theories. One paper analyses localization; the other analyses the Unruh effect (Rindler quanta) using the algebraic approach to quantum theory. Finally, Part IV contains striking new results about such central issues as complementarity, Bohr's reply to the EPR argument, and no hidden variables theorems; and ends with a philosophical survey of the field of quantum information. The volume includes a full bibliography of Clifton's publications. Quantum Entanglements offers inspiration and substantial reward to graduates and professionals in the foundations of physics, with a background in philosophy, physics, or mathematics.
Although there has been much recent discussion on mechanisms in philosophy of science and social theory, no shared understanding of the crucial concept itself has emerged. In this paper, a distinction between two core concepts of mechanism is made on the basis that the concepts correspond to two different research strategies: the concept of mechanism as a componential causal system is associated with the heuristic of functional decomposition and spatial localization and the concept of mechanism as an abstract form of interaction is associated with the strategy of abstraction and simple models. The causal facts assumed and the theoretical consequences entailed by an explanation with a given mechanism differ according to which concept of mechanism is in use. Research strategies associated with mechanism concepts also involve characteristic biases that should be taken into account when using them, especially in new areas of application.
De Vignemont argues that the sense of ownership comes from the localization of bodily sensation on a map of the body that is part of the body schema. This model should be taken as a model of the sense of embodiment. I argue that the body schema lacks the theoretical resources needed to explain this phenomenology. Furthermore, there is some reason to think that a deficient sense of embodiment is not associated with a deficient body schema. The data de Vignemont uses to argue that the body image does not underlie the sense of embodiment do not rule out the possibility that part of the body image I call 'offline representations' underlies the sense of embodiment. An alternative model of the sense of embodiment in terms of offline representations of the body is presented.
This essay introduces the massive redeployment hypothesis, an account of the functional organization of the brain that centrally features the fact that brain areas are typically employed to support numerous functions. The central contribution of the essay is to outline a middle course between strict localization on the one hand, and holism on the other, in such a way as to account for the supporting data on both sides of the argument. The massive redeployment hypothesis is supported by case studies of redeployment, and compared and contrasted with other theories of the localization of function.
Some theorists who emphasize the complexity of biological and cognitive systems and who advocate the employment of the tools of dynamical systems theory in explaining them construe complexity and reduction as exclusive alternatives. This paper argues that reduction, an approach to explanation that decomposes complex activities and localizes the components within the complex system, is not only compatible with an emphasis on complexity, but provides the foundation for dynamical analysis. Explanation via decomposition and localization is nonetheless extremely challenging, and an analysis of recent cognitive neuroscience research on memory is used to illustrate what is involved. Memory researchers split between advocating memory systems and advocating memory processes, and I argue that it is the latter approach that provides the critical sort of decomposition and localization for explaining memory. The challenges of linking distinguishable functions with brain processes are illustrated by two examples: competing hypotheses about the contribution of the hippocampus and competing attempts to link areas in frontal cortex with memory processing.
The field of cognitive imaging is exploding both in terms of the amount of our scientific resources dedicated to it and the associated publication rate. However, all of this effort is based on a critical question – Do cognitive modules exist? Both of the reviewers of my book (Uttal, 2001) and I agree that this question has not yet been satisfactorily answered and, depending on the ultimate answer, the cognitive imaging approach as well as some other parts of the quest for mechanistic models of mind might not be successful. Our views of how our science should respond to this serious problem, however, are quite different. Both Professor Bechtel and Lloyd argue for an optimistic attack on the problem of the localization of cognitive processes in the brain based on the history of other sciences. I argue that a realistic appreciation of the limits of this approach should temper the enthusiasm for what ultimately will go the way of other attempts to unravel the mind-brain problem.
To accept that cognition is embodied is to question many of the beliefs traditionally held by cognitive scientists. One key question regards the localization of cognitive faculties. Here we argue that if cognition is embodied, and sometimes embedded, then the cognitive faculty cannot be localized in a brain area alone. We review recent research on neural reuse, the 1/f structure of human activity, tool use, group cognition, and social coordination dynamics that we believe demonstrates how the boundary between the different areas of the brain, the brain and body, and the body and environment is not only blurred but indeterminate. In turn, we propose that cognition is supported by a nested structure of task-specific synergies, which are softly assembled from a variety of neural, bodily, and environmental components (including other individuals), and exhibit interaction dominant dynamics.
Much of cognitive neuroscience as well as traditional cognitive science is engaged in a quest for mechanisms through a project of decomposition and localization of cognitive functions. Some advocates of the emerging dynamical systems approach to cognition construe it as in opposition to the attempt to decompose and localize functions. I argue that this case has not been established, and instead explore how dynamical systems tools can be used to analyze and model cognitive functions without abandoning the use of decomposition and localization to understand mechanisms of cognition.
One way in which philosophy of science can perform a valuable normative function for science is by showing characteristic errors made in scientific research programs and proposing ways in which such errors can be avoided or corrected. This paper examines two errors that have commonly plagued research in biology and psychology: 1) functional localization errors that arise when parts of a complex system are assigned functions which these parts are not themselves able to perform, and 2) vacuous functional explanations in which one provides an analysis that does account for the inputs and outputs of a system but does not employ the same set of functions to produce this output as does the natural system. These two kinds of error usually arise when researchers limit their investigation to one type of evidence. Historically, correction of these errors has awaited researchers who have employed the opposite type of evidence. This paper explores the tendency to commit these errors by examining examples from historical and contemporary science and proposes a dialectical process through which researchers can avoid or correct such errors in the future.
The neural substrate of early visual processing in the macaque is used as a framework to discuss recent progress towards a precise anatomical localization and understanding of the functional implications of the syndromes of blindsight, achromatopsia and akinetopsia in humans. This review is mainly concerned with how these syndromes support the principles of organization of the visual system into parallel pathways and the functional hierarchy of visual mechanisms.
The catastrophe of the eye -- A new view of seeing -- Applying the new view of seeing -- The illusion of seeing everything -- Some contentious points -- Towards consciousness -- Types of consciousness -- Phenomenal consciousness, raw feel, and why they're hard -- Squeeze a sponge, drive a Porsche : a sensorimotor account of feel -- Consciously experiencing a feel -- The sensorimotor approach to color -- Sensory substitution -- The localization of touch -- The phenomenality plot -- Consciousness.
The statistical aspects of quantum explanation are intrinsic to quantum physics; individual quantum events are created in the interactions associated with observation and are not describable by predictive theory. The superposition principle shows the essential difference between quantum and non-quantum physics, and the principle is exemplified in the classic single-photon two-slit interference experiment. Recently Mandel and Pfleegor have done an experiment somewhat similar to the optical single-photon experiment but with two independently operated lasers; interference is obtained even with beam intensity so small that only one photon is in the apparatus at a time. The result can be understood in terms of the superposition of states, or in terms of the Uncertainty Principle, which is found to forbid the determination of which of the two lasers is the source of a given photon (if conditions for interference are to obtain). The Mandel-Pfleegor experiment gives a physical argument against the continuous localization of a photon that is assumed in the hidden variable theories and therefore gives further support for the generally accepted view that an observed entity (observed state) is created in the observation event. This aspect of quantum physics implies a subjectivism on the level of individual quantum-level occurrences, since there is in quantum theory no basis for asserting the existence of the event independently of observation of it. Extension of this subjectivism to large scale, non-quantum phenomena falls within the principles of quantum theory; counter considerations that argue against such an extension are noted.
Oversimplified conceptions of cognitive neuroscience regard the goal of this discipline as the localization of previously discovered and validated cognitive processes. Research, however, is showing how brain data goes far beyond this translation role, as it can be used to help in explaining human cognition. Knowing about the brain is useful in building and redefining taxonomies of the mind and also in describing the mechanisms by which cognitive phenomena proceed. The present paper takes the cognitive system of attention as a model research field to exemplify how biological knowledge can be used to advance the psychological theories explaining mental phenomena.
Using the concept of adjunction developed in Part I for the comprehension of the structure of a complex system, we introduce the notion of covering systems consisting of partially or locally defined, adequately understood objects. This notion incorporates the necessary and sufficient conditions for a sheaf theoretical representation of the informational content included in the structure of a complex system in terms of localization systems. Furthermore, it accommodates a formulation of an invariance property of information communication concerning the analysis of a complex system.
An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.
A new view of the functional role of the left anterior cortex in language use is proposed. The experimental record indicates that most human linguistic abilities are not localized in this region. In particular, most of syntax (long thought to be there) is not located in Broca's area and its vicinity (operculum, insula, and subjacent white matter). This cerebral region, implicated in Broca's aphasia, does have a role in syntactic processing, but a highly specific one: It is the neural home to receptive mechanisms involved in the computation of the relation between transformationally moved phrasal constituents and their extraction sites (in line with the Trace-Deletion Hypothesis). It is also involved in the construction of higher parts of the syntactic tree in speech production. By contrast, basic combinatorial capacities necessary for language processing – for example, structure-building operations, lexical insertion – are not supported by the neural tissue of this cerebral region, nor is lexical or combinatorial semantics. The dense body of empirical evidence supporting this restrictive view comes mainly from several angles on lesion studies of syntax in agrammatic Broca's aphasia. Five empirical arguments are presented: experiments in sentence comprehension, cross-linguistic considerations (where aphasia findings from several language types are pooled and scrutinized comparatively), grammaticality and plausibility judgments, real-time processing of complex sentences, and rehabilitation. Also discussed are recent results from functional neuroimaging and from structured observations on speech production of Broca's aphasics. Syntactic abilities are nonetheless distinct from other cognitive skills and are represented entirely and exclusively in the left cerebral hemisphere. Although more widespread in the left hemisphere than previously thought, they are clearly distinct from other human combinatorial and intellectual abilities.
The neurological record (based on functional imaging, split-brain and right-hemisphere-damaged patients, as well as patients suffering from a breakdown of mathematical skills) indicates that language is a distinct, modularly organized neurological entity. Combinatorial aspects of the language faculty reside in the human left cerebral hemisphere, but only the transformational component (or algorithms that implement it in use) is located in and around Broca's area. Key Words: agrammatism; aphasia; Broca's area; cerebral localization; dyscalculia; functional neuroanatomy; grammatical transformation; modularity; neuroimaging; syntax; trace deletion.
Can there be relational universals? If so, how can they be exemplified? A monadic universal is by definition capable of having a scattered spatiotemporal localization of its different exemplifications, but the problem of relational universals is that one single exemplification seems to have to be scattered in the many places where the relata are. The paper argues that it is possible to bite this bullet, and to accept a hitherto undiscussed kind of exemplification relation called ‘scattered exemplification’. It has no immediate symbolic counterpart in any Indo-European natural language or in any so far constructed logical language. In order to remedy this, a notion called ‘many-place copula’ is introduced, too.
Good research requires, among other virtues, (i) methods that yield stable experimental observations without arbitrary (post hoc) assumptions, (ii) logical interpretations of the sources of observations, and (iii) sound inferences to general causal mechanisms explaining experimental results by placing them in larger explanatory contexts. In The New Phrenology, William Uttal examines the research tradition of localization, and finds it deficient in all three virtues, whether based on lesion studies or on new technologies for functional brain imaging. In this paper I consider just the arguments concerning brain imaging, especially functional Magnetic Resonance Imaging. I think that Uttal is too harsh in his methodological critique, but correct in his assessment of the conceptual limitations of localist evidence. I propose instead a data-driven test for assessing relative modularity in brain images, and show its use in a secondary analysis of fMRI data from the National fMRI Data Center (www.fmridc.org). Although the analysis is a limited pilot study, it offers additional empirical challenge to localism.
The aim of this article is twofold. Recently, Lewis has presented an argument, now known as the "counting anomaly", that the spontaneous localization approach to quantum mechanics, suggested by Ghirardi, Rimini, and Weber, implies that arithmetic does not apply to ordinary macroscopic objects. I will take this argument as the starting point for a discussion of the property structure of realist collapse interpretations of quantum mechanics in general. At the end of this discussion I present a proof of the fact that the composition principle, which holds true in standard quantum mechanics, fails in all realist collapse interpretations. On the basis of this result I reconsider the counting anomaly and show that what lies at the heart of the anomaly is the failure to appreciate the peculiarities of the property structure of such interpretations. Once this flaw is uncovered, the anomaly vanishes.
In this paper, I argue that Arendt's understanding of freedom should be examined independently of the search for good political institutions because it is related to freedom of movement and has a transnational meaning. Although she does not say it explicitly, Arendt establishes a correlation between political identities and territorial moves: She analyzes regimes in relation to their treatment of lands and borders, that is, specific geographic movements. I call this correlation a political itinerary. My aim is to show genealogically that her elaboration on the regimes of ancient, modern, and 'dark' times is supported by such a correlation. I read Arendt in light of the current clash between an amorphous global political identity (and 'new' international order) and the renewal of nationalisms. I show that, for Arendt, the world is divided by necessary frontiers - territorial borders and identity frames - and that the political consists precisely of the effort to transgress them. Arendt never proposed a restoration of authority but, on the contrary, a worldwide anarchic (that is, based on no predetermined rule) politics of de-localization and re-localization; in her terms, a politics of free movement of founded identities, a cosmopolitanism, which, nevertheless, would have nothing to do with global sovereignty.
This article investigates the characteristic attitudes of the Greeks toward nature, which formed the perceptual framework for their ecological thinking. Two major attitudes are discerned. One regarded nature as the theatre of the gods, whose interplay produced observed phenomena, but whose localization gave them particular, restricted roles. The other attitude viewed nature as the theatre of reason, and made the beginnings of ecological thought possible. The contributions of several Greek forerunners in the field of ecology are characterized. The most consistent, balanced ecological writer in ancient Greece was Theophrastus, but his conception of an autonomous nature, interacting with man, was overshadowed in the history of ancient and medieval thought by the anthropocentric teleology of Aristotle.
We study the process of observation (measurement), within the framework of a 'perspectival' ('relational', 'relative state') version of the modal interpretation of quantum mechanics. We show that if we assume certain features of discreteness and determinism in the operation of the measuring device (which could be a part of the observer's nerve system), this gives rise to classical characteristics of the observed properties, in the first place to spatial localization. We investigate to what extent semi-classical behavior of the object system itself (as opposed to the observational system) is needed for the emergence of classicality. Decoherence is an essential element in the mechanism of observation that we assume, but it turns out that in our approach no environment-induced decoherence on the level of the object system is required for the emergence of classical properties.
Wijdicks and colleagues1 recently presented the Full Outline of UnResponsiveness (FOUR) scale as an alternative to the Glasgow Coma Scale (GCS)2 in the evaluation of consciousness in severely brain-damaged patients. They studied 120 patients in an intensive care setting (mainly neuro-intensive care) and claimed that “the FOUR score detects a locked-in syndrome, as well as the presence of a vegetative state.”1 We fully agree that the FOUR is advantageous in identifying locked-in patients given that it specifically tests for eye movements or blinking on command. This is welcomed given that misdiagnosis of the locked-in syndrome has been shown to occur in more than half of the cases (see Laureys and colleagues3 for review). As for the diagnosis of the vegetative state, the scale explicitly tests for visual pursuit, and hence can disentangle the vegetative state from the minimally conscious state (MCS). The diagnostic criteria for MCS have been proposed4 only recently, but Wijdicks and colleagues1 do not mention the existence of this clinical entity in their article. As for the vegetative state, MCS can be encountered in the acute or subacute setting as a transitional state on the way to further recovery, or it can be a more chronic or even permanent condition. The MCS refers to patients showing inconsistent, albeit clearly discernible, minimal behavioral evidence of consciousness (eg, localization of noxious stimuli, eye fixation or tracking, reproducible movement to command, or nonfunctional verbalization).4 The FOUR scale does not test for all of the behavioral criteria required to diagnose MCS.4 It is known from the literature (see Majerus and colleagues5 for review) that about a third of patients diagnosed with vegetative state are actually in MCS, and this misdiagnosis can lead to major clinical, therapeutic, and ethical consequences.
We tested the ability of the newly proposed FOUR scale to correctly diagnose the vegetative state in an acute (intensive care and neurology ward) and chronic (neurorehabilitation) setting.
The question of the influence of genes on behavior raises difficult philosophical and social issues. In this paper I delineate what I call the Developmentalist Challenge (DC) to assertions of genetic influence on behavior, and then examine the DC through an in-depth analysis of the behavioral genetics of the nematode, C. elegans, with some briefer references to work on Drosophila. I argue that eight "rules" relating genes and behavior through environmentally-influenced and tangled neural nets capture the results of developmental and behavioral studies on the nematode. Some elements of the DC are found to be sound and others are criticized. The essay concludes by examining the relations of this study to Kitcher's antireductionist arguments and Bechtel and Richardson's decomposition and localization heuristics. Some implications for human behavioral genetics are also briefly considered.
Uncertainty relations and complementarity of canonically conjugate position and momentum observables in quantum theory are discussed with respect to some general coupling properties of a function and its Fourier transform. The question of joint localization of a particle on bounded position and momentum value sets and the relevance of this question to the interpretation of position-momentum uncertainty relations is surveyed. In particular, it is argued that the Heisenberg interpretation of the uncertainty relations can consistently be carried through in a natural extension of the usual Hilbert space frame of the quantum theory.
How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller's interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thereby accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account. Footnote 1 (BBS Note): The original manuscript of this Response article was received on January 14, 2000.
This paper addresses the problem of upgrading functional information to knowledge. Functional information is defined as syntactically well-formed, meaningful and collectively opaque data. Its use in the formal epistemology of information theories is crucial to solve the debate on the veridical nature of information, and it represents the companion notion to standard strongly semantic information, defined as well-formed, meaningful and true data. The formal framework, on which the definitions are based, uses a contextual version of the verificationist principle of truth in order to connect functional to semantic information, avoiding Gettierization and decoupling from true informational contents. The upgrade operation from functional information uses the machinery of epistemic modalities in order to add data localization and accessibility as its main properties. We show in this way the conceptual worthiness of this notion for issues in contemporary epistemology debates, such as the explanation of knowledge process acquisition from information retrieval systems, and open data repositories.
Abstract: We consider the classical concept of time of permanence and observe that its quantum equivalent is described by a bona fide self-adjoint operator. Its interpretation, by means of the spectral theorem, reveals that we have to abandon not only the idea that quantum entities would be characterizable in terms of spatial trajectories but, more generally, that they would possess the very attribute of spatiality. Consequently, a permanence time shouldn't be interpreted as a “time” in quantum mechanics, but as a measure of the total availability of a quantum entity in participating in a process of creation of a spatial localization. (Massimiliano Sassoli de Bianchi, Foundations of Science, pp. 1-22, DOI 10.1007/s10699-011-9233-z.)
If quantum mechanics is interpreted as an objective, complete, physical theory, applying to macroscopic as well as microscopic systems, then the linearity of quantum dynamics gives rise to the measurement problem and related problems, which cannot be solved without modifying the dynamics. Eight desiderata are proposed for a reasonable modified theory. They favor a stochastic modification rather than a deterministic non-linear one, but the spontaneous localization theories of Ghirardi et al. and Pearle are criticized. The intermittent fluorescence of a trapped atom irradiated by two laser beams suggests a stochastic theory in which the locus of stochasticity is interaction between a material system and the electromagnetic vacuum.
The purpose of this paper and its sister paper (Farrell and Hooker, b) is to present, evaluate and elaborate a proposed new model for the process of scientific development: self-directed anticipative learning (SDAL). The vehicle for its evaluation is a new analysis of a well-known historical episode: the development of ape-language research. In this first paper we outline five prominent features of SDAL that will need to be realized in applying SDAL to science: 1) interactive exploration of possibility space; 2) self-directedness; 3) localization of success and error; 4) synergistic increase in learning capacity; and 5) continuity of SDAL process across scientific change. In this paper we examine the first three features of SDAL in relation to the early history of ape-language research. We show that this history is readily explicated as a self-directed, ever-finer, delineation of possibility space that enables the localization of both success and error. Paper II examines the last two features against this history.
[Revised translation of a manuscript originally published in German in Zeitschrift fur Naturforschung 54a, 2--10 (1999) and dedicated to Georg Sussmann on the occasion of his seventieth birthday.] We discuss an ontological model suggested by quantum physics, in which the notion of events is of central significance. The conventional objects are considered as causal links between events. Localization in space-time refers primarily to events, not to objects. The intrinsic indeterminacy forces us to consider both possibilities and facts, corresponding to the distinction between future and past. In presently existing theories, the definition of individual events and their localization properties depends on asymptotic arguments adapted to prevailing situations.
This ethical analysis compares two mid-size Asian-based multinational corporations (Japanese and Taiwanese) that have established extensive operations in China. We describe and analyze ethically relevant dimensions of each corporation's culture and practices, including their corporate cultures and the ethical issues they face. We argue that these companies add value to China's social and economic transformation in several important ways, including their development of human capital – the enhanced skill sets, work experiences, and values acquired by their workers. We conclude by highlighting three ethical challenges that seem most critical for these two firms: (a) creating mutual benefit for the corporation and the host country; (b) localization and management of the cultural gap between expats and local workers; and (c) dealing effectively with the power and roles of a large, pervasive government in a non-democratic society. The experiences of these two international firms can provide insights into how international firms might construct effective management strategies for doing business in countries such as China.
Grodzinsky's Tree-Pruning Hypothesis can be extended to explain agrammatic comprehension disorders. Although agrammatism is evidence for syntactic modularity, there is no evidence for its anatomical modularity or for its localization in the frontal lobe. Agrammatism results from diffuse left hemisphere damage – allowing the emergence of the limited right hemisphere linguistic competence – rather than from damage to an anatomic module in the left hemisphere.
The Reeh-Schlieder theorem asserts the vacuum and certain other states to be spacelike superentangled relative to local fields. This motivates an inquiry into the physical status of various concepts of localization. It is argued that a covariant generalization of Newton-Wigner localization is a physically illuminating concept. When analyzed in terms of nonlocally covariant quantum fields, creating and annihilating quanta in Newton-Wigner localized states, the vacuum is seen not to possess the spacelike superentanglement that the Reeh-Schlieder theorem displays relative to local fields, and to be locally empty as well as globally empty. Newton-Wigner localization is then shown to be physically interpretable in terms of a covariant generalization of the center of energy, the two localizations being identical if the system has no internal angular momentum. Finally, some of the counterintuitive features of Newton-Wigner localization are shown to have close analogues in classical special relativity.
In previous analyses of the influence of language on cognition, speech has been the main channel examined. In studies conducted among Yucatec Mayas, efforts to determine the preferred frame of reference in use in this community have failed to reach an agreement (Bohnemeyer & Stolz, 2006; Levinson, 2003 vs. Le Guen, 2006, 2009). This paper argues for a multimodal analysis of language that encompasses gesture as well as speech, and shows that the preferred frame of reference in Yucatec Maya is only detectable through the analysis of co-speech gesture and not through speech alone. A series of experiments compares knowledge of the semantics of spatial terms, performance on nonlinguistic tasks and gestures produced by men and women. The results show a striking gender difference in the knowledge of the semantics of spatial terms, but an equal preference for a geocentric frame of reference in nonverbal tasks. In a localization task, participants used a variety of strategies in their speech, but they all exhibited a systematic preference for a geocentric frame of reference in their gestures.
The massive redeployment hypothesis (MRH) is a theory about the functional organization of the human cortex, offering a middle course between strict localization on the one hand, and holism on the other. Central to MRH is the claim that cognitive evolution proceeded in a way analogous to component reuse in software engineering, whereby existing components—originally developed to serve some specific purpose—were used for new purposes and combined to support new capacities, without disrupting their participation in existing programs.
This essay explores the nihilistic coincidence of the ascetic ideal and Nietzsche’s localization of science in the conceptual world of anarchic socialism, as Nietzsche indicts the uncritical convictions of modern science by way of a critique of the causa sui, questioning both religion and the enlightenment as well as both free and unfree will and condemning the “poor philology” enshrined in the language of the “laws” of nature. Reviewing the history of philosophical nihilism in the context of Nietzsche’s “tragic knowledge” along with political readings of nihilism, willing nothing rather than not willing at all, today’s this-worldly and very planetary nihilism includes the virtual loci of technological desire (literally willing nothing) as well as the perpetual and consequently pointless threat of nuclear annihilation and the routine or ordinary annihilation of plant and animal life as of inorganic nature.
Several research groups have identified a network of regions of the adult cortex that are activated during social perception and cognition tasks. In this paper we focus on the development of components of this social brain network during early childhood and test aspects of a particular viewpoint on human functional brain development: “interactive specialization.” Specifically, we apply new data analysis techniques to a previously published data set of event-related potential (ERP) studies involving 3-, 4-, and 12-month-old infants viewing faces of different orientation and direction of eye gaze. Using source separation and localization methods, several likely generators of scalp-recorded ERP are identified, and we describe how they are modulated by stimulus characteristics. We then review the results of a series of experiments concerned with perceiving and acting on eye gaze, before reporting on a new experiment involving young children with autism. Finally, we discuss predictions based on the atypical emergence of the social brain network.
The study explores the traits and influences on global business ethics practiced by Taiwanese enterprises in East Asia in order to provide those enterprises with a ready guide to contemporaneous standards of ethical management overseas and, in particular, in East Asia. The study randomly sampled 1496 Taiwanese enterprises in Mainland China, Vietnam and Indonesia. One questionnaire per enterprise was answered by Taiwanese owners or senior administrators. Some 375 valid responses, or 25% of the sample, were returned. Taiwanese enterprises in East Asia were found to be ethically inclined in respect of their local environments and generic human rights, though one-third of participants identified themselves as "ethically lax". The study identified various influences on global business ethics viz. personnel localization, employment partnership, marketing ethics and the competitiveness of Taiwanese enterprises.
We argue that planning and control may not be separable entities, either at the behavioural level or at the neurophysiological level. We review studies that show the involvement of superior and inferior parietal cortex in both planning and control. We propose an alternative view to the localization theory put forth by Glover.
Arbib suggests that language emerged in direct relation to manual gestural communication, that Broca's area participates in producing and imitating gestures, and that emotional facial expressions contributed to gesture-language coevolution. We discuss functional and structural evidence supporting localization of the neuronal modules controlling limb praxis, speech and language, and emotional communication. Current evidence supports completely independent limb praxis and speech/language systems.