In the recent literature on concepts, two extreme positions concerning animal minds are predominant: the view that animals possess neither concepts nor beliefs, and the view that some animals possess concepts as well as beliefs. A characteristic feature of this controversy is the lack of consensus on the criteria for possessing a concept or having a belief. Addressing this deficit, we propose a new theory of concepts which takes recent case studies of complex animal behavior into account. The main aim of the paper is to present an epistemic theory of concepts and to defend a detailed theory of criteria for having concepts. The distinction between nonconceptual, conceptual, and propositional representations is inherent to this theory. Accordingly, it can be reasonably argued that some animals, e.g., grey parrots and apes, operate on conceptual representations.
The visual brain consists of several parallel, functionally specialized processing systems, each having several stages (nodes) which terminate their tasks at different times; consequently, simultaneously presented attributes are perceived at the same time if processed at the same node and at different times if processed by different nodes. Clinical evidence shows that these processing systems can act fairly autonomously. Damage restricted to one system specifically compromises the perception of the attribute that that system is specialized for; damage to a given node of a processing system that leaves earlier nodes intact results in a degraded perceptual capacity for the relevant attribute, which is directly related to the physiological capacities of the cells left intact by the damage. By contrast, a system that is spared when all others are damaged can function more or less normally. Moreover, internally created visual percepts (illusions, afterimages, imagery, and hallucinations) specifically activate the nodes specialized for the attribute perceived. Finally, anatomical evidence shows that there is no final integrator station in the brain, one which receives input from all visual areas; instead, each node has multiple outputs and no node is recipient only. Taken together, the above evidence leads us to propose that each node of a processing-perceptual system creates its own microconsciousness. We propose that, if any binding occurs to give us our integrated image of the visual world, it must be a binding between microconsciousnesses generated at different nodes. Since any two microconsciousnesses generated at any two nodes can be bound together, perceptual integration is not hierarchical, but parallel and postconscious. By contrast, the neural machinery conferring properties on those cells whose activity has a conscious correlate is hierarchical, and we refer to it as generative binding, to distinguish it from the binding that might occur between the microconsciousnesses.
The aim of this paper is to defend the structural concept of representation, as defined by homomorphisms, against its main objections, namely: logical objections, the objection from misrepresentation, the objection from failing necessity, and the copy theory objection. The logical objections can be met by reserving the relation ‘to be homomorphic to’ for the explication of potential representation. Actual reference objects of representations are determined by representational mechanisms. Appealing to the independence of the dimensions of ‘content’ and ‘target’ also helps to see how the structural concept can cope with misrepresentation. Finally, I argue that homomorphic representations are not necessarily ‘copies’ of their representanda, and thus can convey scientific insight.
The paper defends the structural concept of representation, defined by homomorphisms, against the main objections that have been raised against it: logical objections, the objection from misrepresentation, the objection from failing necessity, and the copy theory objection. Homomorphic representations are not necessarily ‘copies’ of their representanda, and thus can convey scientific insight.
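As a reminder of the formal notion assumed by the structural account (a standard textbook formulation, not necessarily the paper's own notation), a homomorphism between relational structures can be stated as follows:

% Standard definition for relational structures; notation is illustrative, not the paper's own.
\[
h\colon A \to B \ \text{is a homomorphism from}\ \langle A, R^{A}\rangle\ \text{to}\ \langle B, R^{B}\rangle
\quad\text{iff}\quad
R^{A}(a_1,\dots,a_n) \;\Rightarrow\; R^{B}\bigl(h(a_1),\dots,h(a_n)\bigr)\ \text{for all}\ a_1,\dots,a_n \in A.
\]

On this reading, the representing structure need only preserve the relations of its target in one direction, which is why a homomorphic representation is generally not an isomorphic ‘copy’ of its representandum.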
In this paper, Modern Essentialism is used to solve a problem concerning the individuation of spacetime points in General Relativity, a problem raised by a New Leibnizian Argument against spacetime substantivalism elaborated by Earman and Norton. An earlier essentialistic solution, proposed by Maudlin, is criticized as being against both the spirit of metrical essentialism and the fundamental principles of General Relativity. I argue for a modified essentialistic account of spacetime points that avoids those obstacles.
What does the dispositional analysis of properties and laws (e.g. Molnar, Powers, Oxford University Press, Oxford, 2003; Mumford, Laws in Nature, Routledge, London, 2004; Bird, Nature’s Metaphysics, Clarendon Press, Oxford, 2007) have to offer to the scientific understanding of physical properties? The article provides an answer to this question for the case of spacetime points and their metrical properties in General Relativity. The analysis shows that metrical properties are not ‘powers’, i.e. they cannot be understood as producing the effects of spacetime on matter with metaphysical necessity. Instead they possess categorical characteristics which, in connection with specific laws, explain those effects. Thus, the properties of spacetime do not favor the metaphysics of powers with respect to properties and laws.
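One standard law of the relevant kind, cited here only as a textbook illustration and not as the paper's own formalism, is the geodesic equation, in which the metric enters through the connection coefficients \Gamma^{\mu}_{\ \alpha\beta} (built from first derivatives of g_{\mu\nu}) and thereby governs the motion of free test matter:

% Geodesic equation for a free test particle; standard GR illustration, not drawn from the paper.
\[
\frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\ \alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau} = 0 .
\]

Read this way, the metrical properties do their explanatory work only via such laws, rather than by producing effects on matter with metaphysical necessity.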
In this paper, I will defend the thesis that fundamental natural laws are distinguished from accidental empirical generalizations neither by metaphysical necessity nor by contingent necessitation. The only sort of modal force that distinguishes natural laws, I will argue, arises from the peculiar physical property of mutual independence of elementary interactions exemplifying the laws. Mutual independence of elementary interactions means that their existence and their nature do not depend in any way on which other interactions presently occur. It is exactly this general physical property of elementary interactions in the actual world that provides natural laws with their specific modal force and grounds the experience of nature’s ‘recalcitrance’. Thus, the modal force of natural laws is explained by contingent non-modal properties of nature. In the second part of the paper, I deal with some alleged counterexamples to my approach: constraint laws, compositional laws, symmetry principles and conservation laws. These sorts of laws turn out to be compatible with my approach: constraint laws and compositional laws do not represent the dynamics of interaction-types by themselves, but only as constitutive parts of a complete set of equations, whereas symmetry principles and conservation laws do not represent any specific dynamics, but only impose general constraints on possible interactions.
fMRI is a noninvasive tool for studying brain function that can reliably identify sites of neural involvement for a given task. However, to what extent can fMRI signals be related to measures obtained in electrophysiology? Can the blood-oxygen-level-dependent signal be interpreted as spatially pooled spiking activity? Here we combine knowledge from neurovascular coupling, functional imaging and neurophysiology to discuss whether fMRI has succeeded in demonstrating one of the most established functional properties in the visual brain, namely directional selectivity in the motion-processing region V5/MT+. We also discuss differences between fMRI and electrophysiology in their sensitivity to distinct physiological processes. We conclude that fMRI constitutes a complement, not a poor-resolution substitute, to invasive techniques, and that it deserves interpretations that acknowledge its standing as a separate signal.
The rationality of scientific concept formation in theory transitions, challenged by the thesis of semantic incommensurability, can be restored by the Chains of Meaning approach to concept formation. According to this approach, concepts of different, succeeding theories may be identified with respect to referential meaning, in spite of grave diversity of the mathematical structures characterizing them in their respective theories. The criterion of referential identity for concepts is that they meet a relation of semantic embedding, i.e., that the embedding concept can be substituted by the embedded one in classical limit situations. Three case studies from contemporary physics theories will be used to show that the Chains of Meaning approach not only yields meaning comparisons for already established concepts (as for Newtonian and Schwarzschild mass) but is also well suited to characterize actual scientific strategies of concept formation in yet open cases such as black hole entropy or relativistic thermodynamics.
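A standard textbook illustration of such a classical-limit substitution, offered here only as an example and not as the paper's own derivation, is the weak-field limit of the Schwarzschild metric, in which the relativistic mass parameter M steps into the role of the Newtonian mass generating the gravitational potential \Phi:

% Weak-field, slow-motion limit; standard illustration, not the paper's own formalism.
\[
g_{00} \approx -\left(1 - \frac{2GM}{c^{2}r}\right) = -\left(1 + \frac{2\Phi(r)}{c^{2}}\right),
\qquad \Phi(r) = -\frac{GM}{r}.
\]

In this limit situation the Schwarzschild mass concept can be substituted by the Newtonian one, which is the kind of referential identification the semantic embedding criterion demands.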
The paper discusses three different ways of explaining the referential stability of concepts of physics. In order to be successful, an approach to referential stability has to provide resources to understand what constitutes the difference between the birth of a new concept with a history of its own, and an innovative step occurring within the lifetime of a persisting concept with stable reference. According to Theodore Arabatzis' 'biographical' approach (Representing Electrons, 2006), the historical continuity of representations of the electron manifests itself in the numerical stability of experimental parameters like the charge-to-mass ratio, and in the continued acceptance of earlier experiments as manifestations of electron properties. I argue, against Arabatzis' approach, that the stability of experimental parameters justifies the assumption that there exists a chain of representations of a unique theoretical entity only if this stability occurs against the background of evidence for theoretical continuity. The Bain/Norton approach proposes to add exactly this element to the picture, but fails to reach its aim by focusing on formal similarities of Hamiltonians as an indicator of theoretical continuity. I shall argue that theoretical continuity has to be demonstrated rather on the level of particular solutions. This task is accomplished by the semantic embedding approach, by means of defining a co-reference criterion for theoretical terms requiring the existence of semantic embedding relations between the terms that occur in particular solutions of different theories.
In binocular rivalry, the visual percept alternates stochastically between two dichoptically presented stimuli. It is established that both processes related to the eye of origin and binocular, stimulus-related processes account for these fluctuations in conscious perception. Here we studied how their relative contributions vary over time. We applied brief disruptions to rivalry displays, concurrent with an optional eye swap, at varying time intervals after one stimulus became visible (dominant). We found that early in a dominance phase the dominant eye determined the percept by stabilizing its own contribution (regardless of the stimulus), with an additional yet weaker stabilizing contribution of the stimulus (regardless of the eye). Their stabilizing contributions declined in parallel with time, so that late in a dominance phase the stimulus (and in some cases also the eye-based) contribution turned negative, favoring a perceptual (or ocular) switch. Our findings show that, depending on the time, first processes related to the eye of origin and then those related to the stimulus can have a greater net influence on the stability of the conscious percept. Their co-varying change may be due to feedback from image-based to eye-of-origin representations.
The recent work of Paul Teller and Sunny Auyang in the philosophy of Quantum Field Theory (QFT) has stimulated the search for the fundamental entities in this theory. In QFT, the classical notion of a particle collapses. The theory not only excludes classical, i.e., spatiotemporally identifiable, particles, but also makes particles of the same type conceptually indistinguishable. Teller and Auyang have proposed competing ersatz-ontologies to account for the 'loss of particles': field quanta vs. field events. Both ontologies, however, suffer from serious defects. While quanta lack numerical identity, spatiotemporal localizability, and independence of basis-representations, events, if understood as concrete measurement events, are related to the theory only statistically. I propose an alternative solution: the entities of QFT are events of the type 'Quantum system S is in quantum state Ψ'. These are not point events, but Davidsonian events, i.e., they can be identified by their location within the causal net of the world.
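As a standard illustration of why quanta lack individual identity (a textbook formula, not the paper's own formalism), a bosonic Fock state in the occupation-number representation specifies only how many quanta occupy each mode, never which quantum is which:

% Bosonic Fock state in the occupation-number basis; illustrative only, not drawn from the paper.
\[
|n_{1}, n_{2}, \dots\rangle \;=\; \prod_{i} \frac{\bigl(a_{i}^{\dagger}\bigr)^{n_{i}}}{\sqrt{n_{i}!}}\,|0\rangle .
\]

No label in such a state picks out one quantum rather than another, which is the sense in which field quanta lack numerical identity.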
Empiricists mostly prefer an epistemic notion of causality, intending thereby to avoid metaphysical entanglements. General relativity, however, provides examples of causality without predictability, i.e., world models in which for geometrical reasons there exist no spacelike hypersurfaces containing traces of all future events. Yet local determinism for every single event remains valid in these cases. Therefore the problem arises how to account for a causal structure that implies local but not global predictability. This problem, it is argued, cannot be solved by characterizing the causal connection of events itself by means of its epistemic aspects. Instead, the ontological ground must be emphasized which allows for varying epistemic properties of causal connections according to the space-time structure. This can be done by use of the energy transfer concept of causality, which involves no reminiscence of dubious natural necessity.
The concept of cognition has undergone considerable extension, first through the intensive empirical study of representational systems and their role in the possession and application of cognitive capacities, and second through the turn to practical knowledge. Through these two developments the field of research on cognition has opened up in a way that makes eager attempts at demarcation between the humanities and the sciences seem outdated. This brings about institutional changes in the scientific landscape, as can be seen in the current tendency to found interdisciplinary centres for the investigation of cognitive phenomena all over the world.