Traditional explanations of multistable visual phenomena (e.g. ambiguous figures, perceptual rivalry) suggest that the basis for spontaneous reversals in perception lies in antagonistic connectivity within the visual system. In this review, we suggest an alternative, albeit speculative, explanation for visual multistability – that spontaneous alternations reflect responses to active, programmed events initiated by brain areas that integrate sensory and non-sensory information to coordinate a diversity of behaviors. Much evidence suggests that perceptual reversals are themselves more closely related to the expression of a behavior than to passive sensory responses: (1) they are initiated spontaneously, often voluntarily, and are influenced by subjective variables such as attention and mood; (2) the alternation process is greatly facilitated with practice and compromised by lesions in non-visual cortical areas; (3) the alternation process has temporal dynamics similar to those of spontaneously initiated behaviors; (4) functional imaging reveals that brain areas associated with a variety of cognitive behaviors are specifically activated when vision becomes unstable. In this scheme, reorganizations of activity throughout the visual cortex, concurrent with perceptual reversals, are initiated by higher, largely non-sensory brain centers. Such direct intervention in the processing of the sensory input by brain structures associated with planning and motor programming might serve an important role in perceptual organization, particularly in aspects related to selective attention.
The literature on the nature of understanding can be divided into two broad camps. Explanationists believe that it is knowledge of explanations that is key to understanding. In contrast, their manipulationist rivals maintain that understanding essentially involves an ability to manipulate certain representations. The aim of this paper is to provide a novel knowledge-based account of understanding. More specifically, it proposes an account of maximal understanding of a given phenomenon in terms of fully comprehensive and maximally well-connected knowledge of it and of degrees of understanding in terms of approximations to such knowledge. It is completed by a contextualist semantics for outright attributions of understanding according to which an attribution of understanding is true of one just in case one knows enough about it to perform some contextually determined task. It is argued that this account has an edge over both its explanationist and manipulationist competitors.
This study provides a survey of phenomena that present themselves during moments of naturally occurring inner experience. In our previous studies using Descriptive Experience Sampling we have discovered five frequently occurring phenomena—inner speech, inner seeing, unsymbolized thinking, feelings, and sensory awareness. Here we quantify the relative frequency of these phenomena. We used DES to describe 10 randomly identified moments of inner experience from each of 30 participants selected from a stratified sample of college students. We found that each of the five phenomena occurred in approximately one quarter of sampled moments, that the frequency of these phenomena varied widely across individuals, that there were no significant gender differences in the relative frequencies of these phenomena, and that higher frequencies of inner speech were associated with lower levels of psychological distress.
It is commonly held that research efforts in the cognitive and behavioral sciences are mainly directed toward providing explanations and that phenomena figure into scientific practice qua explananda. I contend that these assumptions convey a skewed picture of the research practices in question and of the role played by phenomena. I argue that experimental research often aims at exploring and describing “objects of research” and that phenomena can figure as components of, and as evidence for, such objects. I situate my analysis within the existing literature and illustrate it with examples from memory research.
A well-known open problem in epistemic logic is to give a syntactic characterization of the successful formulas. Semantically, a formula is successful if and only if for any pointed model where it is true, it remains true after deleting all points where the formula was false. The classic example of a formula that is not successful in this sense is the “Moore sentence” p ∧ ¬□p, read as “p is true but you do not know p.” Not only is the Moore sentence unsuccessful, it is self-refuting, for it never remains true as described. We show that in logics of knowledge and belief for a single agent (extending S5), Moorean phenomena are the source of all self-refutation; moreover, in logics for an introspective agent (extending KD45), Moorean phenomena are the source of all unsuccessfulness as well. This is a distinctive feature of such logics, for with a non-introspective agent or multiple agents, non-Moorean unsuccessful formulas appear. We also consider how successful and self-refuting formulas relate to the Cartesian and learnable formulas, which have been discussed in connection with Fitch’s “paradox of knowability.” We show that the Cartesian formulas are exactly the formulas that are not eventually self-refuting and that not all learnable formulas are successful. In an appendix, we give syntactic characterizations of the successful and the self-refuting formulas.
In the face of causal complexity, scientists reconstitute phenomena in order to arrive at a more simplified and partial picture that ignores most of the 'bigger picture.' This paper will distinguish between two modes of reconstituting phenomena: one moving down to a level of greater decomposition (toward organizational parts of the original phenomenon), and one moving up to a level of greater abstraction (toward different differences regarding the phenomenon). The first aim of the paper is to illustrate that phenomena are moving targets, i.e., they are not fixed once and for all, but are adapted, if necessary, on the basis of the preferred perspective adopted for pragmatic reasons. The second aim is to analyze in detail the second mode of reconstituting phenomena. This includes an exposition of the kind of pragmatic-pluralistic picture resulting from the fact that phenomena are reconstituted by a move up to a level of greater abstraction.
This paper explores how data serve as evidence for phenomena. In contrast to standard philosophical models which invite us to think of evidential relationships as logical relationships, I argue that evidential relationships in the context of data-to-phenomena reasoning are empirical relationships that depend on holding the right sort of pattern of counterfactual dependence between the data and the conclusions investigators reach on the phenomena themselves.
Bogen and Woodward claim that the function of scientific theories is to account for 'phenomena', which they describe both as investigator-independent constituents of the world and as corresponding to patterns in data sets. I argue that, if phenomena are considered to correspond to patterns in data, it is inadmissible to regard them as investigator-independent entities. Bogen and Woodward's account of phenomena is thus incoherent. I offer an alternative account, according to which phenomena are investigator-relative entities. All the infinitely many patterns that data sets exhibit have equal intrinsic claim to the status of phenomenon: each investigator may stipulate which patterns correspond to phenomena for him or her. My notion of phenomena accords better both with experimental practice and with the historical development of science.
Thermodynamics and Statistical Mechanics are related to one another through the so-called "thermodynamic limit" in which, roughly speaking, the number of particles becomes infinite. At critical points (places of physical discontinuity) this limit fails to be regular. As a result, the "reduction" of Thermodynamics to Statistical Mechanics fails to hold at such critical phases. This fact is key to understanding an argument due to Craig Callender to the effect that the thermodynamic limit leads to mistakes in Statistical Mechanics. I discuss this argument and argue that the conclusion is misguided. In addition, I discuss an analogous example where a genuine physical discontinuity---the breaking of drops---requires the use of infinite idealizations.
Philosophical discussions of biological classification have failed to recognise the central role of homology in the classification of biological parts and processes. One reason for this is a misunderstanding of the relationship between judgments of homology and the core explanatory theories of biology. The textbook characterisation of homology as identity by descent is commonly regarded as a definition. I suggest instead that it is one of several attempts to explain the phenomena of homology. Twenty years ago the ‘new experimentalist’ movement in philosophy of science drew attention to the fact that many experimental phenomena have a ‘life of their own’: the conviction that they are real is not dependent on the theories used to characterise and explain them. I suggest that something similar can be true of descriptive phenomena, and that many homologies are phenomena of this kind. As a result the descriptive biology of form and function has a life of its own—a degree of epistemological independence from the theories that explain form and function. I also suggest that the two major ‘homology concepts’ in contemporary biology, usually seen as two competing definitions, are in reality complementary elements of the biological explanation of homology.
Synchronistic or psi phenomena are interpreted as entanglement correlations in a generalized quantum theory. From the principle that entanglement correlations cannot be used for transmitting information, we can deduce the decline effect, frequently observed in psi experiments, and we propose strategies for suppressing it and improving the visibility of psi effects. Some illustrative examples are discussed.
Autoscopic phenomena are complex experiences that include the visual illusory reduplication of one’s own body. From a phenomenological point of view, we can distinguish three conditions: autoscopic hallucinations, heautoscopy, and out-of-body experiences. The dysfunctional pattern involves multisensory disintegration of personal and extrapersonal space perception. The etiology, generally either neurological or psychiatric, is different. Also, the hallucination of Self and own body image is present during dreams and differs according to sleep stage. Specifically, the representation of the Self in REM dreams is frequently similar to the perception of Self in wakefulness, whereas in NREM dreams, a greater polymorphism of Self and own body representation is observed. The parallels between autoscopic phenomena in pathological cases and the Self-hallucination in dreams will be discussed to further the understanding of the particular states of self awareness, especially the complex integration of different memory sources in Self and body representation.
There has been a good deal of interest in recent years in what Franz Brentano had to say about the notion of ‘intentional objects’ and about intentionality as a criterion of the mental. There has been less interest in his classification of mental phenomena. In his Psychology from an Empirical Standpoint Brentano asserts and argues for the thesis that mental phenomena can be classified in terms of three kinds of mental act or activity, all of which are directed towards an immanent object. These are, respectively, presentation, judgment and what he calls the phenomena of love and hate. Once again, less interest has been shown in what he has to say about the last of these three than in what he says about the others. I wish to take Brentano's views as the point of departure for a discussion of love and hate, since these notions seem to me to have a good deal of philosophical interest, for at least two main reasons. First, I have recently had some concern with the part that personal relations play in our understanding of others and of ourselves, and love and hate seem to be very important elements in such relations. Second, love and hate have long seemed to me to provide important counter-examples to some prevalent philosophical theories about the emotions. I shall take this issue first.
This paper provides a restatement and defense of the data/phenomena distinction introduced by Jim Bogen and me several decades ago (e.g., Bogen and Woodward, The Philosophical Review, 303–352, 1988). Additional motivation for the distinction is introduced, ideas surrounding the distinction are clarified, and an attempt is made to respond to several criticisms.
Cognitive science is, more than anything else, a pursuit of cognitive mechanisms. To make headway towards a mechanistic account of any particular cognitive phenomenon, a researcher must choose among the many architectures available to guide and constrain the account. It is thus fitting that this volume on contemporary debates in cognitive science includes two issues of architecture, each articulated in the 1980s but still unresolved: • Just how modular is the mind? – a debate initially pitting encapsulated mechanisms against highly interactive ones. • Does the mind process language-like representations according to formal rules? – a debate initially pitting symbolic architectures against less language-like architectures. Our project here is to consider the second issue within the broader context of where cognitive science has been and where it is headed. The notion that cognition in general—not just language processing—involves rules operating on language-like representations actually predates cognitive science. In traditional philosophy of mind, mental life is construed as involving propositional attitudes—that is, such attitudes towards propositions as believing, fearing, and desiring that they be true—and logical inferences from them. On this view, if a person desires that a proposition be true and believes that if she performs a certain action it will become true, she will make the inference and perform the action.
This major new work by Anthony J. Steinbock, a leading authority in Phenomenology and Husserl Studies, explores an interrelated set of problems in Husserl's phenomenology and provides an excellent example of phenomenology in practice, demonstrating how its methods and resources shed light on philosophical problems.
A comparison of models and experiments supports the argument that although both function as mediators and can be understood to work in an experimental mode, experiments offer greater epistemic power than models as a means to investigate the economic world. This outcome rests on the distinction that whereas experiments are versions of the real world captured within an artificial laboratory environment, models are artificial worlds built to represent the real world. This difference in ontology has epistemic consequences: experiments have greater potential to make strong inferences back to the world, but also have the power to isolate new phenomena. This latter power is manifest in the possibility that whereas working with models may lead to 'surprise', experimental results may be unexplainable within existing theory and so 'confound' the experimenter.
Debate about cognitive science explanations has been formulated in terms of identifying the proper level(s) of explanation. Views range from reductionist, favoring only neuroscience explanations, to mechanist, favoring the integration of multiple levels, to pluralist, favoring the preservation of even the most general, high-level explanations, such as those provided by embodied or dynamical approaches. In this paper, we challenge this framing. We suggest that these are not different levels of explanation at all but, rather, different styles of explanation that capture different, cross-cutting patterns in cognitive phenomena. Which pattern is explanatory depends on both the cognitive phenomenon under investigation and the research interests occasioning the explanation. This reframing changes how we should answer the basic questions of which cognitive science approaches explain and how these explanations relate to one another. On this view, we should expect different approaches to offer independent explanations in terms of their different focal patterns and the value of those explanations to partly derive from the broad patterns they feature.
This paper investigates some metaphysical and epistemological assumptions behind Bogen and Woodward’s data-to-phenomena inferences. I raise a series of points and suggest an alternative possible Kantian stance about data-to-phenomena inferences. I clarify the nature of the suggested Kantian stance by contrasting it with McAllister’s view about phenomena as patterns in data sets.
The neuroanatomical substrates controlling and regulating sleeping and waking, and thus consciousness, are located in the brain stem. Most crucial for bringing the brain into a state conducive for consciousness and information processing is the mesencephalic part of the brain stem. This part controls the state of waking, which is generally associated with a high degree of consciousness. Wakefulness is accompanied by a low-amplitude, high-frequency electroencephalogram, due to the fact that thalamocortical neurons fire in a state of tonic depolarization. Information can easily pass the low-level threshold of these neurons, leading to a high transfer ratio. The complexity of the electroencephalogram during conscious waking is high, as expressed in a high correlation dimension. Accordingly, the level of information processing is high. Spindles, and alpha waves in humans, mark the transition from wakefulness to sleep. These phenomena are related to drowsiness, associated with a reduction in consciousness. Drowsiness occurs when cells undergo moderate hyperpolarizations. Increased inhibitions result in a reduction of afferent information, with a lowered transfer ratio. Information processing subsides, which is also expressed in a diminished correlation dimension. Consciousness is further decreased at the onset of slow wave sleep. This sleep is controlled by the medullar reticular formation and is characterized by a high-voltage, low-frequency electroencephalogram. Slow wave sleep becomes manifest when neurons undergo a further hyperpolarization. Inhibitory activities are so strong that the transfer ratio further drops, as does the correlation dimension. Thus, sensory information is largely blocked and information processing is on a low level. Finally, rapid eye movement sleep is regulated by the pontine reticular formation and is associated with a "wake-like" electroencephalographic pattern.
Just as during wakefulness, this is the expression of a depolarization of thalamocortical neurons. The transfer ratio of rapid eye movement sleep has not yet been determined, but seems to vary. Evidence exists that this type of sleep, associated with dreaming, with some kind of perception and consciousness, is involved in processing of "internal" information. In line with this, rapid eye movement sleep has higher correlation dimensions than slow-wave sleep and sometimes even higher than wakefulness. It is assumed that the "near-the-threshold" depolarized state of neurons in the thalamus and cerebral cortex is a necessary condition for perceptual processes and consciousness, such as occurs during waking and in an altered form during rapid eye movement sleep.
Empiricists claim that in accepting a scientific theory one should not commit oneself to claims about things that are not observable in the sense of registering on human perceptual systems (according to Van Fraassen’s constructive empiricism) or experimental equipment (according to what I call liberal empiricism). They also claim scientific theories should be accepted or rejected on the basis of how well they save the phenomena in the sense of delivering unified descriptions of natural regularities among things that meet their conditions for observability. I argue that empiricism is both unfaithful to real world scientific practice, and epistemically imprudent, if not incoherent. To illuminate scientific practice and save regularity phenomena one must commit oneself to claims about causal mechanisms that can be detected from data, but do not register directly on human perceptual systems or experimental equipment. I conclude by suggesting that empiricists should relax their standards for acceptable beliefs.
In many fields of biology, both the phenomena to be explained and the mechanisms proposed to explain them are commonly presented in diagrams. Our interest is in how scientists construct such diagrams. Researchers begin with evidence, typically developed experimentally and presented in data graphs. To arrive at a robust diagram of the phenomenon or the mechanism, they must integrate a variety of data to construct a single, coherent representation. This process often begins as the researchers create a first sketch, and it continues over an extended period as they revise the sketch until they arrive at a diagram they find acceptable. We illustrate this process by examining the sketches developed in the course of two research projects directed at understanding the generation of circadian rhythms in cyanobacteria. One identified a new aspect of the phenomenon itself, whereas the other aimed to develop a new mechanistic account. In both cases, the research resulted in a paper in which the conclusion was presented in a diagram that the authors deemed adequate to convey it. These diagrams violate some of the normative “cognitive design principles” advanced by cognitive scientists as constraints on successful visual communication. We suggest that scientists’ sketching is instead governed by norms of success that are broadly explanatory: conveying the phenomenon or mechanism.
In this paper, I investigate how researchers evaluate their characterizations of scientific phenomena. Characterizing phenomena is an important – albeit often overlooked – aspect of scientific research, as phenomena are targets of explanation and theorization. As a result, there is a lacuna in the literature regarding how researchers determine whether their characterization of a target phenomenon is appropriate for their aims. This issue has become apparent for accounts of scientific explanation that take phenomena to be explananda. In particular, philosophers who endorse mechanistic explanation suggest that the discovery of the mechanisms that explain a phenomenon can lead to its recharacterization. However, they fail to make clear how these explanations provide warrant for recharacterizing their explananda phenomena. Drawing from cases of neurobiological research on potentiation phenomena, I argue that attempting to explain a phenomenon may provide reason to suspend judgment about its characterization, but this cannot provide warrant to recharacterize it if researchers cannot infer a phenomenon’s characteristics from this explanation. To explicate this, I go beyond explanation – mechanistic or otherwise – to analyze why and how researchers change their epistemic commitments in light of new evidence.
A small consortium of philosophers has begun work on the implications of epistemic networks (Zollman 2008 and forthcoming; Grim 2006, 2007; Weisberg and Muldoon forthcoming), building on theoretical work in economics, computer science, and engineering (Bala and Goyal 1998; Kleinberg 2001; Amaral et al. 2004) and on some experimental work in social psychology (Mason, Jones, and Goldstone 2008). This paper outlines core philosophical results and extends those results to the specific question of thresholds. Epistemic maximization of certain types does show clear threshold effects. Intriguingly, however, those effects appear to be importantly independent from more familiar threshold effects in networks.
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.
Duhem's 1908 essay questions the relation between physical theory and metaphysics and, more specifically, between astronomy and physics–an issue still of importance today. He critiques the answers given by Greek thought, Arabic science, medieval Christian scholasticism, and, finally, the astronomers of the Renaissance.
In this paper I argue, against van Fraassen's constructive empiricism, that the practice of saving phenomena is much broader than usually thought, and includes unobservable phenomena as well as observable ones. My argument turns on the distinction between data and phenomena: I discuss how unobservable phenomena manifest themselves in data models and how theoretical models able to save them are chosen. I present a paradigmatic case study taken from the history of particle physics to illustrate my argument. The first aim of this paper is to draw attention to the experimental practice of saving unobservable phenomena, which philosophers have overlooked for too long. The second aim is to explore some far-reaching implications this practice may have for the debate on scientific realism and constructive empiricism.
Batterman raises a number of concerns for the inferential conception of the applicability of mathematics advocated by Bueno and Colyvan. Here, we distinguish the various concerns, and indicate how they can be assuaged by paying attention to the nature of the mappings involved and emphasizing the significance of interpretation in this context. We also indicate how this conception can accommodate the examples that Batterman draws upon in his critique. Our conclusion is that ‘asymptotic reasoning’ can be straightforwardly accommodated within the inferential conception. 1 Introduction; 2 Immersion, Inference and Partial Structures; 3 Idealization and Surplus Structure; 4 Renormalization and the Stability of Mathematical Representations; 5 Explanation and Eliminability; 6 Requirements for Explanation; 7 Interpretation and Idealization; 8 Explanation, Empirical Regularities and the Inferential Conception; 9 Conclusion.
Any consistent and sufficiently strong system of first-order formal arithmetic fails to decide some independent Gödel sentence. We examine consistent first-order extensions of such systems. Our purpose is to discover what is minimally required by way of such extension in order to be able to prove the Gödel sentence in a non-trivial fashion. The extended methods of formal proof must capture the essentials of the so-called ‘semantical argument’ for the truth of the Gödel sentence. We are concerned to show that the deflationist has at his disposal such extended methods—methods which make no use or mention of a truth-predicate. This consideration leads us to reassess arguments recently advanced—one by Shapiro and another by Ketland—against the deflationist's account of truth. Their main point of agreement is this: they both adduce the Gödel phenomena as motivating a ‘thick’ notion of truth, rather than the deflationist's ‘thin’ notion. But the so-called ‘semantical argument’, which appears to involve a ‘thick’ notion of truth, does not really have to be semantical at all. It is, rather, a reflective argument. And the reflections upon a system that are contained therein are deflationarily licit, expressible without explicit use or mention of a truth-predicate. Thus it would appear that this anti-deflationist objection fails to establish that there has to be more to truth than mere conformity to the disquotational T-schema.
Can there be mathematical explanations of physical phenomena? In this paper, I suggest an affirmative answer to this question. I outline a strategy to reconstruct several typical examples of such explanations, and I show that they fit a common model. The model reveals that the role of mathematics is explicatory. Isolating this role may help to re-focus the current debate on the more specific question as to whether this explicatory role is, as proposed here, also an explanatory one.
The papers collected here are the result of an international symposium, Data · Phenomena · Theories: What’s the notion of a scientific phenomenon good for?, held in Heidelberg in September 2008. The event was organized by the research group Causality, Cognition, and the Constitution of Scientific Phenomena in cooperation with the Philosophy Department at the University of Heidelberg (Peter McLaughlin and Andreas Kemmerling) and the IWH Heidelberg. The symposium was supported by the Emmy-Noether-Programm der Deutschen Forschungsgemeinschaft and by the Stiftung Universität Heidelberg. The workshop was held in honor of Daniela Bailer-Jones, who died on 13 November 2006 at the age of 37 (cf. my 2007 Daniela Bailer-Jones). Bailer-Jones was an Emmy Noether fellow, and the symposium was arranged and run by those who were working in her research group at the time of her death: Monika Dullstein, Jochen Apel, and Pavel Radchencko. To them goes the credit for the conception, planning, and carrying out of the symposium.
I question Brentano's thesis that all and only mental phenomena are intentional. The common gloss on intentionality in terms of directedness does not justify the claim that intentionality is sufficient for mentality. One response to this problem is to lay down further requirements for intentionality. For example, it may be said that we have intentionality only where we have such phenomena as failure of substitution or existential presupposition. I consider a variety of such requirements for intentionality. I argue they either fail to exclude all non-mental phenomena or are so demanding that they ground new, serious challenges to the claim that qualitative states of mind are intentional.