Depending on one's position in the debate over scientific realism, there are various accounts of the phenomena of physics. For scientific realists like Bogen and Woodward, phenomena are matters of fact in nature, i.e., the effects explained and predicted by physical theories. For empiricists like van Fraassen, the phenomena of physics are the appearances observed or perceived by sensory experience. Constructivists, however, regard the phenomena of physics as artificial structures generated by experimental and mathematical methods. My paper investigates the historical background of these different meanings of phenomenon in the traditions of physics and philosophy. In particular, I discuss Newton's account of the phenomena and Bohr's view of quantum phenomena, their relation to the philosophical discussion, and to data and evidence in current particle physics and quantum optics.
This paper provides a restatement and defense of the data/phenomena distinction introduced by Jim Bogen and me several decades ago (e.g., Bogen and Woodward, The Philosophical Review 97(3): 303–352, 1988). Additional motivation for the distinction is introduced, ideas surrounding the distinction are clarified, and an attempt is made to respond to several criticisms.
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.
Empiricists claim that in accepting a scientific theory one should not commit oneself to claims about things that are not observable in the sense of registering on human perceptual systems (according to van Fraassen’s constructive empiricism) or experimental equipment (according to what I call liberal empiricism). They also claim that scientific theories should be accepted or rejected on the basis of how well they save the phenomena, in the sense of delivering unified descriptions of natural regularities among things that meet their conditions for observability. I argue that empiricism is both unfaithful to real-world scientific practice, and epistemically imprudent, if not incoherent. To illuminate scientific practice and save regularity phenomena one must commit oneself to claims about causal mechanisms that can be detected from data, but do not register directly on human perceptual systems or experimental equipment. I conclude by suggesting that empiricists should relax their standards for acceptable beliefs.
This paper investigates some metaphysical and epistemological assumptions behind Bogen and Woodward’s data-to-phenomena inferences. I raise a series of points and suggest an alternative possible Kantian stance about data-to-phenomena inferences. I clarify the nature of the suggested Kantian stance by contrasting it with McAllister’s view about phenomena as patterns in data sets.
The last two decades have seen a rising interest in (a) the notion of a scientific phenomenon as distinct from theories and data, and (b) the intricacies of experimentally producing and stabilizing phenomena. This paper develops an analysis of the stabilization of phenomena that integrates two aspects that have largely been treated separately in the literature: one concerns the skills required for empirical work; the other concerns the strategies by which claims about phenomena are validated. I argue that in order to make sense of the process of stabilization, we need to distinguish between two types of phenomena: phenomena as patterns in the data (surface regularities) and phenomena as underlying (or hidden) regularities. I show that the epistemic relationships that data bear to each of these types of phenomena are different: Data patterns are instantiated by individual data, whereas underlying regularities are indicated by individual data, insofar as they instantiate a data pattern. Drawing on an example from memory research, I argue that neither of these two kinds of phenomenon can be stabilized in isolation. I conclude that what is stabilized when phenomena are stabilized is the fit between surface regularities and hidden regularities.
I discuss the application of the Model of Pragmatic Information to the study of spontaneous anomalistic mental phenomena like telepathy, precognition, etc. In these phenomena the most important effects are related to anomalous information gain by the subjects. I consider the basic ideas of the Model, as they have been applied to experimental anomalistic phenomena and to spontaneous phenomena that have strong physical effects, like poltergeist cases, highlighting analogies and differences. Moreover, I point out that in such cases we cannot assign a probability of being accepted to every proposition, and so we cannot use standard formulas for pragmatic information and other relevant measures. To overcome the problem, I propose that qualitative possibility theory could be used to describe the situation. In this theory, the confidence in a proposition is expressed using a scale. Basic concepts like epistemic states, belief revision, information gain, pragmatic information, etc. are discussed in this frame. Finally, an application to some specific cases is sketched.
The subject of this essay is the thing itself, examined through the fantastic character of phenomenality, that is, through the coming into being or opening up of the world. The world of appearance emerges from a simple, absolute nothing: there is nothing behind or before the world. There are right away many things, a world: one thing implies others, since for one to be it must distinguish itself from another. Thus, if 'to be' means 'to distinguish,' Being begins with the parting of things that makes their connection possible. Thus the thing in itself is straightaway the undergoing of its own parting; being is a passion. The Imago, then, is not a picture or figure, but the arriving in presence, which imagination elicits or welcomes by advancing in response. Imagination, then, is not first of all open to an image, but to world. It opens itself to the Thing, to the possibility of something, to parting, and in so doing brings itself toward creation.
Bose-Einstein statistics may be characterized in terms of the multinomial distribution. From this characterization, an information-theoretic analysis is made of an Einstein-Podolsky-Rosen-like situation, using Shannon's measure of entropy.
According to standard (quantum) statistical mechanics, the phenomenon of a phase transition, as described in classical thermodynamics, cannot be derived unless one assumes that the system under study is infinite. This is naturally puzzling since real systems are composed of a finite number of particles; consequently, a well‐known reaction to this problem was to urge that the thermodynamic definition of phase transitions (in terms of singularities) should not be “taken seriously.” This article takes singularities seriously and analyzes their role by using the well‐known distinction between data and phenomena, in an attempt to better understand the origin of the puzzle.
The singularity arising from the violation of the Lipschitz condition in the simple Newtonian system proposed recently by Norton (2003) is so fragile as to be completely and irreparably destroyed by slightly relaxing certain (infinite) idealizations pertaining to elastic phenomena in this model. I demonstrate that this is also true for several other Lipschitz-indeterministic systems, which, unlike Norton's example, have no surface curvature singularities. As a result, indeterminism in these systems should rather be viewed as an artefact of certain infinite idealizations essential for these models, depriving them of much of their intended metaphysical import.
Traditional explanations of multistable visual phenomena (e.g. ambiguous figures, perceptual rivalry) suggest that the basis for spontaneous reversals in perception lies in antagonistic connectivity within the visual system. In this review, we suggest an alternative, albeit speculative, explanation for visual multistability – that spontaneous alternations reflect responses to active, programmed events initiated by brain areas that integrate sensory and non-sensory information to coordinate a diversity of behaviors. Much evidence suggests that perceptual reversals are themselves more closely related to the expression of a behavior than to passive sensory responses: (1) they are initiated spontaneously, often voluntarily, and are influenced by subjective variables such as attention and mood; (2) the alternation process is greatly facilitated with practice and compromised by lesions in non-visual cortical areas; (3) the alternation process has temporal dynamics similar to those of spontaneously initiated behaviors; (4) functional imaging reveals that brain areas associated with a variety of cognitive behaviors are specifically activated when vision becomes unstable. In this scheme, reorganizations of activity throughout the visual cortex, concurrent with perceptual reversals, are initiated by higher, largely non-sensory brain centers. Such direct intervention in the processing of the sensory input by brain structures associated with planning and motor programming might serve an important role in perceptual organization, particularly in aspects related to selective attention.
Some twenty years ago, Bogen and Woodward challenged one of the fundamental assumptions of the received view, namely the theory-observation dichotomy, and argued for the introduction of the further category of scientific phenomena. The latter, Bogen and Woodward stressed, are usually unobservable and inferred from what is indeed observable, namely scientific data. Crucially, Bogen and Woodward claimed that theories predict and explain phenomena, but not data. But then, of course, the thesis of theory-ladenness, which has it that our observations are influenced by the theories we hold, cannot apply. On the basis of two case studies, I want to show that this consequence of Bogen and Woodward’s account is rather unrealistic. More importantly, I also object against Bogen and Woodward’s view that the reliability of data, which constitutes the precondition for data-to-phenomena inferences, can be secured without the theory one seeks to test. The case studies I revisit have figured heavily in the publications of Bogen and Woodward and others: the discovery of weak neutral currents and the discovery of the zebra pattern of magnetic anomalies. I show that, in the latter case, data can be ignored if they appear to be irrelevant from a particular theoretical perspective (TLI) and that, in the former case, the tested theory can be critical for the assessment of the reliability of the data (TLA). I argue that both TLI and TLA are much stronger senses of theory-ladenness than the classical thesis and that neither TLI nor TLA can be accommodated within Bogen and Woodward’s account.
Kant’s claim that we are ignorant of things in themselves is a claim that we cannot know ‘the intrinsic nature of things’, or so at least I argued in Kantian Humility. I’m delighted to find that Lucy Allais is in broad agreement with this core idea, thinking it represents, at the very least, a part of Kant’s view. She sees some of the advantages of this interpretation. It has significant textual support. It does justice to Kant’s sense that we are missing out on something, in our failure to know things as they are in themselves. And it makes tellable, after all, Kant’s at first sight untellable tale, about the knowable existence of unknowable things: for we can know that things exist, without knowing what their intrinsic properties are. However, Allais is critical of the way I fill out this core idea, and she has an alternative to offer. She thinks Kant’s distinction between things in themselves and phenomena is not a distinction between two kinds of properties, intrinsic and relational. She is critical of my interpretation of causal powers, which I take to be the relevant relational properties: my idea, first, that causal powers are in fact relational properties; second, that causal powers are only contingently associated with intrinsic properties, so that creating substances with intrinsic properties is insufficient for creating causal power; and, third, that intrinsic properties are causally inert. Her criticisms of these three ideas...
This paper is about mechanisms and models, and how they interact. In part, it is a response to recent discussion in philosophy of biology regarding whether natural selection is a mechanism. We suggest that this debate is indicative of a more general problem that occurs when scientists produce mechanistic models of populations and their behaviour. We can make sense of claims that there are mechanisms that drive population-level phenomena such as macroeconomics, natural selection, ecology, and epidemiology. But talk of mechanisms and mechanistic explanation evokes objects with well-defined and localisable parts which interact in discrete ways, while models of populations include parts and interactions that are neither local nor discrete in any actual populations. This apparent tension can be resolved by carefully distinguishing between the properties of a model and those of the system it represents. To this end, we provide an analysis that recognises the flexible relationship between a mechanistic model and its target system. In turn, this reveals a surprising feature of mechanistic representation and explanation: it can occur even when there is a mismatch between the mechanism of the model and that of its target. Our analysis reframes the debate, providing an alternative way to interpret scientists’ mechanism-talk, which initially motivated the issue. We suggest that the relevant question is not whether any population-level phenomenon such as natural selection is a mechanism, but whether it can be usefully modelled as though it were a particular type of mechanism.
I question Brentano's thesis that all and only mental phenomena are intentional. The common gloss on intentionality in terms of directedness does not justify the claim that intentionality is sufficient for mentality. One response to this problem is to lay down further requirements for intentionality. For example, it may be said that we have intentionality only where we have such phenomena as failure of substitution or existential presupposition. I consider a variety of such requirements for intentionality. I argue that they either fail to exclude all non-mental phenomena or are so demanding that they ground new, serious challenges to the claim that qualitative states of mind are intentional.
A distinction is made between theory-driven and phenomenological models. It is argued that phenomenological models are significant means by which theory is applied to phenomena. They act both as sources of knowledge of their target systems and are explanatory of the behaviors of the latter. A version of the shell-model of nuclear structure is analyzed and it is explained why such a model cannot be understood as being subsumed under the theory structure of Quantum Mechanics. Thus its representational capacity does not stem from its close link to theory. It is shown that the shell model yields knowledge about the target and is explanatory of certain behaviors of nuclei. Aspects of the process by which the shell model acquires its representational capacity are analyzed. It is argued that these point to the conclusion that the representational status of the model is a function of its capacity to function as a source of knowledge and its capacity to postulate and explain underlying mechanisms that give rise to the observed behavior of its target.
Bogen and Woodward claim that the function of scientific theories is to account for 'phenomena', which they describe both as investigator-independent constituents of the world and as corresponding to patterns in data sets. I argue that, if phenomena are considered to correspond to patterns in data, it is inadmissible to regard them as investigator-independent entities. Bogen and Woodward's account of phenomena is thus incoherent. I offer an alternative account, according to which phenomena are investigator-relative entities. All the infinitely many patterns that data sets exhibit have equal intrinsic claim to the status of phenomenon: each investigator may stipulate which patterns correspond to phenomena for him or her. My notion of phenomena accords better both with experimental practice and with the historical development of science.
In this paper I argue, against van Fraassen's constructive empiricism, that the practice of saving phenomena is much broader than usually thought, and includes unobservable phenomena as well as observable ones. My argument turns on the distinction between data and phenomena: I discuss how unobservable phenomena manifest themselves in data models and how theoretical models able to save them are chosen. I present a paradigmatic case study taken from the history of particle physics to illustrate my argument. The first aim of this paper is to draw attention to the experimental practice of saving unobservable phenomena, which philosophers have overlooked for too long. The second aim is to explore some far-reaching implications this practice may have for the debate on scientific realism and constructive empiricism.
Thought experiment acquires evidential significance only on particular metaphysical assumptions. These include the thesis that science aims at uncovering "phenomena" (universal and stable modes in which the world is articulated) and the thesis that phenomena are revealed imperfectly in actual occurrences. Only on these Platonically inspired assumptions does it make sense to bypass experience of actual occurrences and perform thought experiments. These assumptions are taken to hold in classical physics and other disciplines, but not in sciences that emphasize variety and contingency, such as Aristotelian natural philosophy and some forms of historiography. This explains why thought experiments carry weight in the former but not the latter disciplines.
This paper explores how data serve as evidence for phenomena. In contrast to standard philosophical models which invite us to think of evidential relationships as logical relationships, I argue that evidential relationships in the context of data-to-phenomena reasoning are empirical relationships that depend on the right sort of pattern of counterfactual dependence holding between the data and the conclusions investigators reach about the phenomena themselves.
Quantum gravity is supposed to be the most fundamental theory, including a quantum theory of the metrical field (spacetime). However, it is not clear how a quantum theory of gravity could account for classical phenomena, including notably measurement outcomes. But all the evidence that we have for a physical theory is based on measurement outcomes. We consider this problem in the framework of canonical quantum gravity, pointing out a dilemma: all the available accounts that admit classical phenomena presuppose entities with a well-defined spatio-temporal localization (“local beables” in John Bell's terms) as primitive. But there seems to be no possibility to include such primitives in canonical quantum gravity. However, if one does not do so, it is not clear how entities that are supposed to be ontologically prior to spacetime could give rise to entities that then are spatio-temporally localized.
Experimental engineering models have been used both to model general phenomena, such as the onset of turbulence in fluid flow, and to predict the performance of machines of particular size and configuration in particular contexts. Various sorts of knowledge are involved in the method: logical consistency, general scientific principles, laws of specific sciences, and experience. I critically examine three different accounts of the foundations of the method of experimental engineering models (scale models), and examine how theory, practice, and experience are involved in employing the method to obtain practical results. Models of machines and mechanisms can be (and generally are) involved in establishing criteria for similar phenomena, which provide guidance in using events to model other events. Conversely, models of phenomena such as events that model other events can be (and generally are) involved in experimentation on models of machines. I conclude that often it is not more detailed models or the more precise equations they engender that leads to better understanding, but rather an insightful use of knowledge at hand to determine which similarity principles are appropriate in allowing us to infer what we do not know from what we are able to observe.
Explanatory problems in the philosophy of neuroscience are not well captured by the division between the radical and the trivial neuron doctrines. The actual problem is, instead, whether mechanistic biological explanations across different levels of description can be extended to account for psychological phenomena. According to cognitive neuroscience, some neural levels of description at least are essential for the explanation of psychological phenomena, whereas, in traditional cognitive science, psychological explanations are completely independent of the neural levels of description. The challenge for cognitive neuroscience is to discover the levels of description appropriate for the neural explanation of psychological phenomena.
Philosophical discussions of biological classification have failed to recognise the central role of homology in the classification of biological parts and processes. One reason for this is a misunderstanding of the relationship between judgments of homology and the core explanatory theories of biology. The textbook characterisation of homology as identity by descent is commonly regarded as a definition. I suggest instead that it is one of several attempts to explain the phenomena of homology. Twenty years ago the ‘new experimentalist’ movement in philosophy of science drew attention to the fact that many experimental phenomena have a ‘life of their own’: the conviction that they are real is not dependent on the theories used to characterise and explain them. I suggest that something similar can be true of descriptive phenomena, and that many homologies are phenomena of this kind. As a result the descriptive biology of form and function has a life of its own—a degree of epistemological independence from the theories that explain form and function. I also suggest that the two major ‘homology concepts’ in contemporary biology, usually seen as two competing definitions, are in reality complementary elements of the biological explanation of homology.
The distinction between data and phenomena introduced by Bogen and Woodward (Philosophical Review 97(3):303–352, 1988) was meant to help account for scientific practice, especially in relation to scientific theory testing. Their article and the subsequent discussion are primarily viewed as internal to philosophy of science. We shall argue that the data/phenomena distinction can be used much more broadly in modelling processes in philosophy.
In this philosophical paper, I discuss and illustrate the three necessary ingredients that together could allow a collective phenomenon to be labelled as “emergent.” First, the phenomenon, as usual, requires a group of natural objects entering into a non-linear relationship and potentially entailing the existence of various semantic descriptions depending on the human scale of observation. Second, this phenomenon has to be observed by a mechanical observer instead of a human one, which has the natural capacity for temporal or spatial integration, or both. Finally, for this natural observer to detect and select the collective phenomenon, it needs to do so on account of the adaptive advantage this phenomenon is responsible for. The necessity for such a teleological characterization and the presence of natural selection drive us to defend, with many authors, the idea that emergent phenomena should belong only to biology. Following a brief philosophical plea, we present a simple and illustrative computer thought experiment in which a society of agents evolves a stigmergic collective behavior as an outcome of its greater adaptive value. The three ingredients are illustrated and discussed within this experimental context. Such an inclusion of the mechanical observer and the natural selection to which this phenomenon is submitted should underlie the necessary de-subjectivation that strengthens any scientific endeavor. I shall finally show why the short paths taken by ant colonies, the collective flying of birds and the maximum consumption of nutrients by a cellular metabolism are strongly emergent.
In semiclassical mechanics one finds explanations of quantum phenomena that appeal to classical structures. These explanations are prima facie problematic insofar as the classical structures they appeal to do not exist. Here I defend the view that fictional structures can be genuinely explanatory by introducing a model-based account of scientific explanation. Applying this framework to the semiclassical phenomenon of wavefunction scarring, I argue that not only can the fictional classical trajectories explain certain aspects of this quantum phenomenon, but also that an explanation that does not make reference to these classical structures is, in a certain sense, deficient.
Providing an overview of Integral Ecology, this article defines and explains some of the key terms and concepts that underlie an approach to the environment that is inspired by and makes use of Ken Wilber's Integral Theory. First, Integral Ecology is distinguished from other environmental approaches. Then Wilber's Integral Theory is introduced, which provides a foundation for a participatory approach to ecology. Next, the ontology, epistemology, and methodology of environmental phenomena are examined in light of Wilber's framework and illustrated with multidimensional examples of recycling. Finally, an Integral Ecology platform is presented.
Newton's methodology emphasized propositions "inferred from phenomena." These rest on systematic dependencies that make phenomena measure theoretical parameters. We consider the inferences supporting Newton's inductive argument that gravitation is proportional to inertial mass. We argue that the support provided by these systematic dependencies is much stronger than that provided by bootstrap confirmation; this kind of support thus avoids some of the major objections against bootstrapping. Finally we examine how contemporary testing of equivalence principles exemplifies this Newtonian methodological theme.
I take Newton's arguments to inverse square centripetal forces from Kepler's harmonic and areal laws to be classic deductions from phenomena. I argue that the theorems backing up these inferences establish systematic dependencies that make the phenomena carry the objective information that the propositions inferred from them hold. A review of the data supporting Kepler's laws indicates that these phenomena are Whewellian colligations: generalizations corresponding to the selection of a best-fitting curve for an open-ended body of data. I argue that the information-theoretic features of Newton's corrections of the Keplerian phenomena to account for perturbations introduced by universal gravitation show that these corrections do not undercut the inferences from the Keplerian phenomena. Finally, I suggest that all of Newton's impressive applications of universal gravitation to account for motion phenomena show an attempt to deliver explanations that share these salient features of his classic deductions from phenomena.
Assuming an essential difference between scientific data and phenomena, this paper argues for the view that we have to understand how empirical findings get transformed into scientific phenomena. The work of scientists is seen as largely consisting in constructing these phenomena which are then utilized in more abstract theories. It is claimed that these matters are of importance for discussions of theory choice and progress in science. A case study is presented as a starting point: paleomagnetism and the use of paleomagnetic data in early discussions of continental drift. Some general features of this study are presented in formalized language. It is suggested that the presentation given is particularly suited for a semantic conception of theories. Even though the construction of scientific phenomena is the main topic of this paper, the view presented here is more adapted to realism than social constructivism.
Entire book, single file: Boolean Relation Theory and the Incompleteness Phenomena. 10/30/07 version; same as the 10/01/07 version with a Preface added. 568 pages without Appendix B. See above for Appendix B by Francoise Point.
In the third book in the trilogy that includes Reduction and Givenness and Being Given, Marion renews his argument for a phenomenology of givenness, with penetrating analyses of the phenomena of event, idol, flesh, and icon. Turning explicitly to hermeneutical dimensions of the debate, Marion masterfully draws together issues emerging from his close reading of Descartes and Pascal, Husserl and Heidegger, Levinas and Henry. Concluding with a revised version of his response to Derrida, In the Name: How to Avoid Speaking of It, Marion powerfully re-articulates the theological possibilities of phenomenology.
Bogen and Woodward (1988) advance a distinction between data and phenomena. Roughly, the former are the observations reported by experimental scientists, the latter are objective, stable features of the world to which scientists infer based on patterns in reliable data. While phenomena are explained by theories, data are not, and so the empirical basis for an inference to a theory consists in claims about phenomena. McAllister (1997) has recently offered a critique of their version of this distinction, offering in its place a version on which phenomena are theory laden, and hence on which the empirical support for inferences to theories is also, unavoidably, theory laden. In this commentary I argue that McAllister and Bogen and Woodward are mistaken in thinking that the distinction is necessary, and that the empirical support for inferences to theories is not necessarily theory laden in the way McAllister's account entails.
Bogen and Woodward characterized data as embedded in the context in which they are produced (‘local’) and claims about phenomena as retaining their significance beyond that context (‘nonlocal’). This view does not fit sciences such as biology, which successfully disseminate data via packaging processes that include appropriate labels, vehicles, and human interventions. These processes enhance the evidential scope of data and ensure that claims about phenomena are understood in the same way across research communities. I conclude that the degree of locality of both data and claims about phenomena varies depending on the packaging used to make them travel and on the research setting in which they are used.
The papers collected here are the result of an international symposium, Data · Phenomena · Theories: What’s the notion of a scientific phenomenon good for?, held in Heidelberg in September 2008. The event was organized by the research group Causality, Cognition, and the Constitution of Scientific Phenomena in cooperation with the Philosophy Department at the University of Heidelberg (Peter McLaughlin and Andreas Kemmerling) and the IWH Heidelberg. The symposium was supported by the Emmy-Noether-Programm der Deutschen Forschungsgemeinschaft and by the Stiftung Universität Heidelberg. The workshop was held in honor of Daniela Bailer-Jones, who died on 13 November 2006 at the age of 37 (cf. my 2007 Daniela Bailer-Jones). Bailer-Jones was an Emmy Noether fellow, and the symposium was arranged and run by those who were working in her research group at the time of her death: Monika Dullstein, Jochen Apel, and Pavel Radchencko. To them goes the credit for the conception, planning, and carrying out of the symposium.
The term "mechanism" is frequently encountered in the social science literature, but there is considerable confusion about the exact meaning of the term. The article begins by addressing the main conceptual issues. Use of this term is the hallmark of an approach that is critical of the explanatory deficits of correlational analysis and of the covering-law model, advocating instead the causal reconstruction of the processes that account for given macro-phenomena. The term "social mechanisms" should be used to refer to recurrent processes generating a specific kind of outcome. Explanation of social macro-phenomena by mechanisms typically involves causal regression to lower-level elements, as stipulated by methodological individualism. While there exist a good many mechanism models to explain emergent effects of collective behavior, we lack a similarly systematic treatment of generative mechanisms in which institutions and specific kinds of structural configurations play the decisive role. Key Words: causal regression; correlational analysis; emergent effects; micro-macro processes; social mechanisms; structural determinants.
This paper examines Newton's argument from the phenomena to the law of universal gravitation, especially the question of how such a result could have been obtained from the evidential base on which that argument rests. Its thesis is that the crucial step was a certain application of the third law of motion, one that could only be justified by appeal to the consequences of the resulting theory; and that the general concept of interaction embodied in Newton's use of the third law most probably evolved in the course of the very investigation that led to this theory.