Modeling involves the use of false idealizations, yet there is typically a belief or hope that modeling somehow manages to deliver true information about the world. The paper discusses one possible way of reconciling truth and falsehood in modeling. The key trick is to relocate truth claims by reinterpreting an apparently false idealizing assumption so as to make clear what possibly true assertion is intended when using it. Such reinterpretations include those in terms of negligibility, applicability, tractability, early-step, and more. Elaborations are suggested concerning their precise formulations, mutual relationships, and truth-aptness.
In this commentary on Napoletani et al. (Found Sci 16:1–20, 2011), we argue that the approach the authors adopt suggests that neural nets are mathematical techniques rather than models of cognitive processing, that the general approach dates back as far as Ptolemy, and that applied mathematics is more than simply applying results from pure mathematics.
The reorganization of much scientific research around computational methods is not just a technological curiosity. It significantly reshapes the conceptual and representational resources of science, in ways that many traditional philosophical positions are ill-equipped to handle. Some illustrations of this are provided, and a consequence for the roles of science and the arts is noted.
A comparison is made between some epistemological issues arising in computer networks and standard features of social epistemology. A definition of knowledge for computational devices is provided and the topics of nonconceptual content and testimony are discussed.
Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.
Two cross-cutting taxonomies for emergence are presented, into which a variety of contemporary accounts of emergence fit. The first taxonomy consists of inferential, conceptual, and ontological emergence; the second of diachronic and synchronic emergence. The adequacy of weak emergence, a computational form of inferential emergence, is then examined, and its relationship to conceptual emergence and ontological emergence is detailed.
I discuss here a number of different kinds of diachronic emergence, noting that they differ in important ways from synchronic conceptions. I argue that Bedau’s weak emergence has an essentially historical aspect, in that there can be two indistinguishable states, one of which is weakly emergent, the other of which is not. As a consequence, weak emergence is about tokens, not types, of states. I conclude by examining the question of whether the concept of weak emergence is too weak, and note that there is at present no unifying account of diachronic and synchronic concepts of emergence.
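Bedau-style weak emergence can be pictured with a toy example (an illustrative sketch of ours, not drawn from the paper): a macrostate is weakly emergent when it is derivable from the microdynamics and initial conditions, but apparently only by simulating every intermediate step. The choice of rule 110 and the grid and step sizes below are arbitrary.

    # Toy illustration of weak emergence with an elementary cellular automaton.
    # For rules like 110, the state after n steps is derivable from the
    # microdynamics, but seemingly only by running all n updates.

    def step(cells, rule=110):
        """Apply one synchronous update of an elementary cellular automaton."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single live cell and run the microdynamics forward.
    state = [0] * 40
    state[20] = 1
    for _ in range(30):
        state = step(state)

    # This macrostate was reached only by the full simulation; no known
    # shortcut derivation predicts it directly from the initial state.
    print("".join("#" if c else "." for c in state))

On the token-level reading above, whether the printed state counts as weakly emergent depends on its derivational history rather than on the state itself, which is why two indistinguishable states can differ in emergence status.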
Four interpretations of single-case conditional propensities are described, and it is shown that for each a version of what has been called ‘Humphreys’ Paradox’ remains, despite the clarifying work of Gillies, McCurdy and Miller. This entails that propensities cannot be a satisfactory interpretation of standard probability theory. Contents: 1 Introduction; 2 The basic issue; 3 The formal paradox; 4 Values of conditional propensities; 5 Interpretations of propensities; 6 McCurdy’s response; 7 Miller’s response; 8 Other possibilities (8.1 Temporal evolution; 8.2 Renormalization; 8.3 Causal influence); 9 Propensities to generate frequencies; 10 Conclusion.
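The formal tension can be reconstructed in outline (our sketch of the standard presentation, not a quotation from the paper). Let A occur at time t1 and B at a later time t2, and read the conditional probability as a single-case propensity:

    % Minimal reconstruction of the tension behind Humphreys' Paradox.
    \[
      \Pr(A_{t_1} \mid B_{t_2})
      \;=\;
      \frac{\Pr(B_{t_2} \mid A_{t_1})\,\Pr(A_{t_1})}{\Pr(B_{t_2})}
      \qquad \text{(Bayes' theorem)}
    \]
    % Standard probability theory makes the left-hand side well defined
    % whenever Pr(B_{t_2}) > 0. But a propensity is a forward-looking causal
    % disposition, so Pr(A_{t_1} | B_{t_2}) would have to be the propensity
    % of an earlier event conditional on a later one -- exactly the reading
    % a causal interpretation of propensities rules out.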
Wesley Salmon provided three classic criteria of adequacy for satisfactory interpretations of probability. A fourth criterion is suggested here. A distinction is drawn between frequency-driven probability models and theory-driven probability models, and it is argued that single-case accounts of chance are superior to frequency accounts at least for the latter. Finally it is suggested that theories of chance should be required only to be contingently true, a position which is a natural extension of Salmon’s ontic account of probabilistic causality and his own later views on propensities.
There have been many efforts to infer causation from association by using statistical models. Algorithms for automating this process are a more recent innovation. In Humphreys and Freedman [(1996) British Journal for the Philosophy of Science 47, 113–123] we showed that one such approach, by Spirtes et al., was fatally flawed. Here we put our arguments in a broader context and reply to Korb and Wallace [(1997) British Journal for the Philosophy of Science 48, 543–553] and to Spirtes et al. [(1997) British Journal for the Philosophy of Science 48, 555–568]. Their arguments leave our position unchanged: claims to have developed a rigorous engine for inferring causation from association are premature at best, the theorems have no implications for samples of any realistic size, and the examples used to illustrate the algorithms are indicative of failure rather than success. The gap between association and causation has yet to be bridged.
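The kind of inference at issue can be pictured with a toy example (ours, not taken from the exchange): two variables that share only a common cause are marginally associated yet conditionally independent given that cause, and conditional-independence tests of roughly this sort drive the edge-removal steps in algorithms like those of Spirtes et al.

    # Toy sketch: association induced by a common cause z disappears
    # once z is conditioned on, which is the signal such algorithms use
    # to remove the direct edge between x and y.
    import numpy as np
    from scipy import stats

    def partial_corr(x, y, z):
        """Correlate the residuals of x and y after regressing each on z."""
        zz = np.column_stack([z, np.ones_like(z)])
        rx = x - zz @ np.linalg.lstsq(zz, x, rcond=None)[0]
        ry = y - zz @ np.linalg.lstsq(zz, y, rcond=None)[0]
        return stats.pearsonr(rx, ry)

    rng = np.random.default_rng(0)
    z = rng.normal(size=2000)          # common cause
    x = z + rng.normal(size=2000)      # z -> x
    y = z + rng.normal(size=2000)      # z -> y; x and y are not directly linked

    print(stats.pearsonr(x, y))        # strong marginal association
    print(partial_corr(x, y, z))       # near-zero partial correlation given z

Humphreys and Freedman’s complaint is not with this arithmetic but with the further step from such test results to causal conclusions at realistic sample sizes.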
I argue that supervenience is an inadequate device for representing relations between different levels of phenomena. I then provide six criteria that emergent phenomena seem to satisfy. Using examples drawn from macroscopic physics, I suggest that such emergent features may well be quite common in the physical realm.
A framework for representing a specific kind of emergent property instance is given. A solution to a generalized version of the exclusion argument is then provided, and it is shown that upward and downward causation are unproblematic for that kind of emergence. One real example of this kind of emergence is briefly described, and it is suggested that emergence may be more common than current opinion allows.
I argue here that modern computational science requires a number of changes in the way we represent the relationship between theory and applications. It requires a switch away from the logical reconstruction of theories in order to take surface mathematical syntax seriously. In addition, syntactically different versions of the same theory have important differences for applications, and this shows that the semantic account of theories is inappropriate for some purposes. I also argue against formalist approaches in the philosophy of science and for a greater role for perceptual knowledge, rather than propositional knowledge, in scientific empiricism.
The process of constructing mathematical models is examined, and a case is made that the construction process is an integral part of the justification for the model. The role of heuristics in testing and modifying models is described, and some consequences for scientific methodology are drawn out. Three different ways of constructing the same model are detailed to illustrate these claims.
This article provides a survey of some of the reasons why computational approaches have become a permanent addition to the set of scientific methods. The reasons for this require us to represent the relation between theories and their applications in a different way than do the traditional logical accounts extant in the philosophical literature. A working definition of computer simulations is provided and some properties of simulations are explored by considering an example from quantum chemistry.
The elements of structural models used in the social sciences are built up from four fundamental assumptions. It is then shown how the central idea of qualitative probabilistic causality follows as a special case of this covariational account. The relationships of both instrumentalism and common cause arguments for scientific realism to these structures are demonstrated. It is concluded that a predictivist argument against a thoroughgoing instrumentalism can be given, and hence that the difference between experimental and non-experimental contexts is important to arguments for scientific realism.
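How qualitative probabilistic causality can fall out of a covariational structure is easiest to see in the linear case (a schematic illustration of ours, not the paper’s own four assumptions):

    % Schematic linear structural model: X exogenous, U an independent
    % disturbance, beta the structural coefficient.
    \[
      Y \;=\; \beta X + U, \qquad U \perp X
    \]
    % The covariation is fixed by the structural coefficient,
    \[
      \operatorname{Cov}(X, Y) \;=\; \beta \operatorname{Var}(X),
    \]
    % so beta > 0 delivers the qualitative relevance condition of
    % probabilistic causality: Pr(Y > y | X = x) is increasing in x.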
Existing definitions of relevance relations are essentially ambiguous outside the binary case. Hence definitions of probabilistic causality based on relevance relations, as well as probability values based on maximal specificity conditions and homogeneous reference classes, are also not uniquely specified. A 'neutral state' account of explanations, based on the author's earlier account of aleatory explanations, is provided to avoid the problem. Further reasons in support of this model are given, focusing on the dynamics of explanation. It is shown that truth in explanation need not entail maximal specificity and that probabilistic explanations should not contain a specification of probability values.
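One way to picture the ambiguity (our illustration) is with a three-valued outcome: a factor can raise the probability of one value while lowering another, so "C is relevant to E" has no unique reading once E has more than two values.

    % Binary case: relevance is unambiguous.
    \[
      C \text{ is positively relevant to } E \iff \Pr(E \mid C) > \Pr(E)
    \]
    % Three-valued case: let E take values e1, e2, e3 with
    \[
      \Pr(e_1 \mid C) > \Pr(e_1), \qquad
      \Pr(e_2 \mid C) < \Pr(e_2), \qquad
      \Pr(e_3 \mid C) = \Pr(e_3).
    \]
    % C is positively relevant to e1, negatively relevant to e2, and
    % irrelevant to e3, so "C is relevant to the outcome" is underspecified.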
It is argued in this paper that although much attention has been paid to causal chains and common causes within the literature on probabilistic causality, a primary virtue of that approach is its ability to deal with cases of multiple causation. In doing so, some ways are indicated in which contemporary sine qua non analyses of causation are too narrow (and ways in which probabilistic causality is not), and an argument by Reichenbach designed to provide a basis for the asymmetry of causation is refined. The importance of referring causal claims to an abstract model is also emphasized.
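Reichenbach’s asymmetry argument turns on the conjunctive fork; for reference, the standard formulation (not the paper’s refinement) requires a common cause C of correlated events A and B to satisfy:

    % Reichenbach's conjunctive fork: C screens off A from B.
    \begin{align*}
      \Pr(A \wedge B \mid C)      &= \Pr(A \mid C)\,\Pr(B \mid C) \\
      \Pr(A \wedge B \mid \neg C) &= \Pr(A \mid \neg C)\,\Pr(B \mid \neg C) \\
      \Pr(A \mid C) &> \Pr(A \mid \neg C), \qquad
      \Pr(B \mid C) > \Pr(B \mid \neg C)
    \end{align*}
    % These conditions jointly entail Pr(A and B) > Pr(A) Pr(B), and the
    % asymmetry of causation is tied to such forks being open toward the
    % future rather than the past.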