Reality’s next top model? Journal article in Metascience (Online ISSN 1467-9981, Print ISSN 0815-0796). DOI: 10.1007/s11016-010-9475-3. Author: Jaakko Kuorikoski, Philosophy of Science Group / Social and Moral Philosophy, University of Helsinki, P.O. Box 24, 00014 Helsinki, Finland.
This paper aims to provide a Humean metaphysics for the interventionist theory of causation. It does so by appealing to the hierarchical picture of causal relations as being realized by mechanisms, which are in turn identified with lower-level causal structures. The modal content of invariances at the lowest level of this hierarchy, at which mechanisms are reduced to strict natural laws, is then explained in terms of projectivism based on the best-system view of laws.
Probabilistic phenomena are often perceived as problematic targets for contrastive explanation. It is usually thought that the possibility of contrastive explanation hinges on whether the probabilistic behaviour is irreducibly indeterministic, and that the remaining possible contrastive explananda are token event probabilities or complete probability distributions over such token outcomes. This paper uses the invariance-under-interventions account of contrastive explanation to argue against both ideas. First, the problem of contrastive explanation also arises in cases in which the probabilistic behaviour of the explanandum is due to unobserved causal heterogeneity. Second, it turns out that, in contrast to the case of pure indeterminism, the plausible contrastive explananda under causal heterogeneity are not token event probabilities but population-level statistical facts.
Evolution is often characterized as a tinkerer that creates efficient but messy solutions to problems. We analyze the nature of the problems that arise when we try to explain and understand cognitive phenomena created by this haphazard design process. We present a theory of explanation and understanding and apply it to a case study: solutions generated by genetic algorithms. By analyzing the nature of the solutions that genetic algorithms produce for computational problems, we show that the reason why evolutionary designs are often hard to understand is that they exhibit non-modular functionality, and that breaches of modularity wreak havoc on our strategies of causal and constitutive explanation.
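To make the kind of "haphazard design process" discussed in the abstract concrete, here is a minimal, hypothetical genetic-algorithm sketch (not taken from the paper itself). It evolves bitstrings toward a simple objective via selection, crossover, and mutation; the point is that the resulting solution is produced by blind variation and selection rather than by a designer's functional decomposition, which is why real evolved solutions can resist modular analysis. All names and parameters below are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed for reproducibility of this toy run


def fitness(bits):
    # Toy objective: count of 1s. Real evolved solutions rarely have
    # such a transparent, modular fitness-to-structure mapping.
    return sum(bits)


def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]


def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]


def evolve(pop_size=40, length=20, generations=60):
    # Random initial population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fitter half unchanged.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated offspring of random parents.
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)


best = evolve()
print(fitness(best))
```

Because selection here is elitist, fitness improves monotonically across generations; yet even in this toy case, nothing in the final bitstring's structure records *why* any particular bit is set, which is a small-scale analogue of the opacity the abstract attributes to evolved designs.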
Whether simulation models provide the right kind of understanding, comparable to that of analytic models, has been and remains a contentious issue. The assessment of the understanding provided by simulations is often hampered by a conflation of the sense of understanding with understanding proper. This paper presents a deflationist conception of understanding and argues for the need to replace appeals to the sense of understanding with explicit criteria of explanatory relevance, and for rethinking the proper way of conceptualizing the role of a single human mind in collective scientific understanding.
Many of the arguments for neuroeconomics rely on mistaken assumptions about criteria of explanatory relevance across disciplinary boundaries and fail to distinguish between evidential and explanatory relevance. Building on recent philosophical work on mechanistic research programmes and the contrastive counterfactual theory of explanation, we argue that explaining an explanatory presupposition or providing a lower-level explanation does not necessarily constitute explanatory improvement. Neuroscientific findings have explanatory relevance only when they inform a causal and explanatory account of the psychology of human decision-making.
Comparisons of rival explanations or theories often involve vague appeals to explanatory power. In this paper, we dissect this metaphor by distinguishing between different dimensions of the goodness of an explanation: non-sensitivity, cognitive salience, precision, factual accuracy and degree of integration. These dimensions are partially independent and often come into conflict. Our main contribution is to go beyond simple stipulation or description by explicating why these factors are taken to be explanatory virtues. We accomplish this by using the contrastive-counterfactual approach to explanation and the view of understanding as an inferential ability. By combining these perspectives, we show how the explanatory power of an explanation in a given dimension can be assessed by showing the range of answers it provides to what-if-things-had-been-different questions and the theoretical and pragmatic importance of these questions. Our account also explains intuitions linking explanation to unification or to exhibition of a mechanism.
Although there has been much recent discussion of mechanisms in philosophy of science and social theory, no shared understanding of the crucial concept itself has emerged. In this paper, a distinction between two core concepts of mechanism is drawn on the basis that they correspond to two different research strategies: the concept of mechanism as a componential causal system is associated with the heuristic of functional decomposition and spatial localization, and the concept of mechanism as an abstract form of interaction is associated with the strategy of abstraction and simple models. The causal facts assumed and the theoretical consequences entailed by an explanation with a given mechanism differ according to which concept of mechanism is in use. The research strategies associated with the mechanism concepts also involve characteristic biases that should be taken into account when using them, especially in new areas of application.
Robert Sugden argues that robustness analysis cannot play an epistemic role in grounding model-world relationships because the procedure is only a matter of comparing models with each other. We posit that this argument is based on a view of models as surrogate systems in too literal a sense. In contrast, the epistemic importance of robustness analysis is easy to explicate if modelling is viewed as extended cognition, as inference from assumptions to conclusions. Robustness analysis is about assessing the reliability of our extended inferences, and when our confidence in these inferences changes, so does our confidence in the results. Furthermore, we argue that Sugden’s inductive account tacitly relies on robustness considerations.
The invariance-under-interventions account of causal explanation imposes a modularity constraint on causal systems: a local intervention on one part of the system should not change the other causal relations in that system. This constraint has generated criticism of the account, since many ordinary causal systems seem to violate it. This paper answers this criticism by noting that explanatory models are always models of specific causal structures, not of causal systems as a whole, and that models of causal structures can have different modularity properties, which determine what can and what cannot be explained with the model.
All economic models involve abstractions and idealisations. Economic theory itself does not tell us which idealisations are truly fatal or harmful to the result and which are not. This is why much of what counts as theoretical contribution in economics consists in deriving familiar results from different modelling assumptions. If a modelling result is robust with respect to particular modelling assumptions, the empirical falsity of those assumptions does not provide grounds for criticizing the result. In this paper we demonstrate how derivational robustness analysis does carry epistemic weight, and we answer criticism concerning its non-empirical nature and the problematic form of the required independence of the ways of derivation. The epistemic rationale and importance of robustness analysis also challenge some common conceptions of the role of theory in economics.
Like other mathematically intensive sciences, economics is becoming increasingly computerized. Despite the extent of the computation, however, there is very little true simulation. Simple computation is a form of theory articulation, whereas true simulation is analogous to an experimental procedure. Successful computation is faithful to an underlying mathematical model, whereas successful simulation directly mimics a process or a system. The computer is seen as a legitimate tool in economics only when traditional analytical solutions cannot be derived, i.e., only as a purely computational aid. We argue that true simulation is seldom practiced because it does not fit the conception of understanding inherent in mainstream economics. According to this conception, understanding is constituted by analytical derivation from a set of fundamental economic axioms. We articulate this conception using the concept of economists’ perfect model. Since the deductive links between the assumptions and the consequences are not transparent in ‘bottom-up’ generative microsimulations, microsimulations cannot correspond to the perfect model and economists do not therefore consider them viable candidates for generating theories that enhance economic understanding.
The most common argument against the use of rational choice models outside economics is that they make unrealistic assumptions about individual behavior. We argue that whether the falsity of assumptions matters in a given model depends on which factors are explanatorily relevant. Since the explanatory factors may vary from application to application, effective criticism of economic model building should be based on model-specific arguments showing how the result really depends on the false assumptions. However, some modelling results in imperialistic applications are relatively robust with respect to unrealistic assumptions. Key words: unrealistic assumptions; economics imperialism; rational choice; as if; robustness.