Comparisons of rival explanations or theories often involve vague appeals to explanatory power. In this paper, we dissect this metaphor by distinguishing between different dimensions of the goodness of an explanation: non-sensitivity, cognitive salience, precision, factual accuracy and degree of integration. These dimensions are partially independent and often come into conflict. Our main contribution is to go beyond simple stipulation or description by explicating why these factors are taken to be explanatory virtues. We accomplish this by using the contrastive-counterfactual approach to explanation and the view of understanding as an inferential ability. By combining these perspectives, we show how the explanatory power of an explanation in a given dimension can be assessed by showing the range of answers it provides to what-if-things-had-been-different questions and the theoretical and pragmatic importance of these questions. Our account also explains intuitions linking explanation to unification or to exhibition of a mechanism.
We claim that the process of theoretical model refinement in economics is best characterised as robustness analysis: the systematic examination of the robustness of modelling results with respect to particular modelling assumptions. We argue that this practice has epistemic value by extending William Wimsatt's account of robustness analysis as triangulation via independent means of determination. For economists robustness analysis is a crucial methodological strategy because their models are often based on idealisations and abstractions, and it is usually difficult to tell which idealisations are truly harmful.
The article argues for the epistemic rationale of triangulation, namely, the use of multiple and independent sources of evidence. It claims that triangulation is to be understood as causal reasoning from data to phenomenon, and it rationalizes its epistemic value in terms of controlling for likely errors and biases of particular data-generating procedures. This perspective is employed to address objections against triangulation concerning the fallibility and scope of the inference, as well as problems of independence, incomparability, and discordance of evidence. The debate on the existence of social preferences is used as an illustrative case.
Robert Sugden argues that robustness analysis cannot play an epistemic role in grounding model-world relationships because the procedure is only a matter of comparing models with each other. We posit that this argument is based on a view of models as being surrogate systems in too literal a sense. In contrast, the epistemic importance of robustness analysis is easy to explicate if modelling is viewed as extended cognition, as inference from assumptions to conclusions. Robustness analysis is about assessing the reliability of our extended inferences, and when our confidence in these inferences changes, so does our confidence in the results. Furthermore, we argue that Sugden’s inductive account relies tacitly on robustness considerations.
This paper provides an inferentialist account of model-based understanding by combining a counterfactual account of explanation and an inferentialist account of representation with a view of modeling as extended cognition. This account makes it understandable how the manipulation of surrogate systems like models can provide genuinely new empirical understanding about the world. Similarly, the account provides an answer to the question of how models, which always incorporate assumptions that are literally untrue of the model target, can still provide factive explanations. Finally, the paper shows how the contrastive counterfactual theory of explanation can provide tools for assessing the explanatory power of models.
If ontic dependence is the basis of explanation, there cannot be mathematical explanations. Accounting for the explanatory dependency between mathematical properties and empirical phenomena poses i...
Odenbaugh and Alexandrova provide a challenging critique of the epistemic benefits of robustness analysis, singling out for particular criticism the account we articulated in Kuorikoski et al. Odenbaugh and Alexandrova offer two arguments against the confirmatory value of robustness analysis: robust theorems cannot specify causal mechanisms, and models are rarely independent in the way required by robustness analysis. We address Odenbaugh and Alexandrova’s criticisms in order to clarify some of our original arguments and to shed further light on the properties of robustness analysis and its epistemic rationale.
The most common argument against the use of rational choice models outside economics is that they make unrealistic assumptions about individual behavior. We argue that whether the falsity of assumptions matters in a given model depends on which factors are explanatorily relevant. Since the explanatory factors may vary from application to application, effective criticism of economic model building should be based on model-specific arguments showing how the result really depends on the false assumptions. However, some modeling results in imperialistic applications are relatively robust with respect to unrealistic assumptions. Key words: unrealistic assumptions; economics imperialism; rational choice; as if; robustness.
We investigate the applicability of Rodrik’s accounts of model selection and horizontal progress to macroeconomic DSGE modelling in both academic and policy-oriented modelling contexts. We argue that the key step of identifying critical assumptions is complicated by the interconnectedness of the common structural core of DSGE models and by the ad hoc modifications introduced to model various rigidities and other market imperfections. We then outline alternative ways in which macroeconomic modelling could become more horizontally progressive.
Like other mathematically intensive sciences, economics is becoming increasingly computerized. Despite the extent of the computation, however, there is very little true simulation. Simple computation is a form of theory articulation, whereas true simulation is analogous to an experimental procedure. Successful computation is faithful to an underlying mathematical model, whereas successful simulation directly mimics a process or a system. The computer is seen as a legitimate tool in economics only when traditional analytical solutions cannot be derived, i.e., only as a purely computational aid. We argue that true simulation is seldom practiced because it does not fit the conception of understanding inherent in mainstream economics. According to this conception, understanding is constituted by analytical derivation from a set of fundamental economic axioms. We articulate this conception using the concept of economists' perfect model. Since the deductive links between the assumptions and the consequences are not transparent in 'bottom-up' generative microsimulations, microsimulations cannot correspond to the perfect model and economists do not therefore consider them viable candidates for generating theories that enhance economic understanding.
Although there has been much recent discussion on mechanisms in philosophy of science and social theory, no shared understanding of the crucial concept itself has emerged. In this paper, a distinction between two core concepts of mechanism is made on the basis that the concepts correspond to two different research strategies: the concept of mechanism as a componential causal system is associated with the heuristic of functional decomposition and spatial localization, and the concept of mechanism as an abstract form of interaction is associated with the strategy of abstraction and simple models. The causal facts assumed and the theoretical consequences entailed by an explanation with a given mechanism differ according to which concept of mechanism is in use. Research strategies associated with mechanism concepts also involve characteristic biases that should be taken into account when using them, especially in new areas of application.
Human behavior is not always independent of the ways in which humans are scientifically classified. That there are looping effects of human kinds has been used as an argument for the methodological separation of the natural and the human sciences and to justify social constructionist claims. We suggest that these arguments rely on false presuppositions and present a mechanisms-based account of looping that provides a better way to understand the phenomenon and its theoretical and philosophical implications.
Whether simulation models provide the right kind of understanding comparable to that of analytic models has been and remains a contentious issue. The assessment of understanding provided by simulations is often hampered by a conflation between the sense of understanding and understanding proper. This paper presents a deflationist conception of understanding and argues for the need to replace appeals to the sense of understanding with explicit criteria of explanatory relevance, and for rethinking the proper way of conceptualizing the role of a single human mind in collective scientific understanding.
We review the most prominent modeling approaches in social epistemology aimed at understanding the functioning of epistemic communities and provide a philosophy of science perspective on the use and interpretation of such simple toy models, thereby suggesting how they could be integrated with conceptual and empirical work. We highlight the need for better integration of such models with relevant findings from disciplines such as social psychology and organization studies.
Many of the arguments for neuroeconomics rely on mistaken assumptions about criteria of explanatory relevance across disciplinary boundaries and fail to distinguish between evidential and explanatory relevance. Building on recent philosophical work on mechanistic research programmes and the contrastive counterfactual theory of explanation, we argue that explaining an explanatory presupposition or providing a lower-level explanation does not necessarily constitute explanatory improvement. Neuroscientific findings have explanatory relevance only when they inform a causal and explanatory account of the psychology of human decision-making.
"Political science and economic science . . . make use of the same language, the same mode of abstraction, the same instruments of thought and the same method of reasoning." (Black 1998, 354) Proponents as well as opponents of economics imperialism agree that imperialism is a matter of unification: providing a unified framework for social scientific analysis. Uskali Mäki distinguishes between derivational and ontological unification and argues that the latter should serve as a constraint for the former. We explore whether, in the case of rational-choice political science, self-interested behavior can be seen as a common causal element and solution concepts as the common derivational element, and whether the former constrains the use of the latter. We find that this is not the case. Instead, what is common to economics and rational-choice political science is a set of research heuristics and a focus on institutions with similar structures and forms of organization.
Mechanisms are often characterized as causal structures, and the interventionist account of causation is then used to characterize what it is to be a causal structure. The associated modularity constraint on causal structures has evoked criticism against using the theory as an account of mechanisms, since many mechanisms seem to violate modularity. This paper answers this criticism by making a distinction between a causal system and a causal structure. It makes sense to ask what the modularity properties of a given causal structure are, but not whether a causal system is modular tout court. The counter-examples to the interventionist account are systems in which a particular structure is modular in variables, but not in parameters. A failure of parameter-modularity does not by itself threaten the interventionist interpretation of the structure and the possibility of causally explaining with that structure, but it does mean that knowledge of the structure is not sufficient to constitutively explain system-level properties of the embedding system.
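To make the variable/parameter distinction concrete, here is a minimal sketch; the mechanisms, the parameter k, and all values are invented for illustration and are not drawn from the paper. The structure is modular in variables, since X can be set by intervention without disturbing the other mechanisms, but not in parameters, since k figures in two mechanisms at once.

```python
# Hypothetical toy structural model (not from the paper).
# The same parameter k appears in two mechanisms, so the structure is
# modular in variables (X can be set by intervention while the Y- and
# Z-mechanisms stay intact) but not in parameters (changing k alters
# both mechanisms at once).
k = 2.0

def f_y(x):          # mechanism for Y
    return k * x

def f_z(y):          # mechanism for Z; shares the parameter k
    return y / k + 1

x = 3.0              # intervention: set X directly; other mechanisms unchanged
y = f_y(x)
z = f_z(y)
print(x, y, z)       # changing k above would shift both f_y and f_z together
```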
Constitutive mechanistic explanations explain a property of a whole with the properties of its parts and their organization. Carl Craver’s mutual manipulability criterion for constitutive relevance only captures the explanatory relevance of causal properties of parts and leaves the organization side of mechanistic explanation unaccounted for. We use the contrastive counterfactual theory of explanation and an account of the dimensions of organization to build a typology of organizational dependence. We analyse organizational explanations in terms of such dependencies and emphasize the importance of modular organizational motifs. We apply this framework to two cases from social science and systems biology, both fields in which organization plays a crucial explanatory role: agent-based simulations of residential segregation and the recent work on network motifs in transcription regulation networks.
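For readers unfamiliar with the first case, the following is a minimal Schelling-style segregation sketch in Python. It is purely illustrative: the grid size, tolerance threshold, and population mix are hypothetical placeholders, not the specification used in the paper.

```python
# Minimal Schelling-style segregation model (hypothetical parameters).
import random

GRID, THRESHOLD = 20, 0.3  # grid side length; minimum share of same-type neighbours

def neighbours(grid, x, y):
    # Moore neighbourhood on a torus.
    return [grid[(x + dx) % GRID][(y + dy) % GRID]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(grid, x, y):
    me = grid[x][y]
    if me is None:
        return False
    occ = [v for v in neighbours(grid, x, y) if v is not None]
    return bool(occ) and sum(v == me for v in occ) / len(occ) < THRESHOLD

def step(grid):
    # Unhappy agents move to a randomly chosen empty cell.
    empties = [(x, y) for x in range(GRID) for y in range(GRID) if grid[x][y] is None]
    movers = [(x, y) for x in range(GRID) for y in range(GRID) if unhappy(grid, x, y)]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

def mean_same_type_share(grid):
    shares = []
    for x in range(GRID):
        for y in range(GRID):
            if grid[x][y] is None:
                continue
            occ = [v for v in neighbours(grid, x, y) if v is not None]
            if occ:
                shares.append(sum(v == grid[x][y] for v in occ) / len(occ))
    return sum(shares) / len(shares)

random.seed(0)
cells = [0] * 180 + [1] * 180 + [None] * 40  # two types plus empty cells
random.shuffle(cells)
grid = [cells[i * GRID:(i + 1) * GRID] for i in range(GRID)]
print(f"before: {mean_same_type_share(grid):.2f}")
for _ in range(30):
    step(grid)
print(f"after:  {mean_same_type_share(grid):.2f}")
```

The point of the example is organizational: macro-level segregation emerges from mild individual preferences plus the spatial organization of interaction, not from any single agent's causal properties.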
All economic models involve abstractions and idealisations. Economic theory itself does not tell us which idealisations are truly fatal or harmful for the result and which are not. This is why much of what is seen as theoretical contribution in economics is constituted by deriving familiar results from different modelling assumptions. If a modelling result is robust with respect to particular modelling assumptions, the empirical falsity of these particular assumptions does not provide grounds for criticizing the result. In this paper we demonstrate how derivational robustness analysis does carry epistemic weight and answer criticism concerning its non-empirical nature and the problematic form of the required independence of the ways of derivation. The epistemic rationale and importance of robustness analysis also challenge some common conceptions of the role of theory in economics.
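To illustrate the idea of derivational robustness with a toy example that is not taken from the paper: a qualitative result (that symmetric Cournot equilibrium price falls as the number of firms grows) can be re-derived under two different demand assumptions. The sketch below uses standard textbook equilibrium formulas; all parameter values are hypothetical.

```python
# Toy derivational robustness check (hypothetical example, not from the paper):
# does the result "Cournot equilibrium price falls as the number of firms n
# grows" survive a change in the assumed demand function?

def price_linear(n, a=10.0, c=1.0):
    # Linear inverse demand P = a - Q with marginal cost c:
    # symmetric Cournot price is (a + n*c) / (n + 1).
    return (a + n * c) / (n + 1)

def price_isoelastic(n, eps=2.0, c=1.0):
    # Isoelastic demand with elasticity eps (requires n*eps > 1):
    # symmetric Cournot price is c / (1 - 1/(n*eps)).
    return c / (1 - 1 / (n * eps))

for label, price in [("linear", price_linear), ("isoelastic", price_isoelastic)]:
    prices = [price(n) for n in range(1, 11)]
    falling = all(p1 > p2 for p1, p2 in zip(prices, prices[1:]))
    print(f"{label:11s} demand: price decreasing in n? {falling}")
```

If the qualitative result holds under both specifications, the falsity of either particular demand assumption is not, by itself, grounds for rejecting it.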
This paper aims to provide a Humean metaphysics for the interventionist theory of causation. This is done by appealing to the hierarchical picture of causal relations as being realized by mechanisms, which in turn are identified with lower-level causal structures. The modal content of invariances at the lowest level of this hierarchy, at which mechanisms are reduced to strict natural laws, is then explained in terms of projectivism based on the best-system view of laws.
Evolution is often characterized as a tinkerer that creates efficient but messy solutions to problems. We analyze the nature of the problems that arise when we try to explain and understand cognitive phenomena created by this haphazard design process. We present a theory of explanation and understanding and apply it to a case problem: solutions generated by genetic algorithms. By analyzing the nature of solutions that genetic algorithms present to computational problems, we show that the reason why evolutionary designs are often hard to understand is that they exhibit non-modular functionality, and that breaches of modularity wreak havoc on our strategies of causal and constitutive explanation.
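For orientation, here is a minimal genetic-algorithm sketch on a deliberately simple bit-string problem (OneMax: maximize the number of ones). The population size, mutation rate, and truncation selection scheme are hypothetical choices, and the paper's case problems are richer, but the evolutionary loop has the same shape.

```python
# Minimal genetic algorithm on the toy OneMax problem (hypothetical parameters).
import random

random.seed(1)
BITS, POP, GENS, MUT = 40, 60, 100, 0.02

def fitness(ind):
    return sum(ind)

def crossover(a, b):
    # Single-point crossover.
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

def mutate(ind):
    # Flip each bit independently with probability MUT.
    return [bit ^ (random.random() < MUT) for bit in ind]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("best fitness:", fitness(max(pop, key=fitness)))
```

On OneMax the evolved solution is transparent, but nothing in the loop rewards transparency: on richer problems the same process routinely finds solutions whose functionality is smeared across the whole genome, which is the source of the interpretive difficulty the paper analyzes.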
Probabilistic phenomena are often perceived as being problematic targets for contrastive explanation. It is usually thought that the possibility of contrastive explanation hinges on whether or not the probabilistic behaviour is irreducibly indeterministic, and that the possible remaining contrastive explananda are token event probabilities or complete probability distributions over such token outcomes. This paper uses the invariance-under-interventions account of contrastive explanation to argue against both ideas. First, the problem of contrastive explanation also arises in cases in which the probabilistic behaviour of the explanandum is due to unobserved causal heterogeneity. Second, it turns out that, in contrast to the case of pure indeterminism, the plausible contrastive explananda under causal heterogeneity are not token event probabilities, but population-level statistical facts.
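A toy numerical example, not from the paper, of how unobserved causal heterogeneity generates probabilistic aggregate behaviour from fully deterministic individuals; the type labels and mixture proportions are invented for illustration.

```python
# Hypothetical illustration: a population mixing two deterministic types
# looks probabilistic at the aggregate level. The 0.30 treatment response
# rate is a population-level statistical fact about the unobserved mixture,
# not a token-level chance attaching to any individual.
population = ["responder"] * 300 + ["non-responder"] * 700  # unobserved mixture

def outcome(kind, treated):
    # Each individual responds deterministically given its (unobserved) type.
    return 1 if (kind == "responder" and treated) else 0

p_treated = sum(outcome(k, True) for k in population) / len(population)
p_control = sum(outcome(k, False) for k in population) / len(population)
print(f"P(Y=1 | treated) = {p_treated:.2f}")  # 0.30
print(f"P(Y=1 | control) = {p_control:.2f}")  # 0.00
```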
This paper deals with the evidential value of neuroeconomic experiments for the triangulation of economically relevant phenomena. We examine the case of social preferences, which involves bringing together evidence from behavioural experiments, neuroeconomic experiments, and observational studies from other social sciences. We present an account of triangulation and identify the conditions under which neuroeconomic evidence is diverse in the way required for successful triangulation. We also show that the successful triangulation of phenomena does not necessarily afford additional confirmation to general theories about those phenomena.
The invariance-under-interventions account of causal explanation imposes a modularity constraint on causal systems: a local intervention on a part of the system should not change other causal relations in that system. This constraint has generated criticism against the account, since many ordinary causal systems seem to break this condition. This paper answers this criticism by noting that explanatory models are always models of specific causal structures, not causal systems as a whole, and that models of causal structures can have different modularity properties which determine what can and what cannot be explained with the model.
According to the diversity-beats-ability theorem, groups of diverse problem solvers can outperform groups of high-ability problem solvers. We argue that the model introduced by Lu Hong and Scott Page is inadequate for exploring the trade-off between diversity and ability. This is because the model employs an impoverished implementation of the problem-solving task. We present a new version of the model which captures the role of ‘ability’ in a meaningful way, and use it to explore the trade-offs between diversity and ability in scientific problem solving.
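For orientation, here is a sketch of a Hong-Page-style problem-solving task: agents with different step-set heuristics search a circular random value landscape, and a group's performance is the value at which its relay of heuristics gets stuck. The landscape size, step ranges, and group sizes below are hypothetical, and the authors' modified implementation of the task is not reproduced here.

```python
# Sketch of a Hong-Page-style model (hypothetical parameters, not the
# paper's modified version).
import random
from itertools import permutations

random.seed(2)
N = 200                                    # circular landscape size
values = [random.random() for _ in range(N)]

def climb(start, heuristic):
    # Greedy search: apply the heuristic's steps until none improves.
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if values[nxt] > values[pos]:
                pos, improved = nxt, True
    return pos

def group_performance(group):
    # Agents take turns from each starting point until no one can improve.
    total = 0.0
    for start in range(N):
        pos, stuck = start, False
        while not stuck:
            stuck = True
            for h in group:
                new = climb(pos, h)
                if values[new] > values[pos]:
                    pos, stuck = new, False
        total += values[pos]
    return total / N

heuristics = list(permutations(range(1, 9), 3))        # all 3-step heuristics
ranked = sorted(heuristics, key=lambda h: group_performance([h]), reverse=True)
best_group = ranked[:9]                                # nine best solo performers
diverse_group = random.sample(heuristics, 9)           # nine random agents
print("best-ability group:", round(group_performance(best_group), 3))
print("diverse group:     ", round(group_performance(diverse_group), 3))
```

In this setup 'ability' is just solo performance on the landscape search, which is exactly the impoverished implementation the paper criticizes: nothing in the task distinguishes ability from a lucky fit between heuristic and landscape.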
The recognition that models and simulations play a central role in the epistemology of science is about fifteen years old. Although models had long been discussed as possible foundational units in the logical analysis of scientific knowledge, the philosophical study of modelling as a distinct epistemic practice really got going in the wake of the Models as Mediators anthology edited by Margaret Morrison and Mary Morgan. In spite of the broad agreement that in fact much of science is model-based, however, there is still little agreement on pretty much anything else. What are models? Are they representations or fictions, abstract entities or concrete artifacts? Which functions do they play? Can they explain...
Nudge and boost are two competing approaches to applying the psychology of reasoning and decision making to improve policy. Whereas nudges rely on manipulation of choice architecture to steer people towards better choices, the objective of boosts is to develop good decision-making competences. Proponents of both approaches claim the capacity to enhance social welfare through better individual decisions. We suggest that such efforts should involve a more careful analysis of how individual and social welfare are related in the policy context. First, individual rationality is not always sufficient or necessary for improving collective outcomes. Second, collective outcomes of complex social interactions among individuals are largely ignored by the focus of both nudge and boost on individual decisions. We suggest that the design of mechanisms and social norms can sometimes lead to better collective outcomes than nudge and boost, and present conditions under which the three approaches (nudge, boost, and design) can be expected to enhance social welfare.
In this chapter we examine hypothesis testing and experimental causal reasoning from the perspective of the philosophy of science. We assess the possibilities and limitations of the experimental method in the context of social scientific research, where the kind of universally applicable theories characteristic of the natural sciences are rarely available and where straightforward causal claims are often met with suspicion. This chapter is thus not a methods manual offering step-by-step guidance on how social scientific experiments should be constructed, but an overview of the fundamental methodological questions and principles on which the actual methods rest.
Reality’s next top model? Jaakko Kuorikoski (Philosophy of Science Group / Social and Moral Philosophy, University of Helsinki). Metascience. DOI: 10.1007/s11016-010-9475-3.