In this paper we comparatively explore three claims concerning the disciplinary character of economics by means of citation analysis. The three claims under study are: economics exhibits strong forms of institutional stratification and, as a byproduct, a rather pronounced internal hierarchy; economists strongly conform to institutional incentives; and modern mainstream economics is a largely self-referential intellectual project mostly inaccessible to disciplinary or paradigmatic outsiders. The validity of these claims is assessed by means of an interdisciplinary comparison of citation patterns aiming to identify peculiar characteristics of economic discourse. In doing so, we emphasize that citation data can always be interpreted in different ways, thereby focusing on the contrast between a “cognitive” and an “evaluative” approach towards citation data.
From 1925 to 1928, the Berlin publishing house J. M. Spaeth, under the direction of Hans Rosenkranz, published a series of works by authors who were relatively unknown at the time but who, in retrospect, proved significant figures of the interwar period. This contribution discusses Rosenkranz as a young publisher and admirer of Stefan Zweig. Drawing on archival records, it offers a new perspective on the history of the firm and comments on its literary programme: Which important publishing projects were undertaken in that short period? What role did Stefan Zweig play in bringing about several titles, particularly in the final weeks of the publisher's existence? To what extent can the programme and the economic development of J. M. Spaeth be understood as paradigmatic for Jewish publishing houses in the Weimar Republic? To this end, the firm's failure during the „Bücherkrise" (book crisis) at the end of the 1920s is reconstructed from the sources for the first time.
This article considers the prospects of inference to the best explanation (IBE) as a method of confirming causal claims vis-à-vis the medical evidence of mechanisms. I show that IBE is actually descriptive of how scientists reason when choosing among hypotheses, that it is amenable to the balance/weight distinction, a pivotal pair of concepts in the philosophy of evidence, and that it can do justice to interesting features of the interplay between mechanistic and population-level assessments.
This imaginative and unusual book explores the moral sensibilities and cultural assumptions that were at the heart of political debate in Victorian and early twentieth-century Britain. It focuses on the role of intellectuals as public moralists, and suggests ways in which their more formal political theory rested upon habits of response and evaluation that were deeply embedded in wider social attitudes and aesthetic judgements. Stefan Collini examines the characteristic idioms and strategies of argument employed in periodical and polemical writing, and reconstructs the sense of identity and of relation to an audience exhibited by social critics from John Stuart Mill and Matthew Arnold to J. M. Keynes and F. R. Leavis. Dr Collini begins by situating the leading intellectuals in the social and political world of the Victorian governing classes. He explores fundamental values like 'altruism', 'character', and 'manliness', which are revealed as the animating dynamic of much of the political thought of the period. The book assesses the impact of increasing academic specialization across a range of disciplines, and offers an illuminating analysis of the public voice of legal theorists like Maine and Dicey. Through a detailed study of J.S. Mill's posthumous reputation Dr Collini uncovers the process by which the genealogy of images of national cultural identity is established; and he concludes with a provocative exploration of the nationalist significance of what he calls 'the Whig interpretation of English literature'. Public Moralists is a subtle and illuminating study by a leading intellectual historian which will redirect debate about the distinctive development of modern English culture.
A critical pathway for conceptual innovation in the social sciences is the construction of theoretical ideas based on empirical data. Grounded theory has become a leading approach promising the construction of novel theories. Yet grounded theory-based theoretical innovation has been scarce, in part because of its commitment to let theories emerge inductively rather than imposing analytic frameworks a priori. We note, along with a long philosophical tradition, that induction does not logically lead to novel theoretical insights. Drawing from the theory of inference, meaning, and action of pragmatist philosopher Charles S. Peirce, we argue that abduction, rather than induction, should be the guiding principle of empirically based theory construction. Abduction refers to a creative inferential process aimed at producing new hypotheses and theories based on surprising research evidence. We propose that abductive analysis arises from actors' social and intellectual positions but can be further aided by careful methodological data analysis. We outline how formal methodological steps enrich abductive analysis through the processes of revisiting, defamiliarization, and alternative casing.
In his 2010 paper ‘Grounding and Truth-Functions’, Fabrice Correia has developed the first and so far only proposal for a logic of ground based on a worldly conception of facts. In this paper, we show that the logic allows the derivation of implausible grounding claims. We then generalize these results and draw some conclusions concerning the structural features of ground and its associated notion of relevance, which has so far not received the attention it deserves.
In the past few decades, a growth in ethical consumerism has led brands to increasingly develop conscientiousness and depict an ethical image at the corporate level. However, most research on business ethics in the field of corporate brand management is either conceptual or has been conducted empirically in goods/products contexts. This is surprising, given that corporate brands are more relevant in services contexts owing to the distinct nature of services and the key role that employees play in the services sector. Accordingly, this article empirically examines the effects of customer perceived ethicality in the context of corporate services brands. Based on data collected for eight service categories using a panel of 2179 customers, the hypothesized structural model is tested using path analysis. The results show that, in addition to a direct effect, customer perceived ethicality has a positive and indirect effect on customer loyalty through the mediators of customer affective commitment and customer perceived quality. Further, employee empathy positively influences the impact of customer perceived ethicality on customer affective commitment, and customer loyalty positively impacts customer positive word-of-mouth. The first implication of these results is that corporate brand strategy needs to be aligned with human resources policies and practices if brands want to turn ethical strategies into employee behavior. Second, corporate brands should build more authentic communications grounded in their ethical beliefs and supported by evidence from actual employees.
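The mediation structure described above (customer perceived ethicality feeding loyalty both directly and through affective commitment and perceived quality, with loyalty driving word-of-mouth) can be illustrated with a toy path analysis. The following is a minimal sketch on simulated data using ordinary least squares; the variable names, path coefficients, and sample values are hypothetical and are not taken from the study's data or estimates.

```python
# Hypothetical illustration of the mediation structure described in the abstract:
# customer perceived ethicality (CPE) -> affective commitment (AC), perceived quality (PQ);
# AC, PQ -> loyalty (LOY) -> positive word-of-mouth (WOM).
# Simulated data and OLS fits only; not the paper's model or estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

cpe = rng.normal(size=n)                          # customer perceived ethicality
ac = 0.5 * cpe + rng.normal(scale=0.8, size=n)    # affective commitment (mediator)
pq = 0.4 * cpe + rng.normal(scale=0.8, size=n)    # perceived quality (mediator)
loy = 0.2 * cpe + 0.4 * ac + 0.3 * pq + rng.normal(scale=0.7, size=n)  # loyalty
wom = 0.6 * loy + rng.normal(scale=0.7, size=n)   # positive word-of-mouth

def ols(y, *xs):
    """Return OLS coefficients (intercept first) of y regressed on the given predictors."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Direct effect of CPE on loyalty, controlling for the two mediators:
print("loyalty ~ cpe + ac + pq:", ols(loy, cpe, ac, pq))
# Indirect paths through the mediators, and the loyalty -> word-of-mouth link:
print("ac ~ cpe:", ols(ac, cpe))
print("pq ~ cpe:", ols(pq, cpe))
print("wom ~ loy:", ols(wom, loy))
```

Running the sketch recovers (approximately) the coefficients built into the simulation, which is all it is meant to show: how a direct effect and mediated paths are read off separate regressions in a path model of this shape.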
Against content theories of slurs, according to which slurs have some kind of derogatory content, Anderson and Lepore have objected that they cannot explain that even slurs under quotation can cause offense. If slurs had some kind of derogatory content, the argument goes, quotation would render this content inert and, thus, quoted slurs should not be offensive. Following this, Anderson and Lepore propose that slurs are offensive because they are prohibited words. In this paper, we will show that, pace Anderson and Lepore, content theories of slurs do provide an explanation of the fact that quoted slurs can cause offense: even under quotation, the explanation goes, the derogatory content of a slur can still be psychologically efficacious. We will go one step further by pointing out that offensiveness is not the only function of slurs, but that slurs can also be used to create and reinforce negative attitudes towards the target group. While content theories can easily explain this by referring to some kind of derogatory content, Anderson and Lepore’s prohibitionism will lack a satisfactory explanation of this second function of slurs. Concluding, we will argue that, unlike uses of slurs, uses of quoted slurs normally do not derogate the target group. This will again speak in favor of content theories. Accordingly, uses of quoted slurs are not derogatory because quotation renders the derogatory content inert. Hence, rather than speaking against content theories, quoted slurs speak in their favor.
The fact that the standard probabilistic calculus does not define probabilities for sentences with embedded conditionals is a fundamental problem for the probabilistic theory of conditionals. Several authors have explored ways to assign probabilities to such sentences, but those proposals have come under criticism for making counterintuitive predictions. This paper examines the source of the problematic predictions and proposes an amendment which corrects them in a principled way. The account brings intuitions about counterfactual conditionals to bear on the interpretation of indicatives and relies on the notion of causal (in)dependence.
This paper discusses counterexamples to the thesis that the probabilities of conditionals are conditional probabilities. It is argued that the discrepancy is systematic and predictable, and that conditional probabilities are crucially involved in the apparently deviant interpretations. Furthermore, the examples suggest that such conditionals have a less prominent reading on which their probability is in fact the conditional probability, and that the two readings are related by a simple step of abductive inference. Central to the proposal is a distinction between causal and purely stochastic dependence between variables.
Prospect Theory (PT) is widely regarded as the most promising descriptive model for decision making under uncertainty. Various tests have corroborated the validity of the characteristic fourfold pattern of risk attitudes implied by the combination of probability weighting and value transformation. But is it also safe to assume stable PT preferences at the individual level? This is not only an empirical but also a conceptual question. Measuring the stability of preferences in a multi-parameter decision model such as PT is far more complex than evaluating single-parameter models such as Expected Utility Theory under the assumption of constant relative risk aversion. There exist considerable interdependencies among parameters such that allegedly diverging parameter combinations could in fact produce very similar preference structures. In this paper, we provide a theoretical framework for measuring the (temporal) stability of PT parameters. To illustrate our methodology, we further apply our approach to 86 subjects for whom we elicit PT parameters twice, with a time lag of 1 month. While documenting remarkable stability of parameter estimates at the aggregate level, we find that a third of the subjects show significant instability across sessions.
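For readers unfamiliar with the model, the sketch below illustrates the multi-parameter structure at issue, using the widely cited Tversky–Kahneman functional forms (a power value function with loss aversion and an inverse-S probability weighting function). The functional forms and parameter values are conventional textbook assumptions, not the specification or the estimates elicited in this paper.

```python
# Minimal prospect-theory valuation sketch using standard Tversky-Kahneman-style forms.
# Parameter values are conventional illustrations, not the paper's elicited estimates.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function with loss aversion lam, relative to a zero reference point."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function (overweights small probabilities)."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def pt_value(prospect):
    """PT value of a prospect given as (outcome, probability) pairs.
    Separable weighting is used for simplicity; cumulative PT would use rank-dependent
    decision weights, which coincide here because each prospect has one nonzero outcome."""
    return sum(weight(p) * value(x) for x, p in prospect)

# Two cells of the fourfold pattern:
# the long-shot gain is valued above its sure expected value (risk seeking for small-p gains),
print(pt_value([(100, 0.05), (0, 0.95)]), ">", value(5))
# while the long-shot loss is valued below the sure expected loss (risk aversion for small-p losses).
print(pt_value([(-100, 0.05), (0, 0.95)]), "<", value(-5))
```

The point of the sketch is the interdependence the abstract mentions: changing alpha, lam, and gamma together can leave these comparisons, and hence the implied preferences, nearly unchanged, which is why parameter-by-parameter stability is hard to assess.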
Need considerations play an important role in empirically informed theories of distributive justice. We propose a concept of need-based justice that is related to social participation and provide an ethical measurement of need-based justice. The β-ε-index satisfies the need-principle, monotonicity, sensitivity, transfer and several »technical« axioms. A numerical example is given.
I argue that difference-making should be a crucial element for evaluating the quality of evidence for mechanisms, especially with respect to the robustness of mechanisms, and that it should take center stage when it comes to the general role played by mechanisms in establishing causal claims in medicine. The difference-making of mechanisms should provide additional compelling reasons to accept the gist of the Russo-Williamson thesis and include mechanisms in the protocols for Evidence-Based Medicine, as the EBM+ research group has been advocating.
This paper proposes a compositional model-theoretic account of the way the interpretation of indicative conditionals is determined and constrained by the temporal and modal expressions in their constituents. The main claim is that the tenses in both the antecedent and the consequent of an indicative conditional are interpreted in the same way as in isolation. This is controversial for the antecedents of predictive conditionals like ‘If he arrives tomorrow, she will leave’, whose Present tense is often claimed to differ semantically from that in their stand-alone counterparts, such as ‘He arrives tomorrow’. Under the unified analysis developed in this paper, the differences observed in pairs like these are explained by interactions between the temporal and modal dimensions of interpretation. This perspective also sheds new light on the relationship between ‘non-predictive’ and ‘epistemic’ readings of indicative conditionals.
The connection between the probabilities of conditionals and the corresponding conditional probabilities has long been explored in the philosophical literature, but its implementation faces both technical obstacles and objections on empirical grounds. In this paper I first outline the motivation for the probabilistic turn and Lewis’ triviality results, which stand in the way of what would seem to be its most straightforward implementation. I then focus on Richard Jeffrey’s ’random-variable’ approach, which circumvents these problems by giving up the notion that conditionals denote propositions in the usual sense. Even so, however, the random-variable approach makes counterintuitive predictions in simple cases of embedded conditionals. I propose to address this problem by enriching the model with an explicit representation of causal dependencies. The addition of such causal information not only remedies the shortcomings of Jeffrey’s conditional, but also opens up the possibility of a unified probabilistic account of indicative and counterfactual conditionals.
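For orientation, the probabilistic thesis at stake is that P(A → C) = P(C | A) whenever P(A) > 0. A standard reconstruction of Lewis's first triviality result, which the abstract refers to but does not necessarily derive in this form, runs as follows, on the assumption that the thesis holds for every probability function in a class closed under conditionalization:

```latex
% Standard reconstruction of Lewis's first triviality result (illustrative; not the paper's derivation).
% Assume P(A \to C) = P(C \mid A) for every P in a class closed under conditionalization,
% and that P(A \wedge C) > 0 and P(A \wedge \neg C) > 0.
\begin{align*}
P(A \to C) &= P(A \to C \mid C)\,P(C) + P(A \to C \mid \neg C)\,P(\neg C)
  && \text{(total probability)}\\
           &= P(C \mid A \wedge C)\,P(C) + P(C \mid A \wedge \neg C)\,P(\neg C)
  && \text{(thesis, applied to } P(\cdot \mid C) \text{ and } P(\cdot \mid \neg C)\text{)}\\
           &= 1 \cdot P(C) + 0 \cdot P(\neg C) \;=\; P(C).
\end{align*}
% Hence P(C \mid A) = P(C) wherever both are defined: the thesis forces antecedent and
% consequent to be probabilistically independent, trivializing the class of probability functions.
```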
This article shows how activists in the open data movement re-articulate notions of democracy, participation, and journalism by applying practices and values from open source culture to the creation and use of data. Focusing on the Open Knowledge Foundation Germany and drawing from a combination of interviews and content analysis, it argues that this process leads activists to develop new rationalities around datafication that can support the agency of datafied publics. Three modulations of open source are identified: First, by regarding data as a prerequisite for generating knowledge, activists transform the sharing of source code to include the sharing of raw data. Sharing raw data should break the interpretative monopoly of governments and allow people to make their own interpretations of data about public issues. Second, activists connect this idea to an open and flexible form of representative democracy by applying the open source model of participation to political participation. Third, activists acknowledge that intermediaries are necessary to make raw data accessible to the public. This leads them to an interest in transforming journalism to become an intermediary in this sense. At the same time, they try to act as intermediaries themselves and develop civic technologies to put their ideas into practice. The article concludes by suggesting that the practices and ideas of open data activists are relevant because they illustrate the connection between datafication and open source culture and help to understand how datafication might support the agency of publics and actors outside big government and big business.
Ştefan Aug. Doinaş and Basarab Nicolescu, two great spirits united by the generosity of their humanist vision, met, carried on an epistolary dialogue, and pursued common projects. Doinaş commented on a few of the innovative concepts proposed by Basarab Nicolescu, and in his literary pages he also aesthetically transfigured certain concepts of transdisciplinarity.
It is widely recognized that the innate versus acquired distinction is a false dichotomy. Yet many scientists continue to describe certain traits as “innate” and take this to imply that those traits are not acquired, or “unlearned.” This article asks what cognitive role, if any, the concept of innateness should play in the psychological and behavioural sciences. I consider three arguments for eliminating innateness from scientific discourse. First, the classification of a trait as innate is thought to discourage empirical research into its developmental origin. Second, this concept lumps together a number of different biological properties that ought to be treated as distinct. Third, innateness is associated with the outmoded folk biological theory of essentialism. In response to these objections, I consider two attempts to revise the concept of innateness which aim to make it more suitable for scientific explanation and research. One proposal is that innateness can be defined in terms of the biological property of environmental canalization. On this view, a trait is innate to the extent that it is developmentally buffered against a range of different environments. Another proposal is that innateness serves as an explanatory primitive for cognitive science. This view holds that there exists a sharp boundary between psychological and biological explanations and that to identify a trait as innate means that it falls into the latter explanatory domain. This essay ends with some questions for future research.
Psychological distance effects have attracted the attention of behavioral economists in the context of descriptive modeling and behavioral policy. Indeed, psychological distance effects have been shown for an increasing number of domains and applications relevant to economic decision-making. The current paper questions whether these effects are robust enough for economists to apply them to relevant policy questions. We demonstrate systematic replication failures for the distance-from-a-distance effect shown by Maglio et al., and relate them to theoretical arguments suggesting that psychological distance theories are currently too poorly specified to make predictions that are precise enough for economic analyses.
There are several important criticisms of the unificationist model of scientific explanation: unification is a broad and heterogeneous notion, and it is hard to see how a model of explanation based exclusively on unification can distinguish genuine explanatory unification from cases of ordering or classification; unification alone cannot solve the asymmetry and irrelevance problems; and unification and explanation pull in different directions and should be decoupled, because good scientific explanation often requires extra ad explanandum information. I present a possible solution to these problems by focusing on an often overlooked but important element of how theoretical unification is achieved—the conceptual frameworks of theories. The core conceptual assumptions behind theories are decisive for discriminating between explanatory and non-explanatory unification. The conceptual framework is also flexible enough to balance the tension between informativeness and maximum systematization in constructing explanatory inferences. A short case study of orthogenetic and Darwinian explanations in paleontology is presented as an illustration of how my addition to the unificationist model applies to a historical debate between rival explanations.
Stefan Sienkiewicz analyses five argument forms which are central to Pyrrhonian scepticism, as expressed in the writings of Sextus Empiricus. In particular, Sienkiewicz distinguishes between two different perspectives of the sceptic and his dogmatic opponent, and interprets the five modes of scepticism from both viewpoints.
The number of legal and nonlegal ethical regulations in the biomedical field has increased tremendously, leaving present-day practitioners and researchers in a virtual crossfire of legislation and guidelines. Judging by this output and by the way these regulations are motivated and presented, they are held to be of great importance to ethical practice. This view is shared by many commentators. For instance, Commons and Baldwin write that, within the nursing profession, patient care can be performed unethically or ethically depending on the professional standards the nurses have set for themselves. They also hold that such standards are set when nurses become aware of the ethical codes available. As nurses are often not familiar with the codes, they do not all conform to them. Commons and Baldwin argue that nurses' ability to deal with ethical dilemmas is effectively secured by education on guidelines, creating a “barrier” between personal and professional values.
The globalization movement in recent decades has meant rapid growth in trade, financial transactions, and cross-country ownership of economic assets. In this article, we examine how the globalization of national business systems has influenced the framing of corporate social responsibility (CSR). This is done using text analysis of CEO letters appearing in the annual reports of 15 major corporations in Sweden during a period of transformational change. The results show that the discourse about CSR in the annual reports has changed from a national and communitarian view of social responsibility (cf. a negotiated view of CSR) toward an international and individualistic view of social responsibility (cf. a self-regulating view of CSR). The article contributes theoretically (1) by adding a national–global dimension to previous conceptualizations of CSR and (2) by showing that the rise of CSR discourse and activities in the last 10 years does not have to imply an increased commitment and interest in corporate responsibility per se, only that there are increased societal expectations that corporations should develop the capability to act more independently as moral agents.
Integrating the study of human diversity into the human evolutionary sciences requires substantial revision of traditional conceptions of a shared human nature. This process may be made more difficult by entrenched, 'folkbiological' modes of thought. Earlier work by the authors suggests that biologically naive subjects hold an implicit theory according to which some traits are expressions of an animal's inner nature while others are imposed by its environment. In this paper, we report further studies that extend and refine our account of this aspect of folkbiology. We examine biologically naive subjects' judgments about whether traits of an animal are 'innate', 'in its DNA' or 'part of its nature'. Subjects do not understand these three descriptions to be equivalent. Both 'innate' and 'in its DNA' have the connotation that the trait is species-typical. This poses an obstacle to the assimilation of the biology of polymorphic and plastic traits by biologically naive audiences. Researchers themselves may not be immune to the continuing pull of folkbiological modes of thought.
This volume is the second installment in Stefan Jonsson’s epic study of the crowd and the mass in modern Europe, building on his work in A Brief History of the Masses, which focused on monumental artworks produced in 1789, 1889, and 1989.
For Aristotle, a just political community has to find similarity in difference and foster habits of reciprocity. Conventionally, speech and law have been seen to fulfill this role. This article reconstructs Aristotle’s conception of currency as a political institution of reciprocal justice. By placing Aristotle’s treatment of reciprocity in the context of the ancient politics of money, currency emerges not merely as a medium of economic exchange but also potentially as a bond of civic reciprocity, a measure of justice, and an institution of ethical deliberation. Reconstructing this account of currency in analogy to law recovers the hopes Aristotle placed in currency as a necessary institution particular to the polis as a self-governing political community striving for justice. If currency was a foundational institution, it was also always insufficient, likely imperfect, and possibly tragic. Turned into a tool for the accumulation of wealth for its own sake, currency becomes unjust and a serious threat to any political community. Aristotelian currency can fail precisely because it contains an important moment of ethical deliberation. This political significance of currency challenges accounts of the ancient world as bifurcated between oikos and polis and encourages contemporary political theorists to think of money as a constitutional project that can play an important role in improving reciprocity across society.
The culture of honour hypothesis offers a compelling example of how human psychology differentially adapts to pastoral and horticultural environments. However, there is disagreement over whether this pattern is best explained by a memetic, evolutionary psychological, dual inheritance, or niche construction model. I argue that this disagreement stems from two shortcomings: lack of clarity about the theoretical commitments of these models and inadequate comparative data for testing them. To resolve the first problem, I offer a theoretical framework for deriving competing predictions from each of the four models. In particular, this involves a novel interpretation of the difference between dual inheritance theory and cultural niche construction. I then illustrate a strategy for testing their predictions using data from the Human Relations Area File. Empirical results suggest that the aggressive psychological phenotype typically associated with honour culture is more common among pastoral societies than among horticultural societies. Theoretical considerations suggest that this pattern is best explained as a case of cultural niche construction.
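As an illustration of the kind of cross-cultural test gestured at here, one could tabulate societies coded by subsistence type against the presence of an aggression-related trait and apply a standard chi-square test. The counts and coding scheme below are invented placeholders, not figures from the Human Relations Area File or from the paper.

```python
# Hypothetical illustration of a cross-cultural comparison: is an aggression-related trait
# more common among pastoral than horticultural societies? Counts are invented placeholders.
from scipy.stats import chi2_contingency

#                  trait present, trait absent
table = [[30, 10],   # pastoral societies
         [15, 25]]   # horticultural societies

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```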
Stefan Jonsson uses three monumental works of art to build a provocative history of popular revolt: Jacques-Louis David's _The Tennis Court Oath_, James Ensor's _Christ's Entry into Brussels in 1889_, and Alfredo Jaar's _They Loved It So Much, the Revolution_. Addressing, respectively, the French Revolution of 1789, Belgium's proletarian messianism in the 1880s, and the worldwide rebellions and revolutions of 1968, these canonical images not only depict an alternative view of history but offer a new understanding of the relationship between art and politics and the revolutionary nature of true democracy. Drawing on examples from literature, politics, philosophy, and other works of art, Jonsson carefully constructs his portrait, revealing surprising parallels between the political representation of "the people" in government and their aesthetic representation in painting. Both essentially "frame" the people, Jonsson argues, defining them as elites or masses, responsible citizens or angry mobs. Yet in the aesthetic fantasies of David, Ensor, and Jaar, Jonsson finds a different understanding of democracy: one in which human collectives break the frame and enter the picture. Connecting the achievements and failures of past revolutions to current political issues, Jonsson then situates our present moment in a long historical drama of popular unrest, making his book both a cultural history and a contemporary discussion about the fate of democracy in our globalized world.
Research on patients with damage to ventromedial frontal cortices suggests a key role for emotions in practical decision making. This field of investigation is often associated with Antonio Damasio’s Somatic Marker Hypothesis—a putative account of the mechanism through which autonomic tags guide decision making in typical individuals. Here we discuss two questionable assumptions—or ‘myths’—surrounding the direction and interpretation of this research. First, it is often assumed that there is a single somatic marker hypothesis. As others have noted, however, Damasio’s ‘hypothesis’ admits of multiple interpretations (Dunn et al. [2006]; Colombetti [2008]). Our analysis builds upon this point by characterizing decision making as a multi-stage process and identifying the various potential roles for somatic markers. The second myth is that the available evidence suggests a role for somatic markers in the core stages of decision making, that is, during the generation, deliberation, or evaluation of candidate options. On the contrary, we suggest that somatic markers most likely have a peripheral role, in the recognition of decision points, or in the motivation of action. This conclusion is based on an examination of the past twenty-five years of research conducted by Damasio and colleagues, focusing in particular on some early experiments that have been largely neglected by the critical literature.
Overview: 1 Introduction; 2 What is the Somatic Marker Model?; 3 Multiple Somatic Marker Hypotheses; 3.1 Are somatic markers necessary for practical decision making?; 3.2 Speed, accuracy, or both?; 3.3 At which of the five stages of decision making are somatic markers engaged?; 4 Anecdotal Evidence Suggests a Peripheral Role for Somatic Markers; 4.1 Chronic indecisiveness; 4.2 Extreme impulsiveness; 4.3 Enhanced decision making in the lab; 4.4 Lack of motivation; 5 Early Experiments Suggest that VMF Damage Leaves Core Processes Intact; 5.1 The evocative images study; 5.2 Five problem solving tasks; 6 Recent Experiments Fail to Discriminate among Alternate Versions of SMH; 7 Conclusion.
In this paper I offer an anti-Humean critique of Williamson and Russo’s approach to medical mechanisms. I focus on one of the specific claims made by Williamson and Russo, namely the claim that micro-structural ‘mechanisms’ provide evidence for the stability across populations of causal relationships ascertained at the (macro-) level of (test) populations. This claim is grounded in the epistemic account of causality developed by Williamson, an account which—while not relying exclusively on mechanistic evidence for justifying causal judgements—appeals nevertheless to mechanisms, and rejects their anti-Humean interpretation in terms of capacities, powers, potencies, etc. By using (and expanding on) Cartwright’s basic critique against Humean mechanisms, I suggest that, in order to move beyond the level of plausibility, Williamson and Russo’s position is in need of a clarification as to the occurrent reading of the components, functioning and interferences of mechanisms. Relatedly, as concerns Williamson’s epistemic account of causation, I argue that this account needs a more straightforward answer as to what truth-makers its causal claims should have.
Philosophers investigating the interpretation and use of conditional sentences have long been intrigued by the intuitive correspondence between the probability of a conditional 'if A, then C' and the conditional probability of C, given A. Attempts to account for this intuition within a general probabilistic theory of belief, meaning and use have been plagued by a danger of trivialization, which has proven to be remarkably recalcitrant and absorbed much of the creative effort in the area. But there is a strategy for avoiding triviality that has been known for almost as long as the triviality results themselves. What is lacking is a straightforward integration of this approach in a larger framework of belief representation and dynamics. This paper discusses some of the issues involved and proposes an account of belief update by conditionalization.
The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals.
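The Pearl-style notion of intervention mentioned here can be illustrated with a toy structural model: intervening on a variable replaces its structural equation with a forced value, while downstream equations keep responding to their parents. The sketch below is a generic illustration in that spirit; the variables, equations, and the way interventions are handled are invented for exposition and do not reproduce the paper's premise-semantic implementation.

```python
# Minimal illustration of Pearl-style intervention on a toy structural causal model.
# Variables and structural equations are invented for illustration only.
# Exogenous: rain. Endogenous: sprinkler (depends on rain), wet (depends on both).

def solve(rain, interventions=None):
    """Evaluate the model; 'interventions' maps variable names to forced values (the do-operator)."""
    interventions = interventions or {}
    values = {"rain": interventions.get("rain", rain)}
    # sprinkler := not rain, unless intervened on
    values["sprinkler"] = interventions.get("sprinkler", not values["rain"])
    # wet := rain or sprinkler, unless intervened on
    values["wet"] = interventions.get("wet", values["rain"] or values["sprinkler"])
    return values

# Observation: when it rains, the sprinkler is off and the grass is wet.
print(solve(rain=True))
# Intervention do(sprinkler = True): the sprinkler equation is replaced by the forced value,
# while the equation for 'wet' still responds to its parents -- unlike mere conditioning.
print(solve(rain=True, interventions={"sprinkler": True}))
# A non-backtracking counterfactual: with no rain in the actual circumstances,
# had the sprinkler been on, the grass would nonetheless be wet.
print(solve(rain=False, interventions={"sprinkler": True}))
```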
This paper is devoted to Bolzano’s theory of grounding (Abfolge) in his Wissenschaftslehre. Bolzanian grounding is an explanatory consequence relation that is frequently considered an ancestor of the notion of metaphysical grounding. The paper focuses on two principles that concern grounding in the realm of conceptual sciences and relate to traditionally widespread ideas on explanations: the principles, namely, that grounding orders conceptual truths from simple to more complex ones (Simplicity), and that it comes along with a certain theoretical economy among them (Economy). Being spelled out on the basis of Bolzano’s notion of deducibility (Ableitbarkeit), these principles are revealing for the question to what extent grounding can be considered a formal relation.
According to an increasingly popular view among philosophers of science, both causal and non-causal explanations can be accounted for by a single theory: the counterfactual theory of explanation. Grounding explanations are a kind of non-causal explanation that has gained much attention recently but that this theory seems unable to account for. Reutlinger (2017: 239-256) has argued that, despite these appearances to the contrary, such explanations are covered by his version of the counterfactual theory. His idea is supported by recent work on grounding by Schaffer and Wilson, who claim there to be a tight connection between grounding and counterfactual dependence. The present paper evaluates the prospects of the idea. We show that there is only a weak sense in which grounding explanations convey information about counterfactual dependencies, and that this fact cannot plausibly be taken to reveal a distinctive feature that grounding explanations share with other kinds of explanations.