We propose a method for estimating subjective beliefs, viewed as a subjective probability distribution. The key insight is to characterize beliefs as a parameter to be estimated from observed choices in a well-defined experimental task and to estimate that parameter as a random coefficient. The experimental task consists of a series of standard lottery choices, in which the subject is assumed to use conventional risk attitudes to select one lottery or the other, and then a series of betting choices, in which the subject is presented with a range of bookies offering odds on the outcome of some event that the subject holds a belief about. Knowledge of the risk attitudes of subjects conditions the inferences about subjective beliefs. Maximum simulated likelihood methods are used to estimate a structural model in which subjects employ subjective beliefs to make bets. We present evidence that some subjective probabilities are indeed best characterized as probability distributions with non-zero variance.
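As a rough illustration of the estimation strategy described above, the following sketch treats a subjective belief as a Beta-distributed random coefficient and recovers it from simulated accept/reject betting data by maximum simulated likelihood. All numbers, the risk-neutral acceptance rule, and the grid-search routine are my own toy assumptions, not the paper's design or estimator.

```python
# Illustrative sketch (toy setup, not the paper's estimator): treat a
# subjective belief as a Beta-distributed random coefficient and recover
# it from accept/reject betting data by maximum simulated likelihood.
import bisect
import math
import random

rng = random.Random(42)

# Simulated data: the subject accepts a bet at implied probability q
# whenever a fresh draw from the belief distribution exceeds q.
# True belief ~ Beta(6, 4): mean 0.6 with non-zero variance.
TRUE_A, TRUE_B, R = 6.0, 4.0, 400
cutoffs = [rng.random() for _ in range(200)]
accepts = [1 if rng.betavariate(TRUE_A, TRUE_B) > q else 0 for q in cutoffs]

def sim_loglik(a, b):
    """Simulated log-likelihood of (a, b): the acceptance probability at
    cutoff q is approximated by the share of R Beta(a, b) draws above q."""
    sim = random.Random(0)                    # fixed seed -> smooth objective
    draws = sorted(sim.betavariate(a, b) for _ in range(R))
    ll = 0.0
    for q, y in zip(cutoffs, accepts):
        p = 1.0 - bisect.bisect_left(draws, q) / R
        p = min(max(p, 1e-4), 1.0 - 1e-4)     # clip away from 0 and 1
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

grid = [float(k) for k in range(1, 11)]       # candidate Beta parameters
a_hat, b_hat = max(((a, b) for a in grid for b in grid),
                   key=lambda ab: sim_loglik(*ab))
est_mean = a_hat / (a_hat + b_hat)
print("estimated belief mean:", round(est_mean, 2))
```

The coarse grid stands in for a proper optimizer; the point is only that the mean (and, with richer data, the spread) of the belief distribution is recoverable from betting choices alone.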
In this discussion paper, I seek to challenge Hylarie Kochiras’ recent claims about Newton’s attitude towards action at a distance, which are presented in Section 1. In doing so, I include the positions of Andrew Janiak and John Henry in my discussion and present my own take on the matter. Additionally, I seek to strengthen Kochiras’ argument that Newton sought to explain the cause of gravity in terms of secondary causation. I also offer some specification of what Kochiras calls ‘Newton’s substance counting problem’. In conclusion, I suggest a historical correction. Keywords: Isaac Newton; Action at a distance; Cause of gravity; Fourth letter to Bentley.
The philosophical background important to Mill’s theory of induction has two major components: Richard Whately’s introduction of the uniformity principle into inductive inference and the loss of the idea of formal cause.
John Locke’s distinction between primary and secondary qualities of objects has met resistance. In this paper I bypass the traditional critiques of the distinction and instead concentrate on two specific counterexamples to it: killer yellow and the puzzle of multiple dispositions. One can accommodate these puzzles, I argue, by adopting Thomas Reid’s version of the primary/secondary quality distinction, where the distinction is founded on conceptual grounds. The primary/secondary quality distinction is epistemic rather than metaphysical. A consequence of Reid’s primary/secondary quality distinction is that one must deny the original version of Molyneux’s question, while affirming an amended version of it. I show that these two answers to Molyneux’s question are not at odds with current empirical research.
In this essay, I take on the role of friendly commentator and call attention to three potential worries for John D. Norton’s material theory of induction. I attempt to show that his “principle argument” rests on a false dichotomy, that the idea that facts ultimately derive their license from matters of fact is debatable, and that one of the core implications of his theory is untenable for historical and fundamental reasons.
John Searle has argued that the aim of strong AI to create a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather in the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the implementation of them cannot suffice for semantics. Perhaps our world is a world in which any implementation of the right computer program will create a system with intrinsic intentionality, in which case Searle’s Chinese Room Scenario is empirically (nomically) impossible. But perhaps our world is instead a world in which Searle’s Chinese Room Scenario is empirically (nomically) possible, and the silicon basis of modern-day computers is one kind of material unsuited to give you intrinsic intentionality. The metaphysical question turns out to be a question of what kind of world we are in, and I argue that in this respect we do not know our modal address. The Modal Address Argument does not ensure that strong AI will succeed, but it shows that Searle’s challenge to the research program of strong AI fails in its objectives.
This paperback edition reproduces the complete text of the Essay as prepared by Professor Nidditch for The Clarendon Edition of the Works of John Locke. The Register of Formal Variants and the Glossary are omitted, and Professor Nidditch has written a new foreword.
To describe leadership as ethical is largely a perceptual phenomenon informed by beliefs about what is normatively appropriate. Yet there is a remarkable scarcity in the leadership literature regarding how to define what is "normatively appropriate." To shed light on this issue, we draw upon Relational Models Theory (Fiske, 1992, Psychol Rev, 99:689-723), which differentiates between four types of relationships: communal sharing, authority ranking, equality matching, and market pricing. We describe how each of these relationship models dictates a distinct set of normatively appropriate behaviors. We argue that perceptions of unethical leadership behavior result from one of three situations: (a) a mismatch between the leader's and the follower's relational models, (b) a different understanding of the behavioral expression of the same relational model, or (c) a violation of a previously agreed-upon relational model. Further, we argue that the type of relational-model mismatch affects the perceived severity of a transgression. Finally, we discuss the implications of our model for understanding, managing, and regulating ethical leadership failures.
Conceptual engineers aim to revise rather than describe our concepts. But what are concepts? And how does one engineer them? Answering these questions is of central importance for implementing and theorizing about conceptual engineering. This paper discusses and criticizes two influential views of this issue: semanticism, according to which conceptual engineers aim to change linguistic meanings, and psychologism, according to which conceptual engineers aim to change psychological structures. I argue that neither of these accounts can give us the full story. Instead, I propose and defend the Dual Content View of Conceptual Engineering. On this view, conceptual engineering targets concepts, where concepts are understood as having two (interrelated) kinds of contents: referential content and cognitive content. I show that this view is independently plausible and that it gives us a comprehensive account of conceptual engineering that helps to make progress on some of the most difficult problems surrounding conceptual engineering.
Unlike conceptual analysis, conceptual engineering does not aim to identify the content that our current concepts do have, but the content which these concepts should have. For this method to show the results that its practitioners typically aim for, being able to change meanings seems to be a crucial presupposition. However, certain branches of semantic externalism raise doubts about whether this presupposition can be met. To the extent that meanings are determined by external factors such as causal histories or microphysical structures, it seems that they cannot be changed intentionally. This paper gives an extended discussion of this ‘externalist challenge’. Pace Herman Cappelen’s recent take on this issue, it argues that the viability of conceptual engineering crucially depends on our ability to bring about meaning change. Furthermore, it argues that, contrary to first appearances, causal theories of reference do allow for a sufficient degree of meaning control. To this purpose, it argues that there is a relevant sense of meaning control, called ‘collective long-range control’, and that popular versions of the causal theory of reference imply that people have this kind of control over meanings.
Max Deutsch (2020) has recently argued that conceptual engineering is stuck in a dilemma. If it is construed as the activity of revising the semantic meanings of existing terms, then it faces an insurmountable implementation problem. If, on the other hand, it is construed as the activity of introducing new technical terms, then it becomes trivial. According to Deutsch, this conclusion need not worry us, however, for conceptual engineering is ill-motivated to begin with. This paper responds to Deutsch by arguing, first, that there is a third construal of conceptual engineering, neglected by him, which renders it both implementable and non-trivial, and second, that even the more ambitious project of changing semantic meanings is no less feasible than other normative projects we currently pursue. Lastly, the value of conceptual engineering is defended against Deutsch’s objections.
Connecting human minds to various technological devices and applications through brain-computer interfaces affords intriguingly novel ways for humans to engage and interact with the world. Not only do BCIs play an important role in restorative medicine, they are also increasingly used outside of medical or therapeutic contexts. A striking peculiarity of BCI technology is that the kind of action it enables seems to differ from paradigmatic human actions: effects in the world are brought about by devices such as robotic arms, prostheses, or other machines, and their execution runs through a computer directed by brain signals. In contrast to usual forms of action, the sequence need not involve bodily or muscle movements at all. A motionless body, the epitome of inaction, might be acting. How do theories of action relate to such BCI-mediated forms of changing the world? We wish to explore this question through the lenses of three perspectives on agency: subjective experience of agency, philosophical action theory, and legal concepts of action. Our analysis pursues three aims. First, we discuss whether, and which, BCI-mediated events qualify as actions according to the main concepts of action in philosophy and law. Secondly, en passant, we wish to highlight the ten most interesting novelties or peculiarities of BCI-mediated movements. Thirdly, we seek to explore whether these novel forms of movement may have consequences for concepts of agency. More concretely, we think that convincing assessments of BCI-movements require more fine-grained accounts of agency and a distinction between various forms of control during movements. In addition, we show that the disembodied nature of BCI-mediated events causes trouble for the standard legal account of actions as bodily movements.
In an exchange with views from philosophy, we wish to propose that the law ought to reform its concept of action to include some, but not all, BCI-mediated events and sketch some of the wider implications this may have, especially for the venerable legal idea of the right to freedom of thought. In this regard, BCIs are an example of the way in which technological access to yet largely sealed-off domains of the person may necessitate adjusting normative boundaries between the personal and the social sphere.
It seems natural to think that Carnapian explication and experimental philosophy can go hand in hand. But what exactly explicators can gain from the data provided by experimental philosophers remains controversial. According to an influential proposal by Shepherd and Justus, explicators should use experimental data in the process of ‘explication preparation’. Against this proposal, Mark Pinder has recently suggested that experimental data can directly assist an explicator’s search for fruitful replacements of the explicandum. In developing his argument, he also proposes a novel aspect of what makes a concept fruitful, namely, that it is taken up by the relevant community. In this paper, I defend explication preparation against Pinder’s objections and argue that his uptake proposal conflates theoretical and practical success conditions of explications. Furthermore, I argue that Pinder’s suggested experimental procedure needs substantial revision. I end by distinguishing two kinds of explication projects, and showing how experimental philosophy can contribute to each of them.
Multi-stakeholder initiatives have become a vital part of the organizational landscape for corporate social responsibility. Recent debates have explored whether these initiatives represent opportunities for the “democratization” of transnational corporations, facilitating civic participation in the extension of corporate responsibility, or whether they constitute new arenas for the expansion of corporate influence and the private capture of regulatory power. In this article, we explore the political dynamics of these new governance initiatives by presenting an in-depth case study of an organization often heralded as a model MSI: the Forest Stewardship Council. An effort to address global deforestation in the wake of failed efforts to agree a multilateral convention on forests at the Rio Summit in 1992, the FSC was launched in 1993 as a non-state regulatory experiment: a transnational MSI, administering a global eco-labeling scheme for timber and forest products. We trace the scheme’s evolution over the past two decades, showing that while the FSC has successfully facilitated multi-sectoral determination of new standards for forestry, it has nevertheless failed to transform commercial forestry practices or stem the tide of tropical deforestation. Applying a neo-Gramscian analysis to the organizational evolution of the FSC, we examine how broader market forces and resource imbalances between non-governmental and market actors can serve to limit the effectiveness of MSIs in the current neo-liberal environment. This presents dilemmas for NGOs which can lead to their defection, ultimately undermining the organizational legitimacy of MSIs.
Reissued here in its corrected second edition of 1864, this essay by John Stuart Mill argues for a utilitarian theory of morality. Originally printed as a series of three articles in Fraser's Magazine in 1861, the work sought to refine the 'greatest happiness' principle that had been championed by Jeremy Bentham, defending it from common criticisms, and offering a justification of its validity. Following Bentham, Mill holds that actions can be judged as right or wrong depending on whether they promote happiness or 'the reverse of happiness'. Although attracted by Bentham's consequentialist framework based on empirical evidence rather than intuition, Mill separates happiness into 'higher' and 'lower' pleasures, arguing for a weighted system of measurement when making and judging decisions. Dissected and debated since its first appearance, the essay is Mill's key discussion on the topic and remains a fundamental text in the study of ethics.
After examining the ways in which Newman employed the tools of rhetoric in his Apologia pro Vita Sua in response to Charles Kingsley’s charges against him, this essay charts Newman’s use of his personal testimony to proclaim the Gospel and defend the Catholic Faith and concludes with an analysis of the strengths and potential weaknesses of his approach.
Reasoning without experience is very slippery. A man may puzzle me by arguents [sic] … but I’le beleive my ey experience ↓my eyes.↓ Ernan McMullin once remarked that, although the “avowedly tentative form” of the Queries “marks them off from the rest of Newton’s published work,” they are “the most significant source, perhaps, for the most general categories of matter and action that informed his research.” The Queries (or Quaestiones), which Newton inserted at the very end of the third book of the Opticks or its Latin rendition, Optice, constitute that part of his optical magnum opus which he reworked and augmented the most—especially between 1704 and 1717. While the main text of the Opticks itself underwent …
Ethical issues concerning brain–computer interfaces have already received a considerable amount of attention. However, one particular form of BCI has not received the attention it deserves: affective BCIs, which allow for the detection and stimulation of affective states. This paper brings the ethical issues of affective BCIs into sharper focus. It briefly reviews recent applications of affective BCIs and considers the ethical issues that arise from these applications. Ethical issues that affective BCIs share with other neurotechnologies are presented, and ethical concerns that are specific to affective BCIs are identified and discussed.
Mill predicted that "[t]he Liberty is likely to survive longer than anything else that I have written...because the conjunction of [Harriet Taylor’s] mind with mine has rendered it a kind of philosophic text-book of a single truth, which the changes progressively taking place in modern society tend to bring out in ever greater relief." Indeed, On Liberty is one of the most influential books ever written, and remains a foundational document for the understanding of vital political, philosophical and social issues. In addition to its many useful appendices, this new edition includes a chronology, bibliography, and a substantial introduction which outlines Mill’s life and works, and sets this central work of 1859 in the context of both his own intellectual development and of the play of ideas and political forces in Victorian society.
In this paper I argue against a criticism by Matthew Weiner of Grice’s thesis that cancellability is a necessary condition for conversational implicature. I argue that the purported counterexamples fail because the supposed failed cancellation in the cases Weiner presents is not meant as a cancellation but as a reinforcement of the implicature. I moreover point out that there are special situations in which the supposed cancellation may really work as a cancellation.
Recently, a number of critical social theorists have argued that the analysis of social relations of unfreedom should take into account the phenomenon of self-subordination. In my article, I draw on Hegel’s theory of recognition to elucidate this phenomenon and show that recognition can be a means not only of self-realization but also of subjugation. I develop my argument in three steps. As a first step, I reconstruct the idea of social pathologies in the tradition of Critical Theory. In the course of this reconstruction, it becomes clear that the analysis of social pathologies should focus on the binding force of recognition. As a second step, I reinterpret Hegel and show that a close reading of the relationship of lordship and bondage can help us understand how a subject can become bound by recognition. As a third step, I make an attempt at reactualizing Hegel’s idea. Following Sartre’s analysis of anti-Semitism, I outline three stages of how subjects can gradually come to subordinate themselves and become entrapped in social relations of unfreedom such as race, class, or gender.
Bayesian approaches for estimating multilevel latent variable models can be beneficial in small samples. Prior distributions can be used to overcome small-sample problems, for example, when priors that increase the accuracy of estimation are chosen. This article discusses two different but not mutually exclusive approaches for specifying priors. Both approaches aim at stabilizing estimators in such a way that the mean squared error (MSE) of the estimator of the between-group slope will be small. In the first approach, the MSE is decreased by specifying a slightly informative prior for the group-level variance of the predictor variable, whereas in the second approach, the decrease is achieved directly by using a slightly informative prior for the slope. Mathematical and graphical inspections suggest that both approaches can be effective for reducing the MSE in small samples, thus rendering them attractive in these situations. The article also discusses how these approaches can be implemented in Mplus.
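The mechanism behind the second approach can be illustrated with a toy simulation outside Mplus: a slightly informative normal prior on a regression slope trades a small bias for a large variance reduction, lowering the MSE in a small sample. The single-level model and all numbers below are hypothetical stand-ins, not the article's multilevel setup.

```python
# Toy illustration (hypothetical numbers, not the article's models):
# compare the MSE of the OLS slope with that of the posterior mean
# under a slightly informative N(0, TAU^2) prior, in a small sample.
import random

rng = random.Random(3)
N, REPS = 10, 3000          # small sample size, Monte Carlo replications
B_TRUE, SIGMA = 0.3, 1.0    # true slope, residual SD (treated as known)
TAU = 0.5                   # prior SD of the slope (prior mean 0)

se_ols, se_post = 0.0, 0.0
for _ in range(REPS):
    # Simple regression through the origin: y = b*x + e (no intercept,
    # mean-zero predictor, for brevity).
    x = [rng.gauss(0, 1) for _ in range(N)]
    y = [B_TRUE * xi + rng.gauss(0, SIGMA) for xi in x]
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b_ols = sxy / sxx
    # Posterior mean: precision-weighted combination of data and prior.
    b_post = (sxy / SIGMA**2) / (sxx / SIGMA**2 + 1.0 / TAU**2)
    se_ols += (b_ols - B_TRUE) ** 2
    se_post += (b_post - B_TRUE) ** 2

mse_ols, mse_post = se_ols / REPS, se_post / REPS
print(f"MSE OLS: {mse_ols:.3f}  MSE posterior mean: {mse_post:.3f}")
```

With n = 10 the shrinkage estimator's squared bias is far smaller than the variance it removes, so its MSE comes out below the OLS figure; the same trade-off drives the slope-prior approach described in the abstract.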
In this essay, I attempt to assess Henk de Regt and Dennis Dieks's recent pragmatic and contextual account of scientific understanding on the basis of an important historical case study: understanding in Newton's theory of universal gravitation and Huygens's reception of universal gravitation. It will be shown that de Regt and Dieks's Criterion for the Intelligibility of a Theory, which stipulates that the appropriate combination of scientists' skills and intelligibility-enhancing theoretical virtues is a condition for scientific understanding, is too strong. On the basis of this case study, it will be shown that scientists can understand each other's positions qualitatively and quantitatively, despite their endorsement of different worldviews and despite their differing convictions as to what counts as a proper explanation.
In this article, I consider Bernard Suits’ Utopia, where the denizens supposedly fill their days playing Utopian sports, with regard to the relevance of the thought experiment for understand...
In this article, I first address the ethical considerations about football and show that a meritocratic-fairness view of sports fails to capture the phenomenon of football. Fairness of result is not at centre stage in football. Football is about the drama, about the tension and the emotions it provokes. This moves us to the realm of aesthetics. I reject the idea of the aesthetics of football as disinterested aesthetic appreciation, which traditionally has been deemed central to aesthetics. Instead, I argue that we should try to develop an agon aesthetics, where our aesthetic appreciation is understood as involving, and being embedded in, our engagement in the game. The drama of football is staged but not scripted. The aesthetics of competitions like football matches—the agon aesthetics—lies in engaging in the conflict that a competition is, while being aware that the conflict is not over ordinary, everyday-life issues, but is unnecessary and invented for the very purpose of having a conflict to enjoy.
We develop an extension of the familiar linear mixed logit model to allow for the direct estimation of parametric non-linear functions defined over structural parameters. Classic applications include the estimation of coefficients of utility functions to characterize risk attitudes and discounting functions to characterize impatience. There are several unexpected benefits of this extension, apart from the ability to directly estimate structural parameters of theoretical interest.
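In the spirit of the classic application mentioned above, the following sketch estimates a structural risk-aversion parameter directly from simulated binary lottery choices. It uses plain grid-search maximum likelihood rather than the authors' mixed logit extension, and the CRRA functional form, the logistic choice rule, and all numbers are illustrative assumptions of my own.

```python
# Hypothetical sketch (not the authors' model): estimate a structural
# risk-aversion parameter r directly, rather than a linear index, from
# simulated binary lottery choices with CRRA utility and logistic noise.
import math
import random

rng = random.Random(7)

def crra(x, r):
    """CRRA utility u(x) = x**(1 - r) / (1 - r), for r != 1 and x > 0."""
    return x ** (1.0 - r) / (1.0 - r)

TRUE_R, NOISE = 0.5, 5.0   # true risk aversion; choice sensitivity (known)

# Each task: a safe amount versus a 50/50 lottery over (hi, lo).
tasks, choices = [], []
for _ in range(400):
    safe = rng.uniform(2, 10)
    hi, lo = rng.uniform(safe, 20), rng.uniform(0.5, safe)
    d = crra(safe, TRUE_R) - 0.5 * (crra(hi, TRUE_R) + crra(lo, TRUE_R))
    p_safe = 1.0 / (1.0 + math.exp(-NOISE * d))
    tasks.append((safe, hi, lo))
    choices.append(1 if rng.random() < p_safe else 0)

def loglik(r):
    """Log-likelihood of the choices when utility is CRRA with parameter r."""
    ll = 0.0
    for (safe, hi, lo), y in zip(tasks, choices):
        d = crra(safe, r) - 0.5 * (crra(hi, r) + crra(lo, r))
        p = 1.0 / (1.0 + math.exp(-NOISE * d))
        p = min(max(p, 1e-9), 1.0 - 1e-9)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

r_hat = max((k / 20 for k in range(0, 19)), key=loglik)  # grid over [0, 0.9]
print("estimated r:", r_hat)
```

Estimating r itself, instead of reduced-form index coefficients and a delta-method transformation, is the kind of direct structural estimation the abstract has in mind; a mixed logit version would additionally let r vary randomly across subjects.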
We exhibit a finite lattice without critical triple that cannot be embedded into the enumerable Turing degrees. Our method promises to lead to a full characterization of the finite lattices embeddable into the enumerable Turing degrees.
Graeme Wood’s The Way of the Strangers gets as close as is humanly possible to an ethnography of recruiters and sympathizers of the Islamic State. Contrary to much writing on radical Islamism, Wood convincingly shows that the Islamic State’s ideas—rooted in a literalist reading of ancient Islamic sources—are central in motivating many of the movement’s followers. His accounts of individual adherents also suggest, however, that ideas are not the only factor, as certain personality traits influence who is attracted to radical Islamist movements.
Like many of their contemporaries, Bernard Nieuwentijt and Pieter van Musschenbroek were baffled by the heterodox conclusions which Baruch Spinoza drew in the Ethics. As the full title of the Ethics—Ethica ordine geometrico demonstrata—indicates, these conclusions were purportedly demonstrated in a geometrical order, i.e. by means of pure mathematics. First, I highlight how Nieuwentijt tried to immunize Spinoza’s worrisome conclusions by insisting on the distinction between pure and mixed mathematics. Next, I argue that the anti-Spinozist underpinnings of Nieuwentijt’s distinction between pure and mixed mathematics resurfaced in the work of van Musschenbroek. By insisting on the distinction between pure and mixed mathematics, Nieuwentijt and van Musschenbroek argued that Spinoza abused mathematics by making claims about things that exist in rerum natura by relying on a purely mathematical approach. In addition, by insisting that mixed mathematics should be painstakingly based on mathematical ideas that correspond to nature, van Musschenbroek argued that René Descartes’ natural-philosophical project abused mathematics by introducing hypotheses, i.e. ideas, that do not correspond to nature.
HOW WE THINK PART ONE: THE PROBLEM OF TRAINING THOUGHT CHAPTER ONE WHAT IS THOUGHT? § i. Varied Senses of the Term No words are oftener on our lips than ...
British philosopher and economist John Stuart Mill is the author of several essays, including Utilitarianism - a defence of Jeremy Bentham's principle applied to the field of ethics - and The Subjection of Women, which advocates legal equality between the sexes. This work, arguably his most famous contribution to political philosophy and theory, was first published in 1859, and remains a major influence upon contemporary liberal political thought. In it, Mill argues for a limitation of the power of government and society over the individual, and defines liberty as an absolute individual right. According to the still much debated 'harm principle', power against the individual can only be exercised to prevent harm to others. Full of contemporary relevance, this essay also defends freedom of speech as a necessary condition of social and intellectual progress.
We give an algorithm for deciding whether an embedding of a finite partial order [Formula: see text] into the enumeration degrees of the [Formula: see text]-sets can always be extended to an embedding of a finite partial order [Formula: see text].
Newton’s immensely famous, but tersely written, General Scholium is primarily known for its reference to the argument from design and for Newton’s famous dictum “hypotheses non fingo”. In the essay at hand, I shall argue that this text served a variety of goals and try to add something new to our current knowledge of how Newton tried to accomplish them. The General Scholium highlights a cornucopia of features that were central to Newton’s natural philosophy in general: matters of experimentation, methodological issues, theological matters, matters related to the instauration of prisca sapientia, epistemological claims central to Newton’s empiricism, and, finally, metaphysical issues. For Newton these matters were closely interwoven. I shall address them based on a thorough study of the extant manuscript material.
In this essay, I shall take up the theme of Galileo’s notion of cause, which has already received considerable attention. I shall argue that the participants in the debate as it stands have overlooked a striking and essential feature of Galileo’s notion of cause. Galileo not only reformed natural philosophy; he also, as I shall defend, introduced a new notion of causality and integrated it into his scientific practice. Galileo’s conception of causality went hand in hand with his methodology. It is my claim that Galileo was trying to construct a new, scientifically useful notion of causality. This new notion of causality is an interventionist notion.