To describe leadership as ethical is largely a perceptional phenomenon informed by beliefs about what is normatively appropriate. Yet there is a remarkable scarcity in the leadership literature regarding how to define what is "normatively appropriate." To shed light on this issue, we draw upon Relational Models Theory (Fiske, 1992, Psychol Rev, 99:689-723), which differentiates between four types of relationships: communal sharing, authority ranking, equality matching, and market pricing. We describe how each of these relationship models dictates a distinct set of normatively appropriate behaviors. We argue that perceptions of unethical leadership behavior result from one of three situations: (a) a mismatch between a leader's and a follower's relational models, (b) a different understanding about the behavioral expression of the same relational model, or (c) a violation of a previously agreed-upon relational model. Further, we argue that the type of relational model mismatch impacts the perceived severity of a transgression. Finally, we discuss the implications of our model with regard to understanding, managing, and regulating ethical leadership failures.
Conceptual engineers aim to revise rather than describe our concepts. But what are concepts? And how does one engineer them? Answering these questions is of central importance for implementing and theorizing about conceptual engineering. This paper discusses and criticizes two influential views of this issue: semanticism, according to which conceptual engineers aim to change linguistic meanings, and psychologism, according to which conceptual engineers aim to change psychological structures. I argue that neither of these accounts can give us the full story. Instead, I propose and defend the Dual Content View of Conceptual Engineering. On this view, conceptual engineering targets concepts, where concepts are understood as having two (interrelated) kinds of contents: referential content and cognitive content. I show that this view is independently plausible and that it gives us a comprehensive account of conceptual engineering that helps to make progress on some of the most difficult problems surrounding conceptual engineering.
Unlike conceptual analysis, conceptual engineering does not aim to identify the content that our current concepts do have, but the content which these concepts should have. For this method to show the results that its practitioners typically aim for, being able to change meanings seems to be a crucial presupposition. However, certain branches of semantic externalism raise doubts about whether this presupposition can be met. To the extent that meanings are determined by external factors such as causal histories or microphysical structures, it seems that they cannot be changed intentionally. This paper gives an extended discussion of this ‘externalist challenge’. Pace Herman Cappelen’s recent take on this issue, it argues that the viability of conceptual engineering crucially depends on our ability to bring about meaning change. Furthermore, it argues that, contrary to first appearance, causal theories of reference do allow for a sufficient degree of meaning control. To this end, it argues that there is a sense of what is called ‘collective long-range control’, and that popular versions of the causal theory of reference imply that people have this kind of control over meanings.
Max Deutsch (2020) has recently argued that conceptual engineering is stuck in a dilemma. If it is construed as the activity of revising the semantic meanings of existing terms, then it faces an unsurmountable implementation problem. If, on the other hand, it is construed as the activity of introducing new technical terms, then it becomes trivial. According to Deutsch, this conclusion need not worry us, however, for conceptual engineering is ill-motivated to begin with. This paper responds to Deutsch by arguing, first, that there is a third construal of conceptual engineering, neglected by him, which renders it both implementable and non-trivial, and second, that even the more ambitious project of changing semantic meanings is no less feasible than other normative projects we currently pursue. Lastly, the value of conceptual engineering is defended against Deutsch’s objections.
Connecting human minds to various technological devices and applications through brain-computer interfaces (BCIs) affords intriguingly novel ways for humans to engage and interact with the world. Not only do BCIs play an important role in restorative medicine, they are also increasingly used outside of medical or therapeutic contexts. A striking peculiarity of BCI technology is that the kind of actions it enables seems to differ from paradigmatic human actions, because effects in the world are brought about by devices such as robotic arms, prostheses, or other machines, and their execution runs through a computer directed by brain signals. In contrast to usual forms of action, the sequence does not need to involve bodily or muscle movements at all. A motionless body, the epitome of inaction, might be acting. How do theories of action relate to such BCI-mediated forms of changing the world? We wish to explore this question through the lenses of three perspectives on agency: subjective experience of agency, philosophical action theory, and legal concepts of action. Our analysis pursues three aims: First, we shall discuss whether and which BCI-mediated events qualify as actions, according to the main concepts of action in philosophy and law. Secondly, en passant, we wish to highlight the most interesting novelties or peculiarities of BCI-mediated movements. Thirdly, we seek to explore whether these novel forms of movement may have consequences for concepts of agency. More concretely, we think that convincing assessments of BCI-movements require more fine-grained accounts of agency and a distinction between various forms of control during movements. In addition, we show that the disembodied nature of BCI-mediated events causes trouble for the standard legal account of actions as bodily movements.
In an exchange with views from philosophy, we wish to propose that the law ought to reform its concept of action to include some, but not all, BCI-mediated events and sketch some of the wider implications this may have, especially for the venerable legal idea of the right to freedom of thought. In this regard, BCIs are an example of the way in which technological access to yet largely sealed-off domains of the person may necessitate adjusting normative boundaries between the personal and the social sphere.
It seems natural to think that Carnapian explication and experimental philosophy can go hand in hand. But what exactly explicators can gain from the data provided by experimental philosophers remains controversial. According to an influential proposal by Shepherd and Justus, explicators should use experimental data in the process of ‘explication preparation’. Against this proposal, Mark Pinder has recently suggested that experimental data can directly assist an explicator’s search for fruitful replacements of the explicandum. In developing his argument, he also proposes a novel aspect of what makes a concept fruitful, namely, that it is taken up by the relevant community. In this paper, I defend explication preparation against Pinder’s objections and argue that his uptake proposal conflates theoretical and practical success conditions of explications. Furthermore, I argue that Pinder’s suggested experimental procedure needs substantial revision. I end by distinguishing two kinds of explication projects, and showing how experimental philosophy can contribute to each of them.
Multi-stakeholder initiatives (MSIs) have become a vital part of the organizational landscape for corporate social responsibility. Recent debates have explored whether these initiatives represent opportunities for the “democratization” of transnational corporations, facilitating civic participation in the extension of corporate responsibility, or whether they constitute new arenas for the expansion of corporate influence and the private capture of regulatory power. In this article, we explore the political dynamics of these new governance initiatives by presenting an in-depth case study of an organization often heralded as a model MSI: the Forest Stewardship Council (FSC). An effort to address global deforestation in the wake of failed efforts to agree a multilateral convention on forests at the Rio Summit in 1992, the FSC was launched in 1993 as a non-state regulatory experiment: a transnational MSI, administering a global eco-labeling scheme for timber and forest products. We trace the scheme’s evolution over the past two decades, showing that while the FSC has successfully facilitated multi-sectoral determination of new standards for forestry, it has nevertheless failed to transform commercial forestry practices or stem the tide of tropical deforestation. Applying a neo-Gramscian analysis to the organizational evolution of the FSC, we examine how broader market forces and resource imbalances between non-governmental and market actors can serve to limit the effectiveness of MSIs in the current neo-liberal environment. This presents dilemmas for NGOs which can lead to their defection, ultimately undermining the organizational legitimacy of MSIs.
In this discussion paper, I seek to challenge Hylarie Kochiras’ recent claims on Newton’s attitude towards action at a distance, which will be presented in Section 1. In doing so, I shall include the positions of Andrew Janiak and John Henry in my discussion and present my own take on the matter. Additionally, I seek to strengthen Kochiras’ argument that Newton sought to explain the cause of gravity in terms of secondary causation. I also provide some specification of what Kochiras calls ‘Newton’s substance counting problem’. In conclusion, I suggest a historical correction. Keywords: Isaac Newton; Action at a distance; Cause of gravity; Fourth letter to Bentley.
Reasoning without experience is very slippery. A man may puzzle me by arguents [sic] … but I’le beleive my ey experience ↓my eyes.↓

Ernan McMullin once remarked that, although the “avowedly tentative form” of the Queries “marks them off from the rest of Newton’s published work,” they are “the most significant source, perhaps, for the most general categories of matter and action that informed his research.” The Queries (or Quaestiones), which Newton inserted at the very end of the third book of the Opticks or its Latin rendition, Optice, constitute that part of his optical magnum opus which he reworked and augmented the most—especially between 1704 and 1717. While the main text of the Opticks itself underwent …
Ethical issues concerning brain–computer interfaces (BCIs) have already received a considerable amount of attention. However, one particular form of BCI has not received the attention that it deserves: affective BCIs, which allow for the detection and stimulation of affective states. This paper brings the ethical issues of affective BCIs into sharper focus. It briefly reviews recent applications of affective BCIs and considers the ethical issues that arise from these applications. Ethical issues that affective BCIs share with other neurotechnologies are presented, and ethical concerns that are specific to affective BCIs are identified and discussed.
In this paper I argue against a criticism by Matthew Weiner of Grice’s thesis that cancellability is a necessary condition for conversational implicature. I argue that the purported counterexamples fail because the supposed failed cancellation in the cases Weiner presents is not meant as a cancellation but as a reinforcement of the implicature. I moreover point out that there are special situations in which the supposed cancellation may really work as a cancellation.
In this essay, I attempt to assess Henk de Regt and Dennis Dieks’ recent pragmatic and contextual account of scientific understanding on the basis of an important historical case-study: understanding in Newton’s theory of universal gravitation and Huygens’ reception of universal gravitation. It will be shown that de Regt and Dieks’ Criterion for the Intelligibility of a Theory, which stipulates that the appropriate combination of scientists’ skills and intelligibility-enhancing theoretical virtues is a condition for scientific understanding, is too strong. On the basis of this case-study, it will be shown that scientists can understand each other’s positions qualitatively and quantitatively, despite their endorsement of different worldviews and despite their differing convictions as to what counts as a proper explanation.
ABSTRACT In this article, I consider Bernard Suits’ Utopia where the denizens supposedly fill their days playing Utopian sports, with regard to the relevance of the thought experiment for understand...
In this article, I first address the ethical considerations about football and show that a meritocratic-fairness view of sports fails to capture the phenomenon of football. Fairness of result is not at centre stage in football. Football is about the drama, about the tension and the emotions it provokes. This moves us to the realm of aesthetics. I reject the idea of the aesthetics of football as disinterested aesthetic appreciation, which traditionally has been deemed central to aesthetics. Instead, I argue that we should try to develop an agon aesthetics where our aesthetic appreciation is understood as involving and being embedded in our engagement in the game. The drama of football is staged but not scripted. The aesthetics of competitions like football matches—the agon aesthetics—lies in engaging in the conflict that a competition is, while being aware that the conflict is not over ordinary-world or everyday-life issues, but is unnecessary and invented for the very purpose of having a conflict to enjoy.
We develop an extension of the familiar linear mixed logit model to allow for the direct estimation of parametric non-linear functions defined over structural parameters. Classic applications include the estimation of coefficients of utility functions to characterize risk attitudes and discounting functions to characterize impatience. There are several unexpected benefits of this extension, apart from the ability to directly estimate structural parameters of theoretical interest.
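The idea of estimating structural parameters directly — rather than reduced-form utility coefficients — can be illustrated with a minimal sketch. This is not the authors' mixed logit estimator; it is a hypothetical simulation in which a CRRA risk-aversion coefficient enters the logit choice probability non-linearly and is recovered by maximum likelihood. All names, payoff ranges, and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def crra(x, r):
    # CRRA utility u(x) = x^(1-r) / (1-r); assumes r != 1 and x > 0
    return x ** (1.0 - r) / (1.0 - r)

# Hypothetical binary choice pairs: a safe payoff vs. a (p, high, low) lottery
n = 4000
safe = rng.uniform(20, 40, n)
p = rng.uniform(0.2, 0.8, n)
high = rng.uniform(50, 80, n)
low = rng.uniform(1, 10, n)

true_r, true_mu = 0.5, 0.3  # structural risk aversion and logit noise scale

def prob_risky(r, mu):
    # Logit probability defined over expected utilities, i.e. a parametric
    # non-linear function of the structural parameter r
    eu_risky = p * crra(high, r) + (1 - p) * crra(low, r)
    eu_safe = crra(safe, r)
    return 1.0 / (1.0 + np.exp(-(eu_risky - eu_safe) / mu))

chose_risky = rng.random(n) < prob_risky(true_r, true_mu)

def neg_loglik(theta):
    # mu is optimized on the log scale to keep it positive
    r, log_mu = theta
    pr = np.clip(prob_risky(r, np.exp(log_mu)), 1e-10, 1 - 1e-10)
    return -np.sum(np.where(chose_risky, np.log(pr), np.log(1 - pr)))

res = minimize(neg_loglik, x0=[0.2, np.log(0.5)], method="Nelder-Mead")
r_hat, mu_hat = res.x[0], np.exp(res.x[1])
print(f"estimated r = {r_hat:.3f}, mu = {mu_hat:.3f}")
```

The same pattern extends naturally to mixed (random-coefficient) variants by integrating the likelihood over a distribution of r, which is where the extension discussed in the abstract would come in.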
Recently, a number of critical social theorists have argued that the analysis of social relations of unfreedom should take into account the phenomenon of self-subordination. In my article, I draw on Hegel’s theory of recognition to elucidate this phenomenon and show that recognition can be not only a means of self-realization, but also of subjugation. I develop my argument in three steps: As a first step, I reconstruct the idea of social pathologies in the tradition of Critical Theory. In the course of this reconstruction, it becomes clear that the analysis of social pathologies should focus on the binding force of recognition. As a second step, I reinterpret Hegel and show that a close reading of the relationship of lordship and bondage can help us to understand how a subject can become bound by recognition. As a third step, I make an attempt at reactualizing Hegel’s idea. Following Sartre’s analysis of anti-Semitism, I outline three stages of how subjects can gradually come to subordinate themselves and become entrapped in social relations of unfreedom such as race, class or gender.
Bayesian approaches for estimating multilevel latent variable models can be beneficial in small samples. Prior distributions can be used to overcome small sample problems, for example, when priors that increase the accuracy of estimation are chosen. This article discusses two different but not mutually exclusive approaches for specifying priors. Both approaches aim at stabilizing estimators in such a way that the Mean Squared Error (MSE) of the estimator of the between-group slope will be small. In the first approach, the MSE is decreased by specifying a slightly informative prior for the group-level variance of the predictor variable, whereas in the second approach, the decrease is achieved directly by using a slightly informative prior for the slope. Mathematical and graphical inspections suggest that both approaches can be effective for reducing the MSE in small samples, thus rendering them attractive in these situations. The article also discusses how these approaches can be implemented in Mplus.
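The second approach — a slightly informative prior placed directly on the slope — can be sketched with a toy Monte Carlo simulation. This is not the article's Mplus model; it is a drastically simplified setting (a plain group-level regression with known error variance) chosen only to show the bias–variance tradeoff by which shrinkage toward a prior mean can reduce the MSE of the slope estimator when the number of groups is small. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

J, reps = 15, 2000                 # few groups, many Monte Carlo replications
beta_true = 0.4                    # true between-group slope
prior_mean, prior_var = 0.0, 0.25  # slightly informative N(0, 0.5^2) prior

se_ml = np.empty(reps)
se_bayes = np.empty(reps)
for i in range(reps):
    x = rng.normal(0.0, 1.0, J)    # observed group-level predictor
    y = beta_true * x + rng.normal(0.0, 1.0, J)
    sxx = np.sum(x * x)
    b_ml = np.sum(x * y) / sxx     # ML / least-squares slope estimate
    # Posterior mean under the normal prior (error variance taken as known = 1):
    # a precision-weighted average of the ML estimate and the prior mean
    w = sxx / (sxx + 1.0 / prior_var)
    b_bayes = w * b_ml + (1.0 - w) * prior_mean
    se_ml[i] = (b_ml - beta_true) ** 2
    se_bayes[i] = (b_bayes - beta_true) ** 2

print(f"MSE(ML)    = {se_ml.mean():.4f}")
print(f"MSE(Bayes) = {se_bayes.mean():.4f}")
```

The shrinkage estimator trades a small bias (the prior mean 0 differs from the true slope 0.4) for a larger reduction in variance, which is exactly why such priors pay off when J is small; with many groups, w approaches 1 and the two estimators coincide.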
We exhibit a finite lattice without critical triple that cannot be embedded into the enumerable Turing degrees. Our method promises to lead to a full characterization of the finite lattices embeddable into the enumerable Turing degrees.
ABSTRACT Graeme Wood’s The Way of the Strangers gets as close as is humanly possible to an ethnography of recruiters and sympathizers of the Islamic State. Contrary to much writing on radical Islamism, Wood convincingly shows that the Islamic State’s ideas—rooted in a literalist reading of ancient Islamic sources—are central in motivating many of the movement’s followers. His accounts of individual adherents also suggest, however, that ideas are not the only factor, as certain personality traits influence who is attracted to radical Islamist movements.
Like many of their contemporaries Bernard Nieuwentijt and Pieter van Musschenbroek were baffled by the heterodox conclusions which Baruch Spinoza drew in the Ethics. As the full title of the Ethics—Ethica ordine geometrico demonstrata—indicates, these conclusions were purportedly demonstrated in a geometrical order, i.e. by means of pure mathematics. First, I highlight how Nieuwentijt tried to immunize Spinoza’s worrisome conclusions by insisting on the distinction between pure and mixed mathematics. Next, I argue that the anti-Spinozist underpinnings of Nieuwentijt’s distinction between pure and mixed mathematics resurfaced in the work of van Musschenbroek. By insisting on the distinction between pure and mixed mathematics, Nieuwentijt and van Musschenbroek argued that Spinoza abused mathematics by making claims about things that exist in rerum natura by relying on a pure mathematical approach. In addition, by insisting that mixed mathematics should be painstakingly based on mathematical ideas that correspond to nature, van Musschenbroek argued that René Descartes’ natural-philosophical project abused mathematics by introducing hypotheses, i.e. ideas, that do not correspond to nature.
We give an algorithm for deciding whether an embedding of a finite partial order [Formula: see text] into the enumeration degrees of the [Formula: see text]-sets can always be extended to an embedding of a finite partial order [Formula: see text].
Newton’s immensely famous, but tersely written, General Scholium is primarily known for its reference to the argument from design and Newton’s famous dictum “hypotheses non fingo”. In the essay at hand, I shall argue that this text served a variety of goals and try to add something new to our current knowledge of how Newton tried to accomplish them. The General Scholium highlights a cornucopia of features that were central to Newton’s natural philosophy in general: matters of experimentation, methodological issues, theological matters, matters related to the instauration of prisca sapientia, epistemological claims central to Newton’s empiricism, and, finally, metaphysical issues. For Newton these matters were closely interwoven. I shall address these matters based on a thorough study of the extant manuscript material.
In this essay, I shall take up the theme of Galileo’s notion of cause, which has already received considerable attention. I shall argue that the participants in the debate as it stands have overlooked a striking and essential feature of Galileo’s notion of cause. Galileo not only reformed natural philosophy, he also – as I shall defend – introduced a new notion of causality and integrated it in his scientific practice. Galileo’s conception of causality went hand in hand with his methodology. It is my claim that Galileo was trying to construct a new scientifically useful notion of causality. This new notion of causality is an interventionist notion.
First concerns about the use of nanosilver were raised almost a decade ago, but assessing the risks has been extremely challenging scientifically, and regulation to protect environmental and human health remains controversial. In order to understand the known risks and issues associated with the use of nanosilver, we carried out a DPSIR analysis and analysed drivers, pressures, state, impacts and potential policy responses. We found that most concerns relate to the potential development of multi-resistant bacteria and the environmental impacts of nanosilver. From the DPSIR analysis, we found that new legislation for nanomaterials in general and nanosilver-specific changes in the current European chemical, biocide and medical legislation were the optimal policy responses, along with limiting the overall use of nanosilver. To qualify the identified potential policy responses, we carried out a stakeholder analysis to explore possibilities for reaching consensus amongst stakeholders. Through the stakeholder analysis, the interests, views, power and influence of the identified stakeholders were mapped. Overall, the policy options identified in the DPSIR analysis were deemed not to be implementable, as industry and NGOs seem to have fundamentally conflicting views and interests. The combination of DPSIR and stakeholder analysis proved valuable in cases of such complexity, as the two methods compensate for each other’s limitations and open up a discussion of what can be done to reduce risks.
In this essay, I call attention to Kant’s and Whewell’s attempt to provide bridging principles between a priori principles and scientific laws. Part of Kant’s aim in the Opus postumum (ca. 1796-1803) was precisely to bridge the gap between the metaphysical foundations of natural science (on the Metaphysical Foundations of Natural Science (1786) see section 1) and physics by establishing intermediary concepts or ‘Mittelbegriffe’ (henceforth this problem is referred to as ‘the bridging-problem’). I argue that the late Kant attempted to show that the concept of ‘moving force’, an intermediary concept derived from a priori principles, could be given empirical content so that concrete scientific knowledge is arrived at. Thus, the late Kant wished not only to show that proper scientific laws are necessary a priori (as he had shown in the Metaphysical Foundations of Natural Science) but also that intermediary concepts could be derived from a priori principles which, when interpreted empirically, resulted in the specific forces as established by physics (see section 2). Of course, William Whewell never knew about Kant’s Opus postumum and his attempt to bridge the gap between the metaphysical foundations of science and physics. However, it is striking that Whewell had similar concerns about the Critique of Pure Reason and the Metaphysical Foundations of Natural Science as Kant himself. According to Whewell, the Kantian project was incomplete because it did not show how ‘modifications’ (in the sense of concretizations) of a priori principles could result in empirical laws (section 3). Next, it will be argued, by taking into account several of Whewell’s philosophical notebooks which have scarcely been studied systematically, that Whewell’s doctrine of Fundamental Ideas grew out of his dissatisfaction with the Kantian project with respect to the bridging-problem and that his own philosophical position should be seen as an attempt to bypass the bridging-problem.
In this essay, I shall show that the so-called inferential (Suárez 2003 and 2004) and interpretational (Contessa 2007) accounts of scientific representation are respectively unsatisfactory and too weak to account for scientific representation (pars destruens). Along the way, I shall also argue that the pragmatic similarity (Giere 2004 and Giere 2010) and the partial isomorphism (da Costa and French 2003 and French 2003) accounts are unable to single out scientific representation. In the pars construens, I spell out a limiting-case account which has explanatory surplus vis-à-vis the approaches which I have previously reviewed. My account offers an adequate treatment of scientific representation, or so I shall try to argue. Central to my account is the notion of a pragmatic limiting case, which will be characterized in due course.
This chapter examines the series of drastic epistemological and methodological transformations in the status of hypotheses in British natural philosophy during the seventeenth century. It explains that hypotheses played a rather marginal role in Francis Bacon's methodological thought because he believed they lacked any physical content, although they occupied centre stage in the Bacon-inspired natural philosophy program of Robert Boyle and Robert Hooke. The chapter mentions that Boyle and Hooke provided a new definition of hypothesis: something conceived of as a causally sufficient and probable explication of natural phenomena that stands in an evidential relation to the natural phenomena it serves to elucidate.
For Thomas Reid, Isaac Newton's scientific methodology in natural philosophy was a source of inspiration for philosophical methodology in general. I shall look at how Reid adapted Newton's views on methodology in natural philosophy. We shall see that Reid radicalized Newton's methodology and thereby began to pave the way for the positivist movement, whose origin is traditionally associated with the Frenchman Auguste Comte. In the Reidian adaptation of Newtonianism, we can already notice the beginnings of the anti-causal trend that would become so popular in the age of positivism.
SUMMARY In this paper I will probe into Herman Boerhaave's appropriation of Isaac Newton's natural philosophy. It will be shown that Newton's work served multiple purposes in Boerhaave's oeuvre, for he appropriated Newton's work differently in different contexts and in different episodes in his career. Three important episodes in, and contexts of, Boerhaave's appropriation of Newton's natural philosophical ideas and methods will be considered: 1710–11, the time of his often neglected lectures on the place of physics in medicine; 1715, when he delivered his most famous rectorial address; and, finally, 1731/2, in publishing his Elementa chemiae. Along the way, I will spell out the implications of Boerhaave's case for our understanding of the reception, or use, of Newton's ideas more generally.
We characterize the structure of computably categorical trees of finite height, and prove that our criterion is both necessary and sufficient. Intuitively, the characterization is easiest to express in terms of isomorphisms of (possibly infinite) trees, but in fact it is equivalent to a Σ^0_3 condition. We show that all trees which are not computably categorical have computable dimension ω. Finally, we prove that for every n ≥ 1 in ω, there exists a computable tree of finite height which is Δ^0_{n+1}-categorical but not Δ^0_n-categorical.
In this paper I deal with a neglected topic with respect to unification in Newton’s Principia. I will clarify Newton’s notion and practice of unification. In order to do so, I will use recent theories of unification as tools of analysis. I will argue, after showing that neither Kitcher’s nor Schurz’s account aptly captures Newton’s notion and practice of unification, that Salmon’s later work is a good starting point for analysing this notion and its practice in the Principia. Finally, I will supplement Salmon’s account in order to answer the question at stake. Keywords: Explanation; Isaac Newton; Principia; Unification.
In this paper an analysis of Newton’s argument for universal gravitation is provided. In the past, the complexity of the argument has not been fully appreciated. Recent authors like George E. Smith and William L. Harper have done a far better job. Nevertheless, a thorough account of the argument is still lacking. Both authors seem to stress the importance of only one methodological component. Smith stresses the procedure of approximative deductions backed up by the laws of motion. Harper stresses “systematic dependencies” between theoretical parameters and phenomena. I will argue that Newton used a variety of different inferential strategies: causal parsimony considerations, deductions, demonstrative inductions, abductions and thought-experiments. Each of these strategies is part of Newton’s famous argument.
What is a logic? Which properties are preserved by maps between logics? What is the right notion for equivalence of logics? In order to give satisfactory answers we generalize and further develop the topological approach of [4] and present the foundations of a general theory of abstract logics which is based on the abstract concept of a theory. Each abstract logic determines a topology on the set of theories. We develop a theory of logic maps and show in what way they induce (continuous, open) functions on the corresponding topological spaces. We also establish connections to well-known notions such as translations of logics and the satisfaction axiom of institutions [5]. Logic homomorphisms are maps that behave in some sense like continuous functions and preserve more topological structure than logic maps in general. We introduce the notion of a logic isomorphism as a (not necessarily bijective) function on the sets of formulas that induces a homeomorphism between the respective topological spaces and gives rise to an equivalence relation on abstract logics. Therefore, we propose logic isomorphisms as an adequate and precise notion for equivalence of logics. Finally, we compare this concept with another recent proposal presented in [2].
In this paper we study an alternative approach to the concept of abstract logic and to connectives in abstract logics. The notion of abstract logic was introduced by Brown and Suszko—nevertheless, similar concepts have been investigated by various authors. Considering abstract logics as intersection structures, we extend several notions to their κ-versions, introduce a hierarchy of κ-prime theories, which is important for our treatment of infinite connectives, and study different concepts of κ-compactness. We are particularly interested in non-topped intersection structures viewed as semi-lattices with a minimal meet-dense subset, i.e., with a minimal generator set. We study a chain condition which is sufficient for a minimal generator set, implies compactness of the logic, and in regular logics is equivalent to compactness of the consequence relation together with the existence of an inconsistent set, where κ is the cofinality of the cardinality of the logic. Some of these results are known in a similar form in the context of closure spaces; we give extensions to intersection structures and to big cardinals, presenting new proofs based on set-theoretical tools. The existence of a minimal generator set is crucial for our way to define connectives. Although our method can be extended to further non-classical connectives, we concentrate here on intuitionistic and infinite ones. Our approach leads us to the concept of the set of complete theories which is stable under all considered connectives and gives rise to the definition of the topological space of the logic. Topological representations of abstract logics by means of this space remain to be further investigated.
In this essay I argue against I. Bernard Cohen's influential account of Newton's methodology in the Principia: the 'Newtonian Style'. The crux of Cohen's account is the successive adaptation of 'mental constructs' through comparisons with nature. In Cohen's view there is a direct dynamic between the mental constructs and physical systems. I argue that his account is essentially hypothetico-deductive, which is at odds with Newton's rejection of the hypothetico-deductive method. An adequate account of Newton's methodology needs to show how Newton's method proceeds differently from the hypothetico-deductive method. In the constructive part I argue for my own account, which is model based: it focuses on how Newton constructed his models in Book I of the Principia. I will show that Newton understood Book I as an exercise in determining the mathematical consequences of certain force functions. The growing complexity of Newton's models is a result of exploring increasingly complex force functions (intra-theoretical dynamics) rather than of successive comparison with nature (extra-theoretical dynamics). Nature did not enter the scene here. This intra-theoretical dynamics is related to the 'autonomy of the models'.
We present $\in_I$-Logic (Epsilon-I-Logic), a non-Fregean intuitionistic logic with a truth predicate and a falsity predicate as intuitionistic negation. $\in_I$ is an extension and intuitionistic generalization of the classical logic $\in_T$ (without quantifiers) designed by Sträter as a theory of truth with propositional self-reference. The intensional semantics of $\in_T$ offers a new solution to semantic paradoxes. In the present paper we introduce an intuitionistic semantics and study some semantic notions in this broader context. We also enrich the quantifier-free language by the new connective < that expresses reference between statements and yields a finer characterization of intensional models. Our results in the intuitionistic setting lead to a clear distinction between the notion of denotation of a sentence and the here-proposed notion of extension of a sentence (both concepts are equivalent in the classical context). We generalize the Fregean Axiom to an intuitionistic version not valid in $\in_I$. A main result of the paper is the development of several model constructions. We construct intensional models and present a method for the construction of standard models which contain specific (self-)referential propositions.
We prove that an enumerable degree is contiguous iff it is locally distributive. This settles a twenty-year-old question going back to Ladner and Sasso. We also prove that strong contiguity and contiguity coincide, settling a question of the first author, and prove that no $m$-topped degree is contiguous, settling a question of the first author and Carl Jockusch [11]. Finally, we prove some results concerning local distributivity and relativized weak truth table reducibility.
In the paper, I present Christopher Gauker's critique of the view that we talk to each other as a way to make ourselves understood (the received view of linguistic communication) and his alternative theory. I show that both his critique and his alternative fail, and defend the received view of linguistic communication.
Mathematical and philosophical Newton. Journal article, Metascience, pp. 1-10. DOI 10.1007/s11016-010-9520-2. Author: Steffen Ducheyne, Centre for Logic and Philosophy of Science, Ghent University, Blandijnberg 2, 9000 Ghent, Belgium. Online ISSN 1467-9981; Print ISSN 0815-0796.
In this essay the authors explore the nature of efficient causal explanation in Newton's Principia and Opticks. It is argued that: (1) In the dynamical explanations of the Principia, Newton treats the phenomena under study as cases of Hall's second kind of atypical causation; the underlying concept of causation is therefore a purely interventionist one. (2) In the descriptions of his optical experiments, Newton treats the phenomena under study as cases of Hall's typical causation; the underlying concept of causation is therefore a mixed interventionist/mechanicist one.
Currently, electronic agents are being designed and implemented that, unprecedentedly, will be capable of performing legally binding actions. These advances necessitate a thorough treatment of their legal consequences. In our paper, we first demonstrate that electronic agents behave in a way structurally similar to human agents. Then we study how declarations of intention stated by an electronic agent are related to ordinary declarations of intention given by natural persons or legal entities, and also how the actions of electronic agents in this respect have to be classified under German law. We discuss four different approaches to classifying agent declarations. As one of these, we propose the concept of an electronic person (i.e., agents with limited liability), enrolment of agents into an agent register, and agent liability funds as means to serve the needs of all contracting parties.
In this essay, my aim is twofold: to clarify how the late Mill conceived of the certainty of inductive generalizations and to offer a systematic clarification of the limited domain of application of Mill's Canons of Induction. I shall argue that Mill's views on the certainty of knowledge changed over time and that this change was accompanied by a new view on the certainty of the inductive results yielded by the Canons of Induction. The key message of the later editions of A System of Logic as conceived by the late Mill was no longer that by the Canons of Induction we can establish scientific certainty and true causes, but rather that the Canons are useful in establishing causal laws in a provisional way.