More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent settling the content of the code, but many organisations get stuck in the challenge of implementing and maintaining it. The code then turns into nothing more than the notorious "paper in the drawer", without achieving its aims. The challenge of implementation is to utilize the dynamics which have emerged from the formulation of the code. This will support a continuous process of reflection on the central values and standards contained in the code. This paper presents an assessment method, based on the EFQM model, which is intended to support this implementation process.
An ethical evaluation of employee participation in decision-making has to be based, obviously, on a theory of ethics, but also on an understanding of the role and impact of participation in the organisation. This paper aims to sketch different organisational paradigms and to analyse their normative prescriptions with respect to participation. It will appear that the recognition of the social nature of man and the acknowledgement of the existence of differentiated goals could enhance the positive outcomes of participation. Next, we examine to what extent a systemic approach and a stakeholder concept of the firm can meet these requirements. The necessity of a global approach to participation is emphasised. Consequently, we suggest in the last section of this paper that participation should be extended to the definition of a shared value horizon.
In a recent contribution to this journal, Etzioni (1998) has introduced a “communitarian note on stakeholder theory” based on a principle of fairness. While we do not challenge the principle of fairness itself, we claim that when this principle is applied only to those who invest in the corporation, it cannot serve as the ground for an ethical stakeholder theory. A focus on low-skilled workers as a stakeholder group will help us to illustrate this claim.
The elucidations and regimentations of grounding offered in the literature standardly take it to be a necessary connection. In particular, authors often assert, or at least assume, that if some facts ground another fact, then the obtaining of the former necessitates the latter; and moreover, that grounding is an internal relation, in the sense of being necessitated by the existence of the relata. In this article, I challenge the necessitarian orthodoxy about grounding by offering two prima facie counterexamples. First, some physical facts may ground a certain phenomenal fact without necessitating it; and they may co-exist with the latter without grounding it. Second, some instantiations of categorical properties may ground the instantiation of a dispositional one without necessitating it; and they may co-exist without grounding it. After arguing that these may be genuine counterexamples, I ask whether there are modal constraints on grounding that are not threatened by them. I propose two: that grounding supervenes on what facts there are, and that every grounded fact supervenes on what grounds there are. Finally, I attempt to provide a rigorous formulation of the latter supervenience claim and discuss some technical questions that arise if we allow descending grounding chains of transfinite length.
The emerging consensus in the philosophy of cognition is that cognition is situated, i.e., dependent upon or co-constituted by the body, the environment, and/or the embodied interaction with it. But what about emotions? If the brain alone cannot do much thinking, can the brain alone do some emoting? If not, what else is needed? Do (some) emotions (sometimes) cross an individual's boundary? If so, what kinds of supra-individual systems can be bearers of affective states, and why? And does that make emotions 'embedded' or 'extended' in the sense cognition is said to be embedded and extended? Section 2 shows why it is important to understand in which sense body, environment, and our embodied interaction with the world contribute to our affective life. Section 3 introduces some key concepts of the debate about situated cognition. Section 4 draws attention to an important disanalogy between cognition and emotion with regard to the role of the body. Section 5 shows under which conditions a contribution by the environment results in non-trivial cases of 'embedded' emotions. Section 6 is concerned with affective phenomena that seem to cross the organismic boundaries of an individual, in particular with the idea that emotions are 'extended' or 'distributed'.
I suggest a way of extending Stalnaker’s account of assertion to allow for centered content. In formulating his account, Stalnaker takes the content of assertion to be uncentered propositions: entities that are evaluated for truth at a possible world. I argue that the content of assertion is sometimes centered: the content is evaluated for truth at something within a possible world. I consider Andy Egan’s proposal for extending Stalnaker’s account to allow for assertions with centered content. I argue that Egan’s account does not succeed. Instead, I propose an account on which the contents of assertion are identified with sets of multi-centered worlds. I argue that such a view not only provides a plausible account of how assertions can have centered content, but also preserves Stalnaker’s original insight that successful assertion involves the reduction of shared possibilities.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal–mechanical explanation. Contents: 1 Introduction; 2 What a Great Many Phenomena Bayesian Decision Theory Can Model; 3 The Case of Information Integration; 4 How Do Bayesian Models Unify?; 5 Bayesian Unification: What Constraints Are There on Mechanistic Explanation? (5.1 Unification constrains mechanism discovery; 5.2 Unification constrains the identification of relevant mechanistic factors; 5.3 Unification constrains confirmation of competitive mechanistic models); 6 Conclusion; Appendix.
We define a notion of difference-making for partial grounds of a fact in rough analogy to existing notions of difference-making for causes of an event. Using orthodox assumptions about ground, we show that it induces a non-trivial division with examples of partial grounds on both sides. We then demonstrate the theoretical fruitfulness of the notion by applying it to the analysis of a certain kind of putative counter-example to the transitivity of ground recently described by Jonathan Schaffer. First, we show that our conceptual apparatus of difference-making enables us to give a much clearer description than Schaffer does of what makes the relevant instances of transitivity appear problematic. Second, we suggest that difference-making is best seen as a mark of good grounding-based explanations rather than a necessary condition on grounding, and argue that this enables us to deal with the counter-example in a satisfactory way. Along the way, we show that Schaffer's own proposal for salvaging a form of transitivity by moving to a contrastive conception of ground is unsuccessful. We conclude by sketching some natural strategies for extending our proposal to a more comprehensive account of grounding-based explanations.
This comprehensive new book introduces the core history of phenomenology and assesses its relevance to contemporary psychology, philosophy of mind, and cognitive science. From critiques of artificial intelligence research programs to ongoing work on embodiment and enactivism, the authors trace how phenomenology has produced a valuable framework for analyzing cognition and perception, whose impact on contemporary psychological and scientific research and on philosophical debates continues to grow. The first part of _An Introduction to Phenomenology_ is an extended overview of the history and development of phenomenology, looking at its key thinkers, focusing particularly on Husserl, Heidegger and Merleau-Ponty, as well as its cultural and intellectual precursors. In the second half Chemero and Käufer turn their attention to the contemporary interpretations and uses of phenomenology in cognitive science, showing that phenomenology is a living source of inspiration in contemporary interdisciplinary studies of the mind. Käufer and Chemero have written a clear, jargon-free account of phenomenology, providing abundant examples and anecdotes to illustrate and to entertain. This book is an ideal introduction to phenomenology and cognitive science for the uninitiated, as well as for philosophy and psychology students keen to deepen their knowledge.
Metaphysical grounding is standardly taken to be irreflexive: nothing grounds itself. Kit Fine has presented some puzzles that appear to contradict this principle. I construct a particularly simple variant of those puzzles that is independent of several of the assumptions required by Fine, instead employing quantification into sentence position. Various possible responses to Fine's puzzles thus turn out to apply only in a restricted range of cases.
Analytic philosophy is once again in a methodological frame of mind. Nowhere is this more evident than in metaphysics, whose practitioners and historians are actively reflecting on the nature of ontological questions, the status of their answers, and the relevance of contributions both from other areas within philosophy and beyond. Such reflections are hardly new: the debate between Willard van Orman Quine and Rudolf Carnap about how to understand and resolve ontological questions is widely seen as a turning point in twentieth-century analytic philosophy. And indeed, this volume is occasioned by the fact that the deflationary approach to metaphysics advocated by Carnap in that debate is once again attracting considerable interest and support. Eleven original essays by many of today's leading voices in metametaphysics aim to deepen our understanding of Carnap's contributions to metaontology and to explore how this legacy might be mined for insights into the contemporary debate.
There is currently disagreement about whether the phenomenon of first-person, or de se, thought motivates a move towards special kinds of contents. Some take the conclusion that traditional propositions are unable to serve as the content of de se belief to be old news, successfully argued for in a number of influential works several decades ago. Recently, some philosophers have challenged the view that there exist uniquely de se contents, claiming that most of the philosophical community has been under the grip of an attractive but unmotivated myth. At the very least, this latter group has brought into question the arguments in favor of positing special kinds of content for de se belief; I think they have successfully shown that these arguments are not as conclusive, or fully articulated, as many have taken them to be. In this paper I will address these challenges directly and I will present and defend an argument for the conclusion that the phenomenon of de se thought does indeed motivate the move to a special kind of content, content that is uniquely de se. First, I characterize a notion of de se belief that is neutral with respect to friends and foes of uniquely de se content. I then argue for a determination thesis relating de se belief to belief content: that there is no difference in de se belief without a difference in belief content. I argue that various proposals for rejecting this determination thesis are unsuccessful. In the last part of the paper, I employ this determination thesis to argue for the existence of a type of belief content that is uniquely de se.
The goal of this paper is to examine moods, mostly in comparison to emotions. Nearly all of the features that allegedly distinguish moods from emotions are, however, disputed. In a first section I comment on duration, intentionality, and cause in more detail, and develop intentionality as the most promising distinguishing characteristic. In a second section I consider the huge variety of moods, ranging from shallow, environmentally triggered transient moods to deep existential moods that last much longer. I explore what their sources are, and how they impact one another, other affective processes, and our being in the world. I follow several eminent emotion researchers and try to carve out their insights, many of which seem mutually exclusive. As it turns out, most of them do not, in fact, exclude each other, but contribute to a layered picture of moods that fits well in between emotions and personality traits. Finally, I briefly discuss what we can do with our moods.
In his 2010 paper ‘Grounding and Truth-Functions’, Fabrice Correia has developed the first and so far only proposal for a logic of ground based on a worldly conception of facts. In this paper, we show that the logic allows the derivation of implausible grounding claims. We then generalize these results and draw some conclusions concerning the structural features of ground and its associated notion of relevance, which has so far not received the attention it deserves.
Simulation techniques, especially those implemented on a computer, are frequently employed in the natural as well as the social sciences with considerable success. There is mounting evidence that the "model-building era" (J. Niehans) that long dominated the theoretical activities of the sciences is about to be succeeded, or at least lastingly supplemented, by the "simulation era". But what exactly are models? What is a simulation, and what is the difference and the relation between a model and a simulation? These are some of the questions addressed in this article. I maintain that the most significant feature of a simulation is that it allows scientists to imitate one process by another process. "Process" here refers solely to a temporal sequence of states of a system. Given the observation that processes are dealt with by all sorts of scientists, it is apparent that simulations prove to be a powerful tool acknowledged across disciplines. Accordingly, simulations are well suited for investigating the various research strategies of different sciences more carefully. To this end, I focus on the function of simulations in the research process. Finally, a somewhat detailed case study from nuclear physics is presented which, in my view, illustrates elements of a typical simulation in physics.
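The abstract's core notion, imitating one process (a temporal sequence of states of a system) by another process, can be illustrated with a short sketch. This is my own toy example, not taken from the paper: a discrete computer iteration standing in for continuous exponential decay, with made-up rate and step-size values.

```python
# Toy illustration of simulation as process imitation: a discrete Euler
# iteration (one process) imitates continuous exponential decay
# N(t) = N0 * exp(-rate * t) (another process). Parameters are assumed.

def simulate_decay(n0, rate, dt, steps):
    """Return the temporal sequence of states of a decaying quantity."""
    states = [n0]
    for _ in range(steps):
        states.append(states[-1] * (1 - rate * dt))
    return states

trajectory = simulate_decay(1000.0, 0.1, 0.5, 10)
# trajectory[0] is 1000.0; each later state is 5% smaller than the last
```

The point of the sketch is only that the computer process and the target process share the same state-sequence structure, which is what, on the paper's view, makes the one an imitation of the other.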
Jan Sprenger and Stephan Hartmann offer a fresh approach to central topics in philosophy of science, including causation, explanation, evidence, and scientific models. Their Bayesian approach uses the concept of degrees of belief to explain and to elucidate manifold aspects of scientific reasoning.
The view known as animalism asserts that we are human animals—that each of us is an instance of the Homo sapiens species. The standard argument for this view is known as the thinking animal argument. But this argument has recently come under attack. So, here, a new argument for animalism is introduced. The animal ancestors argument illustrates how the case for animalism can be seen to piggyback on the credibility of evolutionary theory. Two objections are then considered and answered.
Bayesian epistemology addresses epistemological problems with the help of the mathematical theory of probability. It turns out that the probability calculus is especially suited to represent degrees of belief (credences) and to deal with questions of belief change, confirmation, evidence, justification, and coherence. Compared to the informal discussions in traditional epistemology, Bayesian epistemology allows for a more precise and fine-grained analysis which takes the gradual aspects of these central epistemological notions into account. Bayesian epistemology therefore complements traditional epistemology; it does not replace it or aim at replacing it.
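As a minimal illustration of the gradual, probabilistic analysis the abstract describes, here is a sketch of Bayesian conditionalization, the basic update rule of Bayesian epistemology. The particular numbers are illustrative assumptions, not from the text.

```python
# Sketch of Bayesian conditionalization for a hypothesis H and its
# negation. Priors and likelihoods below are illustrative assumptions.

def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H|E) by Bayes' theorem: P(E|H)P(H) / P(E)."""
    p_evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / p_evidence

# Credence 0.5 in H; the evidence is twice as likely under H as under not-H:
updated = posterior(0.5, 0.8, 0.4)  # -> 2/3, so E confirms H
```

The rise from 0.5 to 2/3 is exactly the kind of fine-grained, degree-valued change in belief that informal talk of "confirmation" leaves unquantified.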
The concept of supervenience and a regimented concept of grounding are often taken to provide rival explications of pre-theoretical concepts of dependence and determination. Friends of grounding typically point out that supervenience claims do not entail corresponding grounding claims. Every fact supervenes on itself, but is not grounded in itself, and the fact that a thing exists supervenes on the fact that its singleton exists, but is not grounded in it. Common lore has it, though, that grounding claims do entail corresponding supervenience claims. In this article, I show that this assumption is problematic. On one way of understanding it, the corresponding supervenience claim is just an entailment claim under a different name. On another way of understanding it, the corresponding claim is a distinctive supervenience claim, but its specification gives rise to what I call the "reference type problem": to associate the classes of facts that are the relata of grounding with the types of facts that are the relata of supervenience. However it is understood, supervenience rules out prima facie possibilities: alien realizers, blockers, heterogeneous realizers, floaters, and heterogeneous blockers. Instead of being rival explications of one and the same pre-theoretical concept, grounding and supervenience may be complementary concepts capturing different aspects of determination and dependence.
This paper focuses on the question of how to resolve disagreement and uses the Lehrer-Wagner model as a formal tool for investigating consensual decision-making. The main result consists in a general definition of when agents treat each other as epistemic peers (Kelly 2005; Elga 2007), and a theorem vindicating the “equal weight view” to resolve disagreement among epistemic peers. We apply our findings to an analysis of the impact of social network structures on group deliberation processes, and we demonstrate their stability with the help of numerical simulations.
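The Lehrer-Wagner model mentioned above can be sketched as iterated weighted averaging of opinions under a fixed row-stochastic respect-weight matrix. The matrix and initial opinions below are illustrative assumptions; note that with equal weights the model returns the straight average, which is how it connects to the "equal weight view" the paper discusses.

```python
# Hedged sketch of the Lehrer-Wagner consensus model: opinions are
# repeatedly averaged under a row-stochastic weight matrix W.
# W and the initial opinions are illustrative assumptions.

def lehrer_wagner(weights, opinions, iterations=200):
    """Iterate x <- W x; for a positive row-stochastic W this converges
    to a single consensus value shared by all agents."""
    for _ in range(iterations):
        opinions = [sum(w * x for w, x in zip(row, opinions))
                    for row in weights]
    return opinions

W = [[0.6, 0.2, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.2, 0.6]]
consensus = lehrer_wagner(W, [0.9, 0.5, 0.1])
# all three entries agree (to numerical precision) on one value

# With equal weights, one round already yields the straight average,
# the aggregation recommended by the "equal weight view":
equal = lehrer_wagner([[1/3] * 3] * 3, [0.9, 0.5, 0.1])  # all near 0.5
```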
In the past decade well-designed research studies have shown that the practice of collaborative philosophical inquiry in schools can have marked cognitive and social benefits. Student academic performance improves, and so too does the social dimension of schooling. These findings are timely, as many countries in Asia and the Pacific are now contemplating introducing Philosophy into their curricula. This paper gives a brief history of collaborative philosophical inquiry before surveying the evidence as to its effectiveness. The evidence is canvassed under two categories: schooling and thinking skills; and schooling, socialisation and values. In both categories there is clear evidence that even short-term teaching of collaborative philosophical inquiry has marked positive effects on students. The paper concludes with suggestions for further research and a final claim that the presently-available research evidence is strong enough to warrant implementing collaborative philosophical inquiry as part of a long-term policy.
Fundamentality plays a pivotal role in discussions of ontology, supervenience, possibility, and other key topics in metaphysics. However, there are two different ways of characterising the fundamental: as that which is not grounded, and as that which is the ground of everything else. I show that whether these two characterisations pick out the same property turns on a principle—which I call “Dichotomy”—that is of independent interest in the theory of ground: that everything is either fully grounded or not even partially grounded. I then argue that Dichotomy fails: some facts have partial grounds that cannot be complemented to a full ground. Rejecting Dichotomy opens the door to recognising a bifurcation in our notion of fundamentality. I sketch some of the far-reaching metaphysical consequences this might have, with reference to big-picture views such as Humeanism. Since Dichotomy is entailed by the standard account of partial ground, according to which partial grounds are subpluralities of full grounds, a non-standard account is needed. In a technical “Appendix”, I show that truthmaker semantics furnishes such an account, and identify a semantic condition that corresponds to Dichotomy.
Causal queries about singular cases are ubiquitous, yet the question of how we assess whether a particular outcome was actually caused by a specific potential cause turns out to be difficult to answer. Relying on the causal power framework, Cheng and Novick proposed a model of causal attribution intended to help answer this question. We challenge this model, both conceptually and empirically. We argue that the central problem of this model is that it treats causal powers that are probabilistically sufficient to generate the effect on a particular occasion as actual causes of the effect, and thus neglects that sufficient causal powers can be preempted in their efficacy. Also, the model does not take into account that reasoners incorporate uncertainty about the underlying general causal structure and strength of causes when making causal inferences. We propose a new measure of causal attribution and embed it into the structure induction model of singular causation. Two experiments support the model.
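The causal power framework the abstract relies on centres on Cheng's generative causal power, which the following sketch computes. The probabilities are illustrative assumptions, and this is not the paper's new attribution measure, only the background quantity it builds on.

```python
# Sketch of Cheng's generative causal power, the core quantity of the
# causal power framework. Probabilities here are illustrative assumptions.

def causal_power(p_e_given_c, p_e_given_not_c):
    """w_c = (P(e|c) - P(e|not c)) / (1 - P(e|not c)): the probability
    that c produces e, corrected for e's occurrence from other causes."""
    return (p_e_given_c - p_e_given_not_c) / (1 - p_e_given_not_c)

w = causal_power(0.9, 0.5)  # -> 0.8
```

The paper's objection can be read off the formula: a high w_c says that c is probabilistically potent, but it says nothing about whether c's power was preempted on the particular occasion, which is why potency alone cannot settle singular attribution.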
Effective field theories have been a very popular tool in quantum physics for almost two decades. And there are good reasons for this. I will argue that effective field theories share many of the advantages of both fundamental theories and phenomenological models, while avoiding their respective shortcomings. They are, for example, flexible enough to cover a wide range of phenomena, and concrete enough to provide a detailed story of the specific mechanisms at work at a given energy scale. So will all of physics eventually converge on effective field theories? This paper argues that good scientific research can be characterised by a fruitful interaction between fundamental theories, phenomenological models and effective field theories. All of them have their appropriate functions in the research process, and all of them are indispensable. They complement each other and hang together in a coherent way which I shall characterise in some detail. To illustrate all this I will present a case study from nuclear and particle physics. The resulting view about scientific theorising is inherently pluralistic, and has implications for the debates about reductionism and scientific explanation.
Several theories of emergence will be distinguished. In particular, these are synchronic, diachronic, and weak versions of emergence. While the weaker theories are compatible with property reductionism, synchronic emergentism and strong versions of diachronic emergentism are not. Synchronic emergentism is of particular interest for the discussion of downward causation. For such a theory, a system's property is taken to be emergent if it is irreducible, i.e., if it is not reductively explainable. Furthermore, we have to distinguish two different types of irreducibility with quite different consequences: If, on the one hand, a system's property is irreducible because of the irreducibility of the system's parts' behavior on which the property supervenes, we seem to have a case of "downward causation". This kind of downward causation does not violate the principle of the causal closure of the physical domain. If, on the other hand, a systemic property is irreducible because it is not exhaustively analyzable in terms of its causal role, downward causation is not implied. Rather, it is dubitable how unanalyzable properties might play any causal role at all. Thus, epiphenomenalism seems to be implied. The failure to keep apart the two kinds of irreducibility has muddled recent debate about the emergence of properties considerably.
Among the questions to be raised under the heading of “personal identity” are these: “What are we?” (fundamental nature question) and “Under what conditions do we persist through time?” (persistence question). Against the dominant neo-Lockean approach to these questions, the view known as animalism answers that each of us is an organism of the species Homo sapiens and that the conditions of our persistence are those of animals. Beyond describing the content and historical background of animalism and its rivals, this entry explores some of the arguments for and objections to this controversial account of our nature and persistence.
Science strives for coherence. For example, the findings from climate science form a highly coherent body of knowledge that is supported by many independent lines of evidence: greenhouse gas emissions from human economic activities are causing the global climate to warm and unless GHG emissions are drastically reduced in the near future, the risks from climate change will continue to grow and major adverse consequences will become unavoidable. People who oppose this scientific body of knowledge because the implications of cutting GHG emissions—such as regulation or increased taxation—threaten their worldview or livelihood cannot provide an alternative view that is coherent by the standards of conventional scientific thinking. Instead, we suggest that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions oppose whatever inconvenient finding they are confronting in piecemeal fashion, rather than systematically, and without considering the implications of this rejection for the rest of the relevant scientific theory and findings. Hence, claims that the globe “is cooling” can coexist with claims that the “observed warming is natural” and that “the human influence does not matter because warming is good for us.” Coherence between these mutually contradictory opinions can only be achieved at a highly abstract level, namely that “something must be wrong” with the scientific evidence in order to justify a political position against climate change mitigation. This high-level coherence accompanied by contradictory subordinate propositions is a known attribute of conspiracist ideation, and conspiracism may be implicated when people reject well-established scientific propositions.
Fundamental theories are hard to come by. But even if we had them, they would be too complicated to apply. Quantum chromodynamics is a case in point. This theory is supposed to govern all strong interactions, but it is extremely hard to apply and test at energies where protons, neutrons and ions are the effective degrees of freedom. Instead, scientists typically use highly idealized models such as the MIT Bag Model or the Nambu–Jona-Lasinio Model to account for phenomena in this domain, to explain them and to gain understanding. Based on these models, which typically isolate a single feature of QCD and disregard many others, scientists attempt to get a better understanding of the physics of strong interactions. But does this practice make sense? Is it justified to use these models for the purposes at hand? Interestingly, these models do not even describe the mass spectrum of protons, neutrons and pions and their lowest-lying excitations well, despite several adjustable parameters. And yet, the models are heavily used. I'll argue that a qualitative story, which establishes an explanatory link between the fundamental theory and a model, plays an important role in model acceptance in these cases.
The concept of emergence is widely used in both the philosophy of mind and in cognitive science. In the philosophy of mind it serves to refer to seemingly irreducible phenomena; in cognitive science it is often used to refer to phenomena not explicitly programmed. There is no unique concept of emergence available that serves both purposes.
The aggregation of consistent individual judgments on logically interconnected propositions into a collective judgment on those propositions has recently drawn much attention. Seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure an equally consistent collective conclusion. The literature on judgment aggregation refers to that problem as the discursive dilemma. In this paper, we argue that many groups want not only to reach a factually right conclusion, but also to correctly evaluate the reasons for that conclusion. In other words, we address the problem of tracking the true situation instead of merely selecting the right outcome. We set up a probabilistic model analogous to Bovens and Rabinowicz (2006) and compare several aggregation procedures by means of theoretical results, numerical simulations and practical considerations. Among them are the premise-based, the situation-based and the distance-based procedure. Our findings confirm the conjecture in Hartmann, Pigozzi and Sprenger (2008) that the premise-based procedure is a crude, but reliable and sometimes even optimal form of judgment aggregation.
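The discursive dilemma described above can be reproduced with a three-judge toy profile: propositionwise majority voting on two premises and their conjunction yields a collectively inconsistent judgment set, while the premise-based procedure does not. The profile is a standard textbook example, not taken from the paper's probabilistic model.

```python
# Toy reconstruction of the discursive dilemma: three judges vote on
# premises p and q; the conclusion is the conjunction p-and-q.
# Each individual judge is consistent, yet the majorities are not.

def majority(votes):
    """True iff a strict majority of the votes is True."""
    return sum(votes) > len(votes) / 2

profile = [(True, True), (True, False), (False, True)]  # judges' (p, q)

maj_p = majority([p for p, q in profile])                  # True  (2 of 3)
maj_q = majority([q for p, q in profile])                  # True  (2 of 3)
maj_conclusion = majority([p and q for p, q in profile])   # False (1 of 3)

premise_based = maj_p and maj_q                 # premise-based verdict: True
inconsistent = premise_based != maj_conclusion  # the dilemma: True
```

Propositionwise majority accepts p, accepts q, yet rejects p-and-q; the premise-based procedure restores consistency by letting the premise majorities dictate the conclusion, which is the procedure whose reliability the paper assesses.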
Theoretical models are an important tool for many aspects of scientific activity. They are used, among other things, to structure data, to apply theories or even to construct new theories. But what exactly is a model? It turns out that there is no proper definition of the term "model" that covers all these aspects. Thus, I restrict myself here to evaluating the function of models in the research process, while using "model" in the loose way physicists do. To this end, I distinguish four kinds of models. These are (1) models as special theories, (2) models as a substitute for a theory, (3) toy models and (4) developmental models. I argue that models of types (3) and (4) are particularly useful in the process of theory construction. This will be demonstrated in an extended case study from High-Energy Physics.
We propose an action-oriented understanding of emotion. Emotions are modifications of a basic form of goal-oriented striving characteristic of human life. They are appetitive orientations: pursuits of the good, avoidances of the bad. Thus, emotions are not truly distinct from, let alone opposed to, actions -- as erroneously suggested by the classical understanding of emotions as 'passions'. In the present paper, we will outline and defend this broadly enactive approach and motivate its main claims. Our proposal gains plausibility from a literature- and interview-based investigation of emotional changes characteristic of clinical depression. Much narrative evidence from patient reports points towards the conclusion that many of those changes might result from a catastrophic alteration of the basic form of goal-pursuit at the root of human emotionality. The experience of profound depression could in this respect be a kind of inverted image of non-pathological emotionality -- a highly unnatural passivity, giving rise to a profound -- and quite horrifying -- sense of incapacity.
Recently, some philosophers have argued that we should take quantification of any order to be a legitimate and irreducible, sui generis kind of quantification. In particular, they hold that a semantic theory for higher-order quantification must itself be couched in higher-order terms. Øystein Linnebo has criticized such views on the grounds that they are committed to general claims about the semantic values of expressions that are by their own lights inexpressible. I show that Linnebo's objection rests on the assumption of a notion of semantic value or contribution which both applies to expressions of any order, and picks out, for each expression, an extra-linguistic correlate of that expression. I go on to argue that higher-orderists can plausibly reject this assumption, by means of a hierarchy of notions they can use to describe the extra-linguistic correlates of expressions of different orders.
Many historians of philosophy, with all their intended praise, let the philosophers speak mere nonsense. They do not guess the purpose of the philosophers.… They cannot see beyond what the philosophers actually said, to what they really meant to say.

Mou Zongsan (1909–1995) is one of the key figures of contemporary New Confucianism (當代新儒家) who to this day remains largely unknown and grossly understudied in the West. This neglect by the Western academy contrasts sharply with the ever-growing output of literature by Chinese and Taiwanese scholars, in which Mou Zongsan emerges as one of the most discussed and most controversial Chinese philosophers of the twentieth century. Given this unfortunate East-West divide—as ...
This article focuses on existential feelings. To begin with, it depicts how they differ from other affective phenomena and what type of intentionality they manifest. Furthermore, a detailed analysis shows that existential feelings can be subdivided, first, into elementary and nonelementary varieties, and second, into three foci of primary relatedness: oneself, the social environment, and the world as such. Finally, five strategies of emotion regulation are examined with respect to their applicability to existential feelings. In the case of harmful existential feelings, it turns out that only one of them, attentional deployment, seems fitting.
In 2015, scientists called for a partial ban on genome editing in human germline cells. This call was a response to the rapid development of the CRISPR–Cas9 system, a molecular tool that allows researchers to modify genomic DNA in living organisms with high precision and ease of use. Importantly, the ban was meant to be a trust-building exercise that promises a 'prudent' way forward. The goal of this paper is to analyse whether the ban can deliver on this promise. To do so, the focus will be put on the precedent on which the current ban is modelled, namely the Asilomar ban on recombinant DNA technology. The analysis of this case will show that the Asilomar ban was successful because of a specific two-step containment strategy it employed, and that this two-step approach is also key to making the current ban work. It will be argued, however, that the Asilomar strategy cannot be transferred to human genome editing and that the current ban therefore fails to deliver on its promise. The paper will close with a reflection on the reasons for this failure and on what can be learned from it about the regulation of novel molecular tools.
Much research has recently been done on the topic of ground, and in particular on the logic of ground. According to a broad consensus in that debate, ground is hyperintensional in the sense that even logically equivalent truths may differ with respect to what grounds them and what they ground. This makes pressing the question of what we may take to be the ground-theoretic content of a true statement, i.e. that aspect of the statement's overall content to which ground is sensitive. I propose a novel answer to this question, namely that ground tracks how, rather than just by what, a statement is made true. I develop that answer in the form of a formal theory of ground-theoretic content and show how the resulting framework may be used to articulate plausible theories of ground, including in particular a popular account of the grounds of truth-functionally complex truths that has proved difficult to accommodate on alternative views of content.
In this paper I consider two strategies for providing tenseless truth-conditions for tensed sentences: the token-reflexive theory and the date theory. Both theories have faced a number of objections by prominent A-theorists such as Quentin Smith and William Lane Craig. Traditionally, these two theories have been viewed as rival methods for providing truth-conditions for tensed sentences. I argue that the debate over whether the token-reflexive theory or the date theory is true has arisen from a failure to distinguish between conditions for the truth of tensed tokens and conditions for the truth of propositions expressed by tensed tokens. I demonstrate that there is a true formulation of the token-reflexive theory that provides necessary and sufficient conditions for the truth of tensed tokens, and there is a true formulation of the date theory that provides necessary and sufficient conditions for the truth of propositions expressed by tensed tokens. I argue that once the views are properly formulated, the A-theorist's objections fail to make their mark. However, I conclude by claiming that even though there is a true formulation of the token-reflexive theory and a true formulation of the date theory, the New B-theory nonetheless fails to provide a complete account of the truth and falsity of tensed sentences.