Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos. Findings: Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible. Originality/value: Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.
Perceptual theories of emotion purport to avoid the problems of traditional cognitivism and noncognitivism by modelling emotion on perception, which shares the most conspicuous dimensions of emotion: intentionality and phenomenality. In this paper, I shall reconstruct and discuss four key arguments that perceptual theorists have presented in order to show that emotion is a kind of perception, or that there are close analogies between emotion and perception. These arguments are, from stronger to weaker claims: the perceptual system argument; the argument from noninferential structure; the argument from epistemic role; and the argument from phenomenology. I argue that the arguments in favour of assimilating emotion to perception fail, and that the analogies between emotion and perception are not as close as perceptual theorists suggest, even if some emotions resemble perception more than others, owing to the two-levelled structure of emotional processing.
Research on the phenomenology of agency for joint action has so far focused on the sense of agency and control in joint action, leaving aside questions about how it feels to act together. This paper tries to fill this gap in a way consistent with existing theories of joint action and shared emotion. We first reconstruct Pacherie’s account of the phenomenology of agency for joint action, pointing out its two problems: the necessary trade-off between the sense of self- and we-agency, and the lack of affective phenomenology of joint action in general. After elaborating on these criticisms on the basis of our theory of shared emotion, we substantiate the second criticism by discussing different mechanisms of shared affect—feelings and emotions—that are present in typical joint actions. We show that our account improves on Pacherie’s, first by introducing our agentive model of we-agency to overcome her unnecessary dichotomy between a sense of self- and we-agency, and then by suggesting that the mechanisms of shared affect enhance not only the predictability of other agents’ actions, as Pacherie highlights, but also an agentive sense of we-agency that emerges from shared emotions experienced in the course and consequence of joint action.
Existing scientific concepts of group, shared, or collective emotion fail to appreciate several elements of collectivity in such emotions. Moreover, the idea of shared emotions is threatened by the individualism of emotions, which comes in three forms: ontological, epistemological, and physical. The problem is whether we can provide a plausible account of 'straightforwardly shared' emotions without compromising our intuitions about the individualism of emotions. I discuss two philosophical accounts of shared emotions that explain the collectivity of emotions in terms of their intentional structure: Margaret Gilbert's plural subject account and Hans Bernhard Schmid's phenomenological account. I argue that Gilbert's view fails because it relegates affective experience to a contingent role in emotions and because a joint commitment to feel amounts to the creation of a feeling rule rather than to an emotion. The problems with Schmid's view are twofold: first, a phenomenological fusion of feelings is not necessary for shared emotions, and second, Schmid is not sensitive enough to different forms of shared concerns. I then outline my own typology that distinguishes between weakly, moderately, and strongly shared emotions on the basis of the participants' shared concerns of different degrees of collectivity, on the one hand, and the synchronization of their emotional responses, on the other. All kinds of shared emotions in my typology are consistent with the individualism of emotions, while the question about 'straightforward sharing' is argued to be of secondary importance.
The rise of the radical populist right has been linked to fundamental socioeconomic changes fueled by globalization and economic deregulation. Yet, socioeconomic factors can hardly fully explain the rise of the new right. We suggest that emotional processes that affect people’s identities provide an additional explanation for the current popularity of the new radical right, not only among low- and medium-skilled workers, but also among the middle classes whose insecurities manifest as fears of not being able to live up to salient social identities and their constitutive values, and as shame about this actual or anticipated inability. This link between fear and shame is particularly salient in contemporary capitalist societies where responsibility for success and failure is increasingly individualized, and failure is stigmatized through unemployment, receiving welfare benefits, or labor migration. Under these conditions, we identify two psychological mechanisms behind the rise of the new populist right. The first mechanism of ressentiment explains how negative emotions – fear and insecurity, in particular – transform through repressed shame into anger, resentment and hatred towards perceived ‘enemies’ of the self and associated social groups, such as refugees, immigrants, the long-term unemployed, political and cultural elites, and the ‘mainstream’ media. The second mechanism relates to the emotional distancing from social identities that inflict shame and other negative emotions, and instead promotes seeking meaning and self-esteem from aspects of identity perceived to be stable and to some extent exclusive, such as nationality, ethnicity, religion, language and traditional gender roles.
Philosophers widely agree that emotions may have or lack appropriateness or fittingness, which in the emotional domain is an analogue of truth. I defend de Sousa's account of emotional truth by arguing that emotions have cognitive content as digitalized evaluative perceptions of the particular object of emotion, in terms of the relevant formal property. I argue that an emotion is true if and only if there is an actual fit between the particular and the formal objects of emotion, and the emotion's propositional content is semantically satisfied, or the target of the emotion exists. Emotions meet the syntactic and disciplinary requirements of minimally truth-apt states. Appropriate fit occurs when lower-level properties of particular objects of emotion provide sufficient warrant to make ascription of the relevant formal properties superassertable.
It is this demand to address questions emerging from these experiential and normative perspectives to which this book on emotions, ethics, and authenticity ...
The present study combines dehumanization research with the concept of organizational trust to examine how employees perceive various types of maltreatment embedded within the organizational practices that form the ethical climate of an organization. With the help of grounded theory methodology, we analyzed 188 employment exit interview transcripts from an ICT subcontracting company. By examining perceived trustworthiness and perceived humanness, we found that dehumanizing employees can deteriorate trust within organizations. The violations found in the empirical material were divided into animalistic and mechanistic forms of dehumanization and linked to perceived integrity and benevolence, respectively. Based on the results, a model describing the link between dehumanization and trust is presented and discussed in relation to the ways in which perceptions of humanness become rooted in practices and affect the basic assumptions underlying ethical organizational behavior.
Affect control theory (ACT) is a sociological theory developed for modeling and predicting emotions and social behaviors in social interaction. In this commentary, I identify a few potential problems in the theory, as presented in the target article and elsewhere, and in its suggested compatibility with other major emotion theories. The first problem concerns ACT’s capacity to model emotion generation insofar as emotions have nonconceptual content. The second problem focuses on the limits of modeling interaction on the basis of fixed affective meanings of identities. Finally, ACT has problems explaining the dynamic change of affective meanings, given its tenets that people seek to maintain the established affective meanings of social roles and situations and that deflections are not expressed in behavior but compensated for by identity-confirming behavior.
Existing economic models of prosociality have been rather silent about proximate psychological mechanisms. We nevertheless identify the psychologically most informed accounts and offer a critical discussion of their hypotheses about proximate psychological explanations. Based on convergent evidence from several fields of research, we argue that there is a more plausible alternative proximate account available: the social motivation hypothesis. The hypothesis represents a more basic explanation of the appeal of prosocial behavior, in terms of anticipated social rewards. We also argue in favor of our social motivation hypothesis over Robert Sugden’s fellow-feeling account (due originally to Adam Smith). We suggest that social motivation not only stands as a proximate account in its own right but also provides a plausible scaffold for other, more sophisticated motivations (e.g., fellow-feelings). We conclude by discussing some possible implications of the social motivation hypothesis for existing modeling practice.
Leading linguists and philosophers report on all aspects of compositionality, the notion that the meaning of an expression can be derived from its parts. This book explores every dimension of this field, reporting critically on different lines of research, revealing connections between them, and highlighting current problems and opportunities.
The paper develops an account of minimal traces devoid of representational content and exploits an analogy to a predictive processing framework of perception. Just as perception can be regarded as a prediction of the present on the basis of sparse sensory inputs without any representational content, episodic memory can be conceived of as a “prediction of the past” on the basis of a minimal trace, i.e., an informationally sparse, merely causal link to a previous experience. The resulting notion of episodic memory will be validated as a natural kind distinct from imagination. This trace-minimalist view contrasts with the two theory camps dominating the philosophical debate on memory. On one side, we face versions of the Causal Theory, which hold on to the idea that episodic remembering requires a memory trace that causally links the event of remembering to the event of experience and carries over representational content from the content of experience to the content of remembering. The Causal Theory, however, fails to account for the epistemic generativity of episodic memory and is psychologically and information-theoretically implausible. On the other side, a new camp of simulationists is currently forming. Motivated by empirical and conceptual deficits of the Causal Theory, they reject not only the necessity of preserving representational content but also the necessity of a causal link between experience and memory. They argue that remembering is nothing but a peculiar form of imagination, peculiar only in that it has been reliably produced and is directed towards an episode of one’s personal past. While sharing their criticism of the Causal Theory and, in particular, rejecting its demand for an intermediary carrier of representational content, the paper argues that a causal connection to experience is still necessary to fulfill even the minimal requirements of past-directedness and reliability.
Learning through social interaction has been documented widely; however, how introverted people are socially engaged in learning is largely unknown. The aim of this study was, first, to examine the reliability and validity of the social engagement scale among students at Finnish comprehensive schools. Then, we aimed to examine the interaction effect of introversion and social engagement on self-esteem, schoolwork engagement, and school burnout. Based on a sample of 862 ninth grade students in Finland, we found that a two-factor model best fitted the social engagement scale. Further, we found that introverts with high social engagement have higher self-esteem than introverts with low social engagement. Our results implied that introverts should be given extra support when they encounter group work in school.
The basic human ability to treat quantitative information can be divided into two parts. With proto-arithmetical ability, based on the core cognitive abilities for subitizing and estimation, numerosities can be treated in a limited and/or approximate manner. With arithmetical ability, numerosities are processed (counted, operated on) systematically in a discrete, linear, and unbounded manner. In this paper, I study the theory of enculturation as presented by Menary (2015) as a possible explanation of how we make the move from the proto-arithmetical ability to arithmetic proper. I argue that enculturation based on neural reuse provides a theoretically sound and fruitful framework for explaining this development. However, I show that a comprehensive explanation must be based on valid theoretical distinctions and involve several stages in the development of arithmetical knowledge. I provide an account that meets these challenges and thus leads to a better understanding of the subject of enculturation.
In this paper I study the development of arithmetical cognition with a focus on metaphorical thinking. In an approach building on Lakoff and Núñez, I propose one particular conceptual metaphor, the Process → Object Metaphor, as a key element in understanding the development of mathematical thinking.
Following Marr’s famous three-level distinction between explanations in cognitive science, it is often accepted that the focus in modeling cognitive tasks should be on the computational level rather than the algorithmic level. When it comes to mathematical problem solving, this approach suggests that the complexity of the task of solving a problem can be characterized by the computational complexity of that problem. In this paper, I argue that human cognizers use heuristic and didactic tools and thus engage in cognitive processes that make their problem solving algorithms computationally suboptimal, in contrast with the optimal algorithms studied in the computational approach. Therefore, in order to accurately model the human cognitive tasks involved in mathematical problem solving, we need to expand our methodology to also include aspects relevant to the algorithmic level. This allows us to study algorithms that are cognitively optimal for human problem solvers. Since problem solving methods are not universal, I propose that they should be studied in the framework of enculturation, which can explain the expected cultural variance in the humanly optimal algorithms. While mathematical problem solving is used as the case study, the considerations in this paper concern the modeling of cognitive tasks in general.
Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the Puzzle of Moral Luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility, and punishment judgments, whether people’s concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, the vast majority of people judge a lucky and an unlucky agent as equally blameworthy, and their actions as equally wrong and permissible. The philosophical Puzzle of Moral Luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply do not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this constitutes evidence in favor of current Dual Process Theories of moral judgment, the latter need to be qualified: punishment and blame judgments do not seem to be driven by the same process, as is commonly argued in the literature. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.
Are our beliefs justified only relative to a specific culture or society? Is it possible to give reasons for the superiority of our scientific, epistemic methods? Markus Seidel sets out to answer these questions in his critique of epistemic relativism. Focusing on the work of the most prominent, explicitly relativist position in the sociology of scientific knowledge – so-called 'Edinburgh relativism' or the 'Strong Programme' – he scrutinizes the key arguments for epistemic relativism from a philosophical perspective: underdetermination and norm-circularity. His main negative result is that these arguments fall short of establishing epistemic relativism. Despite arguing for epistemic absolutism, Seidel aims to provide an account of non-relative justification that nevertheless integrates the basic, correct intuition of the epistemic relativist. His main positive result is that the epistemic absolutist can very well accept the idea that people using different standards of justification can be equally justified in holding their beliefs: rational disagreement, he maintains, is perfectly possible. The book provides a detailed critique of relativism in the sociology of scientific knowledge and beyond. With its constructive part, it aims at making conciliatory steps in a highly embittered discussion between sociology and philosophy of science.
Since it was first presented in 1993, J.L. Schellenberg’s atheistic argument from divine hiddenness has generated lively debate in various quarters of the philosophy of religion. Over time, the author has responded to many criticisms of his argument, both in its original evidentialist version and in its subsequent conceptualist version. One central problem that has gone undetected in these exchanges to date, we argue, is that Schellenberg’s explicit-recognition criterion for revelation contains discriminatory tendencies against mentally handicapped persons. Viewed from this angle, our critique confronts Schellenberg’s position with a philosophical dilemma: (1) endorsing divine discrimination, to the effect that God does not love ‘cognitive-affective outsiders’, or (2) giving up on explicit recognition. Either way, the hiddenness argument does not succeed.
Interdisciplinarity is widely considered necessary to solving many contemporary problems, and new funding structures and instruments have been created to encourage interdisciplinary research at universities. In this article, we study a small technical university specializing in green technology which implemented a strategy aimed at promoting and developing interdisciplinary collaboration. It did so by reallocating its internal research funds for at least five years to “research platforms” that required researchers from at least two of the three schools within the university to participate. Using data from semi-structured interviews with researchers in three of these platforms, we identify specific tensions that the strategy has generated in this case: in the allocation of platform resources, in the division of labor and disciplinary relations, and in choices over scientific output and academic careers. We further show how the particular platform format exacerbates the identified tensions in our case. We suggest that certain features of the current platform policy incentivize shallow interdisciplinary interactions, highlighting potential limits on the value of attempting to push for interdisciplinarity through internal funding.
The doctrine that meanings are entities with a determinate and independent reality is often believed to have been undermined by Quine's thought experiment of radical translation, which results in an argument for the indeterminacy of translation. This paper argues to the contrary. Starting from Quine's assumption that the meanings of observation sentences are stimulus meanings, i.e., set-theoretical constructions of neuronal states uniquely determined by inter-subjectively observable facts, the paper shows that this meaning assignment, up to isomorphism, is uniquely extendable to all expressions that occur in observation sentences. To do so, a theorem recently proven by Hodges is used. To derive the conclusion, one only has to assume that languages are compositional and abide by a generalized context principle and by what I call the category principle. These assumptions, originating in Frege and Husserl, are coherent with Quine's overall position. It is concluded that Quine's naturalistic approach does not justify scepticism with regard to meaning, but should rather result in a view that affiliates semantics with neuroscience.
The two main domains of high culture - the arts and the sciences - seem to be completely different, simply unrelated. Is there any sense, then, in talking about culture in the singular, as a unity? A positive answer to this question presupposes that there is a single conceptual scheme in terms of which it is possible to articulate both the underlying similarities and the basic differences between these domains. This article argues that - at least in respect of ‘classical’ modernity - there is such a framework: the normatively conceived Author-Work-Recipient relation. It allows the disclosure of the paradoxical unity of culture: its two main realms are constituted as polar opposites and thus as strictly complementary. Through such an organization, culture could fulfil an affirmative, compensatory role. At the same time, however, it also allowed culture to acquire the character of social critique, a function realized through the antagonistically opposed projects of Enlightenment and Romanticism - projects whose illusions are now evident.
What must the self be like for introspective awareness of it to be possible? The paper asks whether Descartes’s idea of an inner self can be upheld and discusses this issue by invoking two principles: the phenomenal transparency of experience and the semantic compositionality of conceptual content. It is assumed that self-awareness is a second-order state either in the domain of experience or in the domain of thought. In the former case, self-awareness turns out empty if experience is transparent. In the latter, it can best be conceived of as a form of mental quotation. Various proposed analyses of direct and indirect quotation are discussed and tested for their applicability to thought. It is concluded that, on the assumption of compositionality, the inner self is accessible to awareness only insofar as it has an accessible phonological structure, as apparently only inner speech does.
The paper argues that cognitive states of biological systems are inherently temporal. Three adequacy conditions for neuronal models of representation are vindicated: the compositionality of meaning, the compositionality of content, and the co-variation with content. Classicist and connectionist approaches are discussed and rejected. Based on recent neurobiological data, oscillatory networks are introduced as a third alternative. A mathematical description in a Hilbert space framework is developed. The states of this structure can be regarded as conceptual representations satisfying the three conditions.
The relationship between psychological states and the brain remains an unresolved issue in the philosophy of psychology. One appealing solution that has been influential both in science and in philosophy is Dennett’s concept of the intentional stance, according to which beliefs and desires are real and objective phenomena, but not necessarily states of the brain. A fundamental shortcoming of this approach is that it does not seem to leave any causal role for beliefs and desires in influencing behavior. In this paper, I show that intentional states ascribed from the intentional stance should be seen as real causes, develop this into an independently plausible ontological position, and present a response to the latest interventionist causal exclusion worries.
This paper proposes and defends an account of what it is to act for reasons. In the first part, I will discuss the desire-belief and the deliberative model of acting for reasons. I will argue that we can avoid the weaknesses and retain the strengths of both views if we pursue an alternative according to which acting for reasons involves taking something as a reason. In the main part, I will develop an account of what it is to take something as a reason for action. On the basis of this, I will then offer a new account of what it is to act for reasons.
The idea of levels of organization plays a central role in the philosophy of the life sciences. In this article, I first examine the explanatory goals that have motivated accounts of levels of organization. I then show that the most advanced and scientifically plausible account of levels of organization, the account of levels of mechanism proposed by Bechtel and Craver, is fundamentally problematic. Finally, I argue that the explanatory goals can be reached by adopting a deflationary approach, in which levels of organization give way to more well-defined and fundamental notions, such as scale and composition.
The book addresses the constitution of the high culture of modernity as an uneasy unity of the sciences, including philosophy, and the arts. Their internal dynamism and strain are established through, on the one hand, the author-work-recipient relationship and, on the other, the respective roles of experts and the market.
This paper concerns anti-Humean intuitions about connections in nature. It argues for the existence of a de re link that is not necessity. Some anti-Humeans tacitly assume that metaphysical necessity can be used for all sorts of anti-Humean desires. Metaphysical necessity is thought to stick together whatever would be loose and separate in a Hume world, as if it were a kind of universal superglue. I argue that this is not feasible. Metaphysical necessity might connect synchronically co-existent properties—kinds and their essential features, for example—but it is difficult to see how it could also serve as the binding force for successions of events. That is, metaphysical necessity seems not to be fit for diachronic, causal affairs in which causal laws, causation, or dispositions are involved. A different anti-Humean connection in nature has to do that job. My arguments focus mainly on a debate which has been the battleground for Humean vs. anti-Humean intuitions for many decades—namely, the analysis of dispositional predicates—yet I believe (but do not argue here) that the arguments generalise to causation and causal laws straightforwardly.
This paper offers a critique of sustainability reporting and, in particular, a critique of the modern disconnect between the practice of sustainability reporting and what we consider to be the urgent issue of our era: sustaining the life-supporting ecological systems on which humanity and other species depend. Tracing the history of such reporting developments, we identify and isolate the concept of the ‘triple bottom line’ (TBL) as a core and dominant idea that continues to pervade business reporting and business engagement with sustainability. Incorporating an entity’s economic, environmental and social performance indicators into its management and reporting processes, we argue, has become synonymous with corporate sustainability; in the process, concern for ecology has become sidelined. Moreover, this process has become reinforced and institutionalised through SustainAbility’s biennial benchmarking reports, KPMG’s triennial surveys of practice, initiatives by the accountancy profession and, particularly, the Global Reporting Initiative (GRI)’s sustainability reporting guidelines. We argue that the TBL and the GRI are insufficient conditions for organizations contributing to the sustaining of the Earth’s ecology. Paradoxically, they may reinforce business-as-usual and greater levels of un-sustainability.
This paper combines two ideas: (1) that the Lewisian best system analysis of lawhood (BSA) can cope with laws that have exceptions (cf. Braddon-Mitchell in Noûs 35(2):260–277, 2001; Schrenk in The Metaphysics of Ceteris Paribus Laws. Ontos, Frankfurt, 2007); and (2) that a BSA can be executed not only on the mosaic of perfectly natural properties but also on any set of special science properties (cf., inter alia, Schrenk 2007; Selected Papers Contributed to the Sections of GAP.6, 6th International Congress of the Society for Analytical Philosophy. Mentis, Paderborn/Münster, 2008; Cohen and Callender in Philos Stud 145:1–34, 2009, Erkenntnis 73:427–447, 2010). Bringing together (1) and (2) results in an analysis of special science ceteris paribus laws.
In very general terms, an agent is a being with the capacity to act, and 'agency' denotes the exercise or manifestation of this capacity. The philosophy of action provides us with a standard conception and a standard theory of action. The former construes action in terms of intentionality; the latter explains the intentionality of action in terms of causation by the agent’s mental states and events. From this, we obtain a standard conception and a standard theory of agency. There are alternative conceptions of agency, and it has been argued that the standard theory fails to capture agency. Further, it seems that genuine agency can be exhibited by beings that are not capable of intentional action, and it has been argued that agency can and should be explained without reference to causally efficacious mental states and events. Debates about the nature of agency have flourished over the past few decades in philosophy and in other areas of research. In philosophy, the nature of agency is an important issue in the philosophy of mind, the philosophy of psychology, the debates on free will and moral responsibility, in ethics, meta-ethics, and in the debates on the nature of reasons and practical rationality. For the most part, this entry focuses on conceptual and metaphysical questions concerning the nature of agency. In the final sections, it provides an overview of empirically informed accounts of the sense of agency and of various empirical challenges to the commonsense assumption that our reasons and our conscious intentions make a real difference to how we act.
According to contextualism, the extension of claims of personal taste depends on the context of utterance. According to truth relativism, their extension depends on the context of assessment. On this view, when the tastes of a speaker change, so does the truth value of a previously uttered taste claim, and if it is false, the speaker is required to retract it. Both views make strong empirical assumptions, which are here put to the test for the first time in three experiments with over 740 participants. It turns out that the linguistic behaviour of ordinary English speakers is consistent with contextualist predictions and inconsistent with the predictions of the most widely discussed form of truth relativism advocated by John MacFarlane.
Societal and technological development during the last century has enabled Western economies to achieve a high standard of living. Yet this profusion of wealth has led to several outcomes that are undesirable and/or unsustainable. There is thus an imperative need for a fundamental and rapid transition towards more sustainable practices. While broad conceptual frameworks for managing sustainability transitions have been suggested in prior literature, these need to be further developed to suit contexts in which the overall vision is arguably clear, such as in the case of consuming animal-originated foodstuffs. In this article we introduce a novel transition management framework that is based upon the dimensions of sustainability. The suggested transition management process includes the identification of objectives and obstacles, the listing of options and their opportunities and threats as well as the evaluation of the outcomes (the Five O’s). We argue that sustainability transition management should be a process in which the identification of the relevant dimensions of sustainability and related objectives forms the foundation for strategic, tactical and operational governance activities. We illustrate the practical applicability of the framework in the case of transition towards plant-based diets.
Metaphysics and science have a long but troubled relationship. In the twentieth century the Logical Positivists argued that metaphysics was irrelevant and that philosophy should be guided by science. However, metaphysics and science attempt to answer many of the same, fundamental questions: What are laws of nature? What is causation? What are natural kinds? In this book, Markus Schrenk examines and explains the central questions and problems in the metaphysics of science. He reviews the development of the field from the early modern period through to the latest research, systematically assessing key topics including dispositions, counterfactual conditionals, laws of nature, causation, natural kinds, essence, and necessity. With the addition of chapter summaries and annotated further reading, Metaphysics of Science is a much-needed, clear and informative survey of this exciting area of philosophical research. It is essential reading for students and scholars of philosophy of science and metaphysics.
Alfred Mele’s zygote argument is widely considered to be the strongest version of the manipulation argument against compatibilism (about free will and determinism). Opponents have focused largely on the first of its two premises and on the overall dialectic. My focus here will be on the underlying thought experiment—the Diana scenario—and on the second premise of the argument. I will argue that reflection on the Diana scenario shows that the second premise does not hold, and we will see that my objection to the second premise helps to defend the claim that manipulation arguments face, in general, a dilemma.
This paper addresses various solutions to Meno's Problem: Why is it that knowledge is more valuable than merely true belief? Given both a pragmatist as well as a veritist understanding of epistemic value, it is argued that a reliabilist analysis of knowledge, in general, offers a promising strategy to explain the extra value of knowledge. It is, however, shown that two recent attempts to solve Meno's Problem within reliabilism are severely flawed: Olsson's conditional probability solution and Goldman's value autonomization solution. The paper proceeds with a discussion of the purpose of having a higher value of knowledge as opposed to merely true belief, both in evolutionary and social terms. It claims that under a reliabilist analysis of knowledge it can be explained how knowers could evolve rather than just truthful believers. Subsequently, the paper develops an account of how we can manipulate our testimonial environment in an epistemically beneficial way by valuing reliably produced true belief more than just true belief, and so gives an indirect justification of the extra value of knowledge.