To operate in an unpredictable environment, a vehicle with advanced driver assistance systems, such as a robot or a drone, not only needs to register its surroundings but also to combine data from different sensors into a world model, for which it employs filter algorithms. Such world models, as this article argues with reference to the SLAM problem in robotics, consist of nothing other than probabilities about states and events arising in the environment. The model thus contains a virtuality of possible worlds that are the basis for adaptive behavior. The article shows that the current development of these technologies requires new concepts because their complex adaptive behaviors cannot be explained by reference to mere algorithmic processes. Instead, it proposes the heuristic instrument of microdecisions to designate the temporality of decisions between alternatives that are created by probabilistic procedures of world modeling. Microdecisions are more than the implementation of deterministic processes: they decide between possibilities and thus always open up the potential of their otherness. Describing autonomous adaptive technologies with this heuristic inevitably raises the question of sovereignty. It forces us to rethink what autonomy means when decisions can be automated.
According to tradition, logic is normative for reasoning. Gilbert Harman challenged the view that there is any straightforward connection between logical consequence and norms of reasoning. Authors including John MacFarlane and Hartry Field have sought to rehabilitate the traditional view. I argue that the debate is marred by a failure to distinguish three types of normative assessment, and hence three ways to understand the question of the normativity of logic. Logical principles might be thought to provide the reasoning agent with first-personal directives; they might be thought to serve as third-personal evaluative standards; or they might underwrite our third-personal appraisals of others whereby we attribute praise and blame. I characterize the three normative functions in general terms and show how a failure to appreciate this threefold distinction has led disputants to talk past one another. I further show how the distinction encourages fruitful engagement with and, ultimately, resolution of the question.
Responding to recent concerns about the reliability of the published literature in psychology and other disciplines, we formed the X-Phi Replicability Project to estimate the reproducibility of experimental philosophy. Drawing on a representative sample of 40 x-phi studies published between 2003 and 2015, we enlisted 20 research teams across 8 countries to conduct a high-quality replication of each study in order to compare the results to the original published findings. We found that x-phi studies – as represented in our sample – successfully replicated about 70% of the time. We discuss possible reasons for this relatively high replication rate in the field of experimental philosophy and offer suggestions for best research practices going forward.
This paper explores an apparent tension between two widely held views about logic: that logic is normative and that there are multiple equally legitimate logics. The tension is this. If logic is normative, it tells us something about how we ought to reason. If, as the pluralist would have it, there are several correct logics, those logics make incompatible recommendations as to how we ought to reason. But then which of these logics should we look to for normative guidance? I argue that inasmuch as pluralism draws its motivation from its ability to defuse logical disputes—that is, disputes between advocates of rival logics—it is unable to provide an answer: pluralism collapses into monism with respect to either the strongest or the weakest admissible logic.
This article offers an overview of inferential role semantics (IRS). We aim to provide a map of the terrain and to challenge some of the inferentialist’s standard commitments. We begin by introducing inferentialism and placing it into the wider context of contemporary philosophy of language. §2 focuses on what is standardly considered both the most important test case for and the most natural application of inferential role semantics: the case of the logical constants. We discuss some of the (alleged) benefits of logical inferentialism, chiefly with regard to the epistemology of logic, and consider a number of objections. §3 introduces and critically examines the most influential and most fully developed form of global inferentialism: Robert Brandom’s inferentialism about linguistic and conceptual content in general. Finally, in §4 we survey a number of general objections to IRS and consider possible responses on the inferentialist’s behalf.
If feeling a genuine emotion requires believing that its object actually exists, and if this is a belief we are unlikely to have about fictional entities, then how could we feel genuine emotions towards these entities? This question lies at the core of the paradox of fiction. Since its original formulation, this paradox has generated a substantial literature. Until recently, the dominant strategy consisted in trying to solve it. Increasingly, however, scholars attempt instead to dissolve it using data and theories from psychology. In opposition to this trend, the present paper argues that the paradox of fiction cannot be dissolved in the ways recommended by the recent literature. We start by showing how contemporary attempts at dissolving the paradox assume that it emerges from theoretical commitments regarding the nature of emotions. Next, we argue that the paradox of fiction rather emerges from everyday observations, the validity of which is independent of any such commitment. This is why we then go on to claim that a mere appeal to psychology in order to discredit these theoretical commitments cannot dissolve the paradox. We bring our discussion to a close on a more positive note, by exploring how the paradox could in fact be solved by an adequate theory of the emotions.
In this paper, we argue that, barring a few important exceptions, the phenomenon we refer to using the expression “being moved” is a distinct type of emotion. In this paper’s first section, we motivate this hypothesis by reflecting on our linguistic use of this expression. In section two, pursuing a methodology that is both conceptual and empirical, we try to show that the phenomenon satisfies the five most commonly used criteria in philosophy and psychology for thinking that some affective episode is a distinct emotion. Indeed, being moved, we claim, is the experience of a positive core value (particular object) perceived by the moved subject as standing out (formal object) in the circumstances triggering the emotion. Drawing on numerous examples, we describe the distinctively rich phenomenology characteristic of the experience as well as the far-reaching action-tendencies and functions associated with it. Having thus shown that the candidate emotion seems to satisfy the five criteria, we go on, in section three, to compare it with sadness and joy, arguing that it should not be confused with either. Finally, in section four, we illustrate the explanatory power of our account of “being moved” by showing how it can shed light on, and maybe even justify, the widespread distrust we feel towards the exhibition of ‘sentimentality’. All in all, if we are right, we have uncovered an emotion which, though rarely if ever talked about, is of great interest and no small importance.
Epistemic utility theory (EUT) is generally coupled with veritism, the view that truth is the sole fundamental epistemic value. Veritism, when paired with EUT, entails a methodological commitment: norms of epistemic rationality are justified only if they can be derived from considerations of accuracy alone. According to EUT, then, believing truly has epistemic value, while believing falsely has epistemic disvalue. This raises the question of how the rational believer should balance the prospect of true belief against the risk of error. A strong intuitive case can be made for a kind of epistemic conservatism: that we should disvalue error more than we value true belief. I argue that none of the ways in which advocates of veritist EUT have sought to motivate conservatism can be squared with their methodological commitments. Lacking any such justification, they must either abandon their most central methodological principle or else adopt a permissive line with respect to epistemic risk.
In this paper I examine the question of logic’s normative status in the light of Carnap’s Principle of Tolerance. I begin by contrasting Carnap’s conception of the normativity of logic with that of his teacher, Frege. I identify two core features of Frege’s position: first, the normative force of the logical laws is grounded in their descriptive adequacy; second, norms implied by logic are constitutive for thinking as such. While Carnap breaks with Frege’s absolutism about logic and hence with the notion that any system of logic should have a privileged claim to correctness, I argue that there is a sense in which Carnap’s framework-relative conception of logical norms has a constitutive role to play: though they are not constitutive of the conceptual activity of thinking, they do nevertheless set the ground rules that make certain forms of scientific inquiry possible in the first place. I conclude that Carnap’s principle of tolerance is tamer than one might have thought and that, despite remaining differences, Frege’s and Carnap’s conceptions of logic have more in common than is often supposed.
In the past decade, experimental philosophy---the attempt to make progress on philosophical problems using empirical methods---has thrived in a wide range of domains. However, only in recent years has aesthetics succeeded in drawing the attention of experimental philosophers. The present paper constitutes the first survey of these works and of the nascent field of 'experimental philosophy of aesthetics'. We present recent experimental work by philosophers on topics such as the ontology of aesthetics, aesthetic epistemology, aesthetic concepts, and imagination, as well as research from other disciplines that is not only relevant to the philosophy of aesthetics but also opens new avenues of research for experimental philosophy of aesthetics. Overall, we conclude that the birth of an experimental philosophy of aesthetics is good news not only for aesthetics but also for experimental philosophy itself, as it broadens the scope of the field.
We challenge an argument that aims to support Aesthetic Realism by claiming, first, that common sense is realist about aesthetic judgments because it considers that aesthetic judgments can be right or wrong, and, second, that because Aesthetic Realism comes from and accounts for “folk aesthetics,” it is the best aesthetic theory available. We empirically evaluate this argument by probing whether ordinary people with no training whatsoever in the subtle debates of aesthetic philosophy consider their aesthetic judgments as right or wrong. Having shown that the results do not support the main premise of the argument, we discuss the consequences for Aesthetic Realism and address possible objections to our study.
Logic has traditionally been construed as a normative discipline; it sets forth standards of correct reasoning. Explosion is a valid principle of classical logic. It states that an inconsistent set of propositions entails any proposition whatsoever. However, ordinary agents presumably do — occasionally, at least — have inconsistent belief sets. Yet it is false that such agents may, let alone ought to, believe any proposition they please. Therefore, our logic should not recognize explosion as a logical law. Call this the ‘normative argument against explosion’. Arguments of this type play — implicitly or explicitly — a central role in motivating paraconsistent logics. Branden Fitelson, in a throwaway remark, has conjectured that there is no plausible ‘bridge principle’ articulating the normative link between logic and reasoning capable of supporting such arguments. This paper offers a critical evaluation of Fitelson’s conjecture, and hence of normative arguments for paraconsistency and the conceptions of logic’s normative status on which they repose. It is argued that Fitelson’s conjecture turns out to be correct: normative arguments for paraconsistency probably fail.
Logic, the tradition has it, is normative for reasoning. But is that really so? And if so, in what sense is logic normative for reasoning? As Gilbert Harman has reminded us, devising a logic and devising a theory of reasoning are two separate enterprises. Hence, logic's normative authority cannot reside in the fact that principles of logic just are norms of reasoning. Once we cease to identify the two, we are left with a gap. To bridge the gap one would need to produce what John MacFarlane has appropriately called a 'bridge principle', i.e. a general principle articulating a substantive and systematic link between logical entailment and norms of reasoning. This is Harman's skeptical challenge. In this paper, I argue that Harman's skeptical challenge can be met. I show how candidate bridge principles can be systematically generated and evaluated against a set of well-motivated desiderata. Moreover, I argue that bridge principles advanced by MacFarlane himself and others, for all their merit, fail to address the problem originally set forth by Harman and so do not meet the skeptical challenge. Finally, I develop a bridge principle that both meets Harman's requirements and is substantive.
'Education and Social Change' sheds light on Florian Znaniecki's highly original program for the sociology of education. The volume contains newly discovered reports from research conducted under the auspices of Columbia University in the 1930s, focused on education for participation in a democratic social order and in cultural innovation. Preparation for cooperative interaction with leaders lies at the core of the analysis. Also included are several texts published in English which clearly expound Znaniecki's analysis of social processes in education. The key idea of transforming educational systems in the direction of self-education still proves relevant.
This article examines Bergson’s critique of intensive magnitude in Time and Free Will. I demonstrate how his rejection of a different kind of quantity, one that is ordinal and does not admit of measurement, together with the underlying strict dualism of quantity and quality, is inconsistent with both the letter and the spirit of his later philosophy. I dismantle two main strategies for explaining away these inconsistencies. Furthermore, I argue that Bergson’s simplistic conception of quantity in terms of homogeneous multiplicity, which is operative in his rejection of an alternative conception of quantity, lacks justification in the face of the transformations that the concept of quantity underwent in the history of mathematics and physics.
Human rights have not played a prominent role in CSR in the past. Similarly, CSR has had relatively little influence on what is now called the “business and human rights debate.” This contribution uncovers some of the reasons for the rather peculiar disconnect between these two debates and, on that basis, presents some apparent synergies and complementarities between them. A closer integration of the two debates, it argues, would allow for the formulation of an expansive and demanding conception of corporate human rights obligations. Such a conception does not stop with corporate obligations “merely” to respect human rights, but includes an extended focus on proactive company involvement in the protection and realization of human rights. In other words, the integration of the two debates provides the space within which to formulate positive human rights obligations for corporations.
Martin Heidegger is regarded as one of the most important and at the same time most controversial philosophers of the twentieth century. His work and his person continue to exert considerable fascination both within and beyond philosophical debates. This is due not only to the exceptional originality of his thought and the power of his language, but also to his grave political entanglements in connection with Hitler's seizure of power. Florian Grosser traces the essential stations and lines of development of Heidegger's winding path of thought across half a century. In doing so, he clearly works out the specific danger of Heidegger's political thought, in particular its radically revolutionary and excessively antagonistic understanding of the political, which prevents Heidegger from adequately understanding real political phenomena. Grosser pursues the question of whether Heidegger's thought, as is often alleged, in fact exhibits an inner kinship with National Socialism. The differences, at any rate, are substantial. This essayistic account, situated on the border between philosophy and political science, breathes new life into the deadlocked controversy over the "Heidegger case", which for decades has remained stuck in rigid patterns of accusation and defense.
This paper argues that logical inferentialists should reject multiple-conclusion logics. Logical inferentialism is the position that the meanings of the logical constants are determined by the rules of inference they obey. As such, logical inferentialism requires a proof-theoretic framework within which to operate. However, in order to fulfil its semantic duties, a deductive system has to be suitably connected to our inferential practices. I argue that, contrary to an established tradition, multiple-conclusion systems are ill-suited for this purpose because they fail to provide a 'natural' representation of our ordinary modes of inference. Moreover, the two most plausible attempts at bringing multiple conclusions into line with our ordinary forms of reasoning, the disjunctive reading and the bilateralist denial interpretation, are unacceptable by inferentialist standards.
This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents, often gathered in benchmarked repositories called ground-truth datasets, that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices, which fundamentally involve establishing biases to enable learning procedures, can be described by their respective morality, here defined as the more or less accounted-for experience of hesitation when faced with what the pragmatist philosopher William James called "genuine options", that is, choices to be made in the heat of the moment that engage different possible futures. I then stress three constitutive dimensions of this pragmatist morality, as far as ground-truthing practices are concerned: the definition of the problem to be solved, the identification of the data to be collected and set up, and the qualification of the targets to be learned. I finally suggest that this three-dimensional conceptual space can be used to map machine learning algorithmic projects in terms of the morality of their respective and constitutive ground-truthing practices. Such techno-moral graphs may, in turn, serve as equipment for greater governance of machine learning algorithms and systems.
Studying the folk concept of intentional action, Knobe (2003a) discovered a puzzling asymmetry: most people consider some bad side effects as intentional while they consider some good side effects as unintentional. In this study, we extend these findings with new experiments. The first experiment shows that the very same effect can be found in ascriptions of intentionality in the case of means for action. The second and third experiments show that means are nevertheless generally judged more intentional than side effects, and that people do take into account the structure of the action when ascribing intentionality. We then discuss a number of hypotheses that can account for these data, using reaction times from our first experiment.
The present study investigates the connection between the concepts of the highest good and the categorical imperative in Kant's practical philosophy. On the author's original reading, the categorical imperative commands that one pursue one's own happiness only ever as a component of universal happiness. The highest good, then, is that state of the world which would be attained if all human beings acted according to this principle and their joint striving for universal happiness were also crowned with success. This state is a necessary end of rational action that follows from the categorical imperative, yet whose pursuit nevertheless goes beyond merely acting on universalizable maxims. Through a reinterpretation of Kant's thesis that in the highest good happiness is always proportional to virtue, the author succeeds in argumentatively tying not only this thesis but also Kant's claims about justice, worthiness to be happy, and hope to the categorical imperative, and thereby in justifying them in the context of the highest good. According to the author, the highest good thus has an important independent function in Kantian ethics without calling its deontological character into question.
Since at least Hume and Kant, philosophers working on the nature of aesthetic judgment have generally agreed that common sense does not treat aesthetic judgments in the same way as typical expressions of subjective preferences—rather, it endows them with intersubjective validity, the property of being right or wrong regardless of disagreement. Moreover, this apparent intersubjective validity has been taken to constitute one of the main explananda for philosophical accounts of aesthetic judgment. But is it really the case that most people spontaneously treat aesthetic judgments as having intersubjective validity? In this paper, we report the results of a cross‐cultural study with over 2,000 respondents spanning 19 countries. Despite significant geographical variations, these results suggest that most people do not treat their own aesthetic judgments as having intersubjective validity. We conclude by discussing the implications of our findings for theories of aesthetic judgment and the purpose of aesthetics in general.
The notion of harmony has played a pivotal role in a number of debates in the philosophy of logic. Yet there is little agreement as to how the requirement of harmony should be spelled out in detail or even what purpose it is to serve. Most, if not all, conceptions of harmony can already be found in Michael Dummett's seminal discussion of the matter in The Logical Basis of Metaphysics. Hence, if we wish to gain a better understanding of the notion of harmony, we do well to start here. Unfortunately, however, Dummett's discussion is not always easy to follow. The following is an attempt to disentangle the main strands of Dummett's treatment of harmony. The different variants of harmony as well as their interrelations are clarified and their individual shortcomings qua interpretations of harmony are demonstrated. Though no attempt is made to give a detailed alternative account of harmony here, it is hoped that our discussion will lay the ground for an adequate rigorous treatment of this central notion.
Do laypeople think that moral responsibility is compatible with determinism? Recently, philosophers and psychologists trying to answer this question have found contradictory results: while some experiments reveal people to have compatibilist intuitions, others suggest that people could in fact be incompatibilist. To account for these contradictory results, Nichols and Knobe (2007) have advanced a ‘performance error model’ according to which people are genuine incompatibilists who are sometimes biased toward giving compatibilist answers by emotional reactions. To test this hypothesis, we investigated intuitions about determinism and moral responsibility in patients suffering from behavioural frontotemporal dementia (bvFTD). Patients suffering from bvFTD have impoverished emotional reactions. The ‘performance error model’ thus predicts that bvFTD patients will give fewer compatibilist answers. However, we found that bvFTD patients gave answers quite similar to those of subjects in the control group and were mostly compatibilist. We therefore conclude that the ‘performance error model’ should be abandoned in favour of other available models that better fit our data.
‘Frankfurt-style cases’ (FSCs) are widely held to have refuted the Principle of Alternate Possibilities (PAP) by presenting cases in which an agent is morally responsible even if he could not have done otherwise. However, Neil Levy (J Philos 105:223–239, 2008) has recently argued that FSCs fail because we are not entitled to suppose that the agent is morally responsible, given that the mere presence of a counterfactual intervener is enough to make an agent lose responsibility-grounding abilities. Here, I distinguish two kinds of Frankfurt counter-arguments against the PAP: the direct and the indirect counter-arguments. I then argue that Levy’s argument, if valid, casts doubt on the indirect argument but leaves the direct argument untouched. I conclude that FSCs can still do their job, even if we grant that the mere presence of a counterfactual intervener can modify an agent’s abilities.
That the concept of 'creativity' is bound up with the process of artistic creation seems a self-evident assumption that is rarely questioned. Yet because the increased relevance and changed function of creativity in science and society mean that its meaning is continually being renegotiated, the connection between creativity and art, with its conditions and functions, must likewise be rethought. Florian Pfab approaches 'creativity' from a conceptual-analytic perspective. Using the tools of systems theory, he develops a definition that, with a view to the process of artistic creation, does justice to the demands placed on the concept between the myth of the creator and an evolutionary principle. (Back cover)
Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.
The debate over whether free will and determinism are compatible is controversial and has generated extensive scholarly discussion. This paper argues that recent studies in experimental philosophy suggest that people are in fact “natural compatibilists”. To support this claim, it surveys the experimental literature bearing directly or indirectly upon this issue, before pointing to three possible limitations of the claim. Notwithstanding these limitations, the investigation concludes that the existing empirical evidence seems to support the view that most people have compatibilist intuitions.
_In times of economic crisis, austerity becomes a rallying cry, but what does history tell us about its chances for success?_ Austerity is at the center of political debates today. Its defenders praise it as a panacea that will prepare the ground for future growth and stability. Critics insist it will precipitate a vicious cycle of economic decline, possibly leading to political collapse. But the notion that abstinence from consumption brings benefits to states, societies, or individuals is hardly new. This book puts the debates of our own day in perspective by exploring the long history of austerity—a popular idea that lives on despite a track record of dismal failure. Florian Schui shows that arguments in favor of austerity were—and are today—mainly based on moral and political considerations, rather than on economic analysis. Unexpectedly, it is the critics of austerity who have framed their arguments in the language of economics. Schui finds that austerity has failed intellectually and in economic terms _every time_ it has been attempted. He examines thinkers who have influenced our ideas about abstinence from Aristotle through such modern economic thinkers as Smith, Marx, Veblen, Weber, Hayek, and Keynes, as well as the motives behind specific twentieth-century austerity efforts. The persistence of the concept cannot be explained from an economic perspective, Schui concludes, but only from the persuasive appeal of the moral and political ideas linked to it.
Recently, Fahrbach and Park have argued that the pessimistic meta-induction (PMI) about scientific theories is unsound. They claim that the argument fails to take proper account of scientific progress, particularly during the twentieth century. They also propose amended arguments in favour of scientific realism, which are supposed to properly reflect the history of science. I try to show that what I call the argument from scientific progress cannot explain satisfactorily why current theories should have reached a degree of success that excludes their future refutation and allows the inference to their truth. I further argue that this line of argumentation shifts the burden of proof in a rather unfair manner by using a delaying tactic to postpone the question of the validity of the PMI into the future.
Recent years have brought increasing attention to the role of multinational corporations in human rights violations. The concept of complicity has been of particular interest in this regard. This article explores the conceptual differences between silent complicity in particular and other, more "conventional" forms of complicity. Despite their far-reaching normative implications, these differences are often overlooked. Rather than being connected to specific actions, as is the case for other forms of complicity, the concept of silent complicity is tied to the identity, or the moral stature, of the accomplice. More specifically, it helps us expose multinational corporations in positions of political authority. Political authority breeds political responsibility. Thus, corporate responsibility in regard to human rights may go beyond "doing no harm" and include a positive obligation to protect. Making sense of this duty leads to a discussion of the scope and limits of legitimate human rights advocacy by corporations.
Increasingly, global businesses are confronted with the question of complicity in human rights violations committed by abusive host governments. This contribution specifically looks at silent complicity and the way it challenges conventional interpretations of corporate responsibility. Silent complicity implies that corporations have moral obligations that reach beyond the negative realm of doing no harm. Essentially, it implies that corporations have a moral responsibility to help protect human rights by putting pressure on perpetrating host governments involved in human rights abuses. This is a controversial claim, which this contribution proposes to analyze with a view to understanding and determining the underlying conditions that must be met for moral agents to be said to have such responsibilities under the duty to protect human rights.
The initial successes in recent years in harnessing machine learning technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, such improvements are expected to come from providing automated decision recommendations to the treating clinician. Some hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partially because of their inherent lack of transparency, even though their results seem convincing in accuracy and reliability. Skepticism arises when the physician, as the agent responsible for the implementation of diagnosis, therapy, and care, is unable to access the generation of findings and recommendations. There is widespread agreement that, generally, complete traceability is preferable to opaque recommendations; however, there is disagreement about how to address ML-based systems whose functioning seems to remain opaque to some degree—even as so-called explicable or interpretable systems attract increasing interest. This essay approaches the epistemic foundations of ML-generated information specifically and medical knowledge generally to advocate differentiating decision-making situations in clinical contexts according to the depth of insight they require into the process of information generation. Empirically accurate or reliable outcomes are sufficient for some decision situations in healthcare, whereas other clinical decisions require extensive insight into ML-generated outcomes because of their inherently normative implications.
This paper examines the Leibnizian influence in Deleuze's theory of the spatium. Leibniz's critique of Cartesian extension and Newtonian space leads him to a conception of space in terms of internal determination and internal difference. Space is thus understood as a structure of individual relations internal to substances. Making some Nietzschean corrections to Leibniz, Deleuze understands the spatium in terms of individuating differences instead of individual relations. Leibnizian space is thus transformed into a genetic space producing both extension (quantity) and quality.
Identification of propositions as the core of attitudes and beliefs (De Houwer, 2014) has resulted in the development of implicit measures targeting personal evaluations of complex sentences (e.g., the IRAP or the RRT). Whereas their utility is uncontested, these paradigms are subject to limitations inherent in their block-based design, such as allowing assessment of only a single belief at a time. We introduce the Propositional Evaluation Paradigm (PEP) for assessment of multiple propositional beliefs within a single experimental block. Two experiments provide initial evidence for the PEP's validity. In Experiment 1, endorsement of racist beliefs measured with the PEP was related to criterion variables such as explicit racism assessed via questionnaire and indicators of behavioral tendencies. Experiment 2 indicates that the PEP's implicit racism scores may predict actual behavior over and above explicit, self-report measures. Finally, Experiment 3 tested the PEP's applicability in the domain of hiring discrimination. Whereas general PEP-based gender stereotypes were not related to hiring bias, results suggest a possible role of female stereotypes in hiring discrimination. In the context of these findings, we discuss both the potential and the possible challenges in adapting the PEP to different beliefs. In sum, these initial findings suggest that the PEP may offer researchers a reliable and easily administrable option for the indirect assessment of propositional evaluations.