This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role and it is also a more promising avenue towards moral enhancement. It is more promising because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people who may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
Advocates of moral enhancement through pharmacological, genetic, or other direct interventions sometimes explicitly argue, or assume without argument, that traditional moral education and development is insufficient to bring about moral enhancement. Traditional moral education grounded in a Kohlbergian theory of moral development is indeed unsuitable for that task; however, the psychology of moral development and education has come a long way since then. Recent studies support the view that moral cognition is a higher-order process, unified at a functional level, and that a specific moral faculty does not exist. It is more likely that moral cognition involves a number of different mechanisms, each connected to other cognitive and affective processes. Taking this evidence into account, we propose a novel, empirically informed approach to moral development and education, in children and adults, which is based on a cognitive-affective approach to moral dispositions. This is an interpretative approach that derives from the cognitive-affective personality system (Mischel and Shoda, 1995). This conception individuates moral dispositions by reference to the cognitive and affective processes that realise them. Conceived of in this way, moral dispositions influence an agent's behaviour when they interact with situational factors, such as mood or social context. Understanding moral dispositions in this way lays the groundwork for proposing a range of indirect methods of moral enhancement, techniques that promise similar results as direct interventions whilst posing fewer risks.
This article presents an argument for the view that we can perceive temporal features without awareness. Evidence for this claim comes from recent empirical work on selective visual attention. An interpretation of selective attention as a mechanism that processes high-level perceptual features is offered and defended against one particular objection. In conclusion, time perception likely has an unconscious dimension and temporal mental qualities can be instantiated without ever being conscious.
This paper offers a theoretical framework that can be used to derive viable engineering strategies for the design and development of robots that can nudge people towards moral improvement. The framework relies on research in developmental psychology and insights from Stoic ethics. Stoicism recommends contemplative practices that over time help one develop dispositions to behave in ways that improve the functioning of mechanisms that are constitutive of moral cognition. Robots can nudge individuals towards these practices and can therefore help develop the dispositions to, for example, extend concern to others, avoid parochialism, etc.
It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, specifically focusing on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays the role of an Archimedean fulcrum in this context, very much like the Archimedean role that it is often taken to play in the context of normative ethics (Dworkin 1996; Dreier 2002; Fantl 2006; Ehrenberg 2008).
Speakers’ perception of a visual scene influences the language they use to describe it—which objects they choose to mention and how they characterize the relationships between them. We show that visual complexity can either delay or facilitate description generation, depending on how much disambiguating information is required and how useful the scene's complexity can be in providing, for example, helpful landmarks. To do so, we measure speech onset times, eye gaze, and utterance content in a reference production experiment in which the target object is either unique or non-unique in a visual scene of varying size and complexity. Speakers delay speech onset if the target object is non-unique and requires disambiguation, and we argue that this reflects the cost of deciding on a high-level strategy for describing it. The eye-tracking data demonstrate that these delays increase when speakers are able to conduct an extensive early visual search, implying that when speakers scan too little of the scene early on, they may decide to begin speaking before becoming aware that their description is underspecified. Speakers’ content choices reflect the visual makeup of the scene—the number of distractors present and the availability of useful landmarks. Our results highlight the complex role of visual perception in reference production, showing that speakers can make good use of complexity in ways that reflect their visual processing of the scene.
In this paper, we report on an experiment with The Walking Dead (TWD), which is a narrative-driven adventure game with morally charged decisions set in a post-apocalyptic world filled with zombies. This study aimed to identify physiological markers of moral decisions and non-moral decisions using infrared thermal imaging (ITI). ITI is a non-invasive tool used to capture thermal variations due to blood flow in specific body regions that might be caused by sympathetic activity. Results show that moral decisions seem to elicit a significant decrease in temperature in the chin region 20 seconds after participants are presented with a moral decision. However, given the small sample involved, and the lack of significance in other regions, future studies might be needed to confirm the results obtained in this work.
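As an illustration of the kind of analysis such a thermal-imaging study involves, here is a minimal sketch, under assumed parameters, of how a baseline-corrected temperature change in a region of interest (e.g. the chin) could be computed at a fixed latency after a decision prompt. The function name, window sizes, and sampling rate are hypothetical and are not drawn from the paper.

```python
import numpy as np

def roi_temperature_change(roi_means, fps, event_frame,
                           baseline_s=5.0, latency_s=20.0):
    """Baseline-corrected ROI temperature change after a decision prompt.

    roi_means   : 1-D array of per-frame mean temperatures (deg C) for one ROI
    fps         : sampling rate of the thermal camera (frames per second)
    event_frame : frame index at which the decision was presented
    Returns the temperature difference (post minus baseline) in deg C.
    """
    baseline_frames = int(baseline_s * fps)
    latency_frames = int(latency_s * fps)

    # Mean ROI temperature in the seconds preceding the prompt.
    baseline = roi_means[event_frame - baseline_frames:event_frame].mean()

    # Mean ROI temperature around the target latency (e.g. 20 s post-prompt),
    # averaged over a one-second window to reduce frame-level noise.
    post_start = event_frame + latency_frames
    post = roi_means[post_start:post_start + int(fps)].mean()

    return post - baseline


# Illustrative usage with synthetic data (30 fps thermal recording).
temps = 34.0 + 0.05 * np.random.randn(30 * 60)   # one minute of fake ROI means
print(roi_temperature_change(temps, fps=30, event_frame=30 * 10))
```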
This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think their own moral reasoning and decision-making could be improved: one’s actions, character, or other evaluable attributes fall short of one’s values and moral beliefs; one sometimes misjudges or is uncertain about what the right thing to do is in particular situations, given one’s values; one is uncertain about some fundamental moral questions or recognizes a possibility that some of one’s core moral beliefs and values are mistaken. We sketch why one might think that AI tools could be used to support moral improvement in those areas, and describe two types of assistance: preparatory assistance, including advice and training supplied in advance of moral deliberation; and on-the-spot assistance, including on-the-spot advice and facilitation of moral functioning over the course of moral deliberation. Then, we turn to some of the ethical issues that AEAs might raise, looking in particular at three under-appreciated problems posed by the use of AI for moral self-improvement: namely, reliance on sensitive moral data; the inescapability of outside influences on AEAs; and AEA usage prompting the user to adopt beliefs and make decisions without adequate reasons.
Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will make AWS critically vulnerable to hacking. I claim that the political and tactical consequences of hacked AWS far outweigh the purported advantages of AWS not being affected by psychological factors and always following orders. Therefore, one of the moral justifications for the deployment of AWS is undermined.
Transcranial magnetic stimulation (TMS) is used to make inferences about relationships between brain areas and their functions because, in contrast to neuroimaging tools, it modulates neuronal activity. The central aim of this article is to critically evaluate to what extent it is possible to draw causal inferences from repetitive TMS (rTMS) data. To that end, we describe the logical limitations of inferences based on rTMS experiments. The presented analysis suggests that rTMS alone does not provide the sort of premises that are sufficient to warrant strong inferences about the direct causal properties of targeted brain structures. Overcoming these limitations demands a close look at the designs of rTMS studies, especially the methodological and theoretical conditions which are necessary for the functional decomposition of the relations between brain areas and cognitive functions. The main points of this article are that TMS-based inferences are limited in that stimulation-related causal effects are not equivalent to structure-related causal effects, due to TMS side effects, the electric field distribution, and the sensitivity of neuroimaging and behavioral methods in detecting structure-related effects and disentangling them from confounds. Moreover, the postulated causal effects can be based on indirect effects. A few suggestions on how to manage some of these limitations are presented. We discuss the benefits of combining rTMS with neuroimaging in experimental reasoning and we address the restrictions and requirements of rTMS control conditions. The use of neuroimaging and control conditions allows stronger inferences to be drawn, but the strength of those inferences depends on individual experimental designs. Moreover, in some cases, TMS might not be an appropriate method for answering causality-related questions, or the hypotheses have to account for the limitations of this technique. We hope this summary and formalization of the reasoning behind rTMS research can be of use not only for scientists and clinicians who intend to interpret rTMS results causally but also for philosophers interested in causal inferences based on brain stimulation research.
Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce metaethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, while anti-realism undermines the motivation for engineering a moral AI in the first place.
Introduction: Philosophy in Mind / Michaelis Michael and John O’Leary-Hawthorne -- AI and the Synthetic A Priori / Jose Benardete -- Armchair Metaphysics / Frank Jackson -- Doubts About Conceptual Analysis / Gilbert Harman -- Deflationary Self-Knowledge / Andre Gallois -- How to Get to Know One’s Own Mind: Some Simple Ways / Annette Baier -- Psychology in Perspective / Huw Price -- Can Philosophy of Language Provide the Key to the Foundations of Ethics? / Karl-Otto Apel -- Unprincipled Decisions / Lee Overton -- Philosophy and Commonsense: The Case of Weakness of Will / Jeanette Kennett and Michael Smith -- Reasoning and Representing / Robert Brandom -- The Problem of Consciousness / John Searle -- Gödel’s Theorem and the Mind... Again / Graham Priest -- Epistemology and the Diet Revolution / Gilbert Harman -- Truth-Aptness and Belief / John O’Leary-Hawthorne -- Cubism, Perspective, Belief / Michaelis Michael -- Objectivity and Modern Idealism: What is The Question? / Gideon Rosen.
In this paper, I offer an account of the dependence relation between perception of change and the subjective flow of time that is consistent with some extant empirical evidence from priming by unconscious change. This view is inspired by the one offered by William James, but it is articulated in the framework of contemporary functionalist accounts of mental qualities and higher-order theories of consciousness. An additional advantage of this account of the relationship between perception of change and subjective time is that it makes sense of instances where we are not consciously aware of changes but still experience the flow of time.
Quality Space Theory is a holistic model of qualitative states. On this view, individual mental qualities are defined by their locations in a space of relations, which reflects a similar space of relations among perceptible properties. This paper offers an extension of Quality Space Theory to temporal perception. Unconscious segmentation of events, the involvement of early sensory areas, and asymmetries of dominance in multi-modal perception of time are presented as evidence for the view.
The article reviews the various ramifications in the discussion on leadership, focusing on the view of leadership as relationships between leaders and followers. Three main types of leader-follower relations are discussed, and their specific characteristics are described: regressive relations, symbolic relations, and developmental relations. After analyzing the major implications, as well as the conceptual limitations, of these perspectives, the article suggests directions for a more integrative conceptualization of leader-follower relations.
In my dissertation I critically survey existing theories of time consciousness, and draw on recent work in neuroscience and philosophy to develop an original theory. My view depends on a novel account of temporal perception based on the notion of temporal qualities, which are mental properties that are instantiated whenever we detect change in the environment. When we become aware of these temporal qualities in an appropriate way, our conscious experience will feature the distinct temporal phenomenology that is associated with the passing of time. The temporal qualities model of perception makes two predictions about the mechanisms of time perception: one, that time perception is modality-specific, and the other, that it can occur without awareness. My argument for this view partially depends on a number of psychophysical experiments that I designed and implemented myself, which investigate subjective time distortions caused by looming visual stimuli. These results show that the mechanisms of conscious experience of time are distinct from the mechanisms of time perception, as my theory of temporal qualities predicts.
The book introduces a conception of discourse ethics, an intersubjectivist version of Kantian ethics. Analyzing contributions from Jürgen Habermas, Karl-Otto Apel, Wolfgang Kuhlmann, Albrecht Wellmer, Robert Alexy, Klaus Günther, Rainer Forst, Marcel Niquet and others, it reconstructs critical discussions on the justification of the principle of morality (part I) and on the various proposals on how to apply it (part II). It defends an alternative model of how discourse ethics can provide guidance under non-ideal circumstances and avoid both arbitrariness and rigorism.
Two questions are addressed in this article: 1. Why are people attracted to leaders? 2. How are leaders' images construed? The first question is analyzed by using the concept of “deity” as a frame of reference for an “ideal model” of leadership. God as a “screen of projections” can satisfy the believer's fundamental needs and desires, as well as serving as a reference for causal attributions and a provider of transcendental meaning. Using Construal Level Theory, deity, as a frame of reference, also facilitates analysis of the second question. This analysis explains universal principles underlying the leadership construal, and the psychological principles and culture-bound processes relevant to construing different images of leadership in different collectives.
Artificial intelligence (AI) and machine learning (ML) systems can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions can thus come to take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
A history of logic -- Patterns of reasoning -- A language and its meaning -- A symbolic language -- 1850-1950 mathematical logic -- Modern symbolic logic -- Elements of set theory -- Sets, functions, relations -- Induction -- Turing machines -- Computability and decidability -- Propositional logic -- Syntax and proof systems -- Semantics of PL -- Soundness and completeness -- First order logic -- Syntax and proof systems of FOL -- Semantics of FOL -- More semantics -- Soundness and completeness -- Why is first order logic "First Order"?
Floridi’s Theory of Strongly Semantic Information posits the Veridicality Thesis. One motivation is that it can serve as a foundation for an information-based epistemology that is an alternative to the tripartite theory of knowledge. However, the Veridicality Thesis is false if ‘information’ is to play an explanatory role in human cognition. Another motivation is avoiding the so-called Bar-Hillel/Carnap paradox. But this paradox only seems paradoxical if (a) ‘information’ and ‘informativeness’ are synonymous, (b) logic is a theory of inference, or (c) validity suffices for rational inference; and (a), (b), and (c) are all false.
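For readers unfamiliar with the Bar-Hillel/Carnap paradox mentioned above, here is a minimal sketch of the classical content measure that generates it, assuming the standard notation in which m is a logical probability measure over sentences and cont is the semantic content function; this is background to the abstract, not part of the paper's own argument.

```latex
% Sketch of the classical Carnap/Bar-Hillel content measure (assumed notation:
% m is a logical probability measure over sentences, cont is semantic content).
\[
  \mathrm{cont}(s) = 1 - m(s)
\]
% A tautology t has m(t) = 1, so cont(t) = 0: it carries no information.
% A contradiction c has m(c) = 0, so cont(c) = 1: it carries maximal
% information. The "paradox" is that the measure assigns the most information
% to sentences that intuitively cannot inform us of anything.
```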
Predictions about autonomous weapon systems are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear weapons and their effect on the development of modern asymmetrical warfare are the best analogy to the introduction of AWS. The final section focuses on this analogy and offers speculations about the likely consequences of AWS being hacked. These speculations tacitly draw on myths and tropes about technology and AI from popular fiction, such as Frankenstein, to project a convincing model of the risks and benefits of AWS deployment.
Programming computers to engage in moral reasoning is not a new idea (Anderson and Anderson 2011a). Work on the subject has yielded concrete examples of computable linguistic structures for a moral grammar (Mikhail 2007), the ethical governor architecture for autonomous weapon systems (Arkin 2009), rule-based systems that implement deontological principles (Anderson and Anderson 2011b), systems that implement utilitarian principles, and a hybrid approach to programming ethical machines (Wallach and Allen 2008). This chapter considers two philosophically informed strategies for engineering software that can engage in moral reasoning: algorithms based on philosophical moral theories and analogical reasoning from standard cases. Based on the challenges presented to the algorithmic approach, I argue that a combination of these two strategies holds the most promise and show concrete examples of how such an architecture could be built using contemporary engineering techniques.
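To make the combination of strategies concrete, here is a minimal sketch, not the chapter's actual architecture, of how a theory-based rule pass and a case-based analogical pass might be wired together; the class names, feature labels, and thresholds are all hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A precedent case: a feature vector plus the verdict reached on it."""
    features: dict          # e.g. {"harm": 0.8, "consent": 0.0, "benefit": 0.3}
    verdict: str            # "permissible" or "impermissible"

@dataclass
class HybridMoralReasoner:
    """Combines theory-derived rules with analogical reasoning from precedents."""
    precedents: list = field(default_factory=list)

    def rule_verdict(self, features):
        # Theory-based pass: a crude deontological constraint followed by a
        # crude utilitarian tally. Thresholds are illustrative only.
        if features.get("consent", 1.0) == 0.0 and features.get("harm", 0.0) > 0.5:
            return "impermissible"
        net_benefit = features.get("benefit", 0.0) - features.get("harm", 0.0)
        return "permissible" if net_benefit >= 0 else None  # None = rules are silent

    def analogical_verdict(self, features):
        # Case-based pass: return the verdict of the most similar precedent.
        if not self.precedents:
            return None
        def similarity(case):
            keys = set(features) | set(case.features)
            return -sum(abs(features.get(k, 0) - case.features.get(k, 0)) for k in keys)
        return max(self.precedents, key=similarity).verdict

    def judge(self, features):
        # Rules take priority; analogy breaks ties when the rules are silent.
        return self.rule_verdict(features) or self.analogical_verdict(features) or "undetermined"


reasoner = HybridMoralReasoner(precedents=[
    Case({"harm": 0.9, "consent": 0.0, "benefit": 0.2}, "impermissible"),
    Case({"harm": 0.1, "consent": 1.0, "benefit": 0.7}, "permissible"),
])
print(reasoner.judge({"harm": 0.4, "consent": 1.0, "benefit": 0.2}))
```

The design choice illustrated here is that theory-derived rules get priority and precedent-based analogy only breaks ties when the rules deliver no verdict; a hybrid architecture of the kind the chapter discusses could of course combine the two differently.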
Famously, David Lewis argued that we can avoid the apparent paradoxes of time travel by introducing a notion of personal time, which by and large follows the causal flow of the time traveler's life history. This paper argues that a related approach can be adapted for use by three-dimensionalists in response to Ted Sider's claim that three-dimensionalism is inconsistent with time travel. In contrast to Lewis (and others who follow him on this point), however, this paper argues that the order of events captured by so-called "personal time" should be thought of as causal, rather than temporal.
In this article, I consider the reception of images that are present in a city space. I focus on the juxtaposition of computer‑generated images covering fences surrounding construction sites and the real spaces which they screen from view. I postulate that a visual experience is dependent on input from the other human senses. While looking at objects, we are not only standing in front of them but are being influenced by them. Seeing does not leave a physical trace on the object; instead the interference is more subtle — it influences the way in which we perceive space. Following in the footsteps of Sarah Pink, Michael Taussig and William J. T. Mitchell, I show that seeing (to paraphrase the title of an article by the last of the above mentioned scholars) is a cultural practice. The last part of the article presents a visual essay as a method that can contribute to cultural urban studies. I give as an example of such a method a photo‑essay about chosen construction sites in Poznań, which I photographed between December 2014 and June 2015.
We say that a semantical function Φ is correlated with a syntactical function F iff for any structure A and any sentence φ we have A ⊨ F(φ) iff Φ(A) ⊨ φ. It is proved that for a syntactical function F there is a semantical function Φ correlated with F iff F preserves propositional connectives up to logical equivalence. For a semantical function Φ there is a syntactical function F correlated with Φ iff for any finitely axiomatizable class X the class Φ⁻¹(X) is also finitely axiomatizable (i.e. iff Φ is continuous in the model class topology).
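Restating the abstract's claims in displayed form may help; the notation below follows the reconstruction above (Φ a semantical function, F a syntactical function, A a structure, φ a sentence) and is an assumed rendering rather than the paper's own typography.

```latex
% Correlation condition and the two characterization results, in the
% reconstructed notation (assumed, not the paper's original typography).
\[
  \Phi \text{ is correlated with } F
  \;\iff\;
  \forall A\,\forall\varphi\;\bigl(A \models F(\varphi)
  \;\Longleftrightarrow\; \Phi(A) \models \varphi\bigr)
\]
\[
  \exists\,\Phi \text{ correlated with } F
  \;\iff\;
  F \text{ preserves propositional connectives up to logical equivalence}
\]
\[
  \exists\,F \text{ correlated with } \Phi
  \;\iff\;
  \Phi^{-1}(X) \text{ is finitely axiomatizable for every finitely axiomatizable class } X
\]
```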
This paper provides an analysis of the way in which two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. This paper can be a guide for health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.