An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is a growing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers’ position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
The philosophy of AI has seen some changes, in particular: (1) AI is moving away from cognitive science, and (2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence and rationality, and information – will regain urgency.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting out the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans, and critically evaluating such proposals.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents:
1. Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2. Steve Omohundro, Autonomous Technology and the Greater Human Good
3. Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions – and What They Mean for the Future
4. Ted Goertzel, The Path to More General Artificial Intelligence
5. Miles Brundage, Limitations and Risks of Machine Ethics
6. Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7. Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8. Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9. András Kornai, Bounding the Impact of AGI
10. Anders Sandberg, Ethics and Impact of Brain Emulations
11. Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12. Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
The management of ethics within organisations typically occurs within a problem-solving frame of reference. This often results in a reactive, problem-based and externally induced approach to managing ethics. Although basing ethics management interventions on dealing with and preventing current and possible future unethical behaviour is often effective in that it ensures compliance with rules and regulations, the approach is not necessarily conducive to the creation of sustained ethical cultures. Nor does the approach afford (mainly internal) stakeholders the opportunity to be co-designers of the organisation’s ethical future. The aim of this paper is to present Appreciative Inquiry (AI) as an alternative approach for developing a shared meaning of ethics within an organisation, with a view to embracing and entrenching ethics, thereby creating a foundation for the development of an ethical culture over time. A descriptive case study based on an application of AI is used to illustrate the utility of AI as a way of thinking and doing to precede and complement problem-based ethics management systems and interventions.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents:
Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
The path to more general artificial intelligence, Ted Goertzel, pages 343-354
Limitations and risks of machine ethics, Miles Brundage, pages 355-372
Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
Bounding the impact of AGI, András Kornai, pages 417-438
Ethics of brain emulations, Anders Sandberg, pages 439-457
The paper examines some aspects of today’s debate on trust and e-trust and, more specifically, issues of legal responsibility for the production and use of robots. Their impact on human-to-human interaction has produced new problems both in the fields of contractual and extra-contractual liability in that robots negotiate, enter into contracts, establish rights and obligations between humans, while reshaping matters of responsibility and risk in trust relations. Whether or not robotrust concerns human-to-robot or even robot-to-robot relations, there is a new generation of cases involving human-to-human contractual and extra-contractual liability for robots’ behaviour because, for the first time, legal systems hold you responsible for what an artificial system autonomously decides to do.
This paper is addressed to recent theoretical discussions of the Anthropocene, in particular Bernard Stiegler’s Neganthropocene, which argues: “As we drift past tipping points that put future biota at risk, while a post-truth regime institutes the denial of ‘climate change’, and as Silicon Valley assistants snatch decision and memory, and as gene-editing and a financially-engineered bifurcation advances over the rising hum of extinction events and the innumerable toxins and conceptual opiates that Anthropocene Talk fascinated itself with—in short, as ‘the Anthropocene’ discloses itself as a dead-end trap…”. The objective of this paper is therefore twofold: to discuss how the Anthropocene is appropriated to certain ideological discourses to maintain the hegemony of precisely those systems of production that have most accelerated climate change etc.; to consider how the factography of the Anthropocene is exploited in this process to mask the ideological character of industry-aligned “technocratic” environmental management. The paper is not concerned with specific case studies in terms of government and industry policy, or climate science, but rather with the ways in which the discourse of the Anthropocene has been inflected within the humanities and the broader cultural field—that is to say, ideologically, as a system or logic of meaning. How the Anthropocene “means” is, in this respect, a question of some importance. This paper does not attempt to address all the facets of this question, but focuses upon a central “apocalyptic” strain in the discourse of the Anthropocene drawn particularly from Francis Fukuyama’s millennial posthumanism and centred in the question of “sustainability” as catastrophe management—with the risk that real environmental degradation will become an alibi for a revived neoliberalism. In other words, that the critical Earth system transformations that characterise the Anthropocene are themselves commodities, and that the project of their amelioration is in process of defining a future “crisis” rhetoric with a global political franchise. The ideological import of the Anthropocene stems precisely from the fact that it is planetary and, while catalysed by human agency, independent in its specific behaviour from it. The Anthropocene objectively presents as the contemporary counterpart of the Cold War doctrine of Mutually Assured Destruction and the most compelling argument for a new kind of technological “arms race.” But it also presents as the condition of an emerging ideological discourse which will determine how this race is run. From the discourse on “energy security” to the widespread “security crackdown” on environmental activists across the so-called developed & developing world, the Anthropocene has come to represent the co-option of a scientific factography for the thinly disguised resurgence of “ideological science” of the Fukuyamaesque variety. For Fukuyama, the true meaning of “posthuman” is thus the accomplishment of humanity’s historical mission. As the “End of History” designates an end of ideological struggle, so too the dénouement of the Anthropocene and the “ends of man” represent the accomplished purpose of species warfare: dominion, not simply over the world, but over all possible worlds.
According to this narrative, science—like technology—must be uniquely at the service of the maintenance of the global order, organised around a universal appeal to “crisis management.” It is precisely for this reason that what calls itself post-human masks the return of an ever-more-apocalyptic Humanism.
Recommender systems are recently developed computer-assisted tools that support the social and informational needs of various communities and help users exploit huge amounts of data to make optimal decisions. In this study, we present a new recommender system for assessment and risk prediction in child welfare institutions in Israel. The system exploits a large diachronic repository of manually completed questionnaires on the functioning of welfare institutions and proposes two different rule-based computational models. The system accepts users’ requests via a simple graphical interface, calculates the institutions’ profiles according to user preferences, and presents assessment scores, trends and comparative analyses of the corresponding data using assorted visual aids. Based on the analysis, the system offers three different strategies for objective assessment of the institutions’ functioning and risks. Qualitative and quantitative evaluation of the system’s effectiveness and accuracy demonstrates that it substantially improves the assessment process of a welfare institution. Moreover, it provides an effective tool for objective large-scale analysis of an institution’s overall state and trends, which were previously based primarily on the institution supervisors’ subjective judgment and intuition. In addition, the proposed recommender system has great practical and social impact, as it may help identify and avert potential problems, malfunctions, flaws, risks and even tragic incidents in child welfare institutions, as well as increase their overall functioning levels. As a result, as a long-term social implication, the system may also help reduce inequality and social gaps in Israeli society.
The import of computational learning theories and techniques for the ethics of human-robot interaction is explored in the context of recent developments in personal robotics. An epistemological reflection enables one to isolate a variety of background hypotheses that are needed to achieve successful learning from experience in autonomous personal robots. The conjectural character of these background hypotheses brings out theoretical and practical limitations in our ability to predict and control the behaviour of learning robots in their interactions with humans. Responsibility ascription problems, which concern damages caused by learning robot actions, are analyzed in the light of these epistemic limitations. Finally, a broad framework is outlined for ethically motivated scientific inquiries, which aim at improving our capability to understand, anticipate, and selectively cope with harmful errors by learning robots.
October 14, 2007: Studying how a broker's brain works. swissinfo. "To help maintain its competitive edge, the Swiss banking industry is investing heavily in financial engineering. Its latest recruit is economist Peter Bossaerts. swissinfo talked to Bossaerts, a leading expert in neuroeconomics – the study of how we make financial choices – about his recent appointment as professor at the Federal Institute of Technology in Lausanne.... swissinfo: So what exactly is neuroeconomics? Peter Bossaerts: It's a mixture of decisional theory – mathematical theories applied in risk-based decision-making – and neuroscience.... Neurofinance, therefore, tries to understand how choices are made in a risky world. It looks closely at the workings of the brain while taking into account human emotions.... swissinfo: What is the aim of your work? P.B.: Firstly, to make progress on how people make choices when dealing with risk.... Neuroeconomics should also help improve decisional theory, which doesn't work in the real world where rules are vague and probabilities are unknown. The aim is to build up artificial intelligence based on a theory where decision-making is repeated."
Various potential strategic interactions between a “strong” artificial intelligence and humans are analyzed using simple 2 × 2 ordinal games, drawing on the New Periodic Table of those games developed by Robinson and Goforth. Strong risk aversion on the part of the human player leads to shutting down the AI research program, but alternative preference orderings by the human and the AI result in Nash equilibria with interesting properties. Some of the AI-Human games have multiple equilibria, and in other cases Pareto improvement over the Nash equilibrium could be attained if the AI’s behavior towards humans could be guaranteed to be benign. The preferences of a superintelligent AI cannot be known in advance, but speculation is possible as to its ranking of alternative states of the world, and how it might assimilate the accumulated wisdom of humanity.
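For readers unfamiliar with the machinery, the following is a minimal sketch (not from the article) of how pure-strategy Nash equilibria are found in a 2 × 2 ordinal game of the Robinson and Goforth kind, where each player ranks the four outcomes from 1 (worst) to 4 (best). The payoff matrices are hypothetical, not taken from the paper.

```python
# Minimal sketch: pure-strategy Nash equilibria in a 2 x 2 ordinal game,
# payoffs ranked 1 (worst) to 4 (best). Payoff matrices are hypothetical.

def pure_nash_equilibria(row_payoffs, col_payoffs):
    """row_payoffs[r][c], col_payoffs[r][c]: ordinal payoffs at cell (r, c)."""
    equilibria = []
    for r in (0, 1):
        for c in (0, 1):
            # Neither player can do better by unilaterally switching strategy.
            row_ok = row_payoffs[r][c] >= row_payoffs[1 - r][c]
            col_ok = col_payoffs[r][c] >= col_payoffs[r][1 - c]
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Hypothetical Human (row) vs. AI (column) game with Prisoner's-Dilemma-like
# preferences; strategy 0 = cooperate, 1 = defect.
human = [[3, 1], [4, 2]]
ai = [[3, 4], [1, 2]]
print(pure_nash_equilibria(human, ai))  # [(1, 1)]: mutual defection
```

With these assumed rankings, the unique equilibrium is mutual defection even though mutual cooperation is Pareto-superior, which illustrates why a guarantee of benign AI behavior could matter for equilibrium selection.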
Significant technological advancements over the last two decades have led to enhanced accessibility to computing devices and the Internet. Our society is experiencing an ever-growing integration of the Internet into everyday life, and this has transformed the way we obtain and exchange information, communicate and interact with one another, and conduct business. However, the term ‘Internet addiction’ (IA) has emerged from problematic and excessive Internet usage which leads to the development of addictive cyber-behaviours, causing health and social problems. The most commonly used intervention treatments, such as motivational interviewing, cognitive-behavioural therapy, and retreat or inpatient care, mix a variety of psychotherapy theories to treat such addictive behaviour and try to address underlying psychosocial issues that are often coexistent with IA, but the efficacy of these approaches is not yet proven. The aim of this paper is to address the question of whether it is possible to cure IA with the Internet. After detailing the current state of the art, including various IA definitions, risk factors, assessment methods and IA treatments, we outline the main research challenges that need to be solved. Moreover, we propose an Internet-based IA Recovery Framework which uses AI to closely observe, visualize and analyse a patient’s Internet usage behaviour for possible staged intervention. The proposal to use smart Internet-based systems to control IA can be expected to be controversial. This paper is intended to stimulate further discussion and research in IA recovery through Internet-based frameworks.
The paper questions the expert system paradigm, both in terms of its range of application and as a significant contribution to the understanding of artificial intelligence. The viewpoint is that of the systems designer who must judge the applicability of these methods in imminent and future systems. The expert system paradigm (ESP for short) is criticised not because it is ubiquitously wrong, but because its range of application appears to be very limited, and much promise is made of its application in areas where its success is likely to be little more than a matter of luck. The paper considers its success in both academic and commercial settings. It is suggested that the contribution of the ESP to the wider ambitions of AI is modest, and to the practical user it is still a considerable and largely unquantifiable risk.
It is supposedly easier to connect with other human beings in the era of ubiquitous technology. Connecting requires action and an element of risk-taking in a context of dynamic uncertainty and incomplete information. The article explores what is involved in developing sustainable connections. We reflect on the context of “Socially Useful Artificial Intelligence”, the focus of the first article in issue 1.1 (1987) of AI & Society, and explore subsequent research in a changing world. The arguments are illustrated through an account of the development of the Penny University, from a London coffee house to a potential international virtual institution.
Most discussions of risk are developed in broadly consequentialist terms, focusing on the outcomes of risks as such. This paper will provide an alternative account of risk from a virtue-ethical perspective, shifting the focus to the decision to take the risk. Making ethical decisions about risk is, we will argue, not fundamentally about the actual chain of events that the decision sets in process, but about the reasonableness of the decision to take the risk in the first place. A virtue-ethical account of risk is needed because the notion of the ‘reasonableness’ of the decision to take the risk is affected by the complexity of the moral status of particular instances of risk-taking and the risk-taker’s responsiveness to these contextual features. The very idea of ‘reasonable risk’ welcomes judgments about the nature of the risk itself and raises questions about complicity, culpability and responsibility, while, at its heart, it involves a judgement about the justification of risk which unavoidably focuses our attention on the character of the individuals involved in decisions about risk. Keywords: risk; ethics; morality; responsibility; virtue; choice; reasons.
Risk communication has been generally categorized as a warning act, which is performed in order to prevent or minimize risk. On the other hand, risk analysis has also underscored the role played by information in reducing uncertainty about risk. In both approaches the safety aspects related to the protection of the right to health are in focus. However, it seems that there are cases where a risk cannot possibly be avoided or uncertainty reduced; this is, for instance, the case for the declaration of side effects associated with pharmaceutical products, or when a decision about drug approval or withdrawal must be delivered on the basis of the available evidence. In these cases, risk communication seems to accomplish tasks other than preventing risk or reducing uncertainty. The present paper analyzes the legal instruments which have been developed in order to control and manage the risks related to drugs – such as the notion of “development risk” or “residual risk” – and relates them to different kinds of uncertainty. These are conceptualized as epistemic, ecological, metric, ethical, and stochastic, depending on their nature. By reference to this taxonomy, different functions of pharmaceutical risk communication are identified and connected with the legal tools of uncertainty management. The purpose is to distinguish the different functions of risk communication and make explicit their different legal nature and implications.
A capability approach to risk analysis has been proposed, where risk is conceptualized as the probability that capabilities are reduced. Capabilities refer to the genuine opportunities of individuals to achieve valuable doings and beings, such as being adequately nourished. Such doings and beings are called functionings. A current debate in risk analysis, and in other fields where a capability approach has been developed, concerns whether capabilities or actually achieved functionings should be used. This paper argues that in risk analysis the consequences of hazardous scenarios should be conceptualized in terms of capabilities, not achieved functionings. Furthermore, the paper proposes a method for assessing capabilities which considers the levels of achieved functionings of other individuals with similar boundary conditions. The capability of an individual can then be captured statistically, based on the variability of the achieved functionings over the considered population.
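As a rough illustration of that statistical move (the paper's own estimation procedure may differ; the data and quantile choices here are assumptions), one could summarise the spread of achieved functionings among similarly situated individuals and read the upper end as a proxy for the genuine opportunity open to each of them:

```python
# Rough illustration only: proxy an individual's capability by the spread of
# achieved functionings among peers with similar boundary conditions.
import statistics

def capability_summary(peer_functionings):
    """peer_functionings: achieved levels (e.g. nutrition scores) of
    individuals sharing the target individual's boundary conditions."""
    deciles = statistics.quantiles(peer_functionings, n=10)
    return {
        "median": statistics.median(peer_functionings),
        "p10": deciles[0],   # lower end of what similar individuals achieve
        "p90": deciles[-1],  # upper end, suggesting the genuine opportunity
    }

peers = [52, 61, 58, 70, 66, 49, 73, 55, 64, 68]  # hypothetical scores
print(capability_summary(peers))
```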
Although the AI paradigm is useful for building knowledge-based systems for the applied natural sciences, there are dangers when it is extended into the domains of business, law and other social systems. It is misleading to treat knowledge as a commodity that can be separated from the context in which it is regularly used. Especially when it relates to social behaviour, knowledge should be treated as socially constructed, interpreted and maintained through its practical use in context. The meanings of terms in a knowledge-base are assumed to be references to an objective reality whereas they are instruments for expressing values and exercising power. Expert systems that are not perspicuous to the expert community will lose their meanings and cease to contain genuine knowledge, as they will be divorced from the social processes essential for the maintenance of both meaning and knowledge. Perspicuity is usually sacrificed when knowledge is represented in a formalism, with the result that the original problem is compounded with a second problem of penetrating the representation language. Formalisms that make business and legal problems easier to understand are one essential research goal, not only in the quest for intelligent machines to replace intelligent human beings, but also in the wiser quest for computers to support collaborative work and other forms of social problem solving.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory, and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast, the umbrella theory proffered by Pope, which she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
The term ‘risk’ is used in a wide range of situations, but there is no real consensus about what it means. ‘Risk’ is often stipulatively defined as “a probability for the occurrence of a negative event” or something similar. This formulation is, however, not very informative, and it fails to capture many of our intuitions about the concept of risk. One way of trying to find a common definition of a term within a group is to use a Socratic Dialogue (SD). This method is fairly new, and it is rather different from the original Socratic dialogues (at least if we are to judge from how they are described by Plato); the best explanation for the name is that the method was inspired by them. The SD in its modern form was originally developed as a tool for enabling laymen to perform rather advanced concept analyses under the supervision of a professional philosopher. The formal goal of the method is to find a common way of understanding a particular term, or at least to find out exactly how the members of the group differ in their understandings of the term, and why. The largest gain from the process has in practice turned out to be a higher awareness among the participants of different ways of understanding the term, and of the ideas and intuitions behind it. This has turned out to be very useful in educational settings, but the method has also been used with great success in research and in, e.g., business, public administration and non-governmental organisations. In the present case, a Socratic dialogue on the concept of risk was performed within the framework of a PhD course about risk and uncertainty at the Swedish University of Agricultural Sciences in Alnarp, Sweden. The participants in the course were all quite familiar with practical issues relating to risk, both from the course work and from their own research.
The paper presents a Chinese philosophical point of view on AI and proposes a novel system for the AI machine. There are two basic relations, or contradictions, that drive computer development forward: one between software and hardware, and the other between data structure and system organization. It is suggested that a description of a future AI system should primarily start from these contradictions.
This article is concerned with the history and current state of research activities into medical expert systems (MES) in Japan. A brief review of expert systems work over the last ten years is provided, together with a discussion of future directions for artificial intelligence (AI) applications in medicine, which we expect the Japanese AI-in-medicine (AIM) community to undertake.
Well-known critics of AI such as Hubert Dreyfus and Michael Polanyi tend to confuse cybernetics with AI. Such a confusion is quite misleading and should not be overlooked. In the first place, cybernetics is not vulnerable to the criticism of AI as cognitivistic and behaviouristic. In the second place, AI researchers are recommended to consider the cybernetics approach as a way of overcoming the limitations of cognitivism and behaviourism.
In its forty years of existence, Artificial Intelligence has suffered both from the exaggerated claims of those who saw it as the definitive solution of an ancestral dream – that of constructing an intelligent machine – and from its detractors, who described it as the latest fad worthy of quacks. Yet AI is still alive, well and blossoming, and has left a legacy of tools and applications almost unequalled by any other field – probably because, as the heir of Renaissance thought, it represents a possible bridge between the humanities and the natural sciences, philosophy and neurophysiology, psychology and integrated circuits – including systems that today are taken for granted, such as the computer interface with mouse pointer and windows. This writing describes a few results of AI that have modified the scientific world, as well as the way a layman sees computers: the technology of programming languages, such as LISP – witness the unique excellence of academic departments that have contributed to them – the computing workstations – of which our modern PC is but a vulgarised descendant – the applications to the educational field – e.g., the realisation of some ideas of genetic epistemology – and to interdisciplinary philosophy – such as Hofstadter’s associations between the arts and mathematics – and the use of AI techniques in music and musicology. All this has led to a generalisation of AI towards Negrotti’s overall Theory of the Artificial, which encompasses further specialisations such as artificial reality, artificial life, and applications of neural networks, among others.
Powerful, technically complex international compliance regimes have developed recently in certain professions that deal with risk: banking (the Basel II regime), accountancy (IFRS) and the actuarial profession. The need to deal with major risks has acted as a strong driver of international co-operation to create enforceable international semi-legal systems, as happened earlier in such fields as international health regulations. This regulation in technical fields contrasts with the failure of an international general-purpose political and legal regime to develop. We survey the new global regulatory systems in the actuarial, banking and accounting fields, with a view to showing how the need to deal reasonably with risk has resulted in an international de facto law solidly based on correct abstract principles of probability.
The paper explores the influence of greenwash on green trust and discusses the mediating roles of green consumer confusion and green perceived risk. The study focuses on Taiwanese consumers who have experience of purchasing information and electronics products in Taiwan. The research employs an empirical study by means of structural equation modeling. The results show that greenwash is negatively related to green trust. Therefore, this study suggests that companies must reduce their greenwash behaviors to enhance their consumers’ green trust. In addition, this study finds that green consumer confusion and green perceived risk mediate the negative relationship between greenwash and green trust. The results also demonstrate that greenwash is positively associated with green consumer confusion and green perceived risk, which would negatively affect green trust. This means that greenwash does not only negatively affect green trust directly but also negatively influences it via green consumer confusion and green perceived risk indirectly. Hence, if companies would like to reduce the negative relationship between greenwash and green trust, they need to decrease their consumers’ green consumer confusion and green perceived risk.
This paper argues that Lara Buchak’s risk-weighted expected utility (REU) theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capture a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
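To make the contrast concrete, here is a minimal sketch of Buchak-style risk-weighted expected utility: improvements over the worst outcome are weighted by a risk function r applied to the probability of doing at least that well, and setting r(p) = p recovers ordinary expected utility. The gamble, the linear utility, and the convex r below are illustrative assumptions, not examples from the paper.

```python
# Hedged sketch of risk-weighted expected utility (REU): gains over the worst
# outcome are weighted by a risk function r applied to the probability of
# doing at least that well. r(p) = p recovers ordinary expected utility.

def reu(outcomes, r=lambda p: p, u=lambda x: x):
    """outcomes: list of (value, probability) pairs with probabilities summing to 1."""
    ranked = sorted(outcomes, key=lambda o: u(o[0]))  # worst to best
    total = u(ranked[0][0])
    for j in range(1, len(ranked)):
        p_at_least = sum(p for _, p in ranked[j:])
        total += r(p_at_least) * (u(ranked[j][0]) - u(ranked[j - 1][0]))
    return total

gamble = [(-100, 0.5), (200, 0.5)]
print(reu(gamble))                      # 50.0: expected utility, r(p) = p
print(reu(gamble, r=lambda p: p ** 2))  # -25.0: risk-avoidant weighting
```

A convex r such as r(p) = p² discounts merely probable gains, so the same gamble that expected utility values at 50 is valued below the status quo, which is the kind of steady risk-avoidant disposition at issue in the paper.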
Proponents of the value-ladenness of science rely primarily on arguments from underdetermination or inductive risk, which share the premise that we should only consider values where the evidence runs out or leaves uncertainty; they adopt a criterion of lexical priority of evidence over values. The motivation behind lexical priority is to avoid reaching conclusions on the basis of wishful thinking rather than good evidence. While this is a real concern, I argue that giving lexical priority to evidential considerations over values is a mistake, and that it is unnecessary for avoiding wishful thinking. Values have a deeper role to play in science.
A moderately risk-averse person may turn down a 50/50 gamble that either results in her winning $200 or losing $100. Such behaviour seems rational if, for instance, the pain of losing $100 is felt more strongly than the joy of winning $200. The aim of this paper is to examine an influential argument that some have interpreted as showing that such moderate risk aversion is irrational. After presenting an axiomatic argument that I take to be the strongest case for the claim that moderate risk aversion is irrational, I show that it essentially depends on an assumption that those who think that risk aversion can be rational should be skeptical of. Hence, I conclude that risk aversion need not be irrational.
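As a worked illustration of how declining such a gamble can be expected-utility-rational under a concave utility function (the wealth level and logarithmic utility here are my assumptions, not the paper's):

```python
# Worked illustration: with logarithmic utility of wealth, declining the
# 50/50 win-$200 / lose-$100 gamble maximises expected utility at a
# hypothetical starting wealth of $150.
import math

def expected_log_utility(wealth, gamble):
    return sum(p * math.log(wealth + change) for change, p in gamble)

wealth = 150
gamble = [(200, 0.5), (-100, 0.5)]

print(math.log(wealth))                      # ~5.011: utility of declining
print(expected_log_utility(wealth, gamble))  # ~4.885: utility of accepting
```

Because the logarithm is concave, the drop from $150 to $50 costs more utility than the rise from $150 to $350 gains, which is one precise way of cashing out "the pain of losing $100 is felt more strongly than the joy of winning $200".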
An attempt at a trans-disciplinary analysis of the evolutionary value of bioethics is made. Currently, there are high-tech schemes for the management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). Biological, socio-cultural and technological factors are woven into the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization were formed mainly in the 17th–18th centuries, and they are experiencing ever-increasing and destabilizing risk-taking pressure from scientific theories and technological realities. The diagnostic signs of the new era once again split into two series: a technological and natural-scientific one on the one hand, and a humanitarian and anthropological one on the other. The natural-scientific series corresponds to a system of technological risks to be addressed using algorithms of established safety procedures. The socio-humanitarian series presents anthropological risk. The phenomenon of global bioethics is regarded as a systemic socio-cultural adaptation for technology-driven human evolution. A conceptual model for the meta-structure of the stable evolutionary strategy of Homo sapiens (SESH) is proposed. According to the model, SESH is composed of genetic, socio-cultural and techno-rationalist modules, with global bioethics as a tool to minimize existential evolutionary risk. The paper postulates (1) the existence of objectively descriptive and value-teleological parameters of humanity's evolutionary trajectory in the modern technological and civilizational context, and (2) the genesis of global bioethics as a systemic social adaptation to ensure self-identity.
Lara Buchak sets out a new account of rational decision-making in the face of risk. She argues that the orthodox view is too narrow, and suggests an alternative, more permissive theory: one that allows individuals to pay attention to the worst-case or best-case scenario, and vindicates the ordinary decision-maker.
In this paper, we examine the relation between corporate social responsibility (CSR) and firm risk in controversial industry sectors. We develop and test two competing hypotheses of risk reduction and window dressing. Employing an extensive U.S. sample of controversial industry firms, such as alcohol, tobacco, gambling, and others, during the 1991-2010 period, we find that CSR engagement inversely affects firm risk after controlling for various firm characteristics. To deal with the endogeneity issue, we adopt a system equations approach and difference regressions, and continue to find that CSR engagement of firms in controversial industry sectors negatively affects firm risk. To examine the premise that firm risk is more of an issue for controversial firms, we further examine the difference between non-controversial and controversial firm samples, and find that the effect of risk reduction through CSR engagement is more economically and statistically significant in controversial industry firms than in non-controversial industry firms. These findings support the risk-reduction hypothesis, but not the window-dressing hypothesis, and the notion that the top management of U.S. firms in controversial industries is, in general, risk averse and that their CSR engagement helps their risk management efforts.
The relationship of corporate social responsibility to risk management has been treated sporadically in the business and society literature. Using real options theory, I develop the notion of corporate social responsibility as a real option and draw out its implications for risk management. Real options theory allows for a strategic view of corporate social responsibility. Specifically, real options theory suggests that corporate social responsibility should be negatively related to the firm’s ex ante downside business risk.
There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become “superintelligent”, vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure – long before one arrives – that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge, in part because most of the final goals we could give an AI admit of so-called “perverse instantiations”. I propose a novel solution to this puzzle: instruct the AI to love humanity. The proposal is compared with Yudkowsky’s Coherent Extrapolated Volition and Bostrom’s Moral Modeling proposals.
Crispin Wright maintains that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this fact doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without endangering his epistemology of perception.
In this article it is argued that the standard theoretical account of risk in the contemporary literature, which is cast along probabilistic lines, is flawed, in that it is unable to account for a particular kind of risk. In its place a modal account of risk is offered. Two applications of the modal account of risk are then explored: first, to epistemology, via the defence of an anti-risk condition on knowledge in place of the usual anti-luck condition; second, to legal theory, where it is shown that this account of risk can cast light on the debate regarding the extent to which a criminal justice system can countenance the possibility of wrongful convictions.
Many of us believe (1) saving a life is more important than averting any number of headaches. But what about risky cases? Surely (2) in a single choice, if the risk of death is low enough, and the number of headaches at stake high enough, one should avert the headaches rather than avert the risk of death. And yet, if we face enough iterations of cases like that in (2), in the long run some of those small risks of serious harms will surely eventuate. And yet (3) isn't it still permissible for us to run these repeated risks, despite that knowledge? After all, if it were not, then many of the risky activities that we standardly think permissible would in fact be impermissible. Nobody has yet offered a principle that can accommodate all of (1)-(3). In this paper, I show that we can accommodate all of these judgements by taking into account both ex ante and ex post perspectives. In doing so, I clear aside an important obstacle to a viable deontological decision theory.
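The long-run point behind (2) and (3) can be made numerically precise: for n independent repetitions of a per-choice risk p, the chance that at least one bad outcome eventuates is 1 - (1 - p)^n, which approaches certainty as n grows. A minimal sketch (the value of p is an arbitrary assumption):

```python
# Numerical gloss on (2)-(3): a per-choice risk that is negligible in a
# single case becomes near-certain across enough independent iterations.

def prob_at_least_one(p, n):
    """Chance that at least one of n independent risks of probability p eventuates."""
    return 1 - (1 - p) ** n

p = 1e-6  # tiny per-decision risk of death (illustrative)
for n in (1, 10_000, 1_000_000, 5_000_000):
    print(f"n={n:>9,}: {prob_at_least_one(p, n):.4f}")
# n=        1: 0.0000
# n=   10,000: 0.0100
# n=1,000,000: 0.6321
# n=5,000,000: 0.9933
```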
Existing research on the financial implications of corporate social responsibility (CSR) for firms has predominantly focused on positive aspects of CSR, overlooking the fact that firms also undertake actions and initiatives that qualify as negative CSR. Moreover, studies in this area have not investigated how both positive and negative CSR affect the financial risk of firms. As such, in this research, the authors provide a framework linking both positive and negative CSR to the idiosyncratic risk of firms. While investigating these relationships, the authors also analyze the moderating role of the financial leverage of firms. Overall, analysis of secondary information for firms from multiple industries over the years 2000-2009 shows that CSR has a significant effect on the idiosyncratic risk of firms, with positive CSR reducing risk and negative CSR increasing it. Results also show that the reduction in risk from positive CSR is not guaranteed, with firms having high levels of financial leverage witnessing lower idiosyncratic risk reduction.
According to the orthodox treatment of risk preferences in decision theory, they are to be explained in terms of the agent's desires about concrete outcomes. The orthodoxy has been criticised both for conflating two types of attitudes and for committing agents to attitudes that do not seem rationally required. To avoid these problems, it has been suggested that an agent's attitudes to risk should be captured by a risk function that is independent of her utility and probability functions. The main problem with that approach is that it suggests that attitudes to risk are wholly distinct from people's (non-instrumental) desires. To overcome this problem, we develop a framework where an agent's utility function is defined over chance propositions (i.e., propositions describing objective probability distributions) as well as ordinary (non-chance) ones, and argue that one should explain different risk attitudes in terms of different forms of the utility function over such propositions.
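The following is an illustrative stand-in only, not the authors' formal account: it lets utility attach directly to a chance proposition, i.e. to a whole probability distribution, so that an aversion to chanciness figures as a genuine (non-instrumental) desire rather than as a separate risk function. The mean-minus-spread form and the parameter lam are assumptions introduced purely for illustration.

```python
# Illustrative stand-in: utility defined on the distribution itself, not
# merely on its expectation; a hypothetical agent dislikes spread as such.
import math

def utility_of_chance_proposition(distribution, lam=0.5):
    """distribution: list of (outcome, probability) pairs; lam is an assumed
    taste parameter penalising dispersion of the chances."""
    mean = sum(p * x for x, p in distribution)
    variance = sum(p * (x - mean) ** 2 for x, p in distribution)
    return mean - lam * math.sqrt(variance)

sure_thing = [(50, 1.0)]
coin_flip = [(0, 0.5), (100, 0.5)]
print(utility_of_chance_proposition(sure_thing))  # 50.0
print(utility_of_chance_proposition(coin_flip))   # 25.0: same mean, dispreferred
```

On this toy picture, preferring the sure $50 to the fair coin flip is not a distortion applied on top of one's desires; it simply is a desire about which chances obtain.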
The way that diseases such as high blood pressure (hypertension), high cholesterol, and diabetes are defined is closely tied to ideas about modifiable risk. In particular, the threshold for diagnosing each of these conditions is set at the level where future risk of disease can be reduced by lowering the relevant parameter (of blood pressure, low-density lipoprotein, or blood glucose, respectively). In this article, I make the case that these criteria, and those for diagnosing and treating other “risk-based diseases,” reflect an unfortunate trend towards reclassifying risk as disease. I closely examine stage 1 hypertension and high cholesterol and argue that many patients diagnosed with these “diseases” do not actually have a pathological condition. In addition, though, I argue that the fact that they are risk factors, rather than diseases, does not diminish the importance of treating them, since there is good evidence that such treatment can reduce morbidity and mortality. For both philosophical and ethical reasons, however, the conditions should not be labeled as pathological. The tendency to reclassify risk factors as diseases is an important trend to examine and critique.