In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom argue that if our species is to survive the impending rise of superintelligent AIs, we must ensure that they are human-friendly. This discussion note offers a more natural but bleaker outlook: in the end, if such AIs do arise, they won’t be that friendly.
In general, existential threats are those that could result in the extinction of the entire human species, or at least significantly endanger its living population. These threats include, but are not limited to, pandemics and the impacts of a technological singularity. As regards pandemics, significant work has already been done on how to mitigate, if not prevent, the aftereffects of this type of disaster. For one, certain problem areas in managing pandemic responses have already been identified, such as the following: (a) the failure to learn from previous experiences, (b) the inability to act on warning signals, and (c) the failure to reach a global consensus on a problem in a timely manner. In terms of a singularity, however, further research is still needed, specifically on how to aptly respond to its projected negative outcomes. In this paper, by treating the three problem areas noted above as preliminary measures of a country’s capacity to coordinate a national response to large-scale disasters, we examine the readiness of the Philippines in preparing for an intelligence explosion. Citing instances of how the country, specifically its national government, faced the coronavirus disease 2019 pandemic, the paper puts forward the idea that the likely Philippine disaster response to a singularity still needs to be worked on, and it appeals for a more comprehensive assessment to inform a response plan.
In the field of machine consciousness, it has been argued that in order to build human-like conscious machines, we must first have a computational model of qualia. To this end, some have proposed a framework that supports qualia in machines by implementing a model with three computational areas (i.e., the subconceptual, conceptual, and linguistic areas). These abstract mechanisms purportedly enable the assessment of artificial qualia. However, several critics of the machine consciousness project dispute this possibility. For instance, Searle, in his Chinese room objection, argues that however sophisticated a computational system is, it can never exhibit intentionality, and thus would also fail to exhibit consciousness or any of its varieties. This paper argues that the proposed architecture mentioned above answers the problem posed by Searle, at least in part. Specifically, it argues that we could reformulate Searle’s worries in the Chinese room in terms of the three-stage artificial qualia model. By doing so, we could see that the person doing all the translations in the room could realize the three areas in the proposed framework. Consequently, this demonstrates the actualization of self-consciousness in machines.
This article surveys different philosophical theories about the nature of truth. We give much importance to truth; some demand to know it, some fear it, and others would even die for it. But what exactly is truth? What is its nature? Does it even have a nature in the first place? When do we say that some truth-bearers are true? Philosophers offer varying answers to these questions. In this article, some of these answers are explored and some of the problems raised against them are presented.
This article examines who or what should be the target of feminist criticism. Throughout the discussion, the concept of memes is applied in analyzing systems such as patriarchy and feminism itself. Adapting Dawkins’s theory of genes, this research puts forward the possibility that patriarchies and feminisms are memeplexes competing for the limited energy and memory space of humanity.
This article focuses on a particular issue under machine ethics, namely, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial Moral Agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to develop a full theory of the nature of Artificial Moral Agents may draw on certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.
This article is a general introduction to the psychology of reasoning. Specifically, it focuses on the dual process theory of human cognition. Proponents of this two-system view hold that human cognition involves two processes (viz., System 1 and System 2). System 1 is an automatic, intuitive thinking process in which judgments and reasoning rely on fast thinking and ready-to-hand data. System 2, on the other hand, is a slow, logical cognitive process in which judgments and reasoning rely on reflective, careful analysis and data evaluation. Supposedly, these two cognitive processes are at play in every thinking task; they sometimes work together and sometimes go against each other.
In elementary logic textbooks, Venn diagrams are used to analyze and evaluate the validity of syllogistic arguments. Although the method of Venn diagrams is shown to be a powerful analytical tool in these textbooks, it still has limitations. On the one hand, the method fails to represent singular statements of the form “a is F.” On the other hand, it also fails to represent identity statements of the form “a is b.” Because of this, it also fails to account for the validity of some obviously valid arguments that contain these types of statements as constituents. In this paper, drawing on developments in the literature on Venn diagrams, we offer a way of supplementing the Venn diagram rules found in textbooks, and we show how this retooled Venn diagram technique can account for the problem cases.
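The validity facts at issue in the abstract above can be illustrated computationally. The following sketch (not from the paper; all predicate and name symbols are illustrative) checks validity semantically by brute force over small finite models, which handles singular statements like “a is F” and identity statements like “a is b” directly, since names simply denote domain elements:

```python
from itertools import product

def valid(premises, conclusion, preds, names, max_size=3):
    """Return True if every small model (domain size <= max_size) that
    satisfies all the premises also satisfies the conclusion.
    A model assigns each predicate a subset of the domain (as a tuple
    of booleans) and each name an element of the domain."""
    for n in range(1, max_size + 1):
        extensions = list(product([False, True], repeat=n))
        for ext in product(extensions, repeat=len(preds)):
            P = dict(zip(preds, ext))           # predicate extensions
            for ref in product(range(n), repeat=len(names)):
                R = dict(zip(names, ref))       # name denotations
                if all(p(P, R) for p in premises) and not conclusion(P, R):
                    return False                # found a countermodel
    return True

# "All F are G" as a universally quantified conditional
all_FG = lambda P, R: all(not P['F'][x] or P['G'][x] for x in range(len(P['F'])))
a_is_F = lambda P, R: P['F'][R['a']]            # singular statement
a_is_G = lambda P, R: P['G'][R['a']]
a_is_b = lambda P, R: R['a'] == R['b']          # identity statement
b_is_F = lambda P, R: P['F'][R['b']]

# Valid: All F are G; a is F; therefore a is G
print(valid([all_FG, a_is_F], a_is_G, ['F', 'G'], ['a']))        # True
# Invalid: All F are G; a is G; therefore a is F
print(valid([all_FG, a_is_G], a_is_F, ['F', 'G'], ['a']))        # False
# Valid via identity: a is b; a is F; therefore b is F
print(valid([a_is_b, a_is_F], b_is_F, ['F'], ['a', 'b']))        # True
```

This is only a finite-model approximation of validity, not the diagrammatic method the paper develops; it serves to show that arguments with singular and identity premises have determinate validity facts that a representation system ought to capture.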
This article contends that certain types of Autonomous Weapons Systems (AWS) are susceptible to Hume’s Law. Hume’s Law highlights the seeming impossibility of deriving moral judgments, if not all evaluative ones, from purely factual premises. If autonomous weapons make use of factual data from their environments to carry out specific actions, then justifying their ethical decisions may prove to be intractable in light of the said problem. In this article, Hume’s original formulation of the no-ought-from-is thesis is evaluated in relation to the dominant views regarding it (viz., moral non-descriptivism and moral descriptivism). Citing the objections raised against these views, it is claimed that, so long as no clear-cut solution to Hume’s is-ought problem exists, the task of grounding the moral judgments of AWS would remain unaccounted for.