There has been a recent surge in interest in two questions concerning the nature of perceptual experience: whether perceptual experience is sometimes cognitively penetrated, and whether high-level properties are presented in perceptual experience. Only rarely have thinkers been concerned with the question of whether the two phenomena are interestingly related. Here we argue that the two phenomena are not related in any interesting way. We argue further that this lack of an interesting connection between the two phenomena has potentially devastating consequences for naïve realism. Finally, we consider the possibility of a disunified view of experience that takes perceptual experience to be a matter both of being directly perceptually related to mind-independent objects and property instances and of consciously representing these entities.
Information integration theories posit that the integration of information is necessary and/or sufficient for consciousness. In this paper, we focus on three of the most prominent information integration theories: Information Integration Theory (IIT), Global Workspace Theory (GWT), and Attended Intermediate-Level Theory (AIR). We begin by explicating each theory and the key concepts it utilizes. We then argue that the current evidence indicates that the integration of information is neither necessary nor sufficient for consciousness. Unlike GWT and AIR, IIT maintains that the integration of information is both necessary and sufficient for consciousness. We present empirical evidence indicating that simple features are experienced in the absence of feature integration and argue that this challenges IIT's necessity claim. In addition, we challenge IIT's sufficiency claim by presenting evidence from hemineglect cases and amodal completion indicating that contents may be integrated and yet fail to give rise to subjective experience. Moreover, we present empirical evidence from subjects with frontal lesions who are unable to carry out simple instructions and argue that it is irreconcilable with GWT. Lastly, we argue that empirical evidence indicating that patients with visual agnosia fail to identify objects they report being conscious of presents a challenge to AIR's necessity claim.
It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any system of commercial production of such artificial persons would have to meet. It then shows that it is possible for these requirements to be met, and that meeting them would make the commercial production of artificial persons permissible. Lastly, it briefly presents one potential blueprint for what such a framework could look like, inspired by the real-world model of compensating the training of athletes, and then addresses some objections to the view.
In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, in building such artificial agents, their creators cannot help but evince an objectionable attitude akin to the Aristotelian vice of manipulativeness.
This paper argues that even though massive technological unemployment will likely be one of the results of automation, we will not need to institute mass-scale redistribution of wealth to deal with its consequences. Instead, reasons are given for cautious optimism about the standards of living the newly unemployed workers may expect in the fully automated future. It is not claimed that these predictions will certainly be borne out. Rather, they are no less likely to come to fruition than the predictions of those authors who foresee that massive technological unemployment will lead to the suffering of the masses on such a scale that significant redistributive policies will have to be instituted to alleviate it. Additionally, the paper challenges the idea that the existence of a moral obligation to help the victims of massive unemployment justifies the coercive taking of anyone else’s property.
There is a debate between David Barnett and Rory Madden concerning the features that “our naïve conception of conscious subjects” has. While Barnett claims that our conception demands that conscious subjects be simple, Madden holds that our conception demands that conscious beings be topologically integrated. In this paper, I aim to bring some empirical results concerning the rubber-hand and bilocation illusions to bear on this topic. While I do not reach a definitive resolution of the dispute between Barnett and Madden, I suggest that, provisionally, the empirical results favor Barnett’s proposal.
In “Moods Are Not Colored Lenses: Perceptualism and the Phenomenology of Moods” Francisco Gallegos presents a challenge to a popular view about the phenomenology of being in a mood that he calls “perceptualism”. In this essay, I offer a partial defense of perceptualism about moods and argue that perceptualism and Gallegos’s preferred Heideggerian alternative need not be viewed as opposed to one another.
With the advent of automated decision-making, governments have increasingly begun to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed among philosophers worried about “algocracy” and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the lack of transparency of algorithms. One of the best-known philosophical analyses of algocracy is John Danaher’s “The threat of algocracy”, which argues that government by algorithm undermines political legitimacy. In this paper, I will treat Danaher’s argument as a springboard for raising additional questions about the connections between algocracy, comprehensibility, and legitimacy, especially in light of empirical results about what we can expect voters and policymakers to know. The paper has the following structure: in Sect. 2, I introduce the basics of Danaher’s argument regarding algocracy. In Sect. 3, I argue that the algocratic threat to legitimacy has troubling implications for social justice. In Sect. 4, I argue that, nevertheless, there seem to be good reasons for governments to rely on algorithmic decision support systems. Lastly, I try to resolve the apparent tension between the findings of the two preceding sections.
Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, at best, morally problematic. However, the article also argues that anti-natalists can find the production of some possible artificially conscious AI permissible. Thus, the creation of potentially conscious AI could be accepted by both friends and foes of anti-natalism.
Having been involved in a slew of recent scandals, many of the world’s largest technology companies (“Big Tech,” “Digital Titans”) embarked on devising numerous codes of ethics, intended to promote improved standards in the conduct of their business. These efforts have attracted largely critical interdisciplinary academic attention. The critics have identified the voluntary character of the industry ethics codes as among the main obstacles to their efficacy. This is because individual industry leaders and employees, flawed human beings that they are, cannot be relied on to conform voluntarily with what justice demands, especially when faced with powerful incentives to pursue their own self-interest instead. Consequently, the critics have recommended a suite of laws and regulations to force the tech companies to better comply with the requirements of justice. At the same time, they have paid little attention to the possibility that individuals acting within the political context, e.g. as lawmakers and regulators, are also imperfect and need not be wholly compliant with what justice demands. This paper argues that such an omission is far from trivial. It places a heavy argumentative burden on the critics, one that they by and large fail to discharge. As a result, the case for Big Tech regulation that emerges from the recent literature has substantial lacunae, and more work needs to be done before we can accept the critics’ calls for greater state involvement in the industry.
In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the retributive approach to machine crime in favor of prioritizing restitution. I argue that this shift better conforms to what justice demands when sophisticated artificial agents of uncertain moral status are concerned.
Pyrrho and colleagues (2022) argue that the loss of health privacy can damage democratic values by increasing social polarization, removing individual choice, and limiting self-determination. As a remedy, the authors propose a data-regulation regime that prohibits companies from using such data for discriminatory purposes. Our commentary addresses three issues. First, we point out an additional problematic dimension of excessive health privacy loss, namely, the potential racialization of groups and individuals that it is likely to contribute to. Second, we note that, in our view, the authors’ argument for more regulation rests on an invidious comparison between the realistically described status quo and the idealized picture of the imagined regulatory regime that the authors briefly propose. Third, we argue that, despite existing regulations, both private and government actors frequently use private data in ways that lead to ethically problematic outcomes, especially when it comes to racialized communities.
Kant, Wittgenstein, and Husserl all held that visual awareness of objects requires visual awareness of the space in which the objects are located. There is a lively debate in the literature on spatial perception over whether this view is undermined by the results of experiments on a Balint’s syndrome patient known as RM. I argue that neither of two recent interpretations of these results is able to explain RM’s apparent ability to experience motion. I outline some ways in which each interpretation may respond to this challenge, and suggest which way of meeting the challenge is preferable. I conclude that RM retains some awareness of the larger space surrounding the objects he sees.
In this paper, I provide an account of the spatiality of olfactory experiences in terms of topological properties. I argue that thinking of olfactory experiences as making the subject aware of topological properties enables us to address popular objections against the spatiality of smells, and that it makes better sense of everyday spatial olfactory phenomenology than its competitors do. I argue for this latter claim on the basis of reflection on thought experiments familiar from the philosophical literature on olfaction, as well as on the basis of some empirical data about the localization of smells. I conclude by suggesting how the naïve-topology framework could be applied in debates about the spatiality of other types of experiences.
In this paper I argue that the phenomenal character of a mood experience wholly depends on affective modifications (appropriate for the mood in question) to the phenomenal characters of one's non-mood experiences. I argue that this view accounts for all distinctive aspects of mood phenomenology, in contrast to currently existing accounts of moods, each of which faces trouble accounting for some distinctive aspect of mood experience. I also explain how my view allows for holding both that moods seemingly lack intentional objects and that their phenomenal character reduces to intentional content nonetheless.
“The Epistemic Significance of Perceptual Learning” defends the view that perceptual experiences generate justification in virtue of their presentational phenomenology, preserve past justification in virtue of the influence of perceptual learning on them, and thereby allow new beliefs formed on their basis to also be partly based on that past justification. “The Real Epistemic Significance of Perceptual Learning” mounts challenges to these three claims. Here we explore some avenues for responding to those challenges.
In a stimulating recent article for this journal (van Wynsberghe and Robbins in Sci Eng Ethics 25(3):719–735, 2019), Aimee van Wynsberghe and Scott Robbins mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with new reasons for building them. This commentary aims to explore the implications vW&R draw from their critique. In particular, it will raise objections to the moratorium argument and propose a presumptive case for commercializing AMAs.
In this paper, I will examine the question of the space of visual imagery. I will ask whether, in visually imagining an object or a scene, we also thereby imagine that object or scene as being in a space unrelated to the space we’re simultaneously perceiving, or whether the space of visual imagination is experienced as connected to the space of perceptual experience. I will argue that there is no distinction between the spatial content of visualization and the spatial content of visual perception. I will base my conclusion on two uncontroversial, empirically confirmed aspects of imagery: the perspectival character of imagery, and the possibility of superimposing an imagined object upon the perceived scene.
The aim of this chapter is to shed new light on the question of what newly sighted subjects are capable of seeing on the basis of previous experience, through touch alone, with mind-independent, external objects and their properties. This question is also known as “Molyneux’s Question.” Much of the empirically driven debate surrounding this question has been centered on the nature of the representational content of the subjects’ visual experiences. It has generally been assumed that the meaning of “seeing” deployed in these disputes is more or less clear and unproblematic, and therefore requires no analysis or clarification. In this chapter, we wish to challenge this assumption. We argue that getting clear on the meaning of “seeing” is the only feasible way to determine whether the empirical attempts to answer Molyneux’s Question accurately capture what newly sighted subjects are in fact capable of seeing.