Social psychologists tell us that much of human behavior is automatic. It is natural to think that automatic behavioral dispositions are ethically desirable if and only if they are suitably governed by an agent’s reflective judgments. However, we identify a class of automatic dispositions that make normatively self-standing contributions to praiseworthy action and a well-lived life, independently of, or even in spite of, an agent’s reflective judgments about what to do. We argue that the fundamental questions for the "ethics of automaticity" are what automatic dispositions are (and are not) good for and when they can (and cannot) be trusted.
Automaticity is rapid and effortless cognition that operates without conscious awareness or deliberative control. An action is virtuous to the degree that it meets the requirements of the ethical virtues in the circumstances. What contribution does automaticity make to the ethical virtue of an action? How far is the automaticity discussed by virtue ethicists consonant with, or even supported by, the findings of empirical psychology? We argue that the automaticity of virtuous action is automaticity not of skill, but of motivation. Automatic motivations that contribute to the virtuousness of an action include not only those that initiate action, but also those that modify action and those that initiate and shape deliberation. We then argue that both goal psychology and attitude psychology can provide the cognitive architecture of this automatic motivation. Since goals are essentially directed towards the agent’s own action whereas attitudes are not, we argue that goals might underpin some virtues while attitudes underpin others. We conclude that consideration of the cognitive architecture of ethical virtue ought to engage with both areas of empirical psychology and should be careful to distinguish among ethical virtues.
Cognitive scientists have long noted that automated behavior is the rule, while conscious acts of self-regulation are the exception to the rule. On the face of it, automated actions appear to be immune to moral appraisal because they are not subject to conscious control. Conventional wisdom suggests that sleepwalking exculpates, while the mere fact that a person is performing a well-versed task unthinkingly does not. However, our apparent lack of conscious control while we are undergoing automaticity challenges the idea that there is a relevant moral difference between these two forms of unconscious behavior. In both cases the agent lacks access to information that might help them guide their actions so as to avoid harms. In response, it is argued that the crucial distinction between the automatic agent and the agent undergoing an automatism, such as somnambulism or petit mal epilepsy, lies in the fact that the former can preprogram the activation and interruption of automatic behavior. Given that, it is argued that there is elbow room for attributing responsibility to automated agents based on the quality of their will.
The objective of this paper is to characterize the rich interplay between automatic and cognitive control processes that we propose is the hallmark of skill, in contrast to habit, and that accounts for its flexibility. We argue that this interplay isn't entirely hierarchical and static, but rather heterarchical and dynamic. We further argue that it crucially depends on the acquisition of detailed and well-structured action representations and internal models, as well as the concomitant development of metacontrol processes that can be used to shape and balance it.
Recently, philosophers have appealed to empirical studies to argue that whenever we think that p, we automatically believe that p (Millikan 2004; Mandelbaum 2014; Levy and Mandelbaum 2014). Levy and Mandelbaum (2014) have gone further and claimed that the automaticity of believing has implications for the ethics of belief in that it creates epistemic obligations for those who know about their automatic belief acquisition. I use theoretical considerations and psychological findings to raise doubts about the empirical case for the view that we automatically believe what we think. Furthermore, I contend that even if we set these doubts aside, Levy and Mandelbaum’s argument to the effect that the automaticity of believing creates epistemic obligations is not fully convincing.
Moral judgements are based on automatic processes. Moral judgements are based on reason. In this paper, I argue that both of these claims are true, and show how they can be reconciled. Neither the automaticity of moral judgement nor the post hoc nature of conscious moral reasoning poses a threat to rationalist models of moral cognition. The relation moral reasoning bears to our moral judgements is not primarily mediated by episodes of conscious reasoning, but by the acquisition, formation and maintenance – in short: education – of our moral intuitions.
While the causal contributions of so-called ‘automatic’ processes to behavior are now widely acknowledged, less attention has been given to their normative role in the guidance of action. We develop an account of the normativity of automaticity that responds to and builds upon Tamar Szabó Gendler's account of ‘alief’, an associative and arational mental state more primitive than belief. Alief represents a promising tool for integrating psychological research on automaticity with philosophical work on mind and action, but Gendler errs in overstating the degree to which aliefs are norm-insensitive.
Dual process theorists in psychology maintain that the mind’s workings can be explained in terms of conscious or controlled processes and automatic processes. Automatic processes are largely nonconscious, that is, triggered by environmental stimuli without the agent’s conscious awareness or deliberation. Automaticity researchers contend that even higher-level habitual social behaviors can be nonconsciously primed. This article brings work on automaticity to bear on our understanding of habitual virtuous actions. After examining a recent intuitive account of habitual actions and habitual virtuous actions, the author offers her own explanation in terms of goal-dependent automaticity. This form of automaticity provides an account of habitual virtuous actions that explains the sense in which these actions are rational, that is, done for reasons. Habitual virtuous actions are rational in the sense of being purposive or goal-directed and are essentially linked with the agent’s psychological states. Unlike deliberative virtuous actions, the agent’s reasons for habitual virtuous actions are not present to her conscious awareness at the time of acting.
Marc Lewis argues that addiction is not a disease but rather a dysfunctional outcome of what plastic brains ordinarily do, given the adaptive processes of learning and development within environments where people are seeking happiness, or relief, or escape. They come to obsessively desire substances or activities that they believe will deliver happiness and so on, but this comes to corrupt the normal process of development when it escalates beyond a point of functionality. Such ‘deep learning’ emerges from consumptive habits, or ‘motivated repetition’, and although addiction is bad, it ferments out of the ordinary stuff underpinning any neural habit. Lewis gives us a convincing story about the process that leads from ordinary controlled consumption through to quite heavy addictive consumption, but I claim that in some extreme cases the eventual state of deep learning tips over into clinically significant impairment and disorder. Addiction is an elastic concept, and although it develops through mild and moderate forms, the impairment we see in severe cases needs to be acknowledged. This impairment, I argue, consists in the chronic automatic consumption present in late-stage addiction. In this condition, the desiring self largely drops out of the picture, as the addicted individual begins to mindlessly consume. This impairment is clinically significant because the machinery of motivated rationality has become corrupted. To bolster this claim I compare what is going on in these extreme cases with what goes on in people who dissociate in cases of depersonalization disorder.
This paper considers the connection between automaticity, control and agency. Recent philosophical and psychological work plays up the incompatibility of automaticity and agency. Specifically, there is a threat of automaticity, for automaticity eliminates agency. Such conclusions stem from a tension between two thoughts: that automaticity pervades agency and yet automaticity rules out control. I provide an analysis of the notions of automaticity and control that maintains a simple connection: automaticity entails the absence of control. An appropriate analysis, however, shows that actions are forms of control and pervasively automatic even if automaticity implies the absence of control. Consequences are drawn for the theory of mental agency and the psychological concepts of automaticity and control.
Actions performed in a state of automatism are not subject to moral evaluation, while automatic actions often are. Is the asymmetry between automatistic and automatic agency justified? In order to answer this question we need a model of moral accountability that does justice to our intuitions about a range of modes of agency, both pathological and non-pathological. Our aim in this paper is to lay the foundations for such an account.
What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes advantage of this paradox of internal automaticity to downplay the threats of external automaticity to our autonomy. We respond in this paper by challenging the legitimacy of the paradox. While Danaher assumes that internal and external automaticity are roughly equivalent, we argue that there are reasons why we should accept a large degree of internal automaticity, that it is actually essential to our sense of autonomy, and as such it is ethically good; however, the same does not go for external automaticity. Therefore, the similarity between the two is not as powerful as the paradox presumes. In conclusion, we make practical recommendations for how to better manage the integration of AI assistants into society.
From our everyday commuting to the gold medalist’s world-class performance, skillful actions are characterized by fine-grained, online agentive control. What is the proper explanation of such control? There are two traditional candidates: intellectualism explains skillful agentive control by reference to the agent’s propositional mental states; anti-intellectualism holds that propositional mental states or reflective processes are unnecessary since skillful action is fully accounted for by automatic coping processes. I examine the evidence for three psychological phenomena recently held to support anti-intellectualism and argue that it supports neither traditional candidate, but an intermediate attention-control account, according to which the top-down, intention-directed control of attention is a necessary component of skillful action. Only this account recognizes both the role of automatic control in skilled action and the need for higher-order cognition to thread automatic processes together into a unified, skillful performance. This applies to bodily skillful action in general, from the world-class performance of experts to mundane, habitual action. The attention-control account stresses that, for intentions to play their role as top-down modulators of attention, agents must sustain the intention’s activation; hence, the need for reflecting throughout performance.
Attention serves to represent selectively relevant information at the expense of competing and irrelevant information, but the mechanisms and effects of attention are not unitary. The great variety of methods and techniques used to study automaticity and attention for facial expressions suggests that the time is now ripe to break the concepts of automaticity and attention down into elementary constituents that are more tractable to investigation in cognitive neuroscience. This article reviews both the behavioral and neuroimaging literature on the automatic perception of facial expressions of emotion in healthy volunteers and patients with brain damage. It focuses on aspects of automaticity in face perception that relate to task goals, attentional control, and conscious awareness. Behavioral and neuroimaging findings converge to support some degree of automaticity in processing facial expressions, which is likely to reflect distinct components that should be better disentangled at both the behavioral and neural levels.
We investigated the automaticity of implicit sequence learning by varying perceptual load in a pure perceptual sequence learning paradigm. Participants responded to the randomly changing identity of a target, while the irrelevant target location was structured. In Experiment 1, the target was presented under low or high perceptual load during training, whereas testing occurred without load. Unexpectedly, no sequence learning was observed. In Experiment 2, perceptual load was introduced during the test phase to determine whether load is required to express perceptual knowledge. Learning itself was unaffected by visuospatial demands, but more learning was expressed under high load test conditions. In Experiment 3, we demonstrated that perceptual load is not required for the acquisition of perceptual sequence knowledge. These findings suggest that perceptual load does not mediate the perceptual sequence learning process itself, supporting the automaticity of implicit learning, but is mandatory for the expression of pure perceptual sequence knowledge.
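To make the design concrete, here is a minimal Python sketch of how trials for a pure perceptual sequence-learning paradigm of this kind might be generated. It is an illustration under stated assumptions, not the study's actual materials: the sequence values, distractor counts, and block lengths are hypothetical. The response-relevant target identity varies randomly, the response-irrelevant target location follows a fixed repeating sequence, and perceptual load is manipulated by embedding the target among distractors.

```python
# Sketch of a pure perceptual sequence-learning paradigm with a
# perceptual-load manipulation. All concrete values are illustrative.
import random

LOCATION_SEQUENCE = [0, 2, 1, 3, 2, 0, 3, 1]  # hypothetical 8-element sequence
TARGET_IDENTITIES = ["X", "O"]                # participants respond to identity only

def make_block(n_trials: int, load: str, follow_sequence: bool = True):
    """Build one block of trials.

    load: 'low' -> target shown alone; 'high' -> target embedded among
          distractors, taxing perceptual capacity.
    follow_sequence: False yields a random-location control block, against
          which sequence learning is measured.
    """
    trials = []
    for i in range(n_trials):
        location = (LOCATION_SEQUENCE[i % len(LOCATION_SEQUENCE)]
                    if follow_sequence else random.randrange(4))
        trials.append({
            "target": random.choice(TARGET_IDENTITIES),  # response-relevant, random
            "location": location,                        # response-irrelevant, structured
            "distractors": 5 if load == "high" else 0,   # perceptual-load manipulation
        })
    return trials

# Roughly the logic of Experiment 2: train, then probe expression of the
# sequence knowledge under high perceptual load at test.
training = make_block(96, load="low")
test_sequenced = make_block(96, load="high", follow_sequence=True)
test_random = make_block(96, load="high", follow_sequence=False)
# Learning is inferred from faster or more accurate responding on the
# sequenced block than on the random-location block at test.
```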
A large part of the current debate among virtue ethicists focuses on the role played by phronesis, or wise practical reasoning, in virtuous action. The paradigmatic case of an action expressing phronesis is one where an agent explicitly reflects and deliberates on all practical options in a given situation and eventually makes a wise choice. Habitual actions, by contrast, are typically performed automatically, that is, in the absence of preceding deliberation. Thus they would seem to fall outside of the primary focus of the current virtue ethical debate. However, Bill Pollard has recently suggested that all properly virtuous actions must be performed habitually and therefore automatically, i.e. in the absence of moral deliberation. In this paper, Pollard’s suggestion is interpreted as the thesis that habitual automaticity is constitutive of virtue or moral excellence. By constructing an argument in favor of it and discussing several objections, the paper ultimately seeks to defend a qualified version of this thesis.
Hubert Dreyfus argues that explicit thought disrupts smooth coping at the level of both everyday tasks and highly refined skills. However, Barbara Montero criticises Dreyfus for extending what she calls the ‘principle of automaticity’ from our everyday actions to those of trained experts. In this paper, I defend Dreyfus’ account while refining his phenomenology. I examine the phenomenology of what I call ‘esoteric’ expertise to argue that the explicit thought Montero invokes belongs rather to ‘gaps’ between or above moments of reflexive coping. However, I agree that the ‘principle of automaticity’ does not adequately capture the experience of performing such skills. Drawing on examples of expert performance in sport and improvised music and dance, I argue that esoteric action, at its best, is marked by a distinct state of non-conceptual awareness – an experience of spontaneity, flow and ‘owned-ness’ – that distinguishes it from the automaticity of everyday actions.
Attention is often dichotomized into controlled vs. automatic processing, where controlled processing is slow, flexible, and intentional, and automatic processing is fast, inflexible, and unintentional. In contrast to this strict dichotomy, there is mounting evidence for context-specific processes that are engaged rapidly yet are also flexible. In the present study we extend this idea to the domain of implicit learning to examine whether flexibility in automatic processes can be implemented through reliance on contextual features. Across three experiments we show that participants can implicitly learn two complementary sequences that are associated with distinct contexts, and that transfer of learning when the two contexts are randomly intermixed depends on the distinctiveness of the two contexts. Our results point to the role of context-specific processes in the acquisition and expression of implicit sequence knowledge, and also suggest that episodic details can be represented in sequence knowledge.
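As a rough illustration of the design described above, the following Python sketch generates trials for two complementary location sequences, each cued by a distinct context (here, a hypothetical background color), trained in separate blocks and then randomly intermixed at transfer. All concrete values are assumptions for illustration, not the study's materials.

```python
# Sketch of a context-specific implicit sequence-learning design:
# two complementary sequences, each bound to its own context cue.
import random

SEQ_A = [0, 1, 3, 2, 1, 0, 2, 3]  # hypothetical sequence for context A
SEQ_B = [3, 2, 0, 1, 2, 3, 1, 0]  # complementary sequence for context B
CONTEXTS = {"A": {"background": "blue", "seq": SEQ_A},
            "B": {"background": "red", "seq": SEQ_B}}

def training_block(context: str, n_trials: int):
    """Single-context training block: one sequence repeats within one context."""
    seq = CONTEXTS[context]["seq"]
    return [{"context": context,
             "background": CONTEXTS[context]["background"],
             "location": seq[i % len(seq)]}
            for i in range(n_trials)]

def transfer_block(n_trials: int):
    """Transfer: contexts alternate at random; each trial continues its own
    context's sequence, so expressing the learning requires keeping the two
    context-bound sequence positions apart."""
    positions = {"A": 0, "B": 0}
    trials = []
    for _ in range(n_trials):
        c = random.choice(["A", "B"])
        seq = CONTEXTS[c]["seq"]
        trials.append({"context": c,
                       "background": CONTEXTS[c]["background"],
                       "location": seq[positions[c] % len(seq)]})
        positions[c] += 1
    return trials

blocks = training_block("A", 96) + training_block("B", 96) + transfer_block(96)
```

On this design, transfer performance when the contexts are intermixed can only benefit from the trained sequences if the context cue (the background) is distinctive enough to keep the two sequences from interfering with each other.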