It is not rare in philosophy and psychology to see theorists fall into dichotomous thinking about mental phenomena. On one side of the dichotomy there are processes that I will label “unintelligent.” These processes are thought to be unconscious, implicit, automatic, unintentional, involuntary, procedural, and non-cognitive. On the other side, there are “intelligent” processes that are conscious, explicit, controlled, intentional, voluntary, declarative, and cognitive. Often, if a process or behavior is characterized by one of the features from either of the above lists, the process or behavior is classified as falling under the category to which the feature belongs. For example, if a process is implicit, this is usually considered sufficient for classifying it as “unintelligent” and for assuming that the remaining features that fall under the “unintelligent” grouping will apply to it as well. Accordingly, if a process or behavior is automatic, philosophers often consider it to be unintelligent. It is my goal in this paper to challenge the conceptual slip from “automatic” to “unintelligent”. I will argue that there is a whole range of properties highlighted by the existing psychological literature that make automaticity a much more complex phenomenon than is usually appreciated. I will then go on to discuss two further important relationships between automatic processes and controlled processes that arise when we think about automatic processes in the context of skilled behavior. These interactions should add to our resistance to classifying automaticity as unintelligent or mindless. In Sect. 1, I present a few representative cases of philosophers classifying automatic processes and behaviors as mindless or unintelligent. In Sect. 2, I review trends in the psychology of automaticity in order to highlight a complex set of features that are characteristic, though not definitive, of automatic processes and behaviors. In Sect. 3, I argue that at least some automatic processes are likely cognitively penetrable. In Sect. 4, I argue that the structure of skilled automatic processes is shaped diachronically by practice, training, and learning. Taken together, these considerations should dislodge the temptation to equate “automatic” with “unintelligent”.
I argue that so-called automatic actions – routine performances that we successfully and effortlessly complete without thinking, such as turning a door handle, downshifting to 4th gear, or lighting up a cigarette – pose a challenge to causalism, because they do not appear to be preceded by the psychological states which, according to the causal theory of action, are necessary for intentional action. I argue that causalism cannot prove that agents are simply unaware of the relevant psychological states when they act automatically, because these content-specific psychological states aren’t always necessary to make coherent rational sense of the agent’s behaviour. I then dispute other possible grounds for the attribution of these psychological states, such as agents’ own self-attributions. In the final section I introduce an alternative to causalism, building on Frankfurt’s concept of guidance.
Social psychologists tell us that much of human behavior is automatic. It is natural to think that automatic behavioral dispositions are ethically desirable if and only if they are suitably governed by an agent’s reflective judgments. However, we identify a class of automatic dispositions that make normatively self-standing contributions to praiseworthy action and a well-lived life, independently of, or even in spite of, an agent’s reflective judgments about what to do. We argue that the fundamental questions for the "ethics of automaticity" are what automatic dispositions are (and are not) good for and when they can (and cannot) be trusted.
A large part of the current debate among virtue ethicists focuses on the role played by phronesis, or wise practical reasoning, in virtuous action. The paradigmatic case of an action expressing phronesis is one where an agent explicitly reflects and deliberates on all practical options in a given situation and eventually makes a wise choice. Habitual actions, by contrast, are typically performed automatically, that is, in the absence of preceding deliberation. Thus they would seem to fall outside of the primary focus of the current virtue ethical debate. Bill Pollard, however, has recently suggested that all properly virtuous actions must be performed habitually and therefore automatically, i.e. in the absence of moral deliberation. In this paper, Pollard’s suggestion is interpreted as the thesis that habitual automaticity is constitutive of virtue or moral excellence. By constructing an argument in favor of it and discussing several objections, the paper ultimately seeks to defend a qualified version of this thesis.
Automaticity is rapid and effortless cognition that operates without conscious awareness or deliberative control. An action is virtuous to the degree that it meets the requirements of the ethical virtues in the circumstances. What contribution does automaticity make to the ethical virtue of an action? How far is the automaticity discussed by virtue ethicists consonant with, or even supported by, the findings of empirical psychology? We argue that the automaticity of virtuous action is automaticity not of skill, but of motivation. Automatic motivations that contribute to the virtuousness of an action include not only those that initiate action, but also those that modify action and those that initiate and shape deliberation. We then argue that both goal psychology and attitude psychology can provide the cognitive architecture of this automatic motivation. Since goals are essentially directed towards the agent’s own action whereas attitudes are not, we argue that goals might underpin some virtues while attitudes underpin others. We conclude that consideration of the cognitive architecture of ethical virtue ought to engage with both areas of empirical psychology and should be careful to distinguish among ethical virtues.
According to grounded cognition, words whose semantics contain sensory-motor features activate sensory-motor simulations, which, in turn, interact with spatial responses to produce grounded congruency effects. Growing evidence shows these congruency effects do not always occur, suggesting instead that the grounded features in a word's meaning do not become active automatically across contexts. Researchers sometimes use this as evidence that concepts are not grounded, further concluding that grounded information is peripheral to the amodal cores of concepts. We first review broad evidence that words do not have conceptual cores, and that even the most salient features in a word's meaning are not activated automatically. Then, in three experiments, we provide further evidence that grounded congruency effects rely dynamically on context, with the central grounded features in a concept becoming active only when the current context makes them salient. Even when grounded features are central to a word's meaning, their activation depends on task conditions.
The objective of this paper is to characterize the rich interplay between automatic and cognitive control processes that we propose is the hallmark of skill, in contrast to habit, and what accounts for its flexibility. We argue that this interplay isn't entirely hierarchical and static, but rather heterarchical and dynamic. We further argue that it crucially depends on the acquisition of detailed and well-structured action representations and internal models, as well as the concomitant development of metacontrol processes that can be used to shape and balance it.
Automatic imitation or “imitative compatibility” is thought to be mediated by the mirror neuron system and to be a laboratory model of the motor mimicry that occurs spontaneously in naturalistic social interaction. Imitative compatibility and spatial compatibility effects are known to depend on different stimulus dimensions—body movement topography and relative spatial position. However, it is not yet clear whether these two types of stimulus–response compatibility effect are mediated by the same or different cognitive processes. We present an interactive activation model of imitative and spatial compatibility, based on a dual-route architecture, which substantiates the view that they are mediated by processes of the same kind. The model, which is in many ways a standard application of the interactive activation approach, simulates all key results of a recent study by Catmur and Heyes (2011). Specifically, it captures the difference in the relative size of imitative and spatial compatibility effects; the lack of interaction when the imperative and irrelevant stimuli are presented simultaneously; the relative speed of responses in a quintile analysis when the imperative and irrelevant stimuli are presented simultaneously; and the different time courses of the compatibility effects when the imperative and irrelevant stimuli are presented asynchronously.
Perceptual processes, in particular modular processes, have long been understood as being mandatory. But exactly what mandatoriness amounts to is left to intuition. This paper identifies a crucial ambiguity in the notion of mandatoriness. Discussions of mandatory processes have run together notions of automaticity and ballisticity. Teasing apart these notions creates an important tool for the modularist's toolbox. Different putatively modular processes appear to differ in their kinds of mandatoriness. Separating out the automatic from the ballistic can help the modularist diagnose and explain away some putative counterexamples to multimodal and central modules, thereby helping us to better evaluate the evidentiary status of modularity theory.
Critics of appraisal theory have difficulty accepting appraisal (with its constructive flavor) as an automatic process, and hence as a potential cause of most emotions. In response, some appraisal theorists have argued that appraisal was never meant as a causal process but as a constituent of emotional experience. Others have argued that appraisal is a causal process, but that it can be either rule-based or associative, and that the associative variant can be automatic. This article first proposes empirically investigating whether rule-based appraisal can also be automatic and then proposes investigating the automatic nature of constructive (instead of rule-based) appraisal because the distinction between rule-based and associative is problematic. Finally, it discusses experiments that support the view that constructive appraisal can be automatic.
The SALOMON project is a contribution to the automatic processing of legal texts. Its aim is to automatically summarise Belgian criminal cases in order to improve access to the large number of existing and future cases. Therefore, techniques are developed for identifying and extracting relevant information from the cases. A broader application of these techniques could considerably simplify the work of the legal profession. A double methodology was used when developing SALOMON: the cases are processed by employing additional knowledge to interpret structural patterns and features on the one hand and by way of occurrence statistics of index terms on the other. As a result, SALOMON performs an initial categorisation and structuring of the cases and subsequently extracts the most relevant text units of the alleged offences and of the opinion of the court. The SALOMON techniques do not themselves solve any legal questions, but they do guide the user effectively towards relevant texts.
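The occurrence-statistics side of such a system can be illustrated with a minimal tf-idf ranking of a case's text units. This is a generic sketch, not SALOMON's actual implementation; in particular, the knowledge-based analysis of structural patterns it combines with is not modeled here:

```python
import math
from collections import Counter

def top_text_units(units, k=2):
    """Rank a case's text units by the summed tf-idf weight of their
    index terms and return the k highest-scoring units."""
    docs = [Counter(u.lower().split()) for u in units]
    n = len(docs)
    # Document frequency: in how many units each term occurs.
    df = Counter(term for d in docs for term in d)
    def score(d):
        return sum(tf * math.log(n / df[t]) for t, tf in d.items())
    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [units[i] for i in ranked[:k]]
```

Units dominated by terms that occur everywhere (low idf) sink to the bottom, while units with distinctive vocabulary rise, which is the intuition behind extracting "the most relevant text units" by occurrence statistics.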
Work on a computer program called SMILE + IBP (SMart Index Learner Plus Issue-Based Prediction) bridges case-based reasoning and extracting information from texts. The program addresses a technologically challenging task that is also very relevant from a legal viewpoint: to extract information from textual descriptions of the facts of decided cases and apply that information to predict the outcomes of new cases. The program attempts to automatically classify textual descriptions of the facts of legal problems in terms of Factors, a set of classification concepts that capture stereotypical fact patterns that affect the strength of a legal claim, here trade secret misappropriation. Using these classifications, the program can evaluate and explain predictions about a problem’s outcome given a database of previously classified cases. This paper provides an extended example illustrating both functions, prediction by IBP and text classification by SMILE, and reports empirical evaluations of each. While IBP’s results are quite strong, and SMILE’s much weaker, SMILE + IBP still has some success predicting and explaining the outcomes of case scenarios input as texts. It marks the first time to our knowledge that a program can reason automatically about legal case texts.
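To make the Factor representation concrete, here is a toy sketch using hypothetical Factor names and a bare majority tally. The real IBP reasons issue-by-issue against a database of precedents rather than simply counting Factors, so this only illustrates the shape of the input, not the paper's method:

```python
# Hypothetical Factor names; the actual trade secret Factor set is
# larger and each Factor is tied to specific legal issues.
PRO_PLAINTIFF = {"security-measures", "confidentiality-agreement"}
PRO_DEFENDANT = {"info-publicly-known", "independent-development"}

def predict_outcome(case_factors):
    """Predict a winner by tallying which side has more applicable
    Factors; abstain on a tie (a stand-in for IBP's issue-based logic)."""
    p = len(case_factors & PRO_PLAINTIFF)
    d = len(case_factors & PRO_DEFENDANT)
    if p > d:
        return "plaintiff"
    if d > p:
        return "defendant"
    return "abstain"
```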
Are mechanisms for social attention influenced by culture? Evidence that social attention is triggered automatically by bottom-up gaze cues and is uninfluenced by top-down verbal instructions may suggest it operates in the same way everywhere. Yet considerations from evolutionary and cultural psychology suggest that specific aspects of one's cultural background may have consequences for the way mechanisms for social attention develop and operate. In more interdependent cultures, the scope of social attention may be broader, focusing on more individuals and relations between those individuals. We administered a multi-gaze cueing task requiring participants to fixate a foreground face flanked by background faces and measured shifts in attention using eye tracking. For European Americans, gaze cueing did not depend on the direction of background gaze cues, suggesting foreground gaze alone drives automatic attention shifting; for East Asians, cueing patterns differed depending on whether the foreground cue matched or mismatched background cues, suggesting foreground and background gaze information were integrated. These results demonstrate that cultural background influences the social attention system by shifting it into a narrow or broad mode of operation and, importantly, provides evidence challenging the assumption that mechanisms underlying automatic social attention are necessarily rigid and impenetrable to culture.
A structure is automatic if its domain, functions, and relations are all regular languages. Using the fact that every automatic structure is decidable, in the literature many decision problems have been solved by giving an automatic presentation of a particular structure. Khoussainov and Nerode asked whether there is some way to tell whether a structure has, or does not have, an automatic presentation. We answer this question by showing that the set of Turing machines that represent automata-presentable structures is $\Sigma_1^1$-complete. We also use similar methods to show that there is no reasonable characterisation of the structures with a polynomial-time presentation in the sense of Nerode and Remmel.
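A standard concrete example of an automatic structure is (ℕ, +) under least-significant-bit-first binary encoding: the graph of addition is recognized by a two-state carry automaton reading the convolution of the three encodings. The sketch below simulates that automaton (the fixed word width is just for illustration):

```python
def add_automaton_accepts(x, y, z, width=16):
    """Simulate the carry automaton on the convolved LSB-first binary
    encodings of (x, y, z); it accepts exactly when x + y == z."""
    bits = lambda n: [(n >> i) & 1 for i in range(width)]
    carry = 0  # the automaton's state is the current carry bit
    for a, b, c in zip(bits(x), bits(y), bits(z)):
        if (a + b + carry) % 2 != c:
            return False          # no matching transition: reject
        carry = (a + b + carry) // 2
    return carry == 0             # accept iff no carry is left over
```

Because the addition relation is regular in this sense, first-order questions about (ℕ, +) become decidable by automata constructions, which is the phenomenon the decidability result for automatic structures generalizes.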
The last couple of years have seen a rapid growth of interest in the study of crossmodal correspondences – the tendency for our brains to preferentially associate certain features or dimensions of stimuli across the senses. By now, robust empirical evidence supports the existence of numerous crossmodal correspondences, affecting people’s performance across a wide range of psychological tasks – in everything from the redundant target effect paradigm through to studies of the Implicit Association Test, and from speeded discrimination/classification tasks through to unspeeded spatial localisation and temporal order judgment tasks. However, one question that has yet to receive a satisfactory answer is whether crossmodal correspondences automatically affect people’s performance, as opposed to reflecting more of a strategic, or top-down, phenomenon. Here, we review the latest research on the topic of crossmodal correspondences to have addressed this issue. We argue that answering the question will require researchers to be more precise in terms of defining what exactly automaticity entails. Furthermore, one’s answer to the automaticity question may also hinge on the answer to a second question: Namely, whether crossmodal correspondences are all ‘of a kind’, or whether instead there may be several different kinds of crossmodal mapping. Different answers to the automaticity question may then be revealed depending on the type of correspondence under consideration. We make a number of suggestions for future research that might help to determine just how automatic crossmodal correspondences really are.
This article discusses a challenge to the traditional intentional-causalist conceptions of action and intentionality as well as to our everyday and legal conceptions of responsibility, namely the psychological discovery that the greatest part of our alleged actions are performed automatically, that is, unconsciously and without a proximal intention causing and sustaining them. The main part of the article scrutinizes several mechanisms of automatic behavior, how they work, and whether the resulting behavior is an action. These mechanisms include actions caused by distal implementation intentions, four types of habit and habitualization, mimicry, and semantically induced automatic behavior. According to the intentional-causalist criterion, the automatic behaviors resulting from all but one of these mechanisms turn out to be actions and to be intentional; and even the behavior resulting from the remaining mechanism is something we can be responsible for. Hence, the challenge, seen from close up, does not really call the traditional conception of action and intentionality into question.
A large number of cross-references to various bodies of text are used in legal texts, each serving a different purpose. It is often necessary for authorities and companies to look into certain types of these citations. Yet, there is a lack of automatic tools to aid in this process. Recently, citation graphs have been used to improve the intelligibility of complex rule frameworks. We propose an algorithm that builds the citation graph from a document and automatically labels each edge according to its purpose. Our method uses the citing text only and thus works only on citations whose purpose can be uniquely identified by their surrounding text. This framework is then applied to the US Code. This paper includes defining and evaluating a standard gold set of labels that cover a vast majority of citation types which appear in the US Code but are still short enough for practical use. We also propose a novel linear-chain conditional random field model that extracts the features required for labeling the citations from the surrounding text. We then analyze the effectiveness of different methods, such as K-means clustering and support vector machines, for automatically labeling each citation with the corresponding label. Finally, we discuss the practical difficulties of this task and compare human accuracy with that of our end-to-end algorithm.
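The "citing text only" constraint can be illustrated with a deliberately simple cue-word labeler. The cue words and purpose labels below are hypothetical stand-ins, not the paper's gold label set, and a keyword lookup is far weaker than the CRF feature model the paper proposes:

```python
# Hypothetical cue words mapped to hypothetical purpose labels.
CUE_WORDS = {
    "amended": "amendment",
    "repealed": "amendment",
    "defined": "definition",
    "notwithstanding": "exception",
    "pursuant": "authority",
}

def label_citation(citing_text: str) -> str:
    """Assign a purpose label to a citation based solely on cue words
    in its surrounding text; return "unknown" when the purpose cannot
    be uniquely identified from the text alone."""
    tokens = citing_text.lower().replace(",", " ").split()
    for token in tokens:
        if token in CUE_WORDS:
            return CUE_WORDS[token]
    return "unknown"
```

The "unknown" fallback mirrors the paper's limitation: citations whose purpose is not recoverable from surrounding text simply cannot be labeled by this family of methods.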
How automatic is “automatic vigilance”? The role of working memory in attentional interference of negative information (2009). Cognition & Emotion, 23(6), 1106–1117.
Empirical evidence indicates that much of human behavior is unconscious and automatic. This has led some philosophers to be skeptical of responsible agency or personhood in the moral sense. I present two arguments defending agency from these skeptical concerns. My first argument, the “margin of error” argument, is that the empirical evidence is consistent with the possibility that our automatic behavior deviates only slightly from what we would do if we were in full conscious control. Responsible agency requires only that our actions more or less express our conscious goals and values. My second argument is a non-realist defense of moral agency. If we are willing to reject metaethical realism about agents, then there may be good reasons why we should retain the concept of agency in our moralizing, even if automaticity undermines the belief that we are really in conscious control of our behavior. We can do this by adopting a “reactive attitudes” approach to moral responsibility. On this view, agency is determined not by actual features of human psychology but by the attitudes and practices that we ought to adopt in response to the actions and motivations of individuals.
In this paper, we study the performance of a baseline hidden Markov model (HMM) for segmentation of speech signals. It is applied to a single-speaker segmentation task, using a Hindi speech database. The automatic phoneme segmentation framework developed here imitates the human phoneme segmentation process. A set of 44 Hindi phonemes was chosen for the segmentation experiment, wherein we used a continuous density hidden Markov model (CDHMM) with a mixture of Gaussian distributions. The left-to-right topology with no skip states was selected as it is effective in speech recognition due to its consistency with the natural way of articulating spoken words. This system accepts speech utterances along with their orthographic transcriptions and generates segmentation information for the speech. This corpus was used to develop context-independent hidden Markov models (HMMs) for each of the Hindi phonemes. The system was trained using numerous sentences that are relevant to providing information to the passengers of the Metro Rail. The system was validated against a few manually segmented speech utterances. The evaluation of the experiments shows that the best performance is obtained by using a combination of two Gaussian mixtures and five HMM states. A category-wise phoneme error analysis has been performed, and the performance of the phonetic segmentation is reported. The modeling of HMMs has been implemented using Microsoft Visual Studio 2005 (C++), and the system is designed to work on the Windows operating system. The goal of this study is automatic segmentation of speech at the phonetic level.
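Segmentation with such left-to-right, no-skip HMMs reduces to a Viterbi pass in which each state may only stay put or advance to the next state; phoneme boundaries fall where the best path advances. A minimal log-space sketch (the per-frame log-likelihoods are assumed given, e.g. from the Gaussian mixture output densities):

```python
import math

def viterbi(obs_loglik, trans):
    """Most likely state path through a left-to-right, no-skip HMM.
    obs_loglik[t][s]: log-likelihood of frame t under state s.
    trans[s]: (log P(stay in s), log P(advance to s+1))."""
    T, S = len(obs_loglik), len(obs_loglik[0])
    score = [[-math.inf] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    score[0][0] = obs_loglik[0][0]          # path must start in state 0
    for t in range(1, T):
        for s in range(S):
            best, prev = score[t - 1][s] + trans[s][0], s   # stay
            if s > 0:
                adv = score[t - 1][s - 1] + trans[s - 1][1]  # advance
                if adv > best:
                    best, prev = adv, s - 1
            score[t][s] = best + obs_loglik[t][s]
            back[t][s] = prev
    path = [S - 1]                           # path must end in last state
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```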
Cognitive neuroscience is the branch of neuroscience that studies the neural mechanisms underpinning cognition and develops theories explaining them. Within cognitive neuroscience, computational neuroscience focuses on modeling behavior, using theories expressed as computer programs. Up to now, computational theories have been formulated by neuroscientists. In this paper, we present a new approach to theory development in neuroscience: the automatic generation and testing of cognitive theories using genetic programming (GP). Our approach evolves, from experimental data, cognitive theories that explain the “mental program” that subjects use to solve a specific task. As an example, we have focused on a typical neuroscience experiment, the delayed-match-to-sample (DMTS) task. The main goal of our approach is to develop a tool that neuroscientists can use to develop better cognitive theories.
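The evolutionary search at the heart of such an approach can be sketched with a toy (1+1) evolutionary loop over tiny "programs" (sequences of primitive operations). This is a deliberately simplified stand-in for full genetic programming, which uses tree-structured programs, crossover, and a population:

```python
import random

def evolve(target_xy, ops, seed=0, generations=200):
    """Toy (1+1) evolutionary search: a candidate 'theory' is a list of
    unary ops applied in sequence to the input; fitness is squared error
    against the experimental data, and a mutated child replaces the
    parent whenever it fits at least as well."""
    rng = random.Random(seed)
    def run(prog, x):
        for op in prog:
            x = ops[op](x)
        return x
    def fitness(prog):
        return sum((run(prog, x) - y) ** 2 for x, y in target_xy)
    best = [rng.choice(list(ops))]
    for _ in range(generations):
        child = best.copy()
        if rng.random() < 0.5:               # point mutation
            child[rng.randrange(len(child))] = rng.choice(list(ops))
        else:                                # growth mutation
            child.append(rng.choice(list(ops)))
        if fitness(child) <= fitness(best):
            best = child
    return best, fitness(best)
```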
Cognitive scientists have long noted that automated behavior is the rule, while conscious acts of self-regulation are the exception to the rule. On the face of it, automated actions appear to be immune to moral appraisal because they are not subject to conscious control. Conventional wisdom suggests that sleepwalking exculpates, while the mere fact that a person is performing a well-versed task unthinkingly does not. However, our apparent lack of conscious control while we are undergoing automaticity challenges the idea that there is a relevant moral difference between these two forms of unconscious behavior. In both cases the agent lacks access to information that might help them guide their actions so as to avoid harms. In response, it is argued that the crucial distinction between the automatic agent and the agent undergoing an automatism, such as somnambulism or petit mal epilepsy, lies in the fact that the former can preprogram the activation and interruption of automatic behavior. Given that, it is argued that there is elbow room for attributing responsibility to automated agents based on the quality of their will.
Brain monitoring combined with automatic analysis of EEGs provides a clinical decision support tool that can reduce time to diagnosis and assist clinicians in real-time monitoring applications (e.g., neurological intensive care units). Clinicians have indicated that a sensitivity of 95% with a false alarm rate below 5% was the minimum requirement for clinical acceptance. In this study, a high-performance automated EEG analysis system based on principles of machine learning and big data is proposed. This hybrid architecture integrates hidden Markov models (HMMs) for sequential decoding of EEG events with deep learning-based postprocessing that incorporates temporal and spatial context. These algorithms are trained and evaluated using the Temple University Hospital EEG Corpus, which is the largest publicly available corpus of clinical EEG recordings in the world. This system automatically processes EEG records and classifies three patterns of clinical interest in brain activity that might be useful in diagnosing brain disorders: (1) spike and/or sharp waves, (2) generalized periodic epileptiform discharges, and (3) periodic lateralized epileptiform discharges. It also classifies three patterns used to model the background EEG activity: (1) eye movement, (2) artifacts, and (3) background. Our approach delivers a sensitivity above 90% while maintaining a false alarm rate below 5%. We also demonstrate that this system delivers a low false alarm rate, which is critical for any spike detection application.
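For clarity on the evaluation criteria: sensitivity is the fraction of true events the detector catches, and the false alarm rate is the fraction of non-events it incorrectly flags. The per-sample sketch below illustrates the two quantities; real EEG scoring is epoch- or event-based, so this is a simplification:

```python
def detection_metrics(y_true, y_pred):
    """Sensitivity and false alarm rate for a binary detector,
    where 1 marks an event of interest (e.g. a spike)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)        # events correctly detected
    false_alarm_rate = fp / (fp + tn)   # non-events wrongly flagged
    return sensitivity, false_alarm_rate
```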
This paper presents a novel argumentation framework to support Issue-Based Information System style debates on design alternatives, by providing an automatic quantitative evaluation of the positions put forward. It also identifies several formal properties of the proposed quantitative argumentation framework and compares it with existing non-numerical abstract argumentation formalisms. Finally, the paper describes the integration of the proposed approach within the design Visual Understanding Environment software tool along with three case studies in engineering design. The case studies show the potential for a competitive advantage of the proposed approach with respect to state-of-the-art engineering design methods.
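One minimal way to evaluate debate positions quantitatively is a fixed-point propagation in which each attacker discounts its target's score. This generic sketch is not the paper's actual framework, only an illustration of the idea of numeric evaluation over an argument graph:

```python
def evaluate(base, attacks, rounds=50):
    """Toy quantitative evaluation of an argument graph: each node
    starts from a base score in [0, 1], each attacker scales its target
    by (1 - attacker_score), and the update is iterated to convergence.
    attacks[node] lists the nodes attacking it."""
    score = dict(base)
    for _ in range(rounds):
        new = {}
        for node, s in base.items():
            v = s
            for attacker in attacks.get(node, []):
                v *= 1 - score[attacker]
            new[node] = v
        score = new
    return score
```

Note how the scheme captures reinstatement: an argument whose only attacker is itself defeated recovers its full base score.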
In the current study, late Chinese–English bilinguals performed a facial expression identification task with emotion words in the task-irrelevant dimension, in either their first language (L1) or second language (L2). The investigation examined the automatic access of the emotional content in words across the two languages. Significant congruency effects were present for both L1 and L2 emotion word processing. Furthermore, the magnitude of the emotional face-word Stroop effect in the L1 task was greater than in the L2 task, indicating that participants could access the emotional information in L1 words in a more reliable manner. In summary, these findings provide more support for the automatic access of emotional information in words in the bilinguals’ two languages as well as attenuated emotionality of L2 processing.
Marc Lewis argues that addiction is not a disease; it is instead a dysfunctional outcome of what plastic brains ordinarily do, given the adaptive processes of learning and development within environments where people are seeking happiness, or relief, or escape. They come to obsessively desire substances or activities that they believe will deliver happiness and so on, but this comes to corrupt the normal process of development when it escalates beyond a point of functionality. Such ‘deep learning’ emerges from consumptive habits, or ‘motivated repetition’, and although addiction is bad, it ferments out of the ordinary stuff underpinning any neural habit. Lewis gives us a convincing story about the process that leads from ordinary controlled consumption through to quite heavy addictive consumption, but I claim that in some extreme cases the eventual state of deep learning tips over into clinically significant impairment and disorder. Addiction is an elastic concept, and although it develops through mild and moderate forms, the impairment we see in severe cases needs to be acknowledged. This impairment, I argue, consists in the chronic automatic consumption present in late stage addiction. In this condition, the desiring self largely drops out of the picture, as the addicted individual begins to mindlessly consume. This impairment is clinically significant because the machinery of motivated rationality has become corrupted. To bolster this claim I compare what is going on in these extreme cases with what goes on in people who dissociate in cases of depersonalization disorder.
Interaction mining is about discovering and extracting insightful information from digital conversations, namely those human-human information exchanges mediated by digital network technology. We present in this article a computational model of natural arguments and its implementation for the automatic argumentative analysis of digital conversations, which allows us to produce relevant information to build interaction business analytics applications overcoming the limitations of standard text mining and information retrieval technology. Applications include advanced visualisations and abstractive summaries.
Legislation usually lacks a systematic organization, which makes managing and accessing norms a hard problem. A more analytic semantic unit of reference (the provision) for legislative texts was identified. A model of provisions (provision types and their arguments) makes it possible to describe the semantics of rules in legislative texts. It can be used to develop advanced semantic-based applications and services on legislation. In this paper an automatic bottom-up strategy to qualify existing legislative texts in terms of provision types is described.
Tested the 2-process theory of detection, search, and attention presented by the current authors in a series of experiments. The studies demonstrate the qualitative difference between 2 modes of information processing: automatic detection and controlled search; trace the course of the learning of automatic detection, of categories, and of automatic-attention responses; and show the dependence of automatic detection on attending responses and demonstrate how such responses interrupt controlled processing and interfere with the focusing of attention. The learning of categories is shown to improve controlled search performance. A general framework for human information processing is proposed. The framework emphasizes the roles of automatic and controlled processing. The theory is compared to and contrasted with extant models of search and attention.
The claim that empathy is both automatic and representational is criticized as follows: (a) five empathy-arousing processes ranging from conditioning and mimicry to perspective-taking show that empathy can be either automatic or representational, and only under certain circumstances, both; (b) although automaticity decreases, empathy increases with age and cognitive development; (c) observers' causal attributions can shift rapidly and produce more complex empathic responses than the theory allows.
Actions performed in a state of automatism are not subject to moral evaluation, while automatic actions often are. Is the asymmetry between automatistic and automatic agency justified? In order to answer this question we need a model of moral accountability that does justice to our intuitions about a range of modes of agency, both pathological and non-pathological. Our aim in this paper is to lay the foundations for such an account.
It is sometimes claimed that the Bayesian framework automatically implements Ockham’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn’t in fact cut it for certain mundane curve-fitting problems.
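The "automatic razor" claim can be made concrete with marginal likelihoods: a simple hypothesis that concentrates its predictions earns a higher marginal likelihood on data it fits than a flexible hypothesis that spreads probability over many possible data sets. The toy illustration below is not the paper's curve-fitting setup, only the standard Bayes-factor mechanism the paper scrutinizes:

```python
from fractions import Fraction

def marginal_likelihoods(data):
    """Marginal likelihoods of a binary sequence under two hypotheses:
    SIMPLE predicts 'all ones' (all its probability on one sequence);
    COMPLEX is maximally flexible (uniform over all 2^n sequences)."""
    n = len(data)
    simple = Fraction(1) if all(data) else Fraction(0)
    complex_ = Fraction(1, 2 ** n)
    return simple, complex_
```

When the data are consistent with both hypotheses, the Bayes factor automatically favours the simpler one by a factor of 2^n; the paper's point is that this tidy behaviour breaks down in certain mundane curve-fitting problems.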
How might philosophers and art historians make the best use of one another's research? That, in nuce, is what this special issue considers with respect to questions concerning the nature of photography as an artistic medium; and that is what my essay addresses with respect to a specific case: the dialogue, or lack thereof, between the work of the philosopher Stanley Cavell and the art historian-critic Rosalind Krauss. It focuses on Krauss's late appeal to Cavell's notion of automatism to argue that artists now have to invent their own medium, both to provide criteria against which to judge artistic success or failure and to insulate serious art from the vacuous generalization of the aesthetic in a media-saturated culture at large. Much in the spirit of ‘Avant-Garde and Kitsch’, paying attention to the medium is once again an artist's best line of defence against the encroachment of new media, the culture industry, and spectacle. That Krauss should appeal to Cavell at all, let alone in such a Greenbergian frame of mind, is surprising if one is familiar with the fraught history of debate about artistic media in art theory since Greenberg. Cavell's work in this domain has always been closely associated with that of Michael Fried, and the mutual estrangement of Fried and Krauss, who began their critical careers as two of Greenberg's leading followers, is legendary. I have written about the close connection between Fried's and Cavell's conceptions of an artistic medium before. Whereas Fried's and Cavell's early conception of an artistic medium was in a sense collaborative, emerging from an ongoing exchange of ideas at Harvard in the latter half of the 1960s, Krauss's much later appeal to the ideas of automatism and the automatic underpinning Cavell's conception of the photographic substrate of film from the early 1970s is not. In what follows, I try to clarify both the grounds of this appeal and its upshot.
Does Krauss's account shed new light on Cavell's, or is she trying to press his terms into service for which they are ill-suited? Both could of course be true, the former as a consequence of the latter perhaps. Conversely, do the art historical and philosophical accounts pass one another by? Note that even if the latter were true, its explanation might still prove instructive in the context of an interdisciplinary volume seeking to bring art historians and philosophers into dialogue around the themes of agency and automatism, which is precisely what Krauss's appeal to Cavell turns on.