Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
Media argumentation is a powerful force in our lives. From political speeches to television commercials to war propaganda, it can effectively mobilize political action, influence the public, and market products. This book presents a new and systematic way of thinking about the influence of mass media in our lives, showing the intersection of media sources with argumentation theory, informal logic, computational theory, and theories of persuasion. Using a variety of case studies that represent arguments that typically occur in the mass media, Douglas Walton demonstrates how tools recently developed in argumentation theory can be usefully applied to the identification, analysis, and evaluation of media arguments. He draws upon the most recent developments in artificial intelligence, including dialogical theories of argument, which he developed, as well as speech act theory. Each chapter presents solutions to problems central to understanding, analyzing, and criticizing media argumentation.
If citizens are to make enlightened collective decisions, they need to rely on true factual beliefs, but misinformation impairs their ability to do so. Although some cases of misinformation are deliberate and amount to propaganda, cases of inadvertent misinformation are just as problematic in affecting the beliefs and behavior of democratic citizens. A review of empirical evidence suggests that this is a serious problem that cannot entirely be corrected by means of deliberation.
Jason Stanley’s How Propaganda Works intends to offer a novel account of what propaganda is, how it works, and what damage it does inside a democratic culture. The book succeeds in showing that, contrary to the stereotype, propaganda need not be false or misleading. However, Stanley offers contradictory definitions of propaganda, and his theory, which is both over- and under-inclusive, is applied in a dismissive, highly ideological way. In the end, it remains unclear how much damage propaganda does. Voters in modern democracies would be ignorant and irrational even without propaganda.
This paper examines Jason Stanley’s account of propaganda. I begin with an overview and some questions about the structure of that account. I then argue for two main conclusions. First, I argue that Stanley’s account over-generalizes, by counting mere incompetent argumentation as propaganda. But this problem can be avoided, by emphasizing the role of emotions in effective propaganda more than Stanley does. In addition, I argue that more propaganda is democratically acceptable than Stanley allows. Focusing especially on sexual assault prevention campaigns, I show that propaganda can be acceptable even when it represents some in our communities as worthy of contempt.
I analyze Jason Stanley’s model for how propaganda works, paying close attention to Stanley’s own rhetoric. I argue that Stanley’s language should be supplemented with a vocabulary that helps us to attend to what sorts of things move democratic knowers, what sorts of things do not, and why. In addition, I argue that the reasonableness necessary for considering the views of others within democratic deliberation ought to be understood not as an empathic but as an interactive capacity. Finally, I critique some of the ways in which Stanley speaks about the marginalized populations he aims to support.
The development of technology is unbelievably rapid. From limited local networks to high-speed Internet, from crude computing machines to powerful semi-conductors, the world has changed drastically compared to just a few decades ago. In the constantly renewing process of adapting to such an unnaturally high-entropy setting, innovations, as well as entirely new concepts, were often born. In the business world, one such phenomenon was the creation of a new type of entrepreneurship. This paper proposes a new academic discipline of computational entrepreneurship, which centers on: (i) exponentially growing (and less expensive) computing power, to the extent that almost everybody in a modern society can own and use it; (ii) omnipresent high-speed Internet connectivity, wired or wireless, representing our modern day’s economic connectomics; (iii) growing interest in exploiting “serendipity” for strategic commercial advantage; and (iv) growing capabilities of lay people to perform the calculations that inform their decisions in seizing fast-moving entrepreneurial opportunities. Computational entrepreneurship has slowly become a new mode of operation for business ventures and will likely bring the academic discipline of entrepreneurship back into mainstream economics.
Jason Stanley’s How Propaganda Works characterises and explores one democratically problematic kind of propaganda, ‘undermining propaganda’, which involves ‘[a] contribution to public discourse that is presented as an embodiment of certain ideals, yet is of a kind that tends to erode those very ideals’. Stanley’s model for how undermining propaganda functions is Rae Langton and Caroline West’s treatment of moves in pornographic language games. However, Stanley doesn’t consider whether his theory of propaganda might in turn illuminate the harmful nature of pornography, in light of the familiar contention that some pornography acts as a kind of misogynistic propaganda. Drawing on Catharine MacKinnon’s writings on pornography, this paper will explore one way of developing the claim that pornography sometimes functions as undermining propaganda, in something close to Stanley’s sense. Moreover, I will suggest that the discussion points to a new response to the so-called authority problem for Rae Langton’s silencing argument against the protected status of pornography.
This book aims to develop a sophisticated understanding of propaganda. It begins with a brief history of early Western propaganda, including Ancient Greek classical theories of rhetoric and the art of persuasion, and traces its development through the Christian era, the rise of the nation-state, World War I, Nazism, and Communism. The core of the book examines the ethical implications of various forms of persuasion, not only hate propaganda but also insidious elements of more generally acceptable communication such as advertising, public relations, and government information, setting these in the context of freedom of expression. Propaganda and the Ethics of Persuasion examines the art of persuasion but it also hopes to establish a "self-defense" resistance to propaganda. As Jacques Ellul warned in 1980, any new technology enters into an already existing class system and can be expected to develop in a way favourable to the dominant interests of that system. The merger of AOL and Time-Warner confirms the likelihood of corporate interests dominating the future of the Internet, but the Internet has also opened up new possibilities for a politically effective counter-culture, as was demonstrated at the meeting of the World Trade Organization in Seattle in late 1999 and numerous similar gatherings since.
The aim of this paper is to examine the propaganda power of Madison’s Solidarity Sing-Along. To do so, I will modify the Epistemic Merit Model of propaganda so that it can account for a broader spectrum of propaganda. I will show how this is consistent with other accounts of musical pragmatics and the potential political function of songs and music. This will provide the ground for a robust interpretation of the political meanings of the Solidarity Sing-Along. I will assume that the Madison protests and the Solidarity Sing-Along can be considered a paradigm case of peaceful protest, as it has been claimed that the Madison protests and the role of art within them set the stage for the Occupy Movement protests later in the same year.
Computational explanations focus on the information processing required by specific cognitive capacities, such as perception, reasoning or decision-making. These explanations specify the nature of the information processing task, what information needs to be represented, and why it should be operated on in a particular manner. In this article, the focus is on three questions concerning the nature of computational explanations: what type of explanations they are, in what sense computational explanations are explanatory, and to what extent they involve a special, “independent” or “autonomous” level of explanation. In this paper, we defend the view that computational explanations are genuine explanations, which track non-causal/formal dependencies. Specifically, we argue that they do not provide mere sketches for explanation, in contrast to what, for example, Piccinini and Craver (2011) suggest. This view of computational explanations implies some degree of “autonomy” for the computational level. However, as we will demonstrate, that does not make this view “computationally chauvinistic” in the way that Piccinini or Kaplan (2011) have charged it to be.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide an insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study. We use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences. We show a computational dichotomy between different readings of reciprocity. Finally, we engage in more philosophical speculation on meaning, ambiguity and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
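The tractability of iteration mentioned above can be made concrete with a toy model-checker (a minimal sketch of standard generalized-quantifier semantics over finite models, not code from the paper; the model `A`, `B`, `R` is invented for illustration): verifying an iterated reading such as "every A stands in R to some B" needs only nested loops over the finite model, i.e. polynomial time in the model size.

```python
# Toy finite-model semantics for an iterated quantifier (illustrative sketch).
# "Every A R's some B": nested loops over the model, hence polynomial time.

def every_some(A, B, R):
    """Iterated reading: for every a in A there is some b in B with (a, b) in R."""
    return all(any((a, b) in R for b in B) for a in A)

# Hypothetical toy model: two agents, two objects, a relation between them.
A = {"ann", "bo"}
B = {"x", "y"}
R = {("ann", "x"), ("bo", "y")}
print(every_some(A, B, R))  # True: each member of A has an R-witness in B
```

The intractable branching reading, by contrast, would require quantifying over *sets* of witnesses rather than individuals, which is what pushes it beyond this nested-loop pattern.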
A political discourse of peace marked the distribution and use of radioisotopes in biomedical research and in medical diagnosis and therapy in the post-World War II period. This occurred during the era of expansion and strengthening of the United States' influence on the promotion of sciences and technologies in Europe as a collaborative effort, initially encouraged by the policies and budgetary distribution of the Marshall Plan. This article follows the importation of radioisotopes by two Spanish research groups, one in experimental endocrinology and one in molecular biology. For both groups foreign funds were instrumental in the early establishment of their laboratories. The combination of funding and access to previously scarce radioisotopes helped position these groups at the forefront of research in Spain.
Computational methods have become the dominant technique in many areas of science. This book contains the first systematic philosophical account of these new methods and their consequences for scientific method. This book will be of interest to philosophers of science and to anyone interested in the role played by computers in modern science.
Research on perception without awareness has provoked strong emotional responses from individuals within and outside the scientific community, due in part to the perceived potential for abuse of subliminal techniques. In this paper, four basic issues regarding the use of subliminal techniques for propaganda purposes are discussed: whether exposure to subliminal stimuli can produce significant, predictable changes in affect, cognition and behavior; whether these effects are robust and powerful enough to make the use of subliminal techniques for propaganda purposes feasible; whether the effects of subliminal stimulation are stable over time; and whether subliminal influences can be resisted by unwilling subjects. Research suggests that exposure to simple drive- or affect-related subliminal stimuli can produce ecologically significant, temporally stable changes in attitudes and behavior, and therefore may have potential for use as propaganda tools. Implications of these findings for our understanding of the mechanisms underlying subliminal perception are discussed. Technical problems which would need to be addressed before subliminal propaganda techniques could be employed are also discussed. Ethical issues raised by the use of covert attitude and behavior manipulation techniques are addressed.
Computational modeling plays an increasingly important explanatory role in cases where we investigate systems or problems that exceed our native epistemic capacities. One clear case where technological enhancement is indispensable involves the study of complex systems. However, even in contexts where the number of parameters and interactions that define a problem is small, simple systems sometimes exhibit non-linear features which computational models can illustrate and track. In recent decades, computational models have been proposed as a way to assist us in understanding emergent phenomena.
Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
In this paper, we argue for the centrality of prediction in the use of computational models in science. We focus on the consequences of the irreversibility of computational models and on the conditional, or ceteris paribus, nature of the predictions they yield. By irreversibility, we mean the fact that computational models can generally arrive at the same state via many possible sequences of previous states. Thus, while in the natural world it is generally assumed that physical states have a unique history, representations of those states in a computational model will usually be compatible with more than one possible history in the model. We describe some of the challenges involved in prediction and retrodiction in computational models while arguing that prediction is an essential feature of non-arbitrary decision making. Furthermore, we contend that the non-predictive virtues of computational models are dependent to a significant degree on the predictive success of the models in question.
According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
Cognitive control has long been one of the most active areas of computational modeling work in cognitive science. The focus on computational models as a medium for specifying and developing theory predates the PDP books, and cognitive control was not one of the areas on which they focused. However, the framework they provided has injected work on cognitive control with new energy and new ideas. On the occasion of the books' anniversary, we review computational modeling in the study of cognitive control, with a focus on the influence that the PDP approach has brought to bear in this area. Rather than providing a comprehensive review, we offer a framework for thinking about past and future modeling efforts in this domain. We define control in terms of the optimal parameterization of task processing. From this vantage point, the development of control systems in the brain can be seen as responding to the structure of naturalistic tasks, through the filter of the brain systems with which control directly interfaces. This perspective lays open a set of fascinating but difficult research questions, which together define an important frontier for future computational research.
Because the propaganda model challenges basic premises and suggests that the media serve antidemocratic ends, it is commonly excluded from mainstream debates on media bias. Such debates typically include conservatives, who criticize the media for excessive liberalism and an adversarial stance toward government and business, and centrists and liberals, who deny the charge of adversarialism and contend that the media behave fairly and responsibly. The exclusion of the propaganda model perspective is noteworthy, for one reason, because that perspective is consistent with long-standing and widely held elite views that 'the masses are notoriously short-sighted' (Bailey 1948: 13) and are 'often poor judges of their own interests' (Lasswell 1933: 527), so that 'our statesmen must deceive them' (Bailey 1948: 13); and they 'can be managed only by a specialized class whose personal interests reach beyond the locality' (Walter Lippmann 1921: 310). In Lippmann's view, the 'manufacture of consent' by an elite class had already become 'a self-conscious art and a regular organ of popular government' by the 1920s (Lippmann 1921: 248).
The computational theory of mind construes the mind as an information-processor and cognitive capacities as essentially representational capacities. Proponents of the view claim a central role for representational content in computational models of these capacities. In this paper I argue that the standard view of the role of representational content in computational models is mistaken; I argue that representational content is to be understood as a gloss on the computational characterization of a cognitive process. Keywords: Computation; Representational content; Cognitive capacities; Explanation.
The medial prefrontal cortex (mPFC) has been the subject of intense interest as a locus of cognitive control. Several computational models have been proposed to account for a range of effects, including error detection, conflict monitoring, error likelihood prediction, and numerous other effects observed with single-unit neurophysiology, fMRI, and lesion studies. Here, we review the state of computational models of cognitive control and offer a new theoretical synthesis of the mPFC as signaling response–outcome predictions. This new synthesis has two interacting components. The first component learns to predict the various possible outcomes of a planned action, and the second component detects discrepancies between the actual and intended responses; the detected discrepancies in turn update the outcome predictions. This single construct is consistent with a wide array of performance monitoring effects in mPFC and suggests a unifying account of the cognitive role of medial PFC in performance monitoring.
It is argued that the traditional distinction between artificial intelligence and cognitive simulation amounts to little more than a difference in style of research - a different ordering in goal priorities and different methodological allegiances. Both enterprises are constrained by empirical considerations and both are directed at understanding classes of tasks that are defined by essentially psychological criteria. Because of the different ordering of priorities, however, they occasionally take somewhat different stands on such issues as the power/generality trade-off and on the relevance of the sort of data collected in experimental psychology laboratories.
This essay is my review of Erwin Leiser’s excellent documentary film Germany Awake. This classic film first aired in Germany in 1968, and remains to this day one of the best surveys of major Nazi-era movies and exactly what messages they were meant to convey. The film underscores the emphasis the regime put on film as one of the premier mechanisms of propaganda, though Leiser’s film points out that most of the cinema produced by the Nazi regime was not pure propaganda, but mainly entertainment. In the review, I put forward a list of criteria for evaluating the degree of the deceptiveness of propaganda, and why cinema is so apt to exploit those criteria.
The main claim of this paper is that notions of implementation based on an isomorphic correspondence between physical and computational states are not tenable. Rather, "implementation" has to be based on the notion of "bisimulation" in order to be able to block unwanted implementation results and incorporate intuitions from computational practice. A formal definition of implementation is suggested, which satisfies theoretical and practical requirements and may also be used to make the functionalist notion of "physical realization" precise. The upshot of this new definition of implementation is that implementation can no longer distinguish isomorphic bisimilar from non-isomorphic bisimilar systems, thus driving a wedge between the notions of causal and computational complexity. While computationalism does not seem to be affected by this result, the consequences for functionalism are not clear and need further investigation.
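The notion of bisimulation invoked above can be made concrete with a naive fixpoint computation (a minimal sketch over a toy encoding of labeled transition systems of our own devising, not the paper's formal definition of implementation): start from all state pairs and repeatedly discard pairs that violate the back-and-forth transfer condition.

```python
# Naive bisimilarity check for finite labeled transition systems (illustrative).
# A system is encoded as a dict: state -> set of (label, successor) pairs.

def bisimilar(sys1, sys2, s1, s2):
    """Return True iff states s1 (in sys1) and s2 (in sys2) are bisimilar."""
    # Start with all pairs and refine: drop pairs violating the
    # back-and-forth (transfer) condition until a fixpoint is reached.
    rel = {(p, q) for p in sys1 for q in sys2}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            forth = all(any(lab2 == lab1 and (p2, q2) in rel
                            for (lab2, q2) in sys2[q])
                        for (lab1, p2) in sys1[p])
            back = all(any(lab1 == lab2 and (p2, q2) in rel
                           for (lab1, p2) in sys1[p])
                       for (lab2, q2) in sys2[q])
            if not (forth and back):
                rel.discard((p, q))
                changed = True
    return (s1, s2) in rel

# Two structurally matching one-step systems are bisimilar.
a = {"a0": {("tick", "a1")}, "a1": set()}
b = {"b0": {("tick", "b1")}, "b1": set()}
print(bisimilar(a, b, "a0", "b0"))  # True
```

Because bisimilarity is coarser than isomorphism, two systems with different state counts can still be bisimilar, which is exactly the wedge between causal and computational complexity that the abstract describes.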
The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production and in a series of modeling studies, we show that it accounts for (a) the inverse frequency interaction; (b) the absence of a decay in long-term priming; and (c) the cumulativity of long-term adaptation. The model also explains the lexical boost effect and the fact that it only applies to short-term priming. We also present corpus data that verify a prediction of the model, that is, that the lexical boost affects all lexical material, rather than just heads.
The formal conception of computation (FCC) holds that computational processes are not sensitive to semantic properties. FCC is popular, but it faces well-known difficulties. Accordingly, authors such as Block and Peacocke pursue a "semantically-laden" alternative, according to which computation can be sensitive to semantics. I argue that computation is insensitive to semantics within a wide range of computational systems, including any system with "derived" rather than "original" intentionality. FCC yields the correct verdict for these systems. I conclude that there is only one promising strategy for semantically-laden theorists: identify special computational systems that help generate their own semantic properties, and then show that computation within those systems is semantically-laden. Unfortunately, the few existing discussions that pursue this strategy are problematic.
Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks in online language processing, we explore the hypothesis that children rely more heavily on multiword units in language learning than do adults learning a second language. To this end, we take an initial step toward using large-scale, corpus-based computational modeling as a tool for exploring the granularity of speakers' linguistic units. Employing a computational model of language learning, the Chunk-Based Learner, we compare the usefulness of chunk-based knowledge in accounting for the speech of second-language learners versus children and adults speaking their first language. Our findings suggest that while multiword units are likely to play a role in second-language learning, adults may learn less useful chunks, rely on them to a lesser extent, and arrive at them through different means than children learning a first language.
This chapter provides an overview of the basic research strategies and analytic techniques deployed in computational cognitive neuroscience. On the one hand, “top-down” strategies are used to infer, from formal characterizations of behavior and cognition, the computational properties of underlying neural mechanisms. On the other hand, “bottom-up” research strategies are used to identify neural mechanisms and to reconstruct their computational capacities. Both of these strategies rely on experimental techniques familiar from other branches of neuroscience, including functional magnetic resonance imaging, single-cell recording, and electroencephalography. What sets computational cognitive neuroscience apart, however, is the explanatory role of analytic techniques from disciplines as varied as computer science, statistics, machine learning, and mathematical physics. These techniques serve to describe neural mechanisms computationally, but also to drive the process of scientific discovery by influencing which kinds of mechanisms are most likely to be identified. For this reason, understanding the nature and unique appeal of computational cognitive neuroscience requires not just an understanding of the basic research strategies that are involved, but also of the formal methods and tools that are being deployed, including those of probability theory, dynamical systems theory, and graph theory.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only with proportional quantifiers, like “more than half”. This can be explained by noting that, according to the complexity perspective, only proportional quantifiers require working memory engagement.
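The complexity contrast behind this finding can be illustrated with a toy verification procedure (our own sketch of the standard automata-theoretic distinction, not code from the study; the `balls` example is invented): a quantifier like "some" can stop at the first witness, whereas a proportional quantifier like "more than half" must maintain a running count over the whole sequence, which is the feature taken to engage working memory.

```python
# Toy verification of quantified sentences over a stream of objects (illustrative).

def some(stream, pred):
    """'Some x is P': no counter needed; stop at the first witness."""
    return any(pred(x) for x in stream)

def more_than_half(stream, pred):
    """'More than half of the x are P': requires running counts (memory)."""
    hits = total = 0
    for x in stream:
        total += 1
        if pred(x):
            hits += 1
    return hits * 2 > total

# Hypothetical stimulus: an array of colored balls.
balls = ["red", "red", "blue", "red"]
print(some(balls, lambda b: b == "blue"))           # True
print(more_than_half(balls, lambda b: b == "red"))  # True (3 of 4)
```

In automata-theoretic terms, `some` is verifiable by a finite acceptor, while `more_than_half` needs counting power beyond finite state, matching the prediction that only proportional quantifiers load working memory.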