Artificial Neural Networks have reached “grandmaster” and even “super-human” performance across a variety of games, from those involving perfect information, such as Go, to those involving imperfect information, such as “StarCraft”. Such technological developments from artificial intelligence (AI) labs have ushered in concomitant applications across the world of business, where an “AI” brand-tag is quickly becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong—an autonomous vehicle crashes, a chatbot exhibits “racist” behavior, automated credit-scoring processes “discriminate” on gender, etc.—there are often significant financial, legal, and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, Pearl suggests, is to replace “reasoning by association” with “causal reasoning”—the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: “we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets—often using an approach known as ‘Deep Learning’—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.” In this paper, foregrounding what Gilbert Ryle in 1949 termed “a category mistake”, I will offer an alternative explanation for AI errors: it is not so much that AI machinery cannot “grasp” causality, but that AI machinery (qua computation) cannot understand anything at all.
In the fourteenth chapter of The Philosophy of Information, Luciano Floridi puts forth a criticism of ‘digital ontology’ as a step toward the articulation of an ‘informational structural realism’. Based on the claims made there, the present paper evaluates the chapter’s distinctly Kantian scope from a rather unconventional viewpoint: while in sympathy with the author’s doubts ‘against’ digital philosophy, we follow a different route. We turn our attention to the concept of construction as used in the book, in the hope of raising some additional questions that might contribute to a better understanding of what is at stake in Floridi’s experimental epistemological response to digital ontology.
Viewed in the light of the remarkable performance of ‘Watson’ - IBM’s proprietary artificial intelligence computer system, capable of answering questions posed in natural language - on the US general-knowledge quiz show ‘Jeopardy’, we review two experiments on formal systems - one in the domain of quantum physics, the other involving a pictographic languaging game - whereby behaviour seemingly characteristic of domain understanding is generated by the mere mechanical application of simple rules. By re-examining both experiments in the context of Searle’s Chinese Room Argument, we suggest their results merely endorse Searle’s core intuition: that ‘syntactical manipulation of symbols is not sufficient for semantics’. Although, pace Watson, some artificial intelligence practitioners have suggested that more complex, higher-level operations on formal symbols are required to instantiate understanding in computational systems, we show that even high-level calls to Google Translate would not enable a computer qua ‘formal symbol processor’ to understand the language it processes. We thus conclude that even the most recent developments in ‘quantum linguistics’ will not enable computational systems to genuinely understand natural language.
In a reflective and richly entertaining piece from 1979, Doug Hofstadter playfully imagined a conversation between ‘Achilles’ and an anthill (the eponymous ‘Aunt Hillary’), in which he famously explored many ideas and themes related to cognition and consciousness. For Hofstadter, the anthill is able to carry on a conversation because the ants that compose it play roughly the same role that neurons play in human languaging; unfortunately, Hofstadter’s work is notably short on detail suggesting how this magic might be achieved. Conversely, in this paper - finally reifying Hofstadter’s imagination - we demonstrate how populations of simple ant-like creatures can be organised to solve complex problems; problems that involve the use of forward planning and strategy. Specifically, we demonstrate that populations of such creatures can be configured to play a strategically strong - though tactically weak - game of HeX (a complex strategic game). We subsequently demonstrate how tactical play can be improved by introducing a form of forward planning instantiated via multiple populations of agents; a technique that can be compared to the dynamics of interacting populations of social insects via the concept of meta-population. In this way, although, pace Hofstadter, we do not establish that a meta-population of ants could actually hold a conversation with Achilles, we do successfully introduce Aunt Hillary to the complex, seductive charms of HeX.
The Chinese Room Argument purports to show that ‘syntax is not sufficient for semantics’; an argument which led John Searle to conclude that ‘programs are not minds’ and hence that no computational device can ever exhibit true understanding. Yet, although this controversial argument has received a series of criticisms, it has so far withstood all attempts at decisive rebuttal. One of the classical responses to the CRA has been based on equipping a purely computational device with a physical robot body. This response, although partially addressed by Searle in his original treatment of the ‘robot reply’, has more recently gained traction with the development of embodiment and enactivism, two novel approaches to cognitive science that have been exciting roboticists and philosophers alike. Furthermore, recent technological advances - blending biological beings with computational systems - have begun to be developed which superficially suggest that mind may be instantiated in computing devices after all. This paper will argue that (a) embodiment alone does not provide any leverage for cognitive robotics with respect to the CRA when based on a weak form of embodiment, and that (b) unless they take the body seriously into account, hybrid bio-computer devices will also share the fate of their disembodied or robotic predecessors, failing to escape from Searle’s Chinese room.
Upshot: Although mostly supportive of our work, the commentaries we received highlighted a few points that deserve additional explanation: the notion of learning in our model, the relationship between our model and the brain, and the notion of anticipation. This open discussion emphasizes the need for toy computer models to fuel theoretical discussion and prevent business-as-usual from getting in the way of new ideas.
Context: Constructivist approaches to cognition have mostly been descriptive, and now face the challenge of specifying the mechanisms that may support the acquisition of knowledge. Departing from cognitivism, however, requires the development of a new functional framework that will support causal, powerful and goal-directed behavior in the context of the interaction between the organism and the environment. Problem: The properties affecting the computational power of this interaction are, however, unclear; they may include partial information from the environment, exploration, distributed processing and aggregation of information, emergence of knowledge, and directedness towards relevant information. Method: We posit that one path towards such a framework may be grounded in these properties, supported by dynamical systems. To assess this hypothesis, we describe computational models inspired by swarm intelligence, which we use as a metaphor to explore the practical implications of the properties highlighted. Results: Our results demonstrate that these properties may serve as the basis for complex operations, yielding the elaboration of knowledge and goal-directed behavior. Implications: This work highlights aspects of interaction that we believe ought to be taken into account when characterizing the possible mechanisms underlying cognition. The scope of the models we describe cannot go beyond that of a metaphor, however, and future work, theoretical and experimental, is required for further insight into the functional role of interaction with the environment in the elaboration of complex behavior. Constructivist content: Inspiration for this work stems from the constructivist impetus to account for knowledge acquisition based on interaction.