When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don’t even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our work on the Meta-Cognitive Loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust.
In: B. Hardy-Vallee & N. Payette, eds. Beyond the brain: embodied, situated & distributed cognition. (Cambridge: Cambridge Scholar’s Press), in press. Abstract: In this article, I do three main things: 1. First, I introduce an approach to the mind motivated primarily by evolutionary considerations. I do that by laying out four principles for the study of the mind from an evolutionary perspective, and four predictions that they suggest. This evolutionary perspective is completely compatible with, although broader than, the embodied cognition approach. 2. Then I look at one prediction in depth, the idea that the brain evolved by exaptation–reusing existing functional units, and combining them in novel ways to generate new cognitive capacities. 3. Finally, I try to lay out some of the implications, both of the in-depth example, and of the more general approach.
Artificial Intelligence, in press. Abstract: For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA). The current paper details a semantics for a general version of the underlying logical formalism, active logic. Central to active logic are special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P & ¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Furthermore, inspired by the notion that until an agent notices that a set of beliefs is contradictory, that set seems consistent (and the agent therefore reasons with it as if it were consistent), we introduce an “apperception function” that represents an agent’s limited awareness of its own beliefs, and serves to modify inconsistent belief sets so as to yield consistent sets. Using these ideas, we introduce a new definition of logical consequence in the context of active logic, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules remain sound in our new sense. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction.
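The core mechanism described above (stepwise belief inheritance with quarantine of direct contradictions) can be illustrated with a toy sketch. This is not the ALMA implementation; the `step` function and the tuple encoding of negation are assumptions made purely for illustration.

```python
# Toy sketch of active-logic-style stepwise inference. Illustrative only:
# this is NOT the ALMA implementation, just a minimal model of the idea
# that beliefs are inherited from one time step to the next unless a
# direct contradiction (P, not-P) is noticed, in which case the two
# contradictands are set aside rather than licensing every conclusion.

def step(beliefs):
    """Advance one step: quarantine direct contradictions, inherit the rest."""
    # A positive belief p is contradicted if ("not", p) is also believed.
    contradicted = {p for p in beliefs if ("not", p) in beliefs}
    # A negative belief ("not", p) is contradicted if p is also believed.
    contradicted |= {p for p in beliefs
                     if isinstance(p, tuple) and p[0] == "not" and p[1] in beliefs}
    inherited = beliefs - contradicted
    return inherited, contradicted

beliefs = {"P", ("not", "P"), "Q"}
inherited, quarantined = step(beliefs)
# "Q" is inherited into the next step; "P" and ("not", "P") are noticed
# as a direct contradiction and quarantined, so triviality is avoided.
```

The point of the sketch is the contrast with classical logic: here the contradiction between P and ¬P blocks inheritance of exactly those two beliefs, rather than making every formula derivable.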
Feminist epistemology and philosophy of science studies the ways in which gender does and ought to influence our conceptions of knowledge, the knowing subject, and practices of inquiry and justification. It identifies ways in which dominant conceptions and practices of knowledge attribution, acquisition, and justification systematically disadvantage women and other subordinated groups, and strives to reform these conceptions and practices so that they serve the interests of these groups. Various practitioners of feminist epistemology and philosophy of science argue that dominant knowledge practices disadvantage women by (1) excluding them from inquiry, (2) denying them epistemic authority, (3) denigrating their “feminine” cognitive styles and modes of knowledge, (4) producing theories of women that represent them as inferior, deviant, or significant only in the ways they serve male interests, (5) producing theories of social phenomena that render women's activities and interests, or gendered power relations, invisible, and (6) producing knowledge (science and technology) that is not useful for people in subordinate positions, or that reinforces gender and other social hierarchies. Feminist epistemologists trace these failures to flawed conceptions of knowledge, knowers, objectivity, and scientific methodology. They offer diverse accounts of how to overcome these failures. They also aim to (1) explain why the entry of women and feminist scholars into different academic disciplines, especially in biology and the social sciences, has generated new questions, theories, and methods, (2) show how gender has played a…
I am very grateful for the thoughtful and illuminating comments of Linda Alcoff, Sharyn Clough, Marianne Janack, and Charles Mills on my Hypatia paper. Together, they raise several related questions about the status of value judgments and the roles they might legitimately play in scientific inquiry. Two common concerns relate to the proper scope of the legitimate use of value judgments in science, and whether there are significant differences between value judgments and factual judgments with respect to their revisability. Let me take up these common questions first.
Amartya Sen’s ethical theorizing helps feminists resolve the tensions between the claims of women’s particular perspectives and moral objectivity. His concept of “positional objectivity” highlights the epistemological significance of value judgments made from particular social positions, while holding that certain values may become widely shared. He shows how acknowledging positionality is consistent with affirming the universal value of democracy. This article builds on Sen’s work by proposing an analysis of democracy as a set of institutions that aims to intelligently utilize positional information for shared ends. This epistemological analysis of democracy offers a way to understand the rationale for reserving political offices for women. From a political point of view, gendered positions are better thought of as an epistemological resource than as a ground of identity politics – that is, of parochial identification and solidarity.
The premise of this symposium is that the principle and ideal developed in Brown v. Board of Education and its successor cases lie at the heart of the rationale for affirmative action in higher education. The principle of the school desegregation cases is that racial segregation is an injustice that demands remediation. The ideal of the school desegregation cases is that racial integration is a positive good, without which “the dream of one Nation, indivisible” cannot be realized. Both the principle and the ideal make racial integration a compelling interest. The Supreme Court recognized these claims in Grutter v. Bollinger. However, it failed to take full advantage of them. It thereby failed to answer crucial questions that must be answered by policies subject to strict scrutiny. In this essay, I shall display the links tying Grutter to Brown, discuss the vulnerabilities of Grutter in the absence of an explicit grounding in Brown, and demonstrate how the affirmative action policy upheld in Grutter, when explicitly grounded in Brown, survives strict scrutiny. To understand this argument, it is helpful first to explain the integrationist perspective that underlies it.
I want to present a new interpretation of Hobbes, in particular of what he was up to when he wrote Leviathan. In order to do this I will examine how he viewed the problem of social disorder and how he intended for that problem to be solved. I will argue that although he held that maintaining a credible threat of punishment for wrongdoing is necessary for social order, to Hobbes it is not sufficient; unless the subjects are properly educated, the commonwealth is doomed. I maintain that this need to ensure proper education illuminates Leviathan’s intent and its structure. Further, I’ll argue that when education is given its proper place in Hobbes’ scheme, the result is an account of disorder and a solution to it which are truer to Hobbes’ text and more plausible than those of certain competing views. In what follows, I will give an overview of the line I propose to take, then discuss how it contrasts with other views of Hobbes in the literature. The problem of disorder. A great deal of Hobbes scholarship focuses on the account he gives in the first half of Leviathan of how people in a state of nature could create a commonwealth. Fruitful and important as it is, however, this focus tends to leave the second half of the book a mystery: if what’s important about Leviathan is its presentation of a social contract theory, it’s not obvious why Hobbes devotes half of his treatise to theological matters. I will argue that once we have a clearer understanding of the problem Hobbes was addressing, Leviathan’s second half can be seen as an important component of Hobbes’ intended solution. While modern theoreticians are often most interested in Hobbes’ account of how people prior to society could make one from scratch, Hobbes himself was most concerned with how to prevent disorder from destroying an existing government.
Hobbes wrote a good deal about the causes of disorder—whole chapters and many scattered remarks in The Elements of Law, De Cive, and Leviathan, plus much of Behemoth—and I propose examining these writings closely in order to clarify how Hobbes conceived the problem he was addressing.
For at least the last 10 years, there has been growing interest in, and growing evidence for, the intimate relations between more abstract or higher order cognition—such as reasoning, planning, and language use—and the more concrete, immediate, or lower order operations of the perceptual and motor systems that support seeing, feeling, moving, and manipulating. A sub-field of the larger research program in embodied cognition (Clark, 1997, 1998; Wilson, 2001; Anderson, 2003, 2007d, 2008; Gibbs, 2006), this work has generally proceeded under the banner of grounded cognition, and works to support the claim that thinking is inherently tied to—grounded in—perceiving and acting. Thus, Glenberg and Kaschak (2002) discuss “grounding language in action”; Gallese and Lakoff (2005) argue that concepts are “grounded in the sensory–motor system;” and Barsalou (1999) at various times talks of “grounding cognition in perception,” “grounding conceptual knowledge in modality-specific systems” (Barsalou et al., 2003), and most recently simply of “grounded cognition” (Barsalou, 2008).
Embodied Cognition is growing up, and How the Body Shapes the Mind is both a sign of, and a substantive contributor to, this ongoing development. Born in or about 1991, EC is only now emerging from a tumultuous but exciting childhood marked in particular by the size and breadth of the extended family hoping to have some impact on its early education and upbringing. As family members include computer science, phenomenology, developmental and cognitive psychology, analytic philosophy of mind, linguistics, neuroscience, and eastern mysticism, just to name a few, EC has both benefited and suffered from a wealth of different and often incompatible ideas about who and what it is, what it should do with its life, even what language it should speak. Gallagher brings some cohesion and consistency to this situation, not by surveying and synthesizing these competing approaches, but by focusing on some fundamental issues, and carefully marshalling the evidence and developing the vocabulary to thoroughly consider them.
To think about how to anchor abstract symbols to objects in the world is to become part of a tradition in philosophy with a long history, and an especially rich recent past. It is to ask, with Wittgenstein, “What makes my thought about him, a thought about him?” and thus it is to wonder not just about the nature of referring expressions or singular terms, but about the nature of referring beings. With this in mind I hereby endeavor—briefly, incompletely, but hopefully still usefully—to introduce what in my judgment is the single best philosophical starting-point for those interested in understanding the referential connections between symbols and the world, and the cognitive, epistemic, and linguistic capacities which support them: The Varieties of Reference by Gareth Evans. It is worthwhile first of all to note, as the title indicates, that it is the varieties of reference that are of interest. It is Evans’ contention that no single theory can account for our various uses of singular terms; although the different kinds of reference share certain features, and rely on related cognitive, linguistic and epistemic capacities, it appears that, rather than being a class defined by necessary and sufficient criteria for membership, they form a family of abilities, united, like a thread, by its overlapping fibers. Evans does not defend this claim so much as display it in his account. Much of the underlying variety in reference can be brought out by considering the guiding principle of the work as a whole, which Evans…
However, there has also been growing interest in trying to create, and investigate the potential benefits of, intelligent systems which are themselves metacognitive. It is thought that systems that monitor themselves, and proactively respond to problems, can perform better, for longer, with less need for (expensive) human intervention. Thus IBM has widely publicized their "autonomic computing" initiative, aimed at developing computers which are (in their words) self-aware, self-configuring, self-optimizing, self-healing, self-protecting, and self-adapting. More ambitiously, it is hypothesized that metacognitive awareness may be one of the keys to developing truly intelligent artificial systems. DARPA's recent Cognitive Information Processing Technology initiative, for instance, foregrounds reflection (along with reaction and deliberation) as one of the three pillars required for flexible, robust AI systems.
Part of understanding the functional organization of the brain is understanding how it evolved. This talk presents evidence suggesting that while the brain may have originally emerged as an organ with functionally dedicated regions, the creative re-use of these regions has played a significant role in its evolutionary development. This would parallel the evolution of other capabilities wherein existing structures, evolved for other purposes, are re-used and built upon in the course of continuing evolutionary development (“exaptation”: Gould & Vrba 1982). There is psychological support for exaptation in cognition (e.g. Cosmides 1989), theoretical reason to expect it (Anderson 2003; in press-a; in press-b) and neuroanatomic evidence that the brain evolved by preserving, extending, and combining existing network components, rather than by generating complex structures de novo (Sporns & Kötter 2004). However, there has been little evidence that integrates these perspectives, bringing such an account of the evolution of cognitive function into the realm of cognitive neuroscience (although see, e.g., Barsalou 1999).
Self Aware Computer Systems is an area of basic research, and we are only in the initial stages of our understanding of what it means: what it means to be self-aware; what a self-aware system can do that a system without it cannot do; and what some of the immediate practical applications and challenge problems are. This paper is a report capturing some of the salient points discussed during the DARPA workshop on Self Aware Computer Systems held on April 27-28, 2004 in Washington DC.
In recent years, embodied cognitive agents have become a central research focus in Cognitive Science. We suggest that there are at least three aspects of embodiment (physical, social, and temporal) which must be treated simultaneously to make possible a realistic implementation of agency. In this paper we detail the ways in which attention to the temporal embodiment of a cognitive agent (perhaps the most neglected aspect of embodiment) can enhance the ability of an agent to act in the world, both in itself, and also by supporting more robust integrations with the physical and social worlds.
In this essay we respond to some criticisms of the guidance theory of representation offered by Tom Roberts. We argue that although Roberts’ criticisms miss their mark, he raises the important issue of the relationship between affordances and the action-oriented representations proposed by the guidance theory. Affordances play a prominent role in the anti-representationalist accounts offered by theorists of embodied cognition and ecological psychology, and the guidance theory is motivated in part by a desire to respond to the critiques of representationalism offered in such accounts, without giving up entirely on the idea that representations are an important part of the cognitive economy of many animals. Thus, explorations of whether and how such accounts can in fact be related and reconciled potentially offer to shed some light on this ongoing controversy. Although the current essay hardly settles the larger debate, it does suggest that there may be more possibility for agreement than is often supposed.
The current paper details a restricted semantics for active logic, a time-sensitive, contradiction-tolerant logical reasoning formalism. Central to active logic are special rules controlling the inheritance of beliefs in general (and beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P & ¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Using these ideas, we introduce a new definition of model and of logical consequence, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules are sound for active logic. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction.
As is well known, dialog partners manage the uncertainty inherent in conversation by continually providing and eliciting feedback, monitoring their own comprehension and the apparent comprehension of their dialog partner, and initiating repairs as needed (see e.g., Cahn & Brennan, 1999; Clark & Brennan, 1991). Given the nature of such monitoring and repair, one might reasonably hypothesize that a good portion of the utterances involved in dialog management employ meta-language. But while there has been a great deal of work on the specific topic of dialog management, and it is widely (if often tacitly) accepted that meta-language is frequently involved, there has been no work specifically investigating and quantifying the role of meta-language in dialog management. Thus, this small study investigated the correlation between meta-language and dialog management utterances in three dialog files of the British National Corpus (BNC).
In this paper we contend that the ability to engage in meta-dialog is necessary for free and flexible conversation. Central to the possibility of meta-dialog is the ability to recognize and negotiate the distinction between the use and mention of a word. The paper surveys existing theoretical approaches to the use-mention distinction, and briefly describes some of our ongoing efforts to implement a system which represents the use-mention distinction in the service of simple meta-dialog.
In this paper, we present a meta-cognitive approach for dropping and reconsidering intentions, wherein concurrent actions and results are allowed, in the framework of the time-sensitive and contradiction-tolerant active logic.
Ever since, while continuing to develop his linguistic theories, he has been the most prominent US critic both of his country's foreign policy and of the intellectuals and media that give it overwhelming consensual support. "The Responsibility of Intellectuals" was followed by a series of ever more devastating attacks on American policy in Vietnam (collected in American Power and the New Mandarins and At War With Asia): by 1970, he was far and away the best known intellectual opponent of the US war effort.
Is string theory a futile exercise as physics, as I believe it to be? It is an interesting mathematical specialty and has produced and will produce mathematics useful in other contexts, but it seems no more vital as mathematics than other areas of very abstract or specialized math, and doesn't on that basis justify the incredible amount of effort expended on it.
This essay provides a positive account of coercion that avoids significant difficulties that have confronted most other recent accounts. It enters this territory by noting a dispute over whether coercion has to manipulate the will of the coercee, or whether direct force inhibiting action (such as manhandling or imprisoning) is itself coercive. Though this dispute may at first seem a mere matter of taxonomic categorization, I argue that this dispute reflects an important divergence in thought about the nature of coercion. Though it has rarely been noted, there are two significantly different ways of theorizing coercion found in recent writing on coercion. One focuses on the ability of the coercer to inhibit actions by the coercee through techniques such as force, violence, and like powers, or threats based in such powers. The other approach restricts coercion to cases where coercion manipulates the will of the coercee, though widens it to include any sort of threat that puts pressure on the coercee's will and alters the coercee's intentional choice of action. The former, enforcement approach used to be widely assumed by many political theorists who discussed the place of coercion in law and politics, though it has been largely supplanted by the latter, pressure approach. I show that these approaches are indeed quite distinct, and argue that the enforcement approach is in several ways superior to and more fundamental than the pressure approach for recognizing and understanding coercion in ethics and political and legal philosophy. I also consider and respond to a number of objections to the enforcement approach, showing that it can deal with some puzzle cases such as bluffs, blackmail, inefficacious threats, oblique threats, and economic coercion.
This paper lays out some of the empirical evidence for the importance of neural reuse—the reuse of existing (inherited and/or early-developing) neural circuitry for multiple behavioral purposes—in defining the overall functional structure of the brain. We then discuss in some detail one particular instance of such reuse: the involvement of a local neural circuit in finger awareness, number representation, and other diverse functions. Finally, we consider whether and how the notion of a developmental homology can help us understand the relationships between the cognitive functions that develop out of shared neural supports.
My scholarly work on the problem of race relations began with a general inquiry into the theory of economic inequality. Specifically, my 1981 paper, "Intergenerational Transfers and the Distribution of Earnings," which appeared in the journal Econometrica, introduced a model of economic achievement in which a person's earnings depended on a random endowment of innate ability and on skills acquired from formal training. The key feature of this theory was that individuals had to rely on their families to pay for their training. In this way, a person's economic opportunities were influenced by his inherited social position. I showed how, under these circumstances, the distribution of income in each generation could be determined by an examination of what had been obtained by the previous generation. My objective with the model was to illustrate how, in the long run, when people depend on resources available within families to finance their acquisition of skills, economic inequality comes to reflect the inherited advantages of birth. A disparity among persons in economic attainment would bear no necessary connection to differences in their innate abilities.
If truth is not unproblematic, then neither is it inaccessible. And, telling the truth is decidedly a political act. "From the viewpoint of politics, truth has a despotic character," declared Hannah Arendt, in her essay, "Truth and Politics." "Unwelcome opinion can be argued with, rejected, or compromised upon," she goes on, "but unwelcome facts possess an infuriating stubbornness that nothing can move except plain lies." Moreover, at this late date in the twentieth century, we know that social justice is impossible unless intellectuals tell the truth. This is a lesson which Vaclav Havel, the Czech playwright turned politician, teaches as well as anyone. In "The Power of the Powerless," his classic essay on the intellectual's role in opposing totalitarianism, he observes that: "Under the orderly surface of the life of lies... there slumbers the hidden sphere of life in its real aims, of its hidden openness to truth."
Moral intuitions, while ubiquitous in moral reasoning, have been the cause of considerable controversy in philosophy. My purpose here is to describe the most reasonable role for intuitions in moral theory, in order to look at some problems that arise, particularly for theories of justice, when intuitions are presumed to have this role.
An important element in the criticism of liberalism by some communitarians and feminists is the notion of our embeddedness in relationships of dependence. The criticism in general is that liberal theory is deficient in that it generally attaches no special meaning to such relations, thus justifying a social structure that weakens them. However, the questions of precisely what sort of moral significance these relationships have, why they are morally significant, and what types of dependence relationships possess this significance, have largely gone unasked. This article attempts to explore these questions. I will begin by considering duties that may arise from being depended on by others.
The contractarian theory elaborated by John Rawls in A Theory of Justice exploits the difference principle in a great many ways. Rawls argues that, when used as part of a set of guiding principles for structuring the basic institutions of society, it simplifies the problem of interpersonal comparisons (91-4), helps compensate for the arbitrariness of natural endowments (101-3), promotes a harmony of interests between citizens (104-5), reintroduces the principle of fraternity to democratic society (105-6), and, what is critical to his contractarian theory, it is an essential part of the principles of justice which would be chosen by free, equal, and rational persons in the original position.
Epicurus emphatically asserts the veracity of perception, including visual perception, yet most of the literature on Epicurus’ atomistic theory of vision pays scant attention to what Epicurus believed transpires outside the body that leads to it. The treatments by DeWitt, Everson, Hicks, and Rist are all very brief; Glidden focuses primarily on the processes occurring inside the perceiver; and while the discussions by Asmis and Bailey are more detailed, they hardly more than note in passing that the process is problematic. In this paper I will critically examine Epicurus’ theory of vision, in particular his theory of the events occurring between perceived objects and the eye. I will argue that while certain common objections to Epicurus’ theory may be answerable, it nevertheless suffers from serious problems. These problems, in turn, occur on two levels. On the mechanical level, it demands that dissimilar atomic complexes behave in strikingly similar ways. And on the theoretical level, there is tension created by the need for the intermediary between objects and the observer to be both like objects and unlike them.
One of liberalism’s core commitments is to safeguarding individuals’ autonomy. And a central aspect of liberal social justice is the commitment to protecting the vulnerable. Taken together, and combined with an understanding of autonomy as an acquired set of capacities to lead one’s own life, these commitments suggest that liberal societies should be especially concerned to address vulnerabilities of individuals regarding the development and maintenance of their autonomy. In this chapter, we develop an account of what it would mean for a society to take seriously the obligation to reduce individuals’ autonomy-related vulnerabilities to an acceptable minimum. In particular, we argue that standard liberal accounts underestimate the scope of this obligation because they fail to appreciate various threats to autonomy.
Approaching Plato is a comprehensive research guide to all (fifteen) of Plato’s early and middle dialogues. Each of the dialogues is covered with a short outline, a detailed outline (including some Greek text), and an interpretive essay. Also included (among other things) is an essay distinguishing Plato’s idea of eudaimonia from our contemporary notion of happiness and brief descriptions of the dialogues’ main characters.
Recent years have seen a resurgence of interest in the use of metacognition in intelligent systems. This essay is part of a small section meant to give interested researchers an overview and sampling of the kinds of work currently being pursued in this broad area. The current essay offers a review of recent research in two main topic areas: the monitoring and control of reasoning (metareasoning) and the monitoring and control of learning (metalearning).
Basics of Embodied Cognition: EC treats cognition as a set of tools evolved by organisms for coping with their environments. Each of the key terms in this characterization—tool, evolved, organisms, coping, and environment—has a special significance for, and casts a particular light on, the study of the mind. EC thereby foregrounds the following six facts.
One of the most foundational and continually contested questions in the cognitive sciences is the degree to which the functional organization of the brain can be understood as modular. In its classic formulation, a module was defined as a cognitive sub-system with (all or most of) nine specific properties; the classic module is, among other things, domain specific, encapsulated (i.e. maintains proprietary representations to which other modules have no access), and implemented in dedicated neural substrates. Most of the examinations—and especially the criticisms—of the modularity thesis have focused on these properties individually, for instance by finding counterexamples in which otherwise good candidates for cognitive modules are shown to lack domain specificity or encapsulation. The current paper goes beyond the usual approach by asking what some of the broad architectural implications of the modularity thesis might be, and attempting to test for these. The evidence does not favor a modular architecture for the cortex. Moreover, the evidence suggests that the best way to approach the understanding of cognition is not by analyzing and modelling different functional domains (visual perception, attention, language, motor control, etc.) in isolation from the others, but rather by looking for points of overlap in their neural implementations, and exploiting these to guide the analysis and decomposition of the functions in question. This has significant implications for the question of how to approach the design and implementation of intelligent artifacts in general, and language-using robots in particular.
Reading this book is, I imagine, very much like having a conversation with—by which I mean listening to—Gerald Edelman on topics of great interest: evolution; the brain; consciousness; and the nature and limits of human knowledge. Normally, this would be a great recommendation for a work, as one would assume the informality of style and intimacy of tone would make more accessible the ideas being conveyed. In this case, however, there are a couple of problems. The first is that, at any point in this particular conversation, at most one person understands what is being said; the other problem is that it isn’t always Edelman.
This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. This basic strategy of self-guided learning is termed the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. This paper (a) argues that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) details the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describes specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outlines both short-term and long-term research agendas.
Description: The massive redeployment hypothesis (MRH) is a theory about the functional organization of the human cortex, offering a middle course between strict localization on the one hand, and holism on the other. Central to MRH is the claim that cognitive evolution proceeded in a way analogous to component reuse in software engineering, whereby existing components—originally developed to serve some specific purpose—were used for new purposes and combined to support new capacities, without disrupting their participation in existing programs.
Embodied cognition (EC) is growing up, and How the Body Shapes the Mind is both a sign of, and substantive contributor to, this ongoing development. Born in or about 1991 (the year of publication of seminal works by Brooks, Dreyfus, and Varela, Thompson & Rosch), EC is only now emerging from a tumultuous but exciting childhood marked in particular by the size and breadth of the extended family hoping to have some impact on its early education and upbringing. As family members include computer science, phenomenology, developmental and cognitive psychology, analytic philosophy of mind, linguistics, neuroscience, and eastern mysticism—just to name a few—EC has both benefited and suffered from a wealth of different and often incompatible ideas about who and what it is, what it should do with its life, even what language it should speak. Gallagher brings some cohesion and consistency to this situation, not by surveying and synthesizing these competing approaches, but by carefully marshalling the evidence and developing the vocabulary to thoroughly consider a few fundamental issues.
A symbol is a pattern (of physical marks, electromagnetic energy, etc.) which denotes, designates, or otherwise has meaning. The notion that intelligence requires the use and manipulation of symbols, and that humans are therefore symbol systems, has been extremely influential in artificial intelligence.
Maintaining adequate performance in dynamic and uncertain settings has been a perennial stumbling block for intelligent systems. Nevertheless, any system intended for real-world deployment must be able to accommodate unexpected change—that is, it must be perturbation tolerant. We have found that metacognitive monitoring and control—the ability of a system to monitor its own decision-making processes and ongoing performance, and to make targeted changes to its beliefs and action-determining components—can play an important role in helping intelligent systems cope with the perturbations that are the inevitable result of real-world deployment. In this article we present the results of several experiments demonstrating the efficacy of metacognition in improving the perturbation tolerance of reinforcement learners, and discuss a general theory of metacognitive monitoring and control, in a form we call the metacognitive loop.
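The note/assess/guide cycle can be illustrated in miniature with a toy sketch: an epsilon-greedy learner on a two-armed bandit whose payoffs are swapped mid-run. Every name, threshold, and update rule below is my own illustrative assumption, not the implementation reported in the article; the point is only the shape of the loop, in which the learner monitors its own prediction error and responds to persistent surprise by revising its exploration strategy rather than trusting stale values.

```python
import random

class MetacognitiveBandit:
    """Toy epsilon-greedy learner wrapped in a note/assess/guide loop.
    All thresholds and rates are illustrative assumptions only."""

    def __init__(self, n_arms=2):
        self.q = [0.0] * n_arms   # learned value estimates
        self.epsilon = 0.1        # exploration rate
        self.surprise = 0.0       # running average of |reward - expectation|
        self.steps = 0

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def learn(self, arm, reward):
        self.steps += 1
        error = reward - self.q[arm]
        # Note: track how badly expectations match observations.
        self.surprise = 0.8 * self.surprise + 0.2 * abs(error)
        self.q[arm] += 0.1 * error
        # Assess & guide: persistent surprise (after a burn-in) suggests the
        # world has changed, so boost exploration instead of trusting stale values.
        if self.steps > 50 and self.surprise > 0.4:
            self.epsilon = 0.5
            self.surprise = 0.0
        else:
            self.epsilon = max(0.05, self.epsilon * 0.99)

def run(seed=0):
    random.seed(seed)
    agent = MetacognitiveBandit()
    payoffs = [1.0, 0.0]
    total = 0.0
    for t in range(2000):
        if t == 1000:
            payoffs = [0.0, 1.0]   # the perturbation: payoffs swap
        arm = agent.act()
        reward = payoffs[arm] + random.gauss(0, 0.1)
        agent.learn(arm, reward)
        total += reward
    return total

print(run())
```

Without the guide step, a learner with a near-zero exploration rate would keep pulling the formerly good arm indefinitely after the swap; the surprise monitor is what lets it recover.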
I have no opinion about translations. I tell you this because the first time I offered publicly some thoughts on the Iliad, the very first question from the audience was about which translation I thought best. I was surprised by the question, and gave a befuddled sort of answer. But I should have expected it, for it is the mark of the classicist to have opinions about such matters; more than that, a classicist’s answer to a question about the best translation will show what sort of classicist she is, what she values and how she applies the standards of the discipline. At the time I thought the question odd and irrelevant to my talk; but in fact it was an important question, for to answer it would shed light on my scholarly concerns, and help fit my talk into the ongoing discussions at the heart of classical scholarship. That I was unable to give such an answer showed it was my talk that was irrelevant, not the question.
Because I don’t know what a cultural imaginary is, nor how to put (or find) something in one, I propose instead to provide a brief, general account of what, when we think and write about, and thereby determine, the characteristics of mindedness, the members of my tribe imagine themselves to be doing.
Multi-voxel pattern analysis (MVPA) is a popular analytical technique in neuroscience that involves identifying patterns in fMRI BOLD signal data that are predictive of task conditions. But the technique is also frequently used to make inferences about the regions of the brain that are most important to the tasks in question, and our analysis shows that this is a mistake. MVPA does not provide a reliable guide to what information is being used by the brain during cognitive tasks, nor where that information is. This is due in part to inherent run-to-run variability in the decision space generated by the classifier, but there are also several other issues, discussed below, that make inference from the characteristics of the learned models to relevant brain activity deeply problematic. These issues have significant implications both for many papers already published, and for how the field uses this technique in the future.
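The run-to-run variability at issue can be demonstrated in miniature with simulated data rather than real fMRI: train the same linear classifier twice on identical data, varying only the random initialization and sample order, and then inspect which "voxels" carry the largest weights. Everything here (the data generator, the classifier, the dimensions) is a hypothetical toy of my own devising, not the authors' analysis pipeline; it shows only the general phenomenon, that equally accurate models can distribute weight differently across redundant informative features.

```python
import math, random

def make_data(rng, n=200, voxels=40):
    """Synthetic 'BOLD patterns': voxels 0-9 all carry the same redundant
    condition signal; the remaining voxels are pure noise."""
    X, y = [], []
    for _ in range(n):
        label = rng.choice([0, 1])
        base = 1.0 if label else -1.0
        row = [base + rng.gauss(0, 1.0) if v < 10 else rng.gauss(0, 1.0)
               for v in range(voxels)]
        X.append(row)
        y.append(label)
    return X, y

def train(X, y, seed, epochs=30, lr=0.1):
    """Logistic regression by SGD; only the seed differs between runs."""
    rng = random.Random(seed)
    w = [rng.gauss(0, 0.01) for _ in X[0]]
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)          # run-to-run difference: sample order
        for i in idx:
            z = sum(wj * xj for wj, xj in zip(w, X[i]))
            p = 1 / (1 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - y[i]
            w = [wj - lr * g * xj for wj, xj in zip(w, X[i])]
    return w

def accuracy(w, X, y):
    hits = sum((sum(wj * xj for wj, xj in zip(w, x)) > 0) == bool(t)
               for x, t in zip(X, y))
    return hits / len(y)

rng = random.Random(0)
X, y = make_data(rng)
w1 = train(X, y, seed=1)
w2 = train(X, y, seed=2)
top1 = sorted(range(40), key=lambda v: -abs(w1[v]))[:3]
top2 = sorted(range(40), key=lambda v: -abs(w2[v]))[:3]
print(accuracy(w1, X, y), accuracy(w2, X, y))
print(top1, top2)
```

Both runs typically reach the same high accuracy, yet their rankings of the ten redundant signal voxels need not agree, which is why reading anatomical importance directly off a learned model is hazardous.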
The generation of value bubbles is an inherently psychological and social process, in which information sharing and individual decisions can affect representations of value. Bubbles occur in many domains, from the stock market to the runway to the laboratories of science. Here we seek to understand how psychological and social processes lead representations (i.e., expectations) of value to become divorced from inherent value, using asset bubbles as an example. We hypothesize that simple asset group switching rules can give rise to aggregate behavior that resembles the irrational exuberance that can drive asset bubbles. Using an agent-based model we explore whether a simple switching rule can generate irrational exuberance, and systematically explore how communication between decision makers influences the speed and intensity of overvaluation. We show that rational and simple individual-level rules combined with honest information sharing are sufficient to generate the collective overvaluation characteristic of irrational exuberance. Further, our results demonstrate that low fidelity in the exchange of value information leads to rapidly increasing expectations about value, even when no one is engaged in exaggerating their expectations for the assets they own.
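The low-fidelity result invites a minimal demonstration. The sketch below is my own toy mechanism, not the model from the paper: agents report their expectations honestly, a noisy channel corrupts the reports, and each agent moves partway toward the highest report it hears (chasing the best apparent opportunity). Because the maximum of several noisy signals is biased upward, a noisier channel by itself inflates aggregate expectations above the fundamental value, even though no agent exaggerates.

```python
import random, statistics

def simulate(noise_sd, n_agents=100, rounds=50, k=5, adopt=0.3, seed=0):
    """Toy agent-based sketch (illustrative assumptions throughout).
    Agents honestly report their expectation of an asset whose fundamental
    value is 100; the channel adds zero-mean noise with sd `noise_sd`.
    Each round an agent hears k peers and moves toward the highest report."""
    rng = random.Random(seed)
    fundamental = 100.0
    beliefs = [fundamental + rng.gauss(0, 1) for _ in range(n_agents)]
    for _ in range(rounds):
        # Honest reports corrupted only by channel noise.
        reports = [b + rng.gauss(0, noise_sd) for b in beliefs]
        new_beliefs = []
        for b in beliefs:
            heard = max(rng.choice(reports) for _ in range(k))
            # Chase the best apparent opportunity; ignore lower reports.
            new_beliefs.append(b + adopt * (heard - b) if heard > b else b)
        beliefs = new_beliefs
    return statistics.mean(beliefs)

print(simulate(noise_sd=0.1))  # high-fidelity channel: stays near the fundamental
print(simulate(noise_sd=5.0))  # low-fidelity channel: expectations inflate well above it
```

The only parameter that changes between the two runs is the channel noise; the individual rule is identical, and every report is honest, yet the noisy condition produces the runaway expectations characteristic of a bubble.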
Less than a decade ago, “rational choice theory” seemed oddly impervious to criticism. Hundreds of books, articles and studies were published every year, attacking the theory from every angle, yet it continued to attract new converts. How times have changed! The “anomalies” that Richard Thaler once blithely cataloged for the Journal of Economic Perspectives are now widely regarded, not as curious deviations from the norm, but as falsifying counterexamples to the entire project of neoclassical economics. The work of experimental game theorists has perhaps been the most influential in showing that people do not maximize expected utility, in any plausible sense of the terms “maximize,” “expected,” or “utility.” The evidence is so overwhelming and incontrovertible that, by the time one gets to the end of a book like Dan Ariely’s Predictably Irrational, it begins to feel like piling on. The suggestion is pretty clear: not only are people not as rational as decision and game theorists have traditionally taken them to be, they are not even as rational as they themselves take themselves to be. This conclusion, however, is not self-evident. The standard interpretation of these findings is that people are irrational: their estimation of probabilities is vulnerable to framing effects, their treatment of (equivalent) losses and gains is asymmetric, their choices violate the sure-thing principle, they discount the future hyperbolically, and so on. Indeed, after surveying the experimental findings, one begins to wonder how people manage to get on in their daily lives at all, given the seriousness and ubiquity of these deliberative pathologies. And yet, most people do manage to get on, in some form or another. This in itself suggests an…