Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.
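To make the schema idea concrete, here is a toy sketch of how a learned schema can let a single observation pin down a novel object's causal model. The two categories, their causal powers, and the base rates are invented for illustration; this is a minimal Beta-free caricature of the strategy, not the authors' full hierarchical framework.

```python
# Toy illustration (assumed categories and probabilities, not the authors'
# full framework): a learned schema groups objects into categories with
# characteristic causal powers, so one trial with a novel object is highly
# diagnostic.
import numpy as np

# "Schema" distilled from previously encountered objects.
causal_power = {"active": 0.9, "inert": 0.1}   # P(machine activates | category)
prior = {"active": 0.5, "inert": 0.5}          # category base rates

def category_posterior(outcomes):
    """Bayesian update on a novel object's category from activation outcomes (0/1)."""
    post = {c: prior[c] * np.prod([p if o else 1 - p for o in outcomes])
            for c, p in causal_power.items()}
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

post = category_posterior([1])                 # a single successful activation
print(post)                                    # {'active': 0.9, 'inert': 0.1}
print(sum(post[c] * causal_power[c] for c in post))  # predicted power: 0.82
```

After one positive trial the predicted causal power of the new object already jumps from 0.5 to 0.82, which is the sense in which a schema accelerates subsequent causal learning.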
How should we speak of bodies and souls? In _Coming to Mind_, Lenn E. Goodman and D. Gregory Caramenico pick their way through the minefields of materialist reductionism to present the soul not as the brain’s rival but as its partner. What acts, they argue, is what is real. The soul is not an ethereal wisp but a lively subject, emergent from the body but inadequately described in its terms. Rooted in some of the richest traditions of Western and Eastern philosophy, psychology, literature, and the arts, and in the latest findings of cognitive psychology and brain science, _Coming to Mind_ is a subtle manifesto of a new humanism and an outstanding contribution to our understanding of the human person. Drawing on new and classical understandings of perception, consciousness, memory, agency, and creativity, Goodman and Caramenico frame a convincing argument for a dynamic and integrated self capable of language, thought, discovery, caring, and love.
In La structure de l’apparence, Nelson Goodman sets out the principal philosophical themes that would make him a singular thinker: constructivism, nominalism, phenomenalism, and pluralism intersect here in the elaboration of a body of thought as subtle as it is complex. This book offers the first French translation of a founding text of analytic philosophy.
Emmanuel Levinas gave an account of radical, asymmetrical responsibility for the Other that is phenomenologically sensible in the proximity of the face-to-face relation. This original arrangement, however, is not interminable. The approach of the third party equalizes and creates distance between self and Other by introducing ontology and epistemology. It is a necessary process of totalization that moves from a primordial ethics to justice and institutional fairness. However, Levinas was aware that the third party's presence brought with it a possible forgetting of the Other and a covering over of radical ethics. In this presentation, we propose that the psychotherapeutic process represents a context that bears the proximal dimension described by Levinas while also representing an institution that is, in definition and purpose, totalizing. Furthermore, using a play on words that represents a significant issue in contemporary psychotherapeutic practice, we explore the common presence and impact of third-party payers on the therapeutic relationship, and how third-party payers are disingenuously a-proximal and constantly approximating; faceless and consequently effacing the patient's august dignity, causing a forgetfulness of the justice to which Levinas called institutions. Lastly, we suggest some strategies by which therapists may recover the never-absolved proximity to the Other in a profession susceptible to the a-proximal effects of third-party economics.
Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: Listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to “invert” this model of the speaker. We apply this framework to model scalar implicature (“some” implies “not all,” and “N” implies “not more than N”). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments and find a good fit between model predictions and human judgments.
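As a concrete illustration, here is a minimal implementation of the rational speech-act recursion for the “some implies not all” case. The three-object domain, the utterance set, and the rationality parameter ALPHA are our assumptions, and the speaker-knowledge manipulation tested in the experiments is omitted.

```python
# Minimal sketch of the rational speech-act model: a literal listener, a
# soft-max informative speaker, and a pragmatic listener who inverts the
# speaker via Bayes' rule. Domain and parameters are illustrative.
import numpy as np

WORLDS = [0, 1, 2, 3]                 # how many of 3 objects have the property
UTTERANCES = ["none", "some", "all"]
ALPHA = 4.0                           # assumed speaker rationality

def is_true(u, w):
    """Literal truth conditions of each utterance."""
    return {"none": w == 0, "some": w >= 1, "all": w == 3}[u]

# Literal listener: uniform prior over worlds, conditioned on literal truth.
literal = np.array([[float(is_true(u, w)) for w in WORLDS] for u in UTTERANCES])
literal /= literal.sum(axis=1, keepdims=True)

# Speaker: soft-max utterance choice; utility is informativity to the literal
# listener (log-probability assigned to the true world).
with np.errstate(divide="ignore"):
    speaker = np.exp(ALPHA * np.log(literal))
speaker /= speaker.sum(axis=0, keepdims=True)

# Pragmatic listener: Bayesian inversion of the speaker model.
pragmatic = speaker / len(WORLDS)     # multiply in the uniform world prior
pragmatic /= pragmatic.sum(axis=1, keepdims=True)

row = pragmatic[UTTERANCES.index("some")]
print(dict(zip(WORLDS, row.round(3))))   # mass shifts to 1 and 2: "not all"
```

Because a fully informed speaker who saw all three objects would have said “all,” the pragmatic listener hearing “some” concentrates its posterior on the partial worlds, which is exactly the implicature.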
In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory, and ask: If people make decisions based on samples, but samples are costly, how many samples should they use to optimize their total expected or worst-case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling-based or probability-matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource-constrained cognition.
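A small numerical sketch makes the trade-off vivid. For a two-alternative decision where the posterior probability of the better option is p, suppose the agent draws k posterior samples and follows the majority, and that each sample costs time relative to a fixed per-decision overhead. The specific cost ratio and p values below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: expected-reward-rate analysis of "how many samples should I draw?"
# An agent facing repeated two-alternative decisions maximizes reward per
# unit time, where sampling adds time cost. Parameters are illustrative.
from math import comb

def p_correct(p, k):
    """Probability the majority of k Bernoulli(p) samples picks the better option (ties split)."""
    total = 0.0
    for i in range(k + 1):
        prob = comb(k, i) * p**i * (1 - p)**(k - i)
        if 2 * i > k:
            total += prob
        elif 2 * i == k:
            total += 0.5 * prob
    return total

def best_k(p, action_cost=10.0, sample_cost=1.0, k_max=100):
    """Number of samples maximizing expected reward per unit time."""
    rate = lambda k: p_correct(p, k) / (action_cost + sample_cost * k)
    return max(range(1, k_max + 1), key=rate)

for p in (0.55, 0.7, 0.9):
    print(p, best_k(p))
```

Under these assumed costs the optimum is k = 1 across the board: many quick, individually error-prone decisions beat fewer, slower, more accurate ones, echoing the abstract's conclusion.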
Marr's levels of analysis—computational, algorithmic, and implementation—have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly realistic abstract computational architectures, an approach we call “resource-rational analysis.”
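One standard way to write the resource-rational objective, in our notation rather than a quotation from the paper: if H_A is the set of algorithms realizable on an abstract architecture A, the recipe selects

```latex
h^{\ast} \;=\; \underset{h \in H_A}{\arg\max}\;
  \mathbb{E}\!\left[\, U(\mathrm{result}(h)) - \mathrm{cost}(h, A) \,\right]
```

where U scores the quality of the algorithm's output and cost charges for the time and computation A must expend to run h. Making A increasingly realistic moves h* away from the unconstrained computational-level ideal and toward an algorithmic-level model.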
We derive a probabilistic account of the vagueness and context-sensitivity of scalar adjectives from a Bayesian approach to communication and interpretation. We describe an iterated-reasoning architecture for pragmatic interpretation and illustrate it with a simple scalar implicature example. We then show how to enrich the apparatus to handle pragmatic reasoning about the values of free variables, explore its predictions about the interpretation of scalar adjectives, and show how this model implements Edgington’s (Vagueness: A Reader, 1997) account of the sorites paradox, with variations. The Bayesian approach has a number of explanatory virtues: in particular, it does not require any special-purpose machinery for handling vagueness, and it is integrated with a promising new approach to pragmatics and other areas of cognitive science.
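Continuing the rational speech-act sketch above, here is a minimal version of the free-variable idea for a scalar adjective: “tall” is true of degrees above a threshold theta, theta is left open, and the pragmatic listener infers the degree and the threshold jointly by inverting an informative speaker. The discretized scale, the degree prior, and the rationality and cost parameters are all assumptions of ours.

```python
# Minimal sketch (assumed scale, priors, and parameters) of treating the
# adjective threshold as a free variable: the listener jointly infers the
# degree h and the threshold theta by inverting an informative speaker.
import numpy as np

heights = np.linspace(0.0, 1.0, 21)            # degrees on an abstract scale
prior_h = np.exp(-((heights - 0.5) ** 2) / 0.05)
prior_h /= prior_h.sum()                       # assumed unimodal degree prior
ALPHA, COST = 4.0, 1.0                         # rationality and utterance cost

def speaker_tall(i, theta):
    """S("tall" | degree i, theta) vs. staying silent, via soft-max utilities."""
    if heights[i] <= theta:
        return 0.0                             # "tall" would be literally false
    z = prior_h[heights > theta].sum()         # literal listener's normalizer
    u_tall = np.exp(ALPHA * (np.log(prior_h[i] / z) - COST))
    u_null = np.exp(ALPHA * np.log(prior_h[i]))  # silence leaves the prior in place
    return u_tall / (u_tall + u_null)

# Joint listener posterior over (theta, degree) given "tall", uniform theta prior.
joint = np.array([[prior_h[i] * speaker_tall(i, t) for i in range(len(heights))]
                  for t in heights])
joint /= joint.sum()
posterior_h = joint.sum(axis=0)                # marginalize out the threshold
print(round(float(heights[posterior_h.argmax()]), 2))
```

Because the threshold is inferred rather than fixed, the interpreted meaning of “tall” shifts with the degree prior, which is how the account derives context-sensitivity and borderline cases without special-purpose machinery for vagueness.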
If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Scholars of classical philosophy have long disputed whether Aristotle was a dialectical thinker. Most agree that Aristotle contrasts dialectical reasoning with demonstrative reasoning, where the former reasons from generally accepted opinions and the latter reasons from the true and primary. Starting with a grasp on truth, demonstration never relinquishes it. Starting with opinion, how could dialectical reasoning ever reach truth, much less the truth about first principles? Is dialectic then an exercise that reiterates the prejudices of one's time and at best allows one to persuade others by appealing to these prejudices, or is it the royal road to first principles and philosophical wisdom? In From Puzzles to Principles?, May Sim gathers experts to argue both these positions and offer a variety of interpretive possibilities. The contributors' thoughtful reflections on the nature and limits of dialectic should play a crucial role in Aristotelian scholarship.