In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: people often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory and ask: if people make decisions based on samples, and samples are costly, how many samples should they use to optimize their total expected or worst-case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling-based or probability-matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource-constrained cognition.
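The trade-off described in this abstract can be sketched in a few lines. This is only a toy illustration, not the paper's actual model: the posterior probability `p`, the per-decision time `t_action`, and the per-sample time `t_sample` are assumed values chosen to make the qualitative point.

```python
from math import comb

def p_correct(p, k):
    """Probability that the majority of k independent posterior samples
    favors the truly more probable of two options (p > 0.5), with ties
    broken by a fair coin flip."""
    total = 0.0
    for i in range(k + 1):
        weight = comb(k, i) * p**i * (1 - p)**(k - i)
        if 2 * i > k:        # majority of samples favor the better option
            total += weight
        elif 2 * i == k:     # tie: guess at random
            total += 0.5 * weight
    return total

def reward_rate(p, k, t_action=1.0, t_sample=0.1):
    """Expected reward per unit time for a 0/1-reward decision, when each
    decision costs a fixed action time plus k times the sampling time."""
    return p_correct(p, k) / (t_action + k * t_sample)

# Under these (assumed) time costs, a single sample already maximizes the
# long-run reward rate, even though more samples raise per-decision accuracy.
best_k = max(range(1, 101), key=lambda k: reward_rate(0.7, k))
```

Raising `t_action` relative to `t_sample` shifts the optimum toward more samples; that dependence of the globally optimal sample count on time costs is the pattern the analysis turns on.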
Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to “invert” this model of the speaker. We apply this framework to model scalar implicature (“some” implies “not all,” and “N” implies “not more than N”). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments and find a good fit between model predictions and human judgments.
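The inversion step can be made concrete. The following is a minimal rational-speech-act sketch for scalar implicature with a fully knowledgeable speaker; the three-object state space, uniform priors, and speaker rationality (effectively a rationality parameter of 1) are simplifying assumptions, not details taken from the paper.

```python
STATES = [0, 1, 2, 3]                  # how many of 3 objects have the property
UTTERANCES = ["none", "some", "all"]

def is_true(u, s):
    """Literal (truth-conditional) semantics of each utterance."""
    return {"none": s == 0, "some": s >= 1, "all": s == 3}[u]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    """Uniform distribution over the states in which u is literally true."""
    return normalize({s: float(is_true(u, s)) for s in STATES})

def speaker(s):
    """Chooses utterances in proportion to how well the literal listener
    would recover the intended state s."""
    return normalize({u: literal_listener(u)[s] for u in UTTERANCES})

def pragmatic_listener(u):
    """Bayesian inversion of the speaker model (uniform prior over states)."""
    return normalize({s: speaker(s)[u] for s in STATES})

# Scalar implicature: on hearing "some", the pragmatic listener assigns the
# all-objects state less probability than the literal semantics alone would,
# because a speaker in that state would more likely have said "all".
```

Modeling a speaker with partial knowledge, as the paper's experiments require, would add an uncertainty distribution over states to the speaker model; this sketch covers only the knowledgeable-speaker case.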
In an essay on performance-enhancing drugs, author Chuck Klosterman (2007) argues that the category of enhancers extends from hallucinogens used to inspire music to steroids used to strengthen athletes—and he criticizes those who would excuse one means of enhancement while railing against the other as a form of cheating: After the summer of 1964, the Beatles started taking serious drugs, and those drugs altered their musical performance. Though it may not have been their overt intent, the Beatles took performance-enhancing drugs. And . . . absolutely no one holds it against them. No one views “Rubber Soul” and “Revolver” as “less authentic” albums, despite the fact that they would not (and probably could …
This paper presents a new argument for necessitism, the claim that necessarily everything is necessarily something. The argument appeals to principles about the metaphysics of quantification and predication which are best seen as constraints on reality’s fineness of grain. I give this argument in section 4; the impatient reader may skip directly there. Sections 1–3 set the stage by surveying three other arguments for necessitism. I argue that none of them are persuasive, but I think it is illuminating to consider my argument in light of the others and vice versa. These interconnections should be of interest even to those who reject necessitism; of particular interest may be the new conception of validity proposed in section 5.
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher‐level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher‐level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories.
The identity predicate can be defined using second-order quantification: a=b =df ∀F(Fa↔Fb). Less familiarly, a dyadic sentential operator analogous to the identity predicate can be defined using third-order quantification: ϕ≡ψ =df ∀X(Xϕ↔Xψ), where X is a variable of the same syntactic type as a monadic sentential operator. With this notion in view, it is natural to ask after general principles governing its application. More grandiosely, how fine-grained is reality? I will argue that reality is not structured in anything like the way that the sentences we use to talk about it are structured. I do so by formulating a higher-order analogue of Russell’s paradox of structured propositions. I then relate this argument to the Frege-Russell correspondence. When confronted with the alleged paradox, Frege agreed that reality was not structured, but maintained that propositions (i.e. thoughts) were structured all the same. Russell replied that his paradox showed Frege’s theory of structured thoughts to be inconsistent, to which Frege replied that Russell’s argument failed to heed the distinction between sense and reference. Most recent commentators have sided with Russell. In defense of Frege, I establish the consistency of one version of his rejoinder. I then consider and reject some ways of resisting the argument against a structured conception of reality. I conclude that, if propositions are structured, this is because they correspond not to distinctions in reality, but rather to ways in which those distinctions can be represented.
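The two definitions in this abstract can be written out in a typed higher-order language; the following Lean-style rendering is only a notational sketch, in which Lean's `Prop` stands in for the relevant type of propositions (and so flattens some distinctions of grain the paper is about).

```lean
-- Second-order definition of the identity predicate (Leibniz equality):
-- a = b iff a and b fall under exactly the same properties.
def idPred {α : Type} (a b : α) : Prop :=
  ∀ F : α → Prop, F a ↔ F b

-- Third-order analogue for sentences: ϕ ≡ ψ iff ϕ and ψ fall under
-- exactly the same monadic sentential operators.
def opEq (ϕ ψ : Prop) : Prop :=
  ∀ X : Prop → Prop, X ϕ ↔ X ψ
```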
We derive a probabilistic account of the vagueness and context-sensitivity of scalar adjectives from a Bayesian approach to communication and interpretation. We describe an iterated-reasoning architecture for pragmatic interpretation and illustrate it with a simple scalar implicature example. We then show how to enrich the apparatus to handle pragmatic reasoning about the values of free variables, explore its predictions about the interpretation of scalar adjectives, and show how this model implements Edgington’s (in Keefe & Smith (eds.), Vagueness: A Reader, 1997) account of the sorites paradox, with variations. The Bayesian approach has a number of explanatory virtues: in particular, it does not require any special-purpose machinery for handling vagueness, and it is integrated with a promising new approach to pragmatics and other areas of cognitive science.
Closest-possible-world analyses of counterfactuals suffer from what has been called the ‘problem of counterpossibles’: some counterfactuals with metaphysically impossible antecedents seem plainly false, but the proposed analyses imply that they are all (vacuously) true. One alleged solution to this problem is the addition of impossible worlds. In this paper, I argue that the closest possible or impossible world analyses that have recently been suggested suffer from the ‘new problem of counterpossibles’: the proposed analyses imply that some plainly true counterpossibles (viz., ‘counterlogicals’) are false. After motivating and presenting the ‘new problem’, I give reasons to think that the most plausible objection to my argument is not compelling.
Marr's levels of analysis—computational, algorithmic, and implementation—have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: how do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call “resource-rational analysis.”
It has become popular of late to identify the phenomenon of thinking a singular thought with that of thinking with a mental file. Proponents of the mental files conception of singular thought claim that one thinks a singular thought about an object o iff one employs a mental file to think about o. I argue that this is false by arguing that there are what I call descriptive mental files, so some file-based thought is not singular thought. Descriptive mental files are mental files for which descriptive information plays four roles: it determines which object, if any, the file is about; it sets limits on possible mistakes that fall within the scope of successful reference for the file; it acts as a ‘gatekeeper’ for the file; and it determines persistence conditions for the file. Contrary to popular assumption, a description playing these roles is consistent with the file-theoretic framework. Recognising this allows us to distinguish the notion of singular thought from that of file-thinking and better understand the nature and role of both.
Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.
This paper is a study of higher-order contingentism – the view, roughly, that it is contingent what properties and propositions there are. We explore the motivations for this view and various ways in which it might be developed, synthesizing and expanding on work by Kit Fine, Robert Stalnaker, and Timothy Williamson. Special attention is paid to the question of whether the view makes sense by its own lights, or whether articulating the view requires drawing distinctions among possibilities that, according to the view itself, do not exist to be drawn. The paper begins with a non-technical exposition of the main ideas and technical results, which can be read on its own. This exposition is followed by a formal investigation of higher-order contingentism, in which the tools of variable-domain intensional model theory are used to articulate various versions of the view, understood as theories formulated in a higher-order modal language. Our overall assessment is mixed: higher-order contingentism can be fleshed out into an elegant systematic theory, but perhaps only at the cost of abandoning some of its original motivations.
Fundamental Buddhist teachings -- Main features of some Western ethical theories -- Theravāda ethics as rule-consequentialism -- Mahāyāna ethics before Śāntideva and after -- Transcending ethics -- Buddhist ethics and the demands of consequentialism -- Buddhism on moral responsibility -- Punishment -- Objections and replies -- A Buddhist response to Kant.
This paper has a narrow and a broader target. The narrow target is a particular version of what I call the mental-files conception of singular thought (MFC), proposed by Robin Jeshion, and known as cognitivism. The broader target is the MFC in general. I give an argument against Jeshion's view, which gives us preliminary reason to reject the MFC more broadly. I argue that Jeshion's theory of singular thought should be rejected because the central connection she makes between significance and singularity does not hold. However, my argument grants Jeshion's claim that there is a connection between significance and file-thinking. The upshot is not only that we have reason to reject Jeshion's significance constraint on singular thought, but that we have reason to question the connection made by MFC proponents between file-thinking and singularity.
Owen Flanagan's important book The Bodhisattva's Brain presents a naturalized interpretation of Buddhist philosophy. Although the overall approach of the book is very promising, certain aspects of its presentation could benefit from further reflection. Traditional teachings about reincarnation do not contradict the doctrine of no self, as Flanagan seems to suggest; however, they are empirically rather implausible. Flanagan's proposed “tame” interpretation of karma is too thin; we can do better at fitting karma into a scientific worldview. The relationship between eudaimonist and utilitarian strands in Buddhist ethics is more complex than the book suggests. Flanagan is right to criticize incautious and imprecise claims that Buddhism will make practitioners happy. We can make progress by distinguishing between happiness in the sense of a Buddhist version of eudaimonia, and happiness in the sense of attitudinal pleasure. Doing so might result in an interpretation of Buddhist views about happiness that was simultaneously philosophically interesting, historically credible, and psychologically testable.
The use of cognition-enhancing drugs (CEDs) appears to be increasingly common in both academic and workplace settings. But many universities and businesses have not yet engaged with the ethical challenges raised by CED use. This paper considers criticisms of CED use with a particular focus on the Accomplishment Argument: an influential set of claims holding that enhanced work is less dignified, valuable, or authentic, and that cognitive enhancement damages our characters. While the Accomplishment Argument assumes a view of authorship based on individual credit-taking, an impersonal or collaborative view is just as possible. This paper considers the benefits of this view—including humility, a value often claimed by critics of enhancement—and argues that such a view is consistent with open CED use. It proposes an ethics of cognitive enhancement based on toleration, transparency, and humility, and it discusses how institutions and individuals can build a culture of open cognitive enhancement.
Investors concerned about the social and environmental impact of the companies they invest in are increasingly choosing to use voice over exit as a strategy. This article addresses the question of how and why the voice and exit options (Hirschman 1970) are used in social shareholder engagement (SSE) by religious organisations. Using an inductive case study approach, we examine seven engagements by three religious organisations considered to be at the forefront of SSE. We analyse the full engagement process rather than focusing on particular tools or on outcomes. We map the key stages of the engagement processes and the influences on the decisions made at each stage to develop a model of the dynamics of voice and exit in SSE. This study finds that religious organisations divest for political rather than economic motives, using exit as a form of voice. The silent exit option is not used by religious organisations in SSE; exit is not always the consequence of unsatisfactory voice outcomes; and voice can continue after exit. We discuss the implications of these dynamics and influences on decisions for further research in engagement.
I critically discuss some of the main arguments of Modal Logic as Metaphysics, present a different way of thinking about the issues raised by those arguments, and briefly discuss some broader issues about the role of higher-order logic in metaphysics.
If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Many who think that some abstracta are artefacts are fictional creationists, asserting that fictional characters are brought about by our activities. Kripke (1973), Salmon (1998, 2002), and Braun (2005) further embrace mythical creationism, claiming that certain entities that figure in false theories, such as phlogiston or Vulcan, are likewise abstracta produced by our intentional activities. I here argue that one may not reasonably take the metaphysical route travelled by the mythical creationist. Even if one holds that fictional characters are artefacts, one ought not further hold that mythical objects are, too.
In Collins's latest book, we see an attempt to apply his sociological theories to the history of philosophy. While Collins's macrosociology of knowledge provides important insights into the role of conflict in an intellectual field, his microsociology is more problematic. In particular, Collins's micro theory ignores the fundamental importance of social interpretations. This leads him to use a vague and unproductive notion of emotions. Nevertheless, we can usefully apply Collins's findings to sociological theory itself. As in philosophy, we see the same competitive appropriation and elaboration of accumulated intellectual capital and the same struggle over the limited resources necessary to intellectual production, especially over what Collins calls the intellectual attention space.
The definitions of ‘deduction’ found in virtually every introductory logic textbook would encourage us to believe that the inductive/deductive distinction is a distinction among kinds of arguments and that the extension of ‘deduction’ is a determinate class of arguments. In this paper, we argue that this approach is mistaken. Specifically, we defend the claim that typical definitions of ‘deduction’ operative in attempts to get at the induction/deduction distinction are either too narrow or insufficiently precise. We conclude by presenting a deflationary understanding of the inductive/deductive distinction; in our view, its content is nothing over and above the answers to two fundamental sorts of questions central to critical thinking.
In a series of recent papers, Timothy Williamson has argued for the surprising conclusion that there are cases in which you know a proposition even though it is overwhelmingly improbable, given what you know, that you know it. His argument relies on certain formal models of our imprecise knowledge of the values of perceptible and measurable magnitudes. This paper suggests an alternative class of models that do not predict this sort of improbable knowing. I show that such models are motivated by independently plausible principles in the epistemology of perception, in the epistemology of estimation, and concerning the connection between knowledge and justified belief.
Creationism is the view that fictional individuals such as Sherlock Holmes are contingently existing abstracta that come about due to the intentional activities of authors. Author-essentialism is the stronger thesis that the author responsible for bringing a fictional individual into existence at a time is essential to the existence of that individual. Takashi Yagisawa has recently attacked this view on the following grounds: author-essentialists rely on an ontological parallelism between fictional individuals and whole works of fiction, but this parallelism fails to obtain. I here argue that Yagisawa’s grounds are weak.