Despite wide acceptance that the attributes of living creatures have appeared through a cumulative evolutionary process guided chiefly by natural selection, many human activities have seemed analytically inaccessible through such an approach. Prominent evolutionary biologists, for example, have described morality as contrary to the direction of biological evolution, and moral philosophers rarely regard evolution as relevant to their discussions.

The Biology of Moral Systems adopts the position that moral questions arise out of conflicts of interest, and that moral systems are ways of using confluences of interest at lower levels of social organization to deal with conflicts of interest at higher levels. Moral systems are described as systems of indirect reciprocity: humans gain and lose socially and reproductively not only by direct transactions, but also by the reputations they gain from the everyday flow of social interactions.

The author develops a general theory of human interests, using senescence and effort theory from biology, to help analyze the patterning of human lifetimes. He argues that the ultimate interests of humans are reproductive, and that the concept of morality has arisen within groups because of its contribution to unity in the context, ultimately, of success in intergroup competition. He contends that morality is not easily relatable to universals, and he carries this argument into a discussion of what he calls the greatest of all moral problems, the nuclear arms race.
This book directly challenges the notion that the election of Barack Obama signals a new era of colorblindness. Michelle Alexander argues that "we have not ended racial caste in America; we have merely redesigned it." By targeting black men through the War on Drugs and decimating communities of color, the U.S. criminal justice system functions as a contemporary system of racial control, relegating millions to a permanent second-class status, even as it formally adheres to the principle of colorblindness.
A growing body of empirical literature challenges philosophers’ reliance on intuitions as evidence based on the fact that intuitions vary according to factors such as cultural and educational background, and socio-economic status. Our research extends this challenge, investigating Lehrer’s appeal to the Truetemp Case as evidence against reliabilism. We found that intuitions in response to this case vary according to whether, and which, other thought experiments are considered first. Our results show that compared to subjects who receive the Truetemp Case first, subjects first presented with a clear case of knowledge are less willing to attribute knowledge in the Truetemp Case, and subjects first presented with a clear case of nonknowledge are more willing to attribute knowledge in the Truetemp Case. We contend that this instability undermines the supposed evidential status of these intuitions, such that philosophers who deal in intuitions can no longer rest comfortably in their armchairs.
Experimental philosophy uses experimental research methods from psychology and cognitive science in order to investigate both philosophical and metaphilosophical questions. It explores philosophical questions about the nature of the psychological world - the very structure or meaning of our concepts of things, and about the nature of the non-psychological world - the things themselves. It also explores metaphilosophical questions about the nature of philosophical inquiry and its proper methodology. This book provides a detailed and provocative introduction to this innovative field, focusing on the relationship between experimental philosophy and the aims and methods of more traditional analytic philosophy. Special attention is paid to carefully examining experimental philosophy's quite different philosophical programs, their individual strengths and weaknesses, and the different kinds of contributions that they can make to our philosophical understanding. Clear and accessible throughout, it situates experimental philosophy within both a contemporary and historical context, explains its aims and methods, examines and critically evaluates its most significant claims and arguments, and engages with its critics.
Recent experimental philosophy arguments have raised trouble for philosophers' reliance on armchair intuitions. One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there's no reason to think philosophers will make the same mistakes. But this deploys a substantive empirical claim, that philosophers' training indeed inculcates sufficient protection from such mistakes. We canvass the psychological literature on expertise, which indicates that people are not generally very good at reckoning who will develop expertise under what circumstances. We consider three promising hypotheses concerning what philosophical expertise might consist in: (i) better conceptual schemata; (ii) mastery of entrenched theories; and (iii) general practical know-how with the entertaining of hypotheticals. On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.
It has been standard philosophical practice in analytic philosophy to employ intuitions generated in response to thought-experiments as evidence in the evaluation of philosophical claims. In part as a response to this practice, an exciting new movement—experimental philosophy—has recently emerged. This movement is unified behind both a common methodology and a common aim: the application of methods of experimental psychology to the study of the nature of intuitions. In this paper, we will introduce two different views concerning the relationship that holds between experimental philosophy and the future of standard philosophical practice (what we call, the proper foundation view and the restrictionist view), discuss some of the more interesting and important results obtained by proponents of both views, and examine the pressure these results put on analytic philosophers to reform standard philosophical practice. We will also defend experimental philosophy from some recent objections, suggest future directions for work in experimental philosophy, and suggest what future lines of epistemological response might be available to those wishing to defend analytic epistemology from the challenges posed by experimental philosophy.
In recent years, a number of philosophers have conducted empirical studies that survey people's intuitions about various subject matters in philosophy. Some have found that intuitions vary according to seemingly irrelevant facts: facts about who is considering the hypothetical case, the presence or absence of certain kinds of content, or the context in which the hypothetical case is being considered. Our research applies this experimental philosophical methodology to Judith Jarvis Thomson's famous Loop Case, which she used to call into question the validity of the intuitively plausible Doctrine of Double Effect. We found that intuitions about the Loop Case vary according to the context in which the case is considered. We contend that this undermines the supposed evidential status of intuitions about the Loop Case. We conclude by considering the implications of our findings for philosophers who rely on the Loop Case to make philosophical arguments and for philosophers who use intuitions in general.
This article examines two questions about scientists’ search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue pace Weisberg and Muldoon, “Epistemic Landscapes and the Division of Cognitive Labor”, that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue they have not shown epistemic reasons exist for the division of cognitive labor, identifying the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.
This book presents a comprehensive overview of what the criminal law would look like if organised around the principle that those who deserve punishment should receive punishment commensurate with, but no greater than, that which they deserve. Larry Alexander and Kimberly Kessler Ferzan argue that desert is a function of the actor's culpability, and that culpability is a function of the risks of harm to protected interests that the actor believes he is imposing and his reasons for acting in the face of those risks. The authors deny that resultant harms, as well as unperceived risks, affect the actor's desert. They thus reject punishment for inadvertent negligence as well as for intentions or preparatory acts that are not risky. Alexander and Ferzan discuss the reasons for imposing risks that negate or mitigate culpability, the individuation of crimes, and omissions.
Our interest in this paper is to drive a wedge of contention between two different programs that fall under the umbrella of “experimental philosophy”. In particular, we argue that experimental philosophy’s “negative program” presents almost as significant a challenge to its “positive program” as it does to more traditional analytic philosophy.
From its very beginnings, the social study of culture has been polarized between structuralist theories that treat meaning as a text and investigate the patterning that provides relative autonomy and pragmatist theories that treat meaning as emerging from the contingencies of individual and collective action (so-called practices) and that analyze cultural patterns as reflections of power and material interest. In this article, I present a theory of cultural pragmatics that transcends this division, bringing meaning structures, contingency, power, and materiality together in a new way. My argument is that the materiality of practices should be replaced by the more multidimensional concept of performances. Drawing on the new field of performance studies, cultural pragmatics demonstrates how social performances, whether individual or collective, can be analogized systematically to theatrical ones. After defining the elements of social performance, I suggest that these elements have become "de-fused" as societies have become more complex. Performances are successful only insofar as they can "re-fuse" these increasingly disentangled elements. In a fused performance, audiences identify with actors, and cultural scripts achieve verisimilitude through effective mise-en-scène. Performances fail when this relinking process is incomplete: the elements of performance remain apart, and social action seems inauthentic and artificial, failing to persuade. Refusion, by contrast, allows actors to communicate the meanings of their actions successfully and thus to pursue their interests effectively.
Feminist Genealogies, Colonial Legacies, Democratic Futures provides a feminist analysis of the questions of sexual and gender politics, economic and cultural marginality, and anti-racist and anti-colonial practices both in the "West" and in the "Third World." This collection, edited by Jacqui Alexander and Chandra Talpade Mohanty, charts the underlying theoretical perspectives and organizational practices of the different varieties of feminism that take on questions of colonialism, imperialism, and the repressive rule of colonial, post-colonial and advanced capitalist nation-states. It provides a comparative, relational, historically grounded conception of feminist praxis that differs markedly from the liberal pluralist, multicultural understanding that shapes some of the dominant versions of Euro-American feminism. As a whole, the collection poses a unique challenge to the naturalization of gender based in the experiences, histories and practices of Euro-American women.
Jennifer Nagel (2010) has recently proposed a fascinating account of the decreased tendency to attribute knowledge in conversational contexts in which unrealized possibilities of error have been mentioned. Her account appeals to epistemic egocentrism, or what is sometimes called the curse of knowledge, an egocentric bias to attribute our own mental states to other people (and sometimes our own future and past selves). Our aim in this paper is to investigate the empirical merits of Nagel’s hypothesis about the psychology involved in knowledge attribution.
It has become increasingly popular to respond to experimental philosophy by suggesting that experimental philosophers haven’t been studying the right kind of thing. One version of this kind of response, which we call the reflection defense, involves suggesting both that philosophers are interested only in intuitions that are the product of careful reflection on the details of hypothetical cases and the key concepts involved in those cases, and that these kinds of philosophical intuitions haven’t yet been adequately studied by experimental philosophers. Of course, as a defensive move, this works only if reflective intuitions are immune from the kinds of problematic effects that form the basis of recent experimental challenges to philosophy’s intuition-deploying practices. If they are not immune to these kinds of effects, then the fact that experimental philosophers have not had the right kind of thing in their sights would provide little comfort to folks invested in philosophy’s intuition-deploying practices. Here we provide reasons to worry that even reflective intuitions can display sensitivity to the same kinds of problematic effects, although possibly in slightly different ways. As it turns out, being reflective might sometimes just mean being wrong in a different way.
Our various cultures are symbolic environments or "spiritual ecologies" within which the Human Eros can thrive. This is how we inhabit the earth. Encircling and sustaining our cultural existence is nature.
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
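The election idea can be sketched in a few lines. The sketch below is an illustrative simplification of the abstract's description, not the paper's formal construction: agents are reduced to lists of per-environment rewards, and the function name `compare_agents` is invented here.

```python
# Minimal sketch of an election-style comparator: each environment casts one
# vote (via its rewards) for whichever agent earned more reward in it.

def compare_agents(rewards_a, rewards_b):
    """Compare two agents by letting each environment vote.

    rewards_a, rewards_b: lists of rewards, one entry per environment.
    Returns +1 if agent A wins the election, -1 if agent B wins, 0 on a tie.
    """
    votes_a = sum(1 for ra, rb in zip(rewards_a, rewards_b) if ra > rb)
    votes_b = sum(1 for ra, rb in zip(rewards_a, rewards_b) if rb > ra)
    return (votes_a > votes_b) - (votes_a < votes_b)

# Agent A out-earns B in two of three environments, so A is judged more intelligent.
print(compare_agents([5, 3, 1], [4, 1, 2]))  # 1
```

One design question such comparators raise, and part of what the paper's structural theorems address, is how the family behaves when votes are aggregated differently (e.g. weighting environments rather than counting them equally, as this sketch does).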
This study presents a substantial and often radical reinterpretation of some of the central themes of Locke's thought. Professor Alexander concentrates on the Essay Concerning Human Understanding and aims to restore it to its proper historical context. In Part I he gives a clear exposition of some of the scientific theories of Robert Boyle, which, he argues, heavily influenced Locke in employing similar concepts and terminology. Against this background, he goes on in Part II to provide an account of Locke's views on the external world and our knowledge of it. He shows those views to be more consistent and plausible than is generally allowed, demonstrating how they make sense and enable scientific explanations of nature. In examining the views of Locke and Boyle together, the book throws light both on the development of philosophy and the beginnings of modern science, and in particular it makes a considerable and original contribution to our understanding of Locke's philosophy.
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: that any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
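The contrast between Archimedean and non-Archimedean reward scales can be made concrete. The sketch below is an assumption-laden illustration, not the paper's generalization: it demonstrates the classical Archimedean property for rationals and then uses lexicographically ordered pairs as a standard toy model of a non-Archimedean scale; all function names are invented here.

```python
from fractions import Fraction

def archimedean_witness(x, y, max_n=10**6):
    """Search for a witness n to the Archimedean property for positive
    numbers: some multiple n*x exceeds y. For reals a witness always
    exists; we search over powers of two up to max_n."""
    n = 1
    while n <= max_n:
        if n * x > y:
            return n
        n *= 2
    return None

# For real (here, rational) numbers a witness always exists:
print(archimedean_witness(Fraction(1, 1000), Fraction(50)))  # 65536

# Lexicographic pairs model a non-Archimedean reward scale: (0, 1) is
# strictly positive, yet no multiple n*(0, 1) = (0, n) ever exceeds (1, 0).
def lex_scale(n, a):
    return (n * a[0], n * a[1])

# Python tuple comparison is already lexicographic, so `>` is the right order.
print(any(lex_scale(n, (0, 1)) > (1, 0) for n in range(1, 10**6)))  # False
```

The second print illustrates the core obstacle the abstract points to: no single real number can faithfully encode the pair (0, 1) relative to (1, 0), because real-valued rewards are always Archimedean.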
In four closely interwoven studies, Jeffrey Alexander identifies the central dilemma that provokes contemporary social theory and proposes a new way to resolve it. The dream of reason that marked the previous fin de siècle foundered in the face of the cataclysms of the twentieth century, when war, revolution, and totalitarianism came to be seen as themselves products of reason. In response there emerged the profound skepticism about rationality that has so starkly defined the present fin de siècle. From Wittgenstein through Rorty and postmodernism, relativism rejects the very possibility of universal standards, while for both positivism and neo-Marxists like Bourdieu, reductionism claims that ideas simply reflect their social base. In a readable and spirited argument, Alexander develops the alternative of a "neo-modernist" position that defends reason from within a culturally centered perspective while remaining committed to the goal of explaining, not merely interpreting, contemporary social life. On the basis of a sweeping reinterpretation of postwar society and its intellectuals, he suggests that both antimodernist radicalism and postmodernist resignation are now in decline; a more democratic, less ethnocentric and more historically contingent universalizing social theory may thus emerge. Developing in his first two studies a historical approach to the problem of "absent reason," Alexander moves via a critique of Richard Rorty to construct his case for "present reason." Finally, focusing on the work of Pierre Bourdieu, he provokes the most sustained critical reflection yet on this influential thinker. Fin de Siècle Social Theory is a tonic intervention in contemporary debates, showing how social and cultural theory can properly take the measure of the extraordinary times in which we live.
This paper argues that higher-order doubt generates an epistemic dilemma. One has a higher-order doubt with regard to P insofar as one justifiably withholds belief as to what attitude towards P is justified. That is, one justifiably withholds belief as to whether one is justified in believing, disbelieving, or withholding belief in P. Using the resources provided by Richard Feldman’s recent discussion of how to respect one’s evidence, I argue that if one has a higher-order doubt with regard to P, then one is not justified in having any attitude towards P. Otherwise put: No attitude towards the doubted proposition respects one’s higher-order doubt. I argue that the most promising response to this problem is to hold that when one has a higher-order doubt about P, the best one can do to respect such a doubt is to simply have no attitude towards P. Higher-order doubt is thus much more rationally corrosive than non-higher-order doubt, as it undermines the possibility of justifiably having any attitude towards the doubted proposition.
This handbook presents a comprehensive introduction to the core areas of philosophy of education combined with an up-to-date selection of the central themes. It includes 95 newly commissioned articles that focus on and advance key arguments; each essay incorporates essential background material serving to clarify the history and logic of the relevant topic, examining the status quo of the discipline with respect to the topic, and discussing the possible futures of the field. The book provides a state-of-the-art overview of philosophy of education, covering a range of topics: Voices from the present and the past deals with 36 major figures that philosophers of education rely on; Schools of thought addresses 14 stances including Eastern, Indigenous, and African philosophies of education as well as religiously inspired philosophies of education such as Jewish and Islamic; Revisiting enduring educational debates scrutinizes 25 issues heavily debated in the past and the present, for example care and justice, democracy, and the curriculum; New areas and developments addresses 17 emerging issues that have garnered considerable attention like neuroscience, videogames, and radicalization. The collection is relevant for lecturers teaching undergraduate and graduate courses in philosophy of education as well as for colleagues in teacher training. Moreover, it helps junior researchers in philosophy of education to situate the problems they are addressing within the wider field of philosophy of education and offers a valuable update for experienced scholars dealing with issues in the sub-discipline. Throughout, the handbook combines different conceptions of the purpose of philosophy of education and approaches its topics from diverse perspectives.
Contributing Editors: Section 1: Voices from the Present and the Past: Nuraan Davids Section 2: Schools of Thought: Christiane Thompson and Joris Vlieghe Section 3: Revisiting Enduring Debates: Ann Chinnery, Naomi Hodgson, and Viktor Johansson Section 4: New Areas and Developments: Kai Horsthemke, Dirk Willem Postma, and Claudia Ruitenberg.
I begin my analysis of consent by agreeing with Professor Hurd that consent functions as a “moral transformative” by altering the obligations and permissions that determine the rightness of others' actions. I further agree with her that consent is intimately related to the capacity for autonomous action; one who cannot alter others' obligations through consent is not fully autonomous. I cannot improve on her elaboration of these points.
There are two ways of understanding experimental philosophy's process of appealing to intuitions as evidence for or against philosophical claims: the positive and negative programs. This chapter deals with how the positive program's method of conceptual analysis is affected by the results of the negative program. It begins by describing direct extramentalism, semantic mentalism, conceptual mentalism, and mechanist mentalism, all of which argue that intuitions are credible sources of evidence and will therefore be shared. The negative program challenges this view by questioning if there can be in fact a shared intuition about a specific hypothetical case, as conflicting intuitions are as likely to arise. The chapter then discusses other issues raised by the negativists such as the limits of surveys and the proper domain problem.
Moral systems are described as systems of indirect reciprocity, existing because of histories of conflicts of interest and arising as outcomes of the complexity of social interactions in groups of long‐lived individuals with varying conflicts and confluences of interest and indefinitely iterated social interactions. Although morality is commonly defined as involving justice for all people, or consistency in the social treatment of all humans, it may have arisen for immoral reasons, as a force leading to cohesiveness within human groups but specifically excluding and directed against other human groups with different interests.
"These notes are about the process of design: the process of inventing things which display new physical order, organization, form, in response to function." This book, opening with these words, presents an entirely new theory of the process of design. In the first part of the book, Christopher Alexander discusses the process by which a form is adapted to the context of human needs and demands that has called it into being. He shows that such an adaptive process will be successful only if it proceeds piecemeal instead of all at once. It is for this reason that forms from traditional un-self-conscious cultures, molded not by designers but by the slow pattern of changes within tradition, are so beautifully organized and adapted. When the designer, in our own self-conscious culture, is called on to create a form that is adapted to its context he is unsuccessful, because the preconceived categories out of which he builds his picture of the problem do not correspond to the inherent components of the problem, and therefore lead only to the arbitrariness, willfulness, and lack of understanding which plague the design of modern buildings and modern cities. In the second part, Mr. Alexander presents a method by which the designer may bring his full creative imagination into play, and yet avoid the traps of irrelevant preconception. He shows that, whenever a problem is stated, it is possible to ignore existing concepts and to create new concepts, out of the structure of the problem itself, which do correspond correctly to what he calls the subsystems of the adaptive process. By treating each of these subsystems as a separate subproblem, the designer can translate the new concepts into form. The form, because of the process, will be well-adapted to its context, non-arbitrary, and correct. The mathematics underlying this method, based mainly on set theory, is fully developed in a long appendix.
Another appendix demonstrates the application of the method to the design of an Indian village.
Experimental philosophy has emerged as a very specific kind of response to an equally specific way of thinking about philosophy, one typically associated with philosophical analysis and according to which philosophical claims are measured, at least in part, by our intuitions. Since experimental philosophy has emerged as a response to this way of thinking about philosophy, its philosophical significance depends, in no small part, on how significant the practice of appealing to intuitions is to philosophy. In this paper, I defend the significance of experimental philosophy by defending the significance of intuitions—in particular, by defending their significance from a recent challenge advanced by Timothy Williamson.
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic. (shrink)
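The idea of an acceptability measure that normalizes out length and lexical frequency can be illustrated with a SLOR-style score (syntactic log-odds ratio), one standard measure of this kind. This is a sketch under stated assumptions, not necessarily the exact measure the paper uses; the toy log-probabilities are invented for illustration.

```python
def slor(logprob_sentence, unigram_logprobs):
    """SLOR-style acceptability score: subtract the unigram
    (lexical-frequency) baseline from the sentence's log-probability
    under the language model, then divide by sentence length. This
    removes the two confounds mentioned above: long sentences and rare
    words both lower raw probability without lowering acceptability.

    logprob_sentence: log P(sentence) under the language model.
    unigram_logprobs: per-word log-probabilities under a unigram model.
    """
    length = len(unigram_logprobs)
    return (logprob_sentence - sum(unigram_logprobs)) / length

# Toy numbers: two sentences with the same raw model log-probability, but
# the second consists of rarer words, so its normalized score is higher.
common_words = [-2.0, -2.0, -2.0]  # unigram log-probs of frequent words
rare_words = [-5.0, -5.0, -5.0]    # unigram log-probs of rare words

print(slor(-9.0, common_words))  # -1.0
print(slor(-9.0, rare_words))    # 2.0
```

The point of the toy example: raw probability alone would rate the two sentences identically, while the normalized measure credits the rare-word sentence for being more probable than its lexical content predicts.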
Philosophical discussions often involve appeals to verdicts about particular cases, sometimes actual, more often hypothetical, and usually with little or no substantive argument in their defense. Philosophers — on both sides of debates over the standing of this practice — have often called the basis for such appeals ‘intuitions’. But, what might such ‘intuitions’ be, such that they could legitimately serve these purposes? Answers vary, ranging from ‘thin’ conceptions that identify intuitions as merely instances of some fairly generic and epistemologically uncontroversial category of mental states or episodes to ‘thick’ conceptions that add to this thin base certain semantic, phenomenological, etiological, or methodological conditions. As this chapter discusses, thick conceptions turn out to have their own methodological problems; some may even leave philosophers in the methodologically untenable position of being unable to determine when anyone is doing philosophy correctly.
In the recent literature on causal and non-causal scientific explanations, there is an intuitive assumption according to which an explanation is non-causal by virtue of being abstract. In this context, to be ‘abstract’ means that the explanans in question leaves out many or almost all causal microphysical details of the target system. After motivating this assumption, we argue that the abstractness assumption, in placing the abstract and the causal character of an explanation in tension, is misguided in ways that are independent of which view of causation or causal explanation one takes to be most accurate. On major accounts of causation, as well as on major accounts of causal explanation, the abstractness of an explanation is not sufficient for it being non-causal. That is, explanations are not non-causal by dint of being abstract.
Multiarm bandit problems have been used to model the selection of competing scientific theories by boundedly rational agents. In this paper, I define a variable-arm bandit problem, which allows the set of scientific theories to vary over time. I show that Roth-Erev reinforcement learning, which solves multiarm bandit problems in the limit, cannot solve this problem in a reasonable time. However, social learning via preferential attachment, combined with individual reinforcement learning that discounts the past, does.
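For readers unfamiliar with the learning rule at issue: in Roth-Erev learning, each arm carries a cumulative weight, arms are chosen with probability proportional to weight, and the received payoff is added to the chosen arm's weight. A minimal sketch with an optional past-discounting parameter (the function and parameter names here are illustrative, not the paper's):

```python
import random

def roth_erev_step(weights, payoff_fn, discount=0.0, rng=random):
    """One round of Roth-Erev reinforcement learning over a dict of
    arm -> weight. Chooses an arm with probability proportional to
    its weight, optionally discounts all weights (forgetting the
    past), then adds the realized payoff to the chosen arm."""
    arms = list(weights)
    total = sum(weights.values())
    r = rng.random() * total
    choice = arms[-1]              # fallback guards float rounding
    acc = 0.0
    for arm in arms:
        acc += weights[arm]
        if r < acc:
            choice = arm
            break
    payoff = payoff_fn(choice)
    for arm in arms:
        weights[arm] *= (1.0 - discount)   # discount the past
    weights[choice] += payoff              # reinforce the choice
    return choice, payoff
```

With `discount=0.0` this is the basic cumulative rule; a positive discount lets the learner track a changing set of good arms, which is the property the paper's variable-arm setting demands.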
In the past two decades, reinforcement learning (RL) has become a popular framework for understanding brain function. A key component of RL models, prediction error, has been associated with neural signals throughout the brain, including subcortical nuclei, primary sensory cortices, and prefrontal cortex. Depending on the location in which activity is observed, the functional interpretation of prediction error may change: prediction errors may reflect a discrepancy between the anticipated and actual value of reward, a signal indicating the salience or novelty of a stimulus, or many other quantities. Anterior cingulate cortex has long been recognized as a region involved in processing behavioral error, and recent computational models of the region have expanded this interpretation to include a more general role for the region in predicting likely events, broadly construed, and signaling deviations between expected and observed events. Ongoing modeling work investigating the interaction between ACC and additional regions involved in cognitive control suggests an even broader role for cingulate in computing a hierarchically structured surprise signal critical for learning models of the environment. The result is a predictive coding model of the frontal lobes, suggesting that predictive coding may be a unifying computational principle across the neocortex.
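The prediction-error computation at the core of such RL models can be written as a one-line value update, shown here in its generic Rescorla-Wagner/temporal-difference form (a textbook sketch, not the model of any particular study discussed above):

```python
def update_value(value, reward, alpha=0.1):
    """Generic RL value update: the prediction error is the gap
    between received and expected reward, and the expectation moves
    a fraction alpha of the way toward the observation."""
    delta = reward - value        # prediction error
    return value + alpha * delta  # updated expectation

# Repeated reward of 1.0 drives the expectation toward 1.0,
# and the prediction error shrinks toward zero.
v = 0.0
for _ in range(100):
    v = update_value(v, 1.0, alpha=0.1)
print(round(v, 3))  # → 1.0
```

It is this scalar `delta`, recorded in different regions, that receives the differing interpretations (reward value, salience, novelty) the abstract describes.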
Edouard Machery argues that many traditional philosophical questions are beyond our capacity to answer. Answering them seems to require using the method of cases, a method that involves testing answers to philosophical questions against what we think about real or imagined cases. The problem, according to Machery, is that this method has proved unreliable; what we think about these kinds of cases is both problematically heterogeneous and volatile. His bold solution: abandon the method of cases altogether and with it many of the questions that we have come to associate with philosophy itself. Many of the critical responses to Machery’s book have focused on whether empirical work on judgments about philosophical cases supports his claim that the method of cases is unreliable. Our problem with these responses is that they accept that reliability is the right way to frame empirically informed concerns about the method of cases, and we think that it is not. The reason is simple: the kind of unreliability thesis that Machery needs proves to be empirically intractable, at least by anything like the current methods used by experimental philosophers, or so we shall argue here. While we have empirical grounds for thinking that unreliability arguments don’t give us reason to abandon the method of cases, we do think that there are empirical grounds for thinking that it needs to be reformed. There are other standards that we expect our methods to meet beyond mere reliability, especially standards of practical rationality, which are too often forgotten in metaphilosophical discussions that tend to focus exclusively on epistemological considerations. Methodological considerations, after all, are not just matters of epistemic normativity, but practical rationality as well.
What’s more, considerations of practical rationality become particularly important when we move from the kind of extreme scepticism that Machery endorses to the kind of progressive reformation that we think should be pursued. And so we conclude by arguing that thinking about philosophical inquiry in terms of standards of practical rationality allows us both to better understand what kinds of problems recent empirical work on philosophical cognition raises for the method of cases and to see how that work can point the way to reforming it.
The medial prefrontal cortex (mPFC) has been the subject of intense interest as a locus of cognitive control. Several computational models have been proposed to account for a range of effects, including error detection, conflict monitoring, error likelihood prediction, and numerous other effects observed with single-unit neurophysiology, fMRI, and lesion studies. Here, we review the state of computational models of cognitive control and offer a new theoretical synthesis of the mPFC as signaling response–outcome predictions. This new synthesis has two interacting components. The first component learns to predict the various possible outcomes of a planned action, and the second component detects discrepancies between the actual and intended responses; the detected discrepancies in turn update the outcome predictions. This single construct is consistent with a wide array of performance monitoring effects in mPFC and suggests a unifying account of the cognitive role of medial PFC in performance monitoring.
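The two interacting components described in this synthesis, a predictor that learns outcome expectations per planned action and a comparator whose discrepancy signal updates those expectations, can be rendered as a toy loop. This is an entirely illustrative sketch, not the published model; the class name, learning rate, and scalar outcome coding are assumptions:

```python
class ResponseOutcomeMonitor:
    """Toy two-component monitor: `predict` holds learned outcome
    expectations for each action; `observe` plays the comparator,
    returning the discrepancy and using it to update the prediction."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.predictions = {}   # action -> expected outcome value

    def predict(self, action):
        return self.predictions.get(action, 0.0)

    def observe(self, action, outcome):
        expected = self.predict(action)
        discrepancy = outcome - expected          # comparator signal
        self.predictions[action] = expected + self.alpha * discrepancy
        return discrepancy
```

Repeatedly observing the same action-outcome pair drives the discrepancy toward zero, which is the sense in which the construct unifies error detection with outcome prediction.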
A number of philosophers have recently claimed that unjustified beliefs can be defeaters. However, these claims have been made in passing, occurring in the context of defenses of other theses. As a result, the claim that unjustified beliefs can be defeaters has been neither vigorously defended nor thoroughly explained. This paper fills that gap. It begins by identifying problems with the two most in-depth accounts of the possibility of unjustified defeaters, due to Bergmann and Pryor. It then offers a revised version of Pryor’s account. On this proposal, an unjustified belief can be a defeater if it is rational, all things considered. If a belief is rational, all things considered, it can require one to abandon other beliefs with which it conflicts—even if it is unjustified. Finally, this paper shows that the proposed account of unjustified defeaters is one that can and should be embraced by leading accounts of justified belief as diverse as reliabilism and evidentialism.
A model for inventing new signals is introduced in the context of sender–receiver games with reinforcement learning. If the invention parameter is set to zero, it reduces to basic Roth–Erev learning applied to acts rather than strategies, as in Argiento et al. If every act is uniformly reinforced in every state, it reduces to the Chinese Restaurant Process (also known as the Hoppe–Pólya urn) applied to each act. The dynamics can move players from one signaling game to another during the learning process. Invention helps agents avoid pooling and partial pooling equilibria.
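The urn scheme behind this invention mechanism is simple to state: each existing signal is drawn with probability proportional to its accumulated weight, while a brand-new signal is minted with probability proportional to a fixed invention weight (the "black ball" of the Hoppe–Pólya urn). A hedged sketch, with illustrative names and parameters rather than the paper's:

```python
import random

def draw_signal(weights, invention=1.0, rng=random):
    """One Hoppe-Polya urn draw over a dict of signal -> weight.
    Existing signals are chosen with probability proportional to
    weight; with probability invention / (invention + total) a fresh
    signal is minted instead. Returns (signal, was_invented)."""
    total = sum(weights.values())
    r = rng.random() * (total + invention)
    acc = 0.0
    for sig, w in weights.items():
        acc += w
        if r < acc:
            return sig, False          # reuse an existing signal
    new_sig = f"s{len(weights)}"       # mint a fresh signal
    weights[new_sig] = 0.0             # caller reinforces it on success
    return new_sig, True

# With invention=0 and a nonempty urn, this is ordinary
# weight-proportional (Roth-Erev style) choice over existing signals.
```

As reinforcement accumulates, `total` grows and the invention probability `invention / (invention + total)` falls, so novelty is frequent early and rare once a signaling convention has taken hold.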