What is morality? Where does it come from? And why do most of us heed its call most of the time? In Braintrust, neurophilosophy pioneer Patricia Churchland argues that morality originates in the biology of the brain. She describes the "neurobiological platform of bonding" that, modified by evolutionary pressures and cultural values, has led to human styles of moral behavior. The result is a provocative genealogy of morals that asks us to reevaluate the priority given to religion, absolute rules, and pure reason in accounting for the basis of morality. Moral values, Churchland argues, are rooted in a behavior common to all mammals--the caring for offspring. The evolved structure, processes, and chemistry of the brain incline humans to strive not only for self-preservation but for the well-being of allied selves--first offspring, then mates, kin, and so on, in wider and wider "caring" circles. Separation and exclusion cause pain, and the company of loved ones causes pleasure; responding to feelings of social pain and pleasure, brains adjust their circuitry to local customs. In this way, caring is apportioned, conscience molded, and moral intuitions instilled. A key part of the story is oxytocin, an ancient body-and-brain molecule that, by decreasing the stress response, allows humans to develop the trust in one another necessary for the development of close-knit ties, social institutions, and morality. A major new account of what really makes us moral, Braintrust challenges us to reconsider the origins of some of our most cherished values.
Any domain of scientific research has its sustaining orthodoxy. That is, research on a problem, whether in astronomy, physics, or biology, is conducted against a backdrop of broadly shared assumptions. It is these assumptions that guide inquiry and provide the canon of what is reasonable--of what "makes sense." And it is these shared assumptions that constitute a framework for the interpretation of research results. Research on the problem of how we see is likewise sustained by broadly shared assumptions, where the current orthodoxy embraces the very general idea that the business of the visual system is to create a detailed replica of the visual world, and that it accomplishes its business via hierarchical organization and by operating essentially independently of other sensory modalities as well as independently of previous learning, goals, motor planning, and motor execution.
We critically review the mushrooming literature addressing the neural mechanisms of moral cognition (NMMC), reaching the following broad conclusions: (1) Research mainly focuses on three inter-related categories: the moral emotions, moral social cognition, and abstract moral reasoning. (2) Research varies in terms of whether it deploys ecologically valid or experimentally simplified conceptions of moral cognition. The more ecologically valid the experimental regime, the broader the brain areas involved. (3) Much of the research depends on simplifying assumptions about the domain of moral reasoning that are motivated by the need to make experimental progress. This is a valuable beginning, but as more is understood about the neural mechanisms of decision-making, more realistic conceptions will need to replace the simplified conceptions. (4) The neural correlates of real-life moral cognition are unlikely to consist in anything remotely like a "moral module" or a "morality center." Moral representations, deliberations, and decisions are probably highly distributed and not confined to any particular brain sub-system. Discovering the basic neural principles governing planning, judgment, and decision-making will require vastly more basic research in neuroscience, but correlating activity in certain brain regions with well-defined psychological conditions helps guide neural-level research. Progress on social phenomena will also require theoretical innovation in understanding the brain's distinctly biological form of computation that is anchored by emotions, needs, drives, and the instinct for survival.
States of the brain represent states of the world. A puzzle arises when one learns that at least some of the mind/brain’s internal representations, such as a sensation of heat or a sensation of red, do not genuinely resemble the external realities they allegedly represent: the mean kinetic energy of the molecules of the substance felt (temperature) and the mean electromagnetic reflectance profile of the seen object (color). The historical response has been to declare a distinction between objectively real properties, such as shape, motion, and mass, and merely subjective properties, such as heat, color, and smell. This hypothesis leads to trouble. A challenge for cognitive neurobiology is to characterize, in suitably general terms, the nature of the relationship between brain models and the world modeled. We favor the hypothesis that brains develop high-dimensional maps whose internal relations correspond in varying degrees of fidelity to the enduring causal structure of the world. From this perspective, the basic epistemological relation is not “single-percept to single-external-feature” but rather “background-brain-maps to causal-domain-portrayed.”
What we humans call ethics or morality depends on four interlocking brain processes: (1) caring; (2) learning local social practices and the ways of others--by positive and negative reinforcement, by imitation, by trial and error, by various kinds of conditioning, and by analogy; (3) recognition of others' psychological states; and (4) problem-solving in a social context. These four broad capacities are not unique to humans, but are probably uniquely developed in human brains by virtue of the expansion of the prefrontal cortex.
Within the domain of philosophy, it is not unusual to hear the claim that most questions about the nature of consciousness are essentially and absolutely beyond the scope of science, no matter how science may develop in the twenty-first century. Some things, it is pointed out, we shall never _ever_ understand, and consciousness is one of them (Vendler 1994, Swinburne 1994, McGinn 1989, Nagel 1994, Warner 1994). One line of reasoning assumes that consciousness is the manifestation of a distinctly nonphysical thing, and hence has no physical properties that might be explored by techniques suitable to physical things. Dualism, as this view is known, is still to be found among those within the tradition of Kant and Hegel, as well as among some with religious convictions. Surprisingly, however, strenuous foot-dragging is evident even among philosophers of a materialist conviction. Indeed, one might say that it is the philosophical fashion of the 90's to pronounce consciousness unexplainable, and to find the explanatory aspirations of neurobiology to be faintly comic if not rather pitiful. The very word "reductionism" has come to be used more or less synonymously with "benighted-scientism-run-amok," where scientism apparently means "applying scientific techniques to domains where they are inapplicable." McGinn, perhaps the most unblushing of the naysayers, insists that we cannot expect even to make any headway on the problem. (p. 114) Ironically perhaps, here we are at a conference in honor of Dr. Herbert Jasper, who was a great pioneer in moving neuroscience forward on this problem, and where results will be presented allegedly _showing_ additional progress on the problem. Because I am quite optimistic about future scientific progress on the nature of consciousness, my aim here, as a philosopher, is to address the most popular and influential of the skeptical arguments, and to explain why I find them unconvincing.
Thus the overall form of the paper is negative, in the sense that I want to show why a set of naysaying arguments fail.
Two very different insights motivate characterizing the brain as a computer. One depends on mathematical theory that defines computability in a highly abstract sense. Here the foundational idea is that of a Turing machine. Not an actual machine, the Turing machine is really a conceptual way of making the point that any well-defined function could be executed, step by step, according to simple 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rules, given enough time (maybe infinite time) [see COMPUTATION]. Insofar as the brain is a device whose input and output can be characterized in terms of some mathematical function -- however complicated -- then in that very abstract sense, it can be mimicked by a Turing machine. Given what is known so far, brains do seem to depend on cause-effect operations, and hence brains appear to be, in some formal sense, equivalent to a Turing machine [see CHURCH-TURING THESIS]. On its own, however, this reveals nothing at all of how the mind-brain actually works. The second insight depends on looking at the brain as a biological device that processes information from the environment to build complex representations that enable the brain to make predictions and select advantageous behaviors. Where necessary to avoid ambiguity, we will refer to the first notion of computation as.
Toward a neurobiologically grounded approach to explaining self-control, we discuss the case of a patient with a bilateral lesion in frontal ventromedial cortex. Patients with such lesions display a marked deficit in social decision making. Compared with an account that examines the causal antecedents of self-control, Rachlin's behaviorist approach seems lacking in explanatory strength.