Understanding consciousness is a truly multidisciplinary project, attracting intense interest from researchers and theorists from diverse backgrounds. Thus, we now have computational scientists, neuroscientists, and philosophers all engaged in the same effort. This book draws together the work of leading researchers around the world, providing insights from these three general perspectives. The volume is highlighted by a rare look at research being conducted in Japan.
The primary aim of this study is to dissolve the mind-body problem. It shows how the ‘problem’ separates into two distinct sets of issues, concerning ontology on the one hand, and explanation on the other, and argues that explanation – whether or not human behaviour can be explained in physical terms – is the more crucial. The author contends that a functionalist methodology in psychology and neurophysiology will prove adequate to explain human behaviour. Defence of this thesis requires: an examination of the mental/physical dichotomy, and its rejection in favour of a distinction between psychological and physical terms; a description and discussion of functionalism in psychology and neurophysiology, showing how the notorious problem of the necessary intensionality of psychological terms may be circumvented; an examination of the role of computer simulation in psycho-physical research; and an explanation of how the phenomena of sentience fit the functional framework. The book concludes that the thesis presented is in all essentials that of Aristotle; Aristotle had no ‘mind-body problem’, and were it not for a subsequent over-obsession with Cartesian scepticism, we need not have had one either.
This collection by a distinguished group of philosophers, psychologists, and physiologists reflects an interdisciplinary approach to the central question of cognitive science: how do we model the mind? Among the topics explored are the relationships (theoretical, reductive, and explanatory) between philosophy, psychology, computer science, and physiology; what should be asked of models in science generally, and in cognitive science in particular; whether theoretical models must make essential reference to objects in the environment; whether there are human competences that are resistant, in principle, to modelling; whether simulated thinking and intentionality are really thinking and intentionality; how semantics can be generated from syntactics; the meaning of the terms "representations" and "modelling"; whether the nature of the "hardware" matters; and whether computer models of humans are "dehumanizing." Contributors include Donald Davidson, Daniel C. Dennett, Margaret A. Boden, Adam Morton, Dennis Noble, T. Poggio, Colin Blakemore, K.V. Wilkes, P.N. Johnson-Laird, and Jonathan St. B.T. Evans.
Ranging from Joseph Bellamy to Hilary Putnam, and from early New England Divinity Schools to contemporary university philosophy departments, historian Bruce Kuklick recounts the story of the growth of philosophical thinking in the United States. Readers will explore the thought of early American philosophers such as Jonathan Edwards and John Witherspoon and will see how the political ideas of Benjamin Franklin, Thomas Paine, and Thomas Jefferson influenced philosophy in colonial America. Kuklick discusses The Transcendental Club (members Henry David Thoreau, Ralph Waldo Emerson) and describes the rise of pragmatism centered on the Metaphysical Club of Cambridge (members William James, Oliver Wendell Holmes, and Charles Peirce). He examines the profound impact Darwinism had on American philosophy and looks at Idealists such as the Kantian Josiah Royce and the Hegelian John Dewey. The book shows how, in the twentieth century, the Nazi conquest of Europe unleashed a flood of European intellectuals onto these shores, including such major thinkers as Theodor Adorno, Erich Fromm, Rudolf Carnap, and Alfred Tarski. Finally, Kuklick examines the contributions of such contemporary philosophers as Sidney Hook and Willard Quine and such books as John Rawls's A Theory of Justice and Herbert Marcuse's One Dimensional Man. Kuklick pulls no punches in portraying the state of American philosophy today and its contested role in the intellectual life of the nation and the world. The range of philosophical thought in our nation's history has been great, from Edwards's Religious Affections to Kuhn's The Structure of Scientific Revolutions, and Bruce Kuklick has captured it all in a book that blends intricate details with sweeping vision.
The Other Freud undertakes an exciting and original analysis of Freud's major writings on religion and culture. James DiCenso suggests that Freud's texts on religion are unjustifiably ignored or taken for granted, and he shows that Freud's commentaries on religion are rich, multifaceted texts that deserve far more attention. Using concepts derived primarily from Jacques Lacan and Julia Kristeva, DiCenso draws an unparalleled critical portrait of the "other Freud". This book is rich with new ideas and fresh interpretations.
This volume is a direct result of a conference held at Princeton University to honor George A. Miller, an extraordinary psychologist. A distinguished panel of speakers from various disciplines -- psychology, philosophy, neuroscience, and artificial intelligence -- was challenged to respond to Dr. Miller's query: "What has happened to cognition? In other words, what has the past 30 years contributed to our understanding of the mind? Do we really know anything that wasn't already clear to William James?" Each participant tried to stand back a little from his or her most recent work and to address the general question from his or her particular standpoint. The chapters in the present volume derive from that occasion.
The essays in this collection are concerned with the psychology of moral agency. They focus on moral feelings and moral motivation, and seek to understand the operations and origins of these phenomena as rooted in the natural desires and emotions of human beings. An important feature of the essays, and one that distinguishes the book from most philosophical work in moral psychology, is the attention to the writings of Freud. Many of the essays draw on Freud's ideas about conscience and morality, while several explore the depths and limits of Freud's theories. An underlying theme of the volume is a critique of influential rationalist accounts of moral agency. John Deigh shows that one can subject the principles of morality to rational inquiry without thereby holding that reason alone can originate action.
Theories of Theories of Mind brings together contributions by a distinguished international team of philosophers, psychologists, and primatologists, who between them address such questions as: what is it to understand the thoughts, feelings, and intentions of other people? How does such an understanding develop in the normal child? Why, unusually, does it fail to develop? And is any such mentalistic understanding shared by members of other species? The volume's four parts together offer a state-of-the-art survey of the major topics in the theory-theory/simulationism debate within philosophy of mind, developmental psychology, the aetiology of autism, and primatology. The volume will be of great interest to researchers and students in all areas touched by the 'theory of mind' debate.
The Character of Mind provides a sweeping and accessible general introduction to the philosophy of mind. Colin McGinn covers all of the main topics--the mind-body problem, the nature of acquaintance, the relation between thought and language, agency, and the self. In particular, McGinn addresses the issue of consciousness, and the difficulty of combining the two very different perspectives on the mind that arise from introspection and from the observation of other people. This second edition has been updated with three new cutting-edge chapters on consciousness, content, and cognitive science to make it the text of choice on this vital topic.
This book attempts to marry truth-conditional semantics with cognitive linguistics in the church of computational neuroscience. To this end, it examines the truth-conditional meanings of coordinators, quantifiers, and collective predicates as neurophysiological phenomena that are amenable to a neurocomputational analysis. Drawing inspiration from work on visual processing, and especially the simple/complex cell distinction in early vision (V1), we claim that a similar two-layer architecture is sufficient to learn the truth-conditional meanings of the logical coordinators and logical quantifiers. As a prerequisite, much discussion is given over to what a neurologically plausible representation of the meanings of these items would look like. We eventually settle on a representation in terms of correlation, so that, for instance, the semantic input to the universal operators (e.g. 'and', 'all') is represented as maximally correlated, while the semantic input to the universal negative operators (e.g. 'nor', 'no') is represented as maximally anticorrelated. On the basis of this representation, the hypothesis can be offered that the function of the logical operators is to extract an invariant feature from natural situations, namely the degree of correlation between parts of the situation. This result sets up an elegant formal analogy to recent models of visual processing, which argue that the function of early vision is to reduce the redundancy inherent in natural images. Computational simulations are designed in which the logical operators are learned by associating their phonological form with some degree of correlation in the inputs, so that the overall function of the system is a simple kind of pattern recognition. Several learning rules are assayed, especially those of the Hebbian sort, which are the ones with the most neurological support. Learning vector quantization (LVQ) is shown to be a perspicuous and efficient means of learning the patterns that are of interest.
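The correlation-based representation can be illustrated with a minimal sketch. This is not the book's MATLAB code; Pearson correlation is assumed here as the measure of correlation, and the binary "situation parts" are illustrative:

```python
import numpy as np

def corr(a, b):
    """Pearson correlation between two parts of a situation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

p = [1.0, 0.0, 1.0, 1.0, 0.0]
q = [1.0 - v for v in p]  # the complement of p

# Maximally correlated halves, as for the universal operators 'and'/'all':
print(corr(p, p))
# Maximally anticorrelated halves, as for the negatives 'nor'/'no':
print(corr(p, q))
```

On this picture, the invariant feature a logical operator extracts is simply where the input falls on the correlation axis from +1 to -1.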
We draw a formal parallelism between the initial, competitive layer of LVQ and the simple cell layer in V1, and between the final, linear layer of LVQ and the complex cell layer in V1: the initial layers are both selective, while the final layers both generalize. It is also shown how the representations argued for can be used to draw the traditionally recognized inferences arising from coordination and quantification, and why the inference of subalternacy breaks down for collective predicates. Finally, the analogies between early vision and the logical operators allow us to advance the claim of cognitive linguistics that language is not processed by proprietary algorithms, but rather by algorithms that are general to the entire brain. Thus in the debate between objectivist and experiential metaphysics, this book falls squarely into the camp of the latter. Yet it does so by means of a rigorous formal, mathematical, and neurological exposition – in contradiction of the experiential claim that formal analysis has no place in the understanding of cognition. To make our own counter-claim as explicit as possible, we present a sketch of the LVQ structure in terms of mereotopology, in which the initial layer of the network performs topological operations, while the final layer performs mereological operations. The book is meant to be self-contained, in the sense that it does not assume any prior knowledge of any of the many areas that are touched upon. It therefore contains mini-summaries of: biological visual processing, especially the retinocortical and ventral ('what') parvocellular pathways; computational models of neural signaling, in particular the reduction of the Hodgkin-Huxley equations to the connectionist and integrate-and-fire neurons; Hebbian learning rules and the elaboration of learning vector quantization; the linguistic pathway in the left hemisphere; memory and the hippocampus; truth-conditional vs. image-schematic semantics; and objectivist vs. experiential metaphysics and mereotopology. All of the simulations are implemented in MATLAB, and the code is available from the book’s website. Its principal contributions are: • the discovery of several algorithmic similarities between vision and semantics; • the support of these claims by means of simulations; and • the packaging of all of this in a coherent theoretical framework.
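The two-layer LVQ scheme described above can be sketched as follows. This is a hypothetical Python reconstruction, not the book's MATLAB code: the two-dimensional inputs, prototype counts, and learning rate are illustrative assumptions. The nearest-prototype search plays the role of the selective, competitive layer, and reading out the winner's class label stands in for the final, generalizing layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(correlated):
    """A two-part input whose halves are maximally correlated
    (as for 'and'/'all') or maximally anticorrelated ('nor'/'no')."""
    x = rng.uniform(0.0, 1.0)
    return np.array([x, x]) if correlated else np.array([x, 1.0 - x])

# Four prototypes per class, initialised from class samples
# (labels: 1 = correlated, 0 = anticorrelated).
protos = np.array([sample(True) for _ in range(4)] +
                  [sample(False) for _ in range(4)])
labels = np.array([1] * 4 + [0] * 4)

# LVQ1 training: the competitive layer picks the nearest prototype,
# which moves toward the sample if its class matches, away otherwise.
lr = 0.05
for _ in range(3000):
    cls = int(rng.integers(0, 2))
    s = sample(bool(cls))
    w = int(np.argmin(((protos - s) ** 2).sum(axis=1)))
    step = lr * (s - protos[w])
    protos[w] += step if labels[w] == cls else -step

def classify(s):
    """Generalizing read-out: report the winning prototype's label."""
    return labels[np.argmin(((protos - s) ** 2).sum(axis=1))]

correct = sum(classify(sample(bool(c))) == c
              for c in rng.integers(0, 2, 200))
print(correct / 200)  # held-out accuracy
```

The prototypes come to tile the two correlation regimes, so classifying a fresh input amounts to recognizing which degree of correlation it instantiates, which is the pattern-recognition role the abstract assigns to the logical operators.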