In this book, Michael Arbib, a researcher in artificial intelligence and brain theory, joins forces with Mary Hesse, a philosopher of science, to present an integrated account of how humans 'construct' reality through interaction with the social and physical world around them. The book is a major expansion of the Gifford Lectures delivered by the authors at the University of Edinburgh in the autumn of 1983. The authors reconcile a theory of the individual's construction of reality as a network of schemas 'in the head' with an account of the social construction of language, science, ideology and religion to provide an integrated schema-theoretic view of human knowledge. The authors still find scope for lively debate, particularly in their discussion of free will and of the reality of God. The book integrates an accessible exposition of background information with a cumulative marshalling of evidence to address fundamental questions concerning human action in the world and the nature of ultimate reality.
This article is concerned with developing a philosophical approach to a number of significant changes to academic publishing, and specifically to the global journal knowledge system, wrought by a range of new digital technologies that herald the third age of the journal as an electronic, interactive and mixed-media form of scientific communication. The paper emerges from an Editors' Collective, a small New Zealand-based organisation comprising editors and reviewers of academic journals, mostly in the fields of education and philosophy. The paper is the result of a collective writing process.
Largely due to the popular allegation that contemporary science has uncovered indeterminism in the deepest known levels of physical reality, the debate as to whether humans have moral freedom, the sort of freedom on which moral responsibility depends, has to some extent put aside the traditional worry over whether determinism is true. As I argue in this paper, however, there are powerful proofs for both chronological determinism and necessitarianism, forms of determinism that pose the most penetrating threat to human moral freedom. My ultimate hope is to show that, despite the robust case against human moral freedom that can be made without even relying on them, chronological determinism and necessitarianism should be regarded with renewed urgency.
Management theory and practice are facing unprecedented challenges. The lack of sustainability, the increasing inequity, and the continuous decline in societal trust pose a threat to ‘business as usual’. Capitalism is at a crossroads, and scholars, practitioners, and policy makers are called to rethink business strategy in light of major external changes. In the following, we review an alternative view of human beings that is based on a renewed Darwinian theory developed by Lawrence and Nohria. We label this alternative view ‘humanistic’ and distinguish it from current ‘economistic’ conceptions. We then develop the consequences that this humanistic view has for business organizations, examining business strategy, governance structures, leadership forms, and organizational culture. Afterward, we outline the influences of humanism on management in the past and the present, and suggest options for humanism to shape the future of management. In this manner, we contribute to the discussion of alternative management paradigms that help solve the current crises.
Bishop and Trout here present a unique and provocative new approach to epistemology. Their approach aims to liberate epistemology from the scholastic debates of standard analytic epistemology and to treat it as a branch of the philosophy of science. The approach is novel in its use of cost-benefit analysis to guide people facing real reasoning problems and in its framework for resolving normative disputes in psychology. Based on empirical data, Bishop and Trout show how people can improve their reasoning by relying on Statistical Prediction Rules. They then develop and articulate the positive core of the book. Their view, Strategic Reliabilism, claims that epistemic excellence consists in the efficient allocation of cognitive resources to reliable reasoning strategies, applied to significant problems. The last third of the book develops the implications of this view for standard analytic epistemology; for resolving normative disputes in psychology; and for offering practical, concrete advice on how this theory can improve real people's reasoning. This is a truly distinctive and controversial work that spans many disciplines and will speak to an unusually diverse group, including people in epistemology, philosophy of science, decision theory, cognitive and clinical psychology, and ethics and public policy.
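To give the flavor of a Statistical Prediction Rule, here is a minimal sketch, not taken from the book: a unit-weighted linear rule in the spirit of Dawes's 'improper linear models', which standardizes each cue and simply adds the results. The cues, cases, and application (graduate admissions) are all hypothetical.

```python
from statistics import mean, stdev

# Hypothetical cues for predicting graduate-school success.
# Cue names and values are invented for illustration.
applicants = [
    {"gpa": 3.9, "test_score": 165, "rated_fit": 4.0},
    {"gpa": 3.2, "test_score": 155, "rated_fit": 4.5},
    {"gpa": 3.6, "test_score": 170, "rated_fit": 3.0},
]

def z_scores(values):
    """Standardize raw cue values to zero mean and unit variance."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Unit-weighted SPR: standardize each cue, then add the z-scores.
# No expert tuning of weights is involved; the reliability comes
# from the cues, not from cleverness in combining them.
cues = list(applicants[0])
standardized = {c: z_scores([a[c] for a in applicants]) for c in cues}
scores = [sum(standardized[c][i] for c in cues) for i in range(len(applicants))]

for applicant, score in zip(applicants, scores):
    print(applicant, round(score, 2))
```

Rules of roughly this form are the kind the book reports as matching or beating unaided expert judgment on many prediction tasks.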
Here, we argue that any neurobiological theory based on an experience/function division cannot be empirically confirmed or falsified and is thus outside the scope of science. A ‘perfect experiment’ illustrates this point, highlighting the unbreachable boundaries of the scientific study of consciousness. We describe a more nuanced notion of cognitive access that captures personal experience without positing the existence of inaccessible conscious states. Finally, we discuss the criteria necessary for forming and testing a falsifiable theory of consciousness.
When contrasted with "Continental" philosophy, analytical philosophy is often called "Anglo-American." Dummett argues that "Anglo-Austrian" would be a more accurate label. By re-examining the similar origins of the two traditions, we can come to understand why they later diverged so widely, and thus take the first step toward reconciliation.
Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to two great questions: How does the brain work? and How can we build intelligent machines? While many books have appeared on limited aspects of one subfield or another of brain theory and neural networks, the Handbook covers the entire sweep of topics—from detailed models of single neurons, analyses of a wide variety of biological neural networks, and connectionist studies of psychology and language, to mathematical analyses of a variety of abstract neural networks, and technological applications of adaptive, artificial neural networks. The excitement, and the frustration, of these topics is that they span such a broad range of disciplines, including mathematics, statistical physics and chemistry, neurology and neurobiology, and computer science and electrical engineering, as well as cognitive psychology, artificial intelligence, and philosophy. Thus, much effort has gone into making the Handbook accessible to readers with varied backgrounds while still providing a clear view of much of the recent, specialized research in specific topics. The heart of the book, Part III, comprises 267 original articles by leaders in the various fields, arranged alphabetically by title. Parts I and II, written by the editor, are designed to help readers orient themselves to this vast range of material. Part I, Background, introduces several basic neural models, explains how the present study of brain theory and neural networks integrates brain theory, artificial intelligence, and cognitive psychology, and provides a tutorial on the concepts essential for understanding neural networks as dynamic, adaptive systems. Part II, Road Maps, provides entry into the many articles of Part III through an introductory "Meta-Map" and twenty-three road maps, each of which tours all the Part III articles on the chosen theme.
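For a taste of the "detailed models of single neurons" surveyed in the Handbook, here is a minimal sketch of one textbook model, the leaky integrate-and-fire neuron, simulated with Euler's method; the parameter values are illustrative and not drawn from any Handbook article.

```python
# Leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + R*I,
# with a spike and reset whenever V crosses threshold.
# All parameter values are illustrative.
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_reset = -70.0   # post-spike reset potential (mV)
v_thresh = -50.0  # spike threshold (mV)
r = 10.0          # membrane resistance (MOhm)
current = 2.0     # constant input current (nA)
dt, t_max = 0.1, 100.0  # time step and duration (ms)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    dv = (-(v - v_rest) + r * current) / tau  # Euler step
    v += dv * dt
    if v >= v_thresh:                 # threshold crossing
        spike_times.append(step * dt)
        v = v_reset                   # reset after spike

if spike_times:
    print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")
else:
    print("no spikes")
```

Even this crude model exhibits the dynamic, adaptive character that Part I's tutorial treats as essential to understanding neural networks as systems.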
The article analyzes the neural and functional grounding of language skills as well as their emergence in hominid evolution, hypothesizing stages leading from abilities known to exist in monkeys and apes and presumed to exist in our hominid ancestors right through to modern spoken and signed languages. The starting point is the observation that both premotor area F5 in monkeys and Broca's area in humans contain a "mirror system" active for both execution and observation of manual actions, and that F5 and Broca's area are homologous brain regions. This grounded the mirror system hypothesis of Rizzolatti and Arbib (1998), which offers the mirror system for grasping as a key neural "missing link" between the abilities of our nonhuman ancestors of 20 million years ago and modern human language, with manual gestures rather than a system for vocal communication providing the initial seed for this evolutionary process. The present article, however, goes "beyond the mirror" to offer hypotheses on evolutionary changes within and outside the mirror systems which may have occurred to equip Homo sapiens with a language-ready brain. Crucial to the early stages of this progression is the mirror system for grasping and its extension to permit imitation. Imitation is seen as evolving from a so-called simple system such as that found in chimpanzees (which allows imitation of complex "object-oriented" sequences, but only as the result of extensive practice) to a so-called complex system found in humans (which allows rapid imitation even of complex sequences, under appropriate conditions) which supports pantomime. This is hypothesized to have provided the substrate for the development of protosign, a combinatorially open repertoire of manual gestures, which then provides the scaffolding for the emergence of protospeech (which thus owes little to nonhuman vocalizations), with protosign and protospeech then developing in an expanding spiral. It is argued that these stages involve biological evolution of both brain and body. By contrast, it is argued that the progression from protosign and protospeech to languages with full-blown syntax and compositional semantics was a historical phenomenon in the development of Homo sapiens, involving few if any further biological changes. Key Words: gestures; hominids; language evolution; mirror system; neurolinguistics; primates; protolanguage; sign language; speech; vocalization.
This paper, based on an invited Thesis Eleven presentation, provides a ‘map of technopolitics’ that springs from an investigation of the theoretical notion of technological convergence adopted by the US National Science Foundation, signaling a new paradigm of ‘nano-bio-info-cogno’ technologies. This integration at the nano-level is expected to drive the next wave of scientific research, technology and knowledge economy. The paper explores the concept of ‘technopolitics’ by investigating the links between Wittgenstein’s anti-scientism and Lyotard’s ‘technoscience’, reviewing the history of the notion in the work of the Belgian philosopher Gilbert Hottois. The ‘deep convergence’ representing a new technoscientific synergy is the product of long-term trends of ‘bioinformational capitalism’ that harnesses the twin forces of information and genetic sciences, which coalesce in the least mature ‘cognosciences’ in their application to education and research. The map of technopolitics systematically identifies the political relations between Big Tech and ‘new digital publics’ to reveal that the new paradigm is based on the supreme value of cognitive efficiency. A closely knit cluster of concerns frames a map of political issues about the fifth-generation technological impacts on human beings, their bodies and minds, and public institutions, not least the logic of the distribution and ownership of data, information and knowledge, and its effects on democracy.
Although our subjective impression is of a richly detailed visual world, numerous empirical results suggest that the amount of visual information observers can perceive and remember at any given moment is limited. How can our subjective impressions be reconciled with these objective observations? Here, we answer this question by arguing that, although we see more than the handful of objects claimed by prominent models of visual attention and working memory, we still see far less than we think we do. We argue that, taken together, these considerations resolve the apparent conflict between our subjective impressions and empirical data on visual capacity, while also illuminating the nature of the representations underlying perceptual experience.
Coalescent argumentation is a normative ideal that involves the joining together of two disparate claims through recognition and exploration of opposing positions. By uncovering the crucial connection between a claim and the attitudes, beliefs, feelings, values and needs to which it is connected, dispute partners are able to identify points of agreement and disagreement. These points can then be utilized to effect coalescence, a joining or merging of divergent positions, by forming the basis for a mutual investigation of non-conflictual options that might otherwise have remained unconsidered. The essay proceeds by defining and discussing ‘argument’, ‘position’ and ‘understanding’. These notions are then brought together to outline the concept of coalescent reasoning.
Research on implicit learning - a cognitive phenomenon in which people acquire knowledge without conscious intent or awareness - has been growing exponentially. This volume draws together this research, offering the first complete reference on implicit learning by those who have been instrumental in shaping the field. The contributors explore controversies in the field, and examine: functional characteristics, brain mechanisms and neurological foundations of implicit learning; connectionist models; and applications of implicit learning to acquiring new mental skills.
Using relevant encyclicals issued over the last 100 years, the author extracts those principles that constitute the underpinnings of Catholic Social Teaching about the employment relationship and contemplates the implications of their incorporation into human resource policy. Respect for worker dignity, for his or her family's economic security, and for the common good of society clearly emerge as the primary guidelines for responsible human resource management. Dovetailing these three Church mandates with the economic objectives of the firm could, in essence, alter the firm's nature, because profit motivations would be constrained by consideration for worker and societal welfare. Integration of Church teaching with current corporate goals should therefore have a substantial impact on a variety of human resource policies.
This essay builds on the literatures on ‘biocapitalism’ and ‘informationalism’ (or ‘informational capitalism’) to develop the concept of ‘bio-informational capitalism’ in order to articulate an emergent form of capitalism that is self-renewing in the sense that it can change and renew the material basis for life and capital as well as program itself. Bio-informational capitalism applies and develops aspects of the new biology to informatics to create new organic forms of computing and self-reproducing memory, which in turn have become the basis of bioinformatics. The paper begins with a review of the successes of the ‘new biology’, focusing on Craig Venter’s digitizing of biology and, as he remarks, the creation of new life from the digital universe. The paper then provides a brief account of bioinformatics before broaching and discussing the term ‘bioinformational capitalism’.
Strategic Reliabilism is a framework that yields relative epistemic evaluations of belief-producing cognitive processes. It is a theory of cognitive excellence, or more colloquially, a theory of reasoning excellence (where 'reasoning' is understood very broadly as any sort of cognitive process for coming to judgments or beliefs). First introduced in our book, Epistemology and the Psychology of Human Judgment (henceforth EPHJ), the basic idea behind SR is that epistemically excellent reasoning is efficient reasoning that leads in a robustly reliable fashion to significant, true beliefs. It differs from most contemporary epistemological theories in two ways. First, it is not a theory of justification or knowledge, that is, a theory of epistemically worthy belief. Strategic Reliabilism is instead a theory of epistemically worthy ways of forming beliefs. And second, Strategic Reliabilism does not attempt to account for an epistemological property that is assumed to be faithfully reflected in the epistemic judgments and intuitions of philosophers. If SR makes recommendations that accord with our reflective epistemic judgments and intuitions, great. If not, then so much the worse for our reflective epistemic judgments and intuitions.
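The phrase "efficient allocation of cognitive resources to reliable reasoning strategies" invites a cost-benefit reading. The toy sketch below is our gloss rather than any formalization the authors give: for a problem of given significance, prefer the strategy with the best expected yield of true beliefs per unit of cognitive cost. The strategy names and numbers are invented.

```python
# Toy gloss on Strategic Reliabilism's cost-benefit idea: rank
# reasoning strategies by expected significant true beliefs per
# unit of cognitive cost. All names and numbers are invented.
strategies = [
    {"name": "quick heuristic",         "reliability": 0.70, "cost": 1.0},
    {"name": "statistical rule",        "reliability": 0.85, "cost": 2.0},
    {"name": "exhaustive deliberation", "reliability": 0.90, "cost": 10.0},
]

def epistemic_value(strategy, significance):
    """Expected payoff in significant true beliefs per unit cost."""
    return significance * strategy["reliability"] / strategy["cost"]

significance = 3.0  # how much the problem matters (invented scale)
best = max(strategies, key=lambda s: epistemic_value(s, significance))
print(f"allocate resources to: {best['name']}")
```

On these invented numbers the cheap heuristic wins, echoing the framework's point that tractable strategies can be epistemically superior to costly deliberation.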
Martin Heidegger is perhaps the most controversial philosopher of the twentieth century, yet little has been written on his work and its significance for educational thought. This unique collection by a group of international scholars reexamines Heidegger's work and its legacy for educational thought.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.
In this paper, we offer a Piagetian perspective on the construction of the logico-mathematical schemas which embody our knowledge of logic and mathematics. Logico-mathematical entities are tied to the subject's activities, yet are so constructed by reflective abstraction that they result from sensorimotor experience only via the construction of intermediate schemas of increasing abstraction. The axiom set does not exhaust the cognitive structure (schema network) which the mathematician thus acquires. We thus view truth not as something to be defined within the closed world of a formal system but rather in terms of the schema network within which the formal system is embedded. We differ from Piaget in that we see mathematical knowledge as based on social processes of mutual verification which provide an external drive to any necessary dynamic of reflective abstraction within the individual. From this perspective, we argue that axiom schemas tied to a preferred interpretation may provide a necessary intermediate stage of reflective abstraction en route to acquisition of the ability to use formal systems in abstracto.
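A standard example, not taken from the paper, makes the talk of axiom schemas tied to a preferred interpretation concrete: the first-order induction schema of Peano arithmetic, which yields one axiom instance per formula $\varphi(x)$ and whose preferred interpretation is the natural numbers.

```latex
% Induction schema of first-order Peano arithmetic:
% one axiom instance for each formula \varphi(x).
\[
  \bigl(\varphi(0) \;\land\; \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\bigr)
  \;\rightarrow\; \forall n\,\varphi(n)
\]
```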
Epistemic responsibility involves at least two central ideas. (V) To be epistemically responsible is to display the virtue(s) epistemic internalists take to be central to justification (e.g., coherence, having good reasons, fitting the evidence). (C) In normal (non-skeptical) circumstances and in the long run, epistemic responsibility is strongly positively correlated with reliability. Sections 1 and 2 review evidence showing that for a wide range of real-world problems, the most reliable, tractable reasoning strategies audaciously flout the internalist's epistemic virtues. In Section 3, I argue that these results force us to give up either (V), our current conception of what it is to be epistemically responsible, or (C), the responsibility-reliability connection. I will argue that we should relinquish (V). This is likely to reshape our epistemic practices. It will force us to alter our epistemic judgments about certain instances of reasoning, to endorse some counterintuitive epistemic prescriptions, and to rethink what it is for cognitive agents to be epistemically responsible.
Positions in dialogic dispute are presented enthymematically, so it is important to explore the position the disputant holds. A model is offered which relies on the presentation of a counter-example to an inferred missing premiss. The example may be: [A+] embraced as falling under the rule; [A-] rejected as basically changing the position; or [R] rejected as changing the proffered missing premiss. In each case the offered model indicates the next appropriate action. The focus of the model is on uncovering the position actually held by the disputant, as opposed to identifying the "logically correct" enthymematic premiss.