We introduce the set of definable restricted complex powers for expansions of the real field and calculate it explicitly for expansions of the real field itself by collections of restricted complex powers. We apply this computation to establish a classification theorem for expansions of the real field by families of locally closed trajectories of linear vector fields.
In this book, Michael Arbib, a researcher in artificial intelligence and brain theory, joins forces with Mary Hesse, a philosopher of science, to present an integrated account of how humans 'construct' reality through interaction with the social and physical world around them. The book is a major expansion of the Gifford Lectures delivered by the authors at the University of Edinburgh in the autumn of 1983. The authors reconcile a theory of the individual's construction of reality as a network of schemas 'in the head' with an account of the social construction of language, science, ideology and religion to provide an integrated schema-theoretic view of human knowledge. The authors still find scope for lively debate, particularly in their discussion of free will and of the reality of God. The book integrates an accessible exposition of background information with a cumulative marshalling of evidence to address fundamental questions concerning human action in the world and the nature of ultimate reality.
Largely due to the popular allegation that contemporary science has uncovered indeterminism in the deepest known levels of physical reality, the debate as to whether humans have moral freedom, the sort of freedom on which moral responsibility depends, has to some extent set aside the traditional worry over whether determinism is true. As I argue in this paper, however, there are powerful proofs for both chronological determinism and necessitarianism, forms of determinism that pose the most penetrative threat to human moral freedom. My ultimate hope is to show that, despite the robust case against human moral freedom that can be made without even relying on them, chronological determinism and necessitarianism should be regarded with renewed urgency.
Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to two great questions: How does the brain work? and How can we build intelligent machines? While many books have appeared on limited aspects of one subfield or another of brain theory and neural networks, the Handbook covers the entire sweep of topics—from detailed models of single neurons, analyses of a wide variety of biological neural networks, and connectionist studies of psychology and language, to mathematical analyses of a variety of abstract neural networks, and technological applications of adaptive, artificial neural networks. The excitement, and the frustration, of these topics is that they span such a broad range of disciplines, including mathematics, statistical physics and chemistry, neurology and neurobiology, and computer science and electrical engineering, as well as cognitive psychology, artificial intelligence, and philosophy. Thus, much effort has gone into making the Handbook accessible to readers with varied backgrounds while still providing a clear view of much of the recent, specialized research in specific topics. The heart of the book, part III, comprises 267 original articles by leaders in the various fields, arranged alphabetically by title. Parts I and II, written by the editor, are designed to help readers orient themselves to this vast range of material. Part I, Background, introduces several basic neural models, explains how the present study of brain theory and neural networks integrates brain theory, artificial intelligence, and cognitive psychology, and provides a tutorial on the concepts essential for understanding neural networks as dynamic, adaptive systems.
Part II, Road Maps, provides entry into the many articles of part III through an introductory "Meta-Map" and twenty-three road maps, each of which tours all the Part III articles on the chosen theme.
Our article relocates the debate about creative labour to the terrain of peer-to-peer interneting as the paradigmatic form of nonmarket – social – production. From Yann Moulier Boutang we take the point that creative labour is immaterial; it is expressed through people connected by the internet. Drawing on two social systems thinkers, Francis Heylighen and Wolfgang Hofkirchner, we transpose this connectedness up to a conception of creative labour as a supra-individual collective intelligence. This intelligence, we argue, is one of the internet’s emergent properties. We then present a model of internet development that flags the potential of digitally-evoked collective intelligence to facilitate what the Marxist philosopher George Caffentzis calls ‘postcapitalist commoning’. Yoking together systems theorizing about the internet and socialist envisioning of social transformation, we identify two sets of internet tools for coordination that can assist with the convivial reconstruction of society along the lines of peer-based production.
This essay builds on the literatures on ‘biocapitalism’ and ‘informationalism’ (or ‘informational capitalism’) to develop the concept of ‘bio-informational capitalism’ in order to articulate an emergent form of capitalism that is self-renewing in the sense that it can change and renew the material basis for life and capital as well as program itself. Bio-informational capitalism applies and develops aspects of the new biology to informatics to create new organic forms of computing and self-reproducing memory that in turn have become the basis of bioinformatics. The paper begins with a review of the successes of the ‘new biology’, focusing on Craig Venter’s digitizing of biology and, as he remarks, the creation of new life from the digital universe. The paper then provides a brief account of bioinformatics before brokering and discussing the term ‘bioinformational capitalism’.
If a brain is uploaded into a computer, will consciousness continue in digital form, or will it end forever when the brain is destroyed? Philosophers have long debated such dilemmas and classify them as questions about personal identity. There are currently three main theories of personal identity: biological, psychological, and closest continuer theories. None of these theories can successfully address the questions posed by the possibility of uploading. I will argue that uploading requires us to adopt a new theory of identity, psychological branching identity. Psychological branching identity states that consciousness will continue as long as there is continuity in psychological structure. What differentiates this from psychological identity is that it allows identity to continue in multiple selves. According to branching identity, continuity of consciousness will continue in both the original brain and the upload after nondestructive uploading. Branching identity can also resolve long-standing questions about split-brain syndrome and can provide clear predictions about identity in even the most difficult cases imagined by philosophers.
Spinoza's Theological-Political Treatise was published anonymously in 1670 and immediately provoked huge debate. Its main goal was to claim that the freedom of philosophizing can be allowed in a free republic and that it cannot be abolished without also destroying the peace and piety of that republic. Spinoza criticizes the traditional claims of revelation and offers a social contract theory in which he praises democracy as the most natural form of government. This Critical Guide presents essays by well-known scholars in the field and covers a broad range of topics, including the political theory and the metaphysics of the work, religious toleration, the reception of the text by other early modern philosophers and the relation of the text to Jewish thought. It offers valuable perspectives on this important and influential work.
This article is concerned with developing a philosophical approach to a number of significant changes to academic publishing, and specifically the global journal knowledge system wrought by a range of new digital technologies that herald the third age of the journal as an electronic, interactive and mixed-media form of scientific communication. The paper emerges from an Editors' Collective, a small New Zealand-based organisation comprised of editors and reviewers of academic journals mostly in the fields of education and philosophy. The paper is the result of a collective writing process.
When a young child begins to engage in everyday interaction, she has to acquire competencies that allow her to be oriented to the conventions that inform talk-in-interaction and, at the same time, deal with emotional or affective dimensions of experience. The theoretical positions associated with these domains - social-action and emotion - provide very different accounts of human development and this book examines why this is the case. Through a longitudinal video-recorded study of one child learning how to talk, Michael A. Forrester develops proposals that rest upon a comparison of two perspectives on everyday parent-child interaction taken from the same data corpus - one informed by conversation analysis and ethnomethodology, the other by psychoanalytic developmental psychology. Ultimately, what is significant for attaining membership within any culture is gradually being able to display an orientation towards both domains - doing and feeling, or social-action and affect.
Bishop and Trout here present a unique and provocative new approach to epistemology. Their approach aims to liberate epistemology from the scholastic debates of standard analytic epistemology, and treat it as a branch of the philosophy of science. The approach is novel in its use of cost-benefit analysis to guide people facing real reasoning problems and in its framework for resolving normative disputes in psychology. Based on empirical data, Bishop and Trout show how people can improve their reasoning by relying on Statistical Prediction Rules. They then develop and articulate the positive core of the book. Their view, Strategic Reliabilism, claims that epistemic excellence consists in the efficient allocation of cognitive resources to reliable reasoning strategies, applied to significant problems. The last third of the book develops the implications of this view for standard analytic epistemology; for resolving normative disputes in psychology; and for offering practical, concrete advice on how this theory can improve real people's reasoning. This is a truly distinctive and controversial work that spans many disciplines and will speak to an unusually diverse group, including people in epistemology, philosophy of science, decision theory, cognitive and clinical psychology, and ethics and public policy.
Gilbert’s four modes of communication include the logical, the emotional, the visceral and the kisceral, the last of which has received little attention. This mode covers the forms of argument that rely on intuition and undefended basal assumptions. These forms range from the scientific and mathematical to the religious and mystical. In this paper these forms will be examined, and suggestions made for ways in which intuitive frameworks can be compared and valued.
Management theory and practice are facing unprecedented challenges. The lack of sustainability, the increasing inequity, and the continuous decline in societal trust pose a threat to ‘business as usual’. Capitalism is at a crossroads, and scholars, practitioners, and policy makers are called to rethink business strategy in light of major external changes. In the following, we review an alternative view of human beings that is based on a renewed Darwinian theory developed by Lawrence and Nohria. We label this alternative view ‘humanistic’ and draw distinctions to current ‘economistic’ conceptions. We then develop the consequences that this humanistic view has for business organizations, examining business strategy, governance structures, leadership forms, and organizational culture. Afterward, we outline the influences of humanism on management in the past and the present, and suggest options for humanism to shape the future of management. In this manner, we will contribute to the discussion of alternative management paradigms that help solve the current crises.
Here, we argue that any neurobiological theory based on an experience/function division cannot be empirically confirmed or falsified and is thus outside the scope of science. A ‘perfect experiment’ illustrates this point, highlighting the unbreachable boundaries of the scientific study of consciousness. We describe a more nuanced notion of cognitive access that captures personal experience without positing the existence of inaccessible conscious states. Finally, we discuss the criteria necessary for forming and testing a falsifiable theory of consciousness.
When contrasted with "Continental" philosophy, analytical philosophy is often called "Anglo-American." Dummett argues that "Anglo-Austrian" would be a more accurate label. By re-examining the similar origins of the two traditions, we can come to understand why they later diverged so widely, and thus take the first step toward reconciliation.
The article analyzes the neural and functional grounding of language skills as well as their emergence in hominid evolution, hypothesizing stages leading from abilities known to exist in monkeys and apes and presumed to exist in our hominid ancestors right through to modern spoken and signed languages. The starting point is the observation that both premotor area F5 in monkeys and Broca's area in humans contain a “mirror system” active for both execution and observation of manual actions, and that F5 and Broca's area are homologous brain regions. This grounded the mirror system hypothesis of Rizzolatti and Arbib (1998), which offers the mirror system for grasping as a key neural “missing link” between the abilities of our nonhuman ancestors of 20 million years ago and modern human language, with manual gestures rather than a system for vocal communication providing the initial seed for this evolutionary process. The present article, however, goes “beyond the mirror” to offer hypotheses on evolutionary changes within and outside the mirror systems which may have occurred to equip Homo sapiens with a language-ready brain. Crucial to the early stages of this progression is the mirror system for grasping and its extension to permit imitation. Imitation is seen as evolving via a so-called simple system such as that found in chimpanzees (which allows imitation of complex “object-oriented” sequences but only as the result of extensive practice) to a so-called complex system found in humans (which allows rapid imitation even of complex sequences, under appropriate conditions) which supports pantomime. This is hypothesized to have provided the substrate for the development of protosign, a combinatorially open repertoire of manual gestures, which then provides the scaffolding for the emergence of protospeech (which thus owes little to nonhuman vocalizations), with protosign and protospeech then developing in an expanding spiral.
It is argued that these stages involve biological evolution of both brain and body. By contrast, it is argued that the progression from protosign and protospeech to languages with full-blown syntax and compositional semantics was a historical phenomenon in the development of Homo sapiens, involving few if any further biological changes. Key Words: gestures; hominids; language evolution; mirror system; neurolinguistics; primates; protolanguage; sign language; speech; vocalization.
This paper, based on an invited Thesis Eleven presentation, provides a ‘map of technopolitics’ that springs from an investigation of the theoretical notion of technological convergence adopted by the US National Science Foundation, signaling a new paradigm of ‘nano-bio-info-cogno’ technologies. This integration at the nano-level is expected to drive the next wave of scientific research, technology and knowledge economy. The paper explores the concept of ‘technopolitics’ by investigating the links between Wittgenstein’s anti-scientism and Lyotard’s ‘technoscience’, reviewing the history of the notion in the work of the Belgian philosopher Gilbert Hottois. The ‘deep convergence’ representing a new technoscientific synergy is the product of long-term trends of ‘bioinformational capitalism’ that harnesses the twin forces of information and genetic sciences that coalesce in the least mature ‘cognosciences’ in their application to education and research. The map of technopolitics systematically identifies the political relations between Big Tech and ‘new digital publics’ to reveal that the new paradigm is based on the supreme value of cognitive efficiency. There is a closely-knit cluster of concerns that frames a map of political issues about the fifth-generation technological impacts on human beings, their bodies and minds, and public institutions, not least the logic of the distribution and ownership of data, information and knowledge, and its effects on democracy.
Although our subjective impression is of a richly detailed visual world, numerous empirical results suggest that the amount of visual information observers can perceive and remember at any given moment is limited. How can our subjective impressions be reconciled with these objective observations? Here, we answer this question by arguing that, although we see more than the handful of objects claimed by prominent models of visual attention and working memory, we still see far less than we think we do. Taken together, we argue that these considerations resolve the apparent conflict between our subjective impressions and empirical data on visual capacity, while also illuminating the nature of the representations underlying perceptual experience.
Coalescent argumentation is a normative ideal that involves the joining together of two disparate claims through recognition and exploration of opposing positions. By uncovering the crucial connection between a claim and the attitudes, beliefs, feelings, values and needs to which it is connected, dispute partners are able to identify points of agreement and disagreement. These points can then be utilized to effect coalescence, a joining or merging of divergent positions, by forming the basis for a mutual investigation of non-conflictual options that might otherwise have remained unconsidered. The essay proceeds by defining and discussing ‘argument’, ‘position’ and ‘understanding’. These notions are then brought together to outline the concept of coalescent reasoning.
Schizophrenia, like other pathological conditions of mental life, has not been systematically included in the general study of consciousness. By focusing on aspects of chronic schizophrenia, we attempt to remedy this omission. Basic components of Husserl’s phenomenology (intentionality, synthesis, constitution, epoche, and unbuilding) are explicated and then employed in an account of chronic schizophrenia. In schizophrenic experience, basic constituents of reality are lost and the subject must try to explicitly re-constitute them. “Automatic mental life” is weakened such that much of the world that is normally taken-for-granted cannot continue to be so. The subject must actively re-lay the ontological foundations of reality.
In this paper, we offer a Piagetian perspective on the construction of the logico-mathematical schemas which embody our knowledge of logic and mathematics. Logico-mathematical entities are tied to the subject's activities, yet are so constructed by reflective abstraction that they result from sensorimotor experience only via the construction of intermediate schemas of increasing abstraction. The axiom set does not exhaust the cognitive structure (schema network) which the mathematician thus acquires. We thus view truth not as something to be defined within the closed world of a formal system but rather in terms of the schema network within which the formal system is embedded. We differ from Piaget in that we see mathematical knowledge as based on social processes of mutual verification which provide an external drive to any necessary dynamic of reflective abstraction within the individual. From this perspective, we argue that axiom schemas tied to a preferred interpretation may provide a necessary intermediate stage of reflective abstraction en route to acquisition of the ability to use formal systems in abstracto.
Strategic Reliabilism (SR) is a framework that yields relative epistemic evaluations of belief-producing cognitive processes. It is a theory of cognitive excellence, or more colloquially, a theory of reasoning excellence (where 'reasoning' is understood very broadly as any sort of cognitive process for coming to judgments or beliefs). First introduced in our book, Epistemology and the Psychology of Human Judgment (henceforth EPHJ), the basic idea behind SR is that epistemically excellent reasoning is efficient reasoning that leads in a robustly reliable fashion to significant, true beliefs. It differs from most contemporary epistemological theories in two ways. First, it is not a theory of justification or knowledge – a theory of epistemically worthy belief. Strategic Reliabilism is a theory of epistemically worthy ways of forming beliefs. And second, Strategic Reliabilism does not attempt to account for an epistemological property that is assumed to be faithfully reflected in the epistemic judgments and intuitions of philosophers. If SR makes recommendations that accord with our reflective epistemic judgments and intuitions, great. If not, then so much the worse for our reflective epistemic judgments and intuitions.
This article was conceived as a sequel to “The Humean Theory of Motivation.” The paper addresses various challenges to the standard account of the explanation of intentional action in terms of desire and means-end belief, challenges that didn’t occur to me when I wrote “The Humean Theory of Motivation.” I begin by suggesting that the attraction of the standard account lies in the way in which it allows us to unify a vast array of otherwise diverse types of action explanation. I go on to consider a range of other challenges to the standard account of the explanation of action: Rosalind Hursthouse’s challenge based on the possibility of what she calls “arational” actions (Hursthouse 1991); Michael Stocker’s challenge based on the idea that some explanations of action are nonteleological (Stocker 1981); Mark Platts’s challenge based on the idea that our evaluative beliefs can sometimes explain our actions all by themselves (Platts 1981); a voluntarist challenge based on the possibility of explaining actions by the exercise of self-control; and a challenge from Jonathan Dancy based on the idea that reasons can themselves sometimes explain actions all by themselves (Dancy 1994).
Research on implicit learning - a cognitive phenomenon in which people acquire knowledge without conscious intent or awareness - has been growing exponentially. This volume draws together this research, offering the first complete reference on implicit learning by those who have been instrumental in shaping the field. The contributors explore controversies in the field, and examine: functional characteristics, brain mechanisms and neurological foundations of implicit learning; connectionist models; and applications of implicit learning to acquiring new mental skills.
Using relevant encyclicals issued over the last 100 years, the author extracts those principles that constitute the underpinnings of Catholic Social Teaching about the employment relationship and contemplates implications of their incorporation into human resource policy. Respect for worker dignity, for his or her family's economic security, and for the common good of society clearly emerge as the primary guidelines for responsible human resource management. Dovetailing these three Church mandates with the economic objectives of the firm could, in essence, alter the firm's nature because profit motivations would be constrained by consideration for worker and societal welfare. Integration of Church teaching with current corporate goals should therefore impact greatly on a variety of human resource policies.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.
Martin Heidegger is, perhaps, the most controversial philosopher of the twentieth century. Yet little has been written about his work and its significance for educational thought. This unique collection by a group of international scholars reexamines Heidegger's work and its legacy for educational thought.