Public education is not just a way to organise and fund education. It is also the expression of a particular ideal about education and of a particular way to conceive of the relationship between education and society. The ideal of public education sees education as an important dimension of the common good and as an important institution in securing the common good. The common good is never simply what individuals or particular groups want or desire, but always reaches beyond such particular desires towards that which societies as a whole should consider desirable. This does, of course, put the common good in tension with the desires of individuals and groups. Neo-liberal modes of governance have, over the past decades, put this particular educational set-up under pressure and have, according to some, eroded the very idea of the common good. This set of contributions reflects on this state of affairs, partly through an exploration of the idea of publicness itself – how it can be rearticulated and regained – and partly through reflections on the current state of education in the ‘north’ and the ‘south’.
Michael Weinstein redefines the debates over modernism/postmodernism that dominate contemporary cultural studies, offering a daring and original alternative. He argues that the current era is neither a continuation of modernity nor a postmodern rupture, but a period of 'post-civilized modernity'.
Introduction: education, philosophy and politics -- Writing the self: Wittgenstein, confession and pedagogy -- Nietzsche, nihilism and the critique of modernity: post-Nietzschean philosophy of education -- Heidegger, education and modernity -- Truth-telling as an educational practice of the self: Foucault and the ethics of subjectivity -- Neoliberal governmentality: Foucault on the birth of biopolitics -- Lyotard, nihilism and education -- Gilles Deleuze's 'societies of control': from disciplinary pedagogy to perpetual training -- Geophilosophy, education and the pedagogy of the concept -- Humanism, Derrida and the new humanities -- Politics and deconstruction: Derrida, neoliberalism and democracy -- Neopragmatism, ethnocentrism and the politics of the ethnos: Rorty's 'postmodernist bourgeois liberalism' -- Achieving America: postmodernism and Rorty's critique of the cultural left -- Deranging the investigations: Cavell on the philosophy of the child -- White philosophy in/of America.
Mexican thinkers in recent generations have sought a philosophy emphasizing the ends of human activity (finalism) as contrasted with one stressing means or techniques (instrumentalism). According to Professor Weinstein's interpretation, an integrated perspective toward all aspects of the human condition characterizes Mexican philosophy and social thought, incorporating close attention to the aesthetic dimension of human experience and the tensions of human existence. The distinctive Mexican world-view provides a needed supplement to the analytical approach of North American philosophy and Marxist determinism.
Arguing that a psychological basis for ethics can be found in human motivation, Rethinking Goodness proposes a naturalistic ethics that transcends the conflict between liberalism and authoritarianism -- the conflict between freedom at the price of narcissism and morality at the price of coercion. A third option is offered, an ethic broader than liberalism's pursuit of the personal, that avoids jeopardizing, as do authoritarian positions, the centrality of individual autonomy.
This collective article discusses the philosophy of modern higher education in Iran, whose history, on an optimistic reading, dates back to the founding of Dār al-fonūn, if we consider Dār al-fonūn a university; otherwise, its origin can be traced to the University of Tehran. Central to this article is the emphasis on the lack of a philosophy of higher education in Iran. Accordingly, most of the criticisms before us concern the internal inconsistency of the Iranian higher education system, which stems from the absence of a national, indigenized, official philosophy of higher education. Furthermore, the Islamic Revolution of 1979 brought about fundamental changes in higher education. Several controversial issues are therefore also explored, including the rapid growth of higher education, the Islamization of universities, cultural narratives in higher education, and the increase in students, especially women and students from the country's low-income class. In this collection, the political, economic, social, cultural, moral, technological and historical dimensions of Iranian higher education are thus examined.
This article is concerned with developing a philosophical approach to a number of significant changes to academic publishing, and specifically the global journal knowledge system wrought by a range of new digital technologies that herald the third age of the journal as an electronic, interactive and mixed-media form of scientific communication. The paper emerges from an Editors' Collective, a small New Zealand-based organisation comprised of editors and reviewers of academic journals mostly in the fields of education and philosophy. The paper is the result of a collective writing process.
In this book, Michael Arbib, a researcher in artificial intelligence and brain theory, joins forces with Mary Hesse, a philosopher of science, to present an integrated account of how humans 'construct' reality through interaction with the social and physical world around them. The book is a major expansion of the Gifford Lectures delivered by the authors at the University of Edinburgh in the autumn of 1983. The authors reconcile a theory of the individual's construction of reality as a network of schemas 'in the head' with an account of the social construction of language, science, ideology and religion to provide an integrated schema-theoretic view of human knowledge. The authors still find scope for lively debate, particularly in their discussion of free will and of the reality of God. The book integrates an accessible exposition of background information with a cumulative marshalling of evidence to address fundamental questions concerning human action in the world and the nature of ultimate reality.
Management theory and practice are facing unprecedented challenges. The lack of sustainability, the increasing inequity, and the continuous decline in societal trust pose a threat to ‘business as usual’. Capitalism is at a crossroads, and scholars, practitioners, and policy makers are called to rethink business strategy in light of major external changes. In the following, we review an alternative view of human beings that is based on a renewed Darwinian theory developed by Lawrence and Nohria. We label this alternative view ‘humanistic’ and draw distinctions to current ‘economistic’ conceptions. We then develop the consequences that this humanistic view has for business organizations, examining business strategy, governance structures, leadership forms, and organizational culture. Afterward, we outline the influences of humanism on management in the past and the present, and suggest options for humanism to shape the future of management. In this manner, we will contribute to the discussion of alternative management paradigms that help solve the current crises.
The article analyzes the neural and functional grounding of language skills as well as their emergence in hominid evolution, hypothesizing stages leading from abilities known to exist in monkeys and apes and presumed to exist in our hominid ancestors right through to modern spoken and signed languages. The starting point is the observation that both premotor area F5 in monkeys and Broca's area in humans contain a “mirror system” active for both execution and observation of manual actions, and that F5 and Broca's area are homologous brain regions. This grounded the mirror system hypothesis of Rizzolatti and Arbib (1998) which offers the mirror system for grasping as a key neural “missing link” between the abilities of our nonhuman ancestors of 20 million years ago and modern human language, with manual gestures rather than a system for vocal communication providing the initial seed for this evolutionary process. The present article, however, goes “beyond the mirror” to offer hypotheses on evolutionary changes within and outside the mirror systems which may have occurred to equip Homo sapiens with a language-ready brain. Crucial to the early stages of this progression is the mirror system for grasping and its extension to permit imitation. Imitation is seen as evolving via a so-called simple system such as that found in chimpanzees (which allows imitation of complex “object-oriented” sequences but only as the result of extensive practice) to a so-called complex system found in humans (which allows rapid imitation even of complex sequences, under appropriate conditions) which supports pantomime. This is hypothesized to have provided the substrate for the development of protosign, a combinatorially open repertoire of manual gestures, which then provides the scaffolding for the emergence of protospeech (which thus owes little to nonhuman vocalizations), with protosign and protospeech then developing in an expanding spiral.
It is argued that these stages involve biological evolution of both brain and body. By contrast, it is argued that the progression from protosign and protospeech to languages with full-blown syntax and compositional semantics was a historical phenomenon in the development of Homo sapiens, involving few if any further biological changes. Key Words: gestures; hominids; language evolution; mirror system; neurolinguistics; primates; protolanguage; sign language; speech; vocalization.
Morals from Motives develops a virtue ethics inspired more by Hume and Hutcheson's moral sentimentalism than by recently-influential Aristotelianism. It argues that a reconfigured and expanded "morality of caring" can offer a general account of right and wrong action as well as social justice. Expanding the frontiers of ethics, it goes on to show how a motive-based "pure" virtue theory can also help us to understand the nature of human well-being and practical reason.
Here, we argue that any neurobiological theory based on an experience/function division cannot be empirically confirmed or falsified and is thus outside the scope of science. A ‘perfect experiment’ illustrates this point, highlighting the unbreachable boundaries of the scientific study of consciousness. We describe a more nuanced notion of cognitive access that captures personal experience without positing the existence of inaccessible conscious states. Finally, we discuss the criteria necessary for forming and testing a falsifiable theory of consciousness.
Using relevant encyclicals issued over the last 100 years, the author extracts those principles that constitute the underpinnings of Catholic Social Teaching about the employment relationship and contemplates implications of their incorporation into human resource policy. Respect for worker dignity, for his or her family's economic security, and for the common good of society clearly emerge as the primary guidelines for responsible human resource management. Dovetailing these three Church mandates with the economic objectives of the firm could, in essence, alter the firm's nature because profit motivations would be constrained by consideration for worker and societal welfare. Integration of Church teaching with current corporate goals should therefore impact greatly on a variety of human resource policies.
Bishop and Trout here present a unique and provocative new approach to epistemology. Their approach aims to liberate epistemology from the scholastic debates of standard analytic epistemology, and treat it as a branch of the philosophy of science. The approach is novel in its use of cost-benefit analysis to guide people facing real reasoning problems and in its framework for resolving normative disputes in psychology. Based on empirical data, Bishop and Trout show how people can improve their reasoning by relying on Statistical Prediction Rules. They then develop and articulate the positive core of the book. Their view, Strategic Reliabilism, claims that epistemic excellence consists in the efficient allocation of cognitive resources to reliable reasoning strategies, applied to significant problems. The last third of the book develops the implications of this view for standard analytic epistemology; for resolving normative disputes in psychology; and for offering practical, concrete advice on how this theory can improve real people's reasoning. This is a truly distinctive and controversial work that spans many disciplines and will speak to an unusually diverse group, including people in epistemology, philosophy of science, decision theory, cognitive and clinical psychology, and ethics and public policy.
Largely due to the popular allegation that contemporary science has uncovered indeterminism in the deepest known levels of physical reality, the debate over whether humans have moral freedom, the sort of freedom on which moral responsibility depends, has to some extent put aside the traditional worry over whether determinism is true. As I argue in this paper, however, there are powerful proofs for both chronological determinism and necessitarianism, forms of determinism that pose the most penetrating threat to human moral freedom. My ultimate hope is to show that, despite the robust case against human moral freedom that can be made without even relying on them, chronological determinism and necessitarianism should be regarded with renewed urgency.
This paper, based on an invited Thesis Eleven presentation, provides a ‘map of technopolitics’ that springs from an investigation of the theoretical notion of technological convergence adopted by the US National Science Foundation, signaling a new paradigm of ‘nano-bio-info-cogno’ technologies. This integration at the nano-level is expected to drive the next wave of scientific research, technology and knowledge economy. The paper explores the concept of ‘technopolitics’ by investigating the links between Wittgenstein’s anti-scientism and Lyotard’s ‘technoscience’, reviewing the history of the notion in the work of the Belgian philosopher Gilbert Hottois. The ‘deep convergence’ representing a new technoscientific synergy is the product of long-term trends of ‘bioinformational capitalism’ that harnesses the twin forces of information and genetic sciences that coalesce in the least mature ‘cognosciences’ in their application to education and research. The map of technopolitics systematically identifies the political relations between Big Tech and ‘new digital publics’ to reveal that the new paradigm is based on the supreme value of cognitive efficiency. There is a closely-knit cluster of concerns that frames a map of political issues about the fifth-generation technological impacts on human beings, their bodies and minds, and public institutions, not least the logic of the distribution and ownership of data, information and knowledge, and its effects on democracy.
Although our subjective impression is of a richly detailed visual world, numerous empirical results suggest that the amount of visual information observers can perceive and remember at any given moment is limited. How can our subjective impressions be reconciled with these objective observations? Here, we answer this question by arguing that, although we see more than the handful of objects claimed by prominent models of visual attention and working memory, we still see far less than we think we do. Taken together, we argue that these considerations resolve the apparent conflict between our subjective impressions and empirical data on visual capacity, while also illuminating the nature of the representations underlying perceptual experience.
Thirty years ago, I elaborated on a position that could be seen as a compromise between an "extreme," symbol-based AI, and a "neurochemical reductionism" in AI. The present article recalls aspects of the espoused framework of schema theory that, it suggested, could provide a better bridge from human psychology to brain theory than that offered by the symbol systems of A. Newell and H. A. Simon.
Science and philosophy study well-being with different but complementary methods. Marry these methods and a new picture emerges: To have well-being is to be "stuck" in a positive cycle of emotions, attitudes, traits and success. This book unites the scientific and philosophical worldviews into a powerful new theory of well-being.
Coalescent argumentation is a normative ideal that involves the joining together of two disparate claims through recognition and exploration of opposing positions. By uncovering the crucial connection between a claim and the attitudes, beliefs, feelings, values and needs to which it is connected, dispute partners are able to identify points of agreement and disagreement. These points can then be utilized to effect coalescence, a joining or merging of divergent positions, by forming the basis for a mutual investigation of non-conflictual options that might otherwise have remained unconsidered. The essay proceeds by defining and discussing ‘argument’, ‘position’ and ‘understanding’. These notions are then brought together to outline the concept of coalescent reasoning.
In this paper, we offer a Piagetian perspective on the construction of the logico-mathematical schemas which embody our knowledge of logic and mathematics. Logico-mathematical entities are tied to the subject's activities, yet are so constructed by reflective abstraction that they result from sensorimotor experience only via the construction of intermediate schemas of increasing abstraction. The axiom set does not exhaust the cognitive structure (schema network) which the mathematician thus acquires. We thus view truth not as something to be defined within the closed world of a formal system but rather in terms of the schema network within which the formal system is embedded. We differ from Piaget in that we see mathematical knowledge as based on social processes of mutual verification which provide an external drive to any necessary dynamic of reflective abstraction within the individual. From this perspective, we argue that axiom schemas tied to a preferred interpretation may provide a necessary intermediate stage of reflective abstraction en route to acquisition of the ability to use formal systems in abstracto.
Research on implicit learning - a cognitive phenomenon in which people acquire knowledge without conscious intent or awareness - has been growing exponentially. This volume draws together this research, offering the first complete reference on implicit learning by those who have been instrumental in shaping the field. The contributors explore controversies in the field, and examine: functional characteristics, brain mechanisms and neurological foundations of implicit learning; connectionist models; and applications of implicit learning to acquiring new mental skills.
Strategic Reliabilism is a framework that yields relative epistemic evaluations of belief-producing cognitive processes. It is a theory of cognitive excellence, or more colloquially, a theory of reasoning excellence (where 'reasoning' is understood very broadly as any sort of cognitive process for coming to judgments or beliefs). First introduced in our book, Epistemology and the Psychology of Human Judgment (henceforth EPHJ), the basic idea behind SR is that epistemically excellent reasoning is efficient reasoning that leads in a robustly reliable fashion to significant, true beliefs. It differs from most contemporary epistemological theories in two ways. First, it is not a theory of justification or knowledge – a theory of epistemically worthy belief. Strategic Reliabilism is a theory of epistemically worthy ways of forming beliefs. And second, Strategic Reliabilism does not attempt to account for an epistemological property that is assumed to be faithfully reflected in the epistemic judgments and intuitions of philosophers. If SR makes recommendations that accord with our reflective epistemic judgments and intuitions, great. If not, then so much the worse for our reflective epistemic judgments and intuitions.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.