The unified theory of dose and effect, as expressed by the median-effect equation for single and multiple entities and for first- and higher-order kinetics/dynamics, was established by T.C. Chou and is based on the physical/chemical principle of the mass-action law (J. Theor. Biol. 59: 253-276, 1976 (質量作用中效定理) and Pharmacological Rev. 58: 621-681, 2006 (普世中效指數定理)). The theory was developed by the principle of mathematical induction and deduction (數學演繹歸納法). Rearrangements of the median-effect equation lead to the Michaelis-Menten, Hill, Scatchard, and Henderson-Hasselbalch equations. The “median” serves as the universal reference point and the “common link” in the relationships among all entities, and is also the “harmonic mean” of kinetic dissociation constants. Over 300 mechanism-specific equations have been derived and published using this mathematical induction-deduction process. These equations can be reduced to several general equations, including the median-mediated whole/part equation, the combination index theorem, the isobologram equation, and the polygonogram. It is argued that “dose” and “effect” are interchangeable, and thus that “substance” and “function” are interchangeable, which leads to “the unity theory” (劑效、心物、知行一元論) in quantitative mathematical philosophy (數學的定量哲學) in a functional context. Therefore, a general theory centered on the “median” and based on equilibrium dynamics has evolved. In other words: [「中」的宇宙觀：以「中」爲基凖的動力學生態平衡]. Based on the median-effect equation of the mass-action law, the fundamental claim is that we can draw “a specific curve” from only two data points, if they are determined accurately. This claim has far-reaching consequences, since it defies the generally held belief that two points can determine only a straight line.
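The two-point claim above can be made concrete with the median-effect equation itself, as published in the cited 1976 paper; the sketch below uses the standard notation, and the linearized second line is the usual logarithmic rearrangement rather than a new result:

```latex
% Median-effect equation (Chou, J. Theor. Biol. 59: 253-276, 1976):
%   f_a = fraction affected, f_u = 1 - f_a = fraction unaffected,
%   D = dose, D_m = median-effect dose, m = kinetic order (slope).
\[
  \frac{f_a}{f_u} = \left(\frac{D}{D_m}\right)^{m}
\]
% Taking logarithms linearizes the dose-effect relation:
\[
  \log\!\left(\frac{f_a}{f_u}\right) = m \log D - m \log D_m
\]
% Two accurately determined points (D_1, f_{a,1}) and (D_2, f_{a,2})
% fix the slope m and the intercept -m \log D_m, and therefore the
% entire sigmoidal dose-effect curve in the original coordinates,
% not merely a straight line.
```

This is why accurate determination of only two data points suffices on this account: in median-effect coordinates the mass-action relation is exactly linear, so two points determine both parameters of the full curve.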
Remarkably, the unity theory (一元論) provides a scientific/mathematical interpretation, in equations and in graphics, of ancient Chinese philosophy, including Fu-Si Ba Gua (伏羲八卦), Dao’s Harmony (和諧), the Confucian doctrine of the mean (儒家中庸之道), and Chou Dun-Yi’s (周敦頤, 1017-1073) From Wu-ji to Tai-ji and Taiji Tu Sho (無極而太極及太極圖說). The modern topological analysis of the trinity yields an exact correspondence to the Ba-Gua, which was introduced over 4,000 years ago. Furthermore, the median-centered algorithm promotes modern ecological content (生態學) in the equilibrium dynamic state of harmony. It is concluded that Western science and Eastern philosophy are directly linked and complementary to each other. Since the truth in mathematical quantitative philosophy (數學的定量哲學) has no boundaries, Eastern and Western philosophies can flourish together toward the common goal and ideal in science and in humanity (世界大同).
Temporal binding via 40-Hz synchronization of neuronal discharges in sensory cortices has been hypothesized to be a necessary condition for the rapid selection of perceptually relevant information for further processing in working memory. Binocular rivalry experiments have shown that late stage visual processing associated with the recognition of a stimulus object is highly correlated with discharge rates in inferotemporal cortex. The hippocampus is the primary recipient of inferotemporal outputs and is known to be the substrate for the consolidation of working memories to long-term, episodic memories. The prefrontal cortex, on the other hand, is widely thought to mediate working memory processes, per se. This article reviews accumulated evidence for the role of a subcortical matrix in linking frontal and hippocampal systems to select and ''stream'' conscious episodes across time (hundreds of milliseconds to several seconds). ''Streaming'' is hypothesized to be mediated by the selective gating of reentrant flows of information between these cortical systems and the subcortical matrix. The physiological mechanism proposed for this temporally extended form of binding is synchronous oscillations in the slower EEG spectrum (< 8 Hz).
In its forty years of existence, Artificial Intelligence has suffered both from the exaggerated claims of those who saw it as the definitive solution of an ancestral dream (that of constructing an intelligent machine) and from its detractors, who described it as the latest fad worthy of quacks. Yet AI is still alive, well and blossoming, and has left a legacy of tools and applications almost unequalled by any other field, probably because, as the heir of Renaissance thought, it represents a possible bridge between the humanities and the natural sciences, philosophy and neurophysiology, psychology and integrated circuits, including systems that today are taken for granted, such as the computer interface with mouse pointer and windows. This writing describes a few results of AI that have modified the scientific world, as well as the way a layman sees computers: the technology of programming languages, such as LISP (witness the unique excellence of the academic departments that have contributed to them); the computing workstations, of which our modern PC is but a vulgarised descendant; the applications to the educational field, e.g., the realisation of some ideas of genetic epistemology; the contributions to interdisciplinary philosophy, such as Hofstadter's associations between the arts and mathematics; and the use of AI techniques in music and musicology. All this has led to a generalisation of AI towards Negrotti's overall Theory of the Artificial, which encompasses further specialisations such as artificial reality, artificial life, and applications of neural networks, among others.
The paper presents a Chinese philosophical point of view on AI and proposes a novel system for the AI machine. There are two basic relations, or contradictions, that drive computer development forward: one between software and hardware, the other between data structure and system organization. It is suggested that a description of a future AI system should primarily start from these contradictions.
This article is concerned with the history and current state of research into medical expert systems (MES) in Japan. A brief review of expert systems work over the last ten years is provided, followed by a discussion of future directions for artificial intelligence (AI) applications in medicine, which we expect the Japanese AI-in-medicine (AIM) community to undertake.
Well-known critics of AI such as Hubert Dreyfus and Michael Polanyi tend to confuse cybernetics with AI. Such a confusion is quite misleading and should not be overlooked. In the first place, cybernetics is not vulnerable to criticism of AI as cognitivistic and behaviouristic. In the second place, AI researchers are recommended to consider the cybernetics approach as a way of overcoming the limitations of cognitivism and behaviourism.
Although the AI paradigm is useful for building knowledge-based systems for the applied natural sciences, there are dangers when it is extended into the domains of business, law and other social systems. It is misleading to treat knowledge as a commodity that can be separated from the context in which it is regularly used. Especially when it relates to social behaviour, knowledge should be treated as socially constructed, interpreted and maintained through its practical use in context. The meanings of terms in a knowledge-base are assumed to be references to an objective reality, whereas in fact they are instruments for expressing values and exercising power. Expert systems that are not perspicuous to the expert community will lose their meanings and cease to contain genuine knowledge, as they will be divorced from the social processes essential for the maintenance of both meaning and knowledge. Perspicuity is usually sacrificed when knowledge is represented in a formalism, with the result that the original problem is compounded with a second problem of penetrating the representation language. Formalisms that make business and legal problems easier to understand are one essential research goal, not only in the quest for intelligent machines to replace intelligent human beings, but also in the wiser quest for computers to support collaborative work and other forms of social problem solving.
Heidegger's reflections on grace culminate in the years 1949-54, when grace names a figure for the ineluctable exposure of existence. Heidegger rethinks the relationship between what exists and the world in which it is found as one that is always open to grace. For Heidegger, this world is what he terms the “dimension” between earth and sky. The relationship is only possible where existence is no longer construed as a self-contained presence but is instead thought as something between presence and absence. In this essay, Heidegger's references to grace in five contexts are considered: the 1949 Bremen lectures, the 1951 essay “... Poetically Dwells Man...,” the 1953 “Dialogue on Language,” the 1951 lecture on “Language,” and the 1954 speech at his nephew's ordination.
Against those who dismiss Kant's project in the "Religion" because it provides a Pelagian understanding of salvation, this paper offers an analysis of the deep structure of Kant's views on divine justice and grace, showing that they do not conflict with an authentically Christian understanding of these concepts. The first part of the paper argues that Kant's analysis of these concepts helps us to understand the necessary conditions of the Christian understanding of grace: unfolding them uncovers intrinsic relations holding between God's justice and grace. Parts two and three provide an analysis of two concepts of grace used by Kant. Getting clear on their differences is the key to understanding why Kant's account is not Pelagian.
This paper deals with the rationalist assumptions behind artificial intelligence (AI) research on the basis of Hubert Dreyfus's critique. Dreyfus is a leading American philosopher known for his rigorous critique of the underlying assumptions of the field of artificial intelligence. Artificial intelligence specialists, especially those whose view is commonly dubbed "classical AI," assume that creating a thinking machine like the human brain is not a far-off project, because they believe that human intelligence works on the basis of formalized rules of logic. In contradistinction to classical AI specialists, Dreyfus contends that it is impossible to create intelligent computer programs analogous to the human brain, because the workings of human intelligence are entirely different from those of computing machines. For Dreyfus, the human mind functions intuitively, not formally. Following Dreyfus, this paper aims to pinpoint the major flaws from which classical AI suffers. The author believes that pinpointing these flaws would inform inquiries on and about artificial intelligence. Over and beyond this, the paper contributes something indisputably original: it strongly argues that classical AI research programs have, though inadvertently, falsified the entire epistemological enterprise of the rationalists, not in theory, as philosophers do, but in practice. When AI workers were trying hard to produce a machine that can think like the human mind, they were in effect testing, up to the last point, the rationalist assumption that the workings of the human mind depend on logical rules. Result: no computer actually functions like the human mind. Reason: the human mind does not depend on the formal or logical rules ascribed to computers. Thus, symbolic AI research has falsified the rationalist assumption that 'the human mind reaches certainty by functioning formally' by virtue of its failure to create a thinking machine.
In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., "Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework," 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers's hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is an important claim, because if true it would demystify the privileged-access problem of first-person consciousness and cast it as an empirical problem of science and not a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument that precludes the implementation of H-consciousness in an organic or inorganic machine, provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process. That is, in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc's conclusion, while plausible and convincing, comes at a very high price: the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for his conclusion and offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.
Creativity has a special role in enabling humans to develop beyond the fulfilment of simple primary functions. This factor is significant for Artificial Intelligence (AI) developers who take replication to be the primary goal, since moves toward creating autonomous artificial beings raise questions about their potential for creativity. Using Wittgenstein’s remarks on rule-following and language-games, I argue that although some AI programs appear creative, to call these programmed acts creative in our terms is to misunderstand the use of this word in language. I conclude that replication is not the best way forward for AI development in matters of creativity.
Challenges of interpersonal harm for a theology of freedom and grace -- Karl Rahner's theological anthropology -- The role of freedom and grace in the construction of the human self -- The vulnerable self and loss of agency -- Trauma theory and the challenge to a Rahnerian theology of freedom and grace -- The fragmented self and constrained agency -- Feminist theories as correctives to a Rahnerian anthropology -- Response to the challenge -- Rahner's theology revisited -- Ethical directions -- Implications of a revised theology of freedom and grace.
Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements of human care, they demand that we clarify what is at stake, develop more comprehensive criteria for good care, and rethink existing practices of care. In response to these challenges, I propose a (modified) capabilities approach to care and emphasize the inherent social dimension of care. I also discuss the demand for real care by introducing the ‘Care Experience Machine’ thought experiment. I conclude that if we set the standards of care too high when evaluating the introduction of AI assistive technologies in health care, we have to reject many of our existing, low-tech health care practices.
In The Grace and the Severity of the Ideal, Victor Kestenbaum swims against the current of Dewey scholarship. He declares for and gives close articulation to the importance of transcendence in the philosophy of John Dewey. The guiding thread of the book is "the proposal that Dewey never outgrew his idealistic period. His philosophical achievement is not to be located in his naturalism but in the frontiers along which the natural and the transcendental touch" (137). Kestenbaum does not argue that Dewey defends a supernatural sense of transcendence; instead, he documents the modes of transcendence that, for Dewey, reveal themselves within the flow of experience. This is a learned and carefully developed book, one that will provoke pragmatists to think carefully about how growth, self-revision, and...
The introduction of results of AI and Law research into actual legal practice advances disturbingly slowly. One of the problems is that most research can be classified as either theoretical or pragmatic, while combinations of the two are scarce. This interferes with the need for feedback as well as with the need to secure support, both financial and from actual legal practice. The conclusion of this paper is that an emphasis on research that generates operational and sophisticated systems is necessary in order to provide a future for AI and Law.
Summary The controversy about the strong AI thesis was recently revived by two interrelated contributions, from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other. It is shown that the strong AI thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's 'proof' is not conclusive. Though it may be reconstructed in a conclusive manner, the modified proof is trivial. Beyond that, the most interesting aspect is formulated as an axiom that is not justified either. Therefore Searle's criticism of the strong AI thesis fails to be a convincing proof: it can be reduced to an unjustified presupposition.
The paper identifies some of the problems with legal systems and outlines the potential of AI technology for overcoming them. For expository purposes, this outline is based on a simplified epistemology of the primary functions of law. Social and philosophical impediments from the side of the legal community to taking advantage of the potential of this technology are discussed and strategic recommendations are given.
1. To be is to be-in-relation -- 2. Cosmic being as relation -- 3. Human being as relation -- 4. Divine being as relation -- 5. Divine and cosmic being in relation -- 6. Creation as relation in an evolving cosmos -- 7. Incarnation as relation in an evolving cosmos -- 8. Grace as relation in an evolving cosmos -- 9. Living in trinitarian relation.
This paper compares and contrasts three groups that conducted biological research at Yale University during overlapping periods between 1910 and 1970. Yale University proved important as a site for this research. The leaders of these groups were Ross Granville Harrison, Grace E. Pickford, and G. Evelyn Hutchinson, and their members included both graduate students and more experienced scientists. All produced innovative research, including the opening of new subfields in embryology, endocrinology and ecology respectively, over a long period of time. Harrison's is shown to have been a classic research school; Pickford's and Hutchinson's were not. Pickford's group was successful in spite of her lack of departmental or institutional position or power. Hutchinson and his graduate and post-graduate students were extremely productive but in diverse areas of ecology. His group did not have one focused area of research or use one set of research tools. The paper concludes that new models for research groups are needed, especially for those, like Hutchinson's, that included much field research.
For the first time in book format, the sociology of grace (or enchantment) is explained and explored in some detail. Grace is a central concept of theology, while the term also has a wide range of meanings in many fields. The results of this study are fascinating. The author's writings on this topic take the reader on an intriguing journey which traverses subjects ranging from theology, through the history of art, archaeology and mythology, to anthropology. As such, this volume will interest academics across a wide range of disciplines apart from sociology.
Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case of this sort of thing. One of the take-home messages will be that AI ought to redouble its efforts to understand concepts.
Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, and to date they have all been equally plausible and equally successful. And, just to be explicit, I do not mean “IQ” by “intelligence.” I am using “intelligence” the way AI uses it: as a semi-technical term referring to a general property of all intelligent systems, animal (including human) or machine alike.)
Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity’s arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are stopped, humanity -- you and I -- will soon be nothing but numbers and algorithms locked away on magnetic tape.
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of: the nature of AI; the potential social benefit of applied AI; the importance of basic AI research; the role of logic and the methodology of rational construction; the interplay of applied and basic AI research; and the importance of funding basic AI. These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.
This paper considers the impact of the AI R&D programme on human society and the individual human being, on the assumption that a full realisation of the engineering objective of AI, namely the construction of human-level, domain-independent intelligent entities, is possible. Our assumption is essentially identical to the maximum progress scenario of the Office of Technology Assessment, US Congress. Specifically, the first section introduces some of the significant issues in the relational nexus among work, education and the human-machine boundary. In particular, based on a Russellian conception of rationality, I briefly argue that we need to change our related conceptions of work, employment and free time, through a new human-centred education. On the human-machine boundary problem, I make a couple of tentative suggestions and put forward some crucial open questions. Section two discusses the impact of the emerging machine intelligence on human nature, both as a modification of its self-image, keeping human nature itself unchanged, and in its potential for altering human nature itself. I briefly argue that: (i) in a certain context, the question of the supremacy or uniqueness of human intelligence loses much, if not all, of its ‘weight’; and (ii) the appearance of a Robot-X species would immortalise the human spirit.
Twenty-five years ago, when AI & Society was launched, the emphasis was, and still is, on dehumanisation and the effects of technology on human life, including reliance on technology. What we forgot to take into account was another very great danger to humans. The pervasiveness of computer technology, without appropriate security safeguards, dehumanises us by allowing criminals to steal not just our money but also our confidential and private data at will. Also, denial-of-service attacks prevent us from accessing the information we need when we want it. We are being dehumanised not by the technology but by criminals who use the ubiquity of the technology and its lack of security to steal from us and prevent us from doing what we want. What is more interesting is that this malevolent use of the technology doesn’t come from monolithic corporate structures eager to control our lives but mainly from individuals keen to demonstrate their knowledge of the technology for social networking purposes. The aim of this paper is to turn the clock back 25 years and present an alternative perspective: the single biggest threat of dehumanisation is not the pervasiveness and ubiquity of computers but the failure to ensure that humans are provided with the basic security they need to use the technology safely and securely. Cyberspace is not a safe space to be. This was something that even far-sighted researcher colleagues in the 1970s and 1980s overlooked. The paper will explore where we went wrong 25 years ago in our predictions and concerns. We will also present a scenario that allows future generations to have a safer cyberworld.
There have been few attempts, so far, to document the history of artificial intelligence. It is argued that the “historical sociology of scientific knowledge” can provide a broad historiographical approach for the history of AI, particularly as it has proved fruitful within the history of science in recent years. The article shows how the sociology of knowledge can inform and enrich four types of project within the history of AI: organizational history; AI viewed as technology; AI viewed as cognitive science; and historical biography. In the latter area the historical treatments of Darwin and Turing are compared, to warn against the pitfalls of “rational reconstructions” of the past.
The industrial society in Japan is now entering into a new era of an advanced information society or a network society. AI as a knowledge information processing technology is becoming an integral part of the society. This emerging era is being supported by the information industry.
Theoretical commentaries on AI often operate as a metadiscourse on the way in which science represents itself to a wider public. The sciences and humanities do the same kind of work, but in different fields that encourage them to talk about their work differently: science refers to a natural world that does not talk back, and the humanities refer continually to a world with communicative people in it. This paper suggests that much AI commentary is misconceived because it models itself on the way that science represents itself, rather than on the actual practice of science. AI theorists have become increasingly worried about the lack of evaluation in AI, the lack of reflexivity, and the lack of contact with society. Frequently these writers turn to concepts of tacit knowledge to work through these worries. In doing so they are recognising the problem of AI's second-order representation of science and trying to deal with it. However, this recognition of a problem with the representations of science simply turns back to the legitimation crisis of Western politics, where many commentators use science precisely as a ‘model’ for Western political institutions. They do so because science is one of the few areas of knowledge where it has been legitimate to use a plausible methodology for representation that allows for arbitrary designations of authority as well as parallel systems of different authority. However, the plausible rejects any control on reflexivity, assumes an ethnocentric club culture and does not address social context. It is in this sense that the problems of legitimation in political liberalism are similar to those of legitimation in the sciences: both are rooted in their uses of representation. AI's link with the representation of science places it at the heart of this debate about legitimacy.
This paper suggests that AI does need to learn about reflexivity, and that it might well do so by looking at the recent work on experimentation and representation by historians of science, and by looking to the debates about representation within the humanities. However, reflexivity may not be enough. Devising rules of thumb for the appropriate halting of reflexivity is also needed, in order to address social context and take action.
The expression, ‘the culture of the artificial’ results from the confusion between nature and culture, when nature mingles with culture to produce the ‘artificial’ and science becomes ‘the science of the artificial’. Artificial intelligence can thus be defined as the ultimate expression of the crisis affecting the very foundation of the system of legitimacy in Western society, i.e. Reason, and more precisely, Scientific Reason. The discussion focuses on the emergence of the culture of the artificial and the radical forms of pragmatism, sophism and marketing from a French philosophical perspective. The paper suggests that in the postmodern age of the ‘the crisis of the systems of legitimacy’, the question of social acceptability of any action, especially actions arising out of the application of AI, cannot be avoided.
There is much interest in moving AI out into real world applications, a move which has been encouraged by recent funding which has attempted to show that industry and commerce can benefit from the Fifth Generation of computing. In this article I suggest that the legal application area is one which is very much more complex than it might — at first sight — seem. I use arguments from the sociology of law to indicate that viewing the legal system as simply a rule-bound discipline is inherently naïve. This, while not new in jurisprudence, is — as the literature of AI and law indicates — certainly novel to the field of artificial intelligence. The socio-legal argument provided is set within the context of AI as one more example of the failure of scientific success and method to transmit itself easily over into the social sciences.
In this paper I shall describe the symbolic search space paradigm which is the dominant model for most of AI. Coupled with the mechanisms of logic, it yields the predominant methodology underlying expert systems, which are the most successful application of AI technology to date. Human decision making, or more precisely expert human decision making, is the function that expert systems aspire to emulate, if not surpass. Expert systems technology has not yet proved to be a decisive success — it appears to fare better in some areas of human expertise than others. As a result, subdomains of human expertise are variously categorised, and we shall examine a few of the suggested classification schemes. A particular line of argument explored is one which maintains that certain types of human decision making, at least, are not adequately approximated by the symbolic search space paradigm of AI. Furthermore, attempts to project this inadequate model of human decision making via implementations of expert systems will be detrimental to both our image of ourselves and the future possibilities for AI software. Finally, we examine one possible route to the realization of AI, perhaps even practical applications of AI, that is a significant alternative to the model offered by the symbolic search space paradigm.
This article looks at the broadest implications of public acceptance of AI. A distinction is drawn between “conscious” belief in a technology, and “organic” belief where the technology is incorporated into an unconscious world model. The extent to which we feel threatened by AI's apparent denial of “spirit” is considered, along with a discussion of how people react to this threat. It is proposed that organic acceptance of AI models would lead to a rebirth of popular spiritual concepts as paradoxical as the “New Age” ideas that have their roots in the theories of physics. Finally the relevance of this speculation is discussed in terms of how it could impinge upon public acceptability of AI technology.
One of the most common misunderstandings in dealing with the world is the notion that you can do it piece-meal, that in understanding and shaping one part you can safely ignore the rest. One of the oldest wisdoms is the insight that in reality everything is knitted together, that to meddle with one part is always to meddle with the whole. AI as a social phenomenon is a good example for both findings. In trying to understand this new event in (...) the light of old counsel we get a better understanding not only of AI but of our society as well. (shrink)