The unified theory of dose and effect, as expressed by the median-effect equation for single and multiple entities and for first- and higher-order kinetics/dynamics, was established by T.C. Chou and is based on the physical/chemical principle of the mass-action law (J. Theor. Biol. 59: 253-276, 1976 (質量作用中效定理) and Pharmacological Rev. 58: 621-681, 2006) (普世中效指數定理). The theory was developed by the principle of mathematical induction and deduction (數學演繹歸納法). Rearrangements of the median-effect equation lead to the Michaelis-Menten, Hill, Scatchard, and Henderson-Hasselbalch equations. The “median” serves as the universal reference point and the “common link” for the relationship of all entities and is also the “harmonic mean” of kinetic dissociation constants. Over 300 mechanism-specific equations have been derived and published using the mathematical induction-deduction process. These equations can be reduced to several general equations, including the median-mediated whole/part equation, the combination index theorem, the isobologram equation, and the polygonogram. It is proven that “dose” and “effect” are interchangeable, and thus “substance” and “function” are interchangeable, which leads to “the unity theory” (劑效、心物、知行一元論) in quantitative mathematical philosophy (數學的定量哲學) in a functional context. Therefore, a general theory centered on the “median” and based on equilibrium dynamics has evolved. In other words: [「中」的宇宙觀： 以「中」爲基凖的動力學生態平衡]. Based on the median-effect equation of the mass-action law, the fundamental claim is that we can draw a specific curve from only two data points, if they are determined accurately. This claim has far-reaching consequences, since it defies the generally held belief that two points can determine only a straight line.
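For reference, the quantitative core described in this abstract can be sketched as follows. This is a reconstruction from the cited Chou papers; the symbols fa (fraction affected), fu (fraction unaffected), D (dose), Dm (median-effect dose), and m (kinetic order) follow standard usage in that literature and do not appear in the abstract itself.

```latex
% Median-effect equation of the mass-action law (Chou, 1976)
\[
  \frac{f_a}{f_u} = \left(\frac{D}{D_m}\right)^{m}, \qquad f_u = 1 - f_a
\]
% Logarithmic rearrangement: the median-effect plot is a straight line,
% so two accurately determined data points fix the slope m and the
% intercept (hence Dm), and with them the entire dose-effect curve
\[
  \log\!\left(\frac{f_a}{f_u}\right) = m \log D - m \log D_m
\]
% Combination index (CI) theorem for two drugs at a given effect level x:
% (D)_i is the dose of drug i in combination, (Dx)_i the dose of drug i
% alone producing effect x; CI < 1 synergism, CI = 1 additivity, CI > 1 antagonism
\[
  \mathrm{CI} = \frac{(D)_1}{(D_x)_1} + \frac{(D)_2}{(D_x)_2}
\]
```

On this reading, the “two points determine a curve” claim follows because the log-rearranged equation is linear: two points fix its slope and intercept, and the sigmoidal dose-effect curve is then fully recovered from m and Dm.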
Remarkably, the unity theory (一元論) provides a scientific/mathematical interpretation, in equations and in graphics, of ancient Chinese philosophy, including Fu-Si Ba Gua (伏羲八卦), Dao’s Harmony (和諧), the Confucian doctrine of the mean (儒家中庸之道), and Chou Dun-Yi’s (周敦頤, 1017-1073) From Wu-ji to Tai-ji and Taiji Tu Sho (無極而太極及太極圖說). The modern topological analysis for trinity yields an exact correspondence to the Ba-Gua, which was introduced over 4,000 years ago. Furthermore, the median-centered algorithm promotes modern ecological content (生態學) in the equilibrial dynamic state of harmony. It is concluded that Western science and Eastern philosophy are directly linked and complementary to each other. Since the truth in mathematical quantitative philosophy (數學的定量哲學) has no boundaries, East and West philosophies can flourish together toward the common goal and ideal in science and in humanity (世界大同).
Temporal binding via 40-Hz synchronization of neuronal discharges in sensory cortices has been hypothesized to be a necessary condition for the rapid selection of perceptually relevant information for further processing in working memory. Binocular rivalry experiments have shown that late stage visual processing associated with the recognition of a stimulus object is highly correlated with discharge rates in inferotemporal cortex. The hippocampus is the primary recipient of inferotemporal outputs and is known to be the substrate for the consolidation of working memories to long-term, episodic memories. The prefrontal cortex, on the other hand, is widely thought to mediate working memory processes, per se. This article reviews accumulated evidence for the role of a subcortical matrix in linking frontal and hippocampal systems to select and “stream” conscious episodes across time (hundreds of milliseconds to several seconds). “Streaming” is hypothesized to be mediated by the selective gating of reentrant flows of information between these cortical systems and the subcortical matrix. The physiological mechanism proposed for this temporally extended form of binding is synchronous oscillations in the slower EEG spectrum (< 8 Hz).
In the metaphor of behavioral momentum, the rate of a free operant in the presence of a discriminative stimulus is analogous to the velocity of a moving body, and resistance to change measures an aspect of behavior that is analogous to its inertial mass. An extension of the metaphor suggests that preference measures an analog to the gravitational mass of that body. The independent functions relating resistance to change and preference to the conditions of reinforcement may be construed as convergent measures of a single construct, analogous to physical mass, that represents the effects of a history of exposure to the signaled conditions of reinforcement and that unifies the traditionally separate notions of the strength of learning and the value of incentives. Research guided by the momentum metaphor encompasses the effects of reinforcement on response rate, resistance to change, and preference and has implications for clinical interventions, drug addiction, and self-control. In addition, its principles can be seen as a modern, quantitative version of Thorndike's (1911) Law of Effect, providing a new perspective on some of the challenges to his postulation of strengthening by reinforcement. Key Words: behavioral momentum; clinical interventions; drug addiction; preference; reinforcement; resistance to change; response strength; self-control.
In reply to the comments on our target article, we address a variety of issues concerning the generality of our major findings, their relation to other theoretical formulations, and the metaphor of behavioral momentum that inspired much of our work. Most of these issues can be resolved by empirical studies, and we hope that the ideas advanced here will promote the analysis of resistance to change and preference in new areas of research and application.
Recently, religious organisations, governments and public institutions have begun to offer apologies for historical wrongs. Can they legitimately do so? Departing from this tendency, Professor Hubert Markl, President of the Max Planck Society, has offered strong reasons for not apologising for the crimes of medical scientists who experimented on human subjects during the Nazi era. He argues that only the perpetrators can meaningfully apologise. Markl’s position is considered and rejected in favour of the view that apologies by proxy for historical wrongs are justifiable and should be made by institutions that have the authority to do so.
The constructs of behavioral mass in research on the momentum of operant behavior and associative strength in Pavlovian conditioning have some interesting parallels, as suggested by Savastano & Miller. Some recent findings challenge the strict separation of operant and Pavlovian determiners of response rate and resistance to change in behavioral momentum, renewing the need for research on the interaction of processes that have traditionally been studied separately. Relatedly, Furedy notes that some autonomic responses may be refractory to conditioning, but a combination of operant contingencies and enriched Pavlovian stimulus-reinforcer relations may prove effective.
We have shown that FEF lesion-induced extinction could be compensated for by changing the relative temporal onsets of two targets presented on either side of the midline. Monkeys were trained to make saccades to either of two identical visual stimuli presented with various stimulus onset asynchronies (SOA). In intact animals the targets were chosen with equal probability when they appeared simultaneously. After unilateral FEF lesions an SOA of 67–116 msec had to be introduced, with the contralesional target appearing first, to obtain equal probability choice. With a smaller target separation, averaging saccades occurred with highest frequency at similar SOAs. Our findings suggest that neglect may be attributable to more time being required in the damaged hemisphere for converting sensory information into motor responses.
Altruism can be understood in terms of traditional principles of reinforcement if an outcome that is beneficial to another person reinforces the behavior of the actor who produces it. This account depends on a generalization of reinforcement across persons and might be more amenable to experimental investigation than the one proposed by Rachlin.
The predictive validity of the ultimatum game (UG) for cross-cultural differences in real-world behavior has not yet been established. We discuss results of a recent meta-analysis (Oosterbeek et al 2004), which examined UG behavior across large-scale societies and found that the mean percent offers rejected was positively correlated with social expenditure.
Several authors have characterized a striking phenomenon of perceptual learning in visual discrimination tasks. This learning process is selective for the stimulus characteristics and location in the visual field. Since the human visual system exploits symmetry for object recognition we were interested in exploring how it learns to use preattentive symmetry cues for discriminating simple, meaningless forms. In this study, similar to previous studies of perceptual learning, we asked whether the effects of practice acquired in the discrimination of pairs of shapes with a specific orientation of the symmetry axis would transfer to the discrimination of shapes with a different orientation of the symmetry axis, or to shapes presented in different areas of the visual field. We found that there was no learning transfer between forms with very different axes of symmetry (90° apart). Interestingly, however, we found a transfer of learning effect to a horizontally oriented symmetry axis from a condition with an axis of symmetry differing by 45°. Also it appears that some subjects took a longer time to learn than the typical fast learning paradigm would predict. Data showed that when observers practice discrimination of meaningless symmetric forms, consistent improvement in the performance occurs. This improvement lasts over days, and it tends to be specific for the area of the visual field trained. We will discuss results from some of the observers whose learning was not fast, but who actually improved with more practice and with large time intervals (1 day) between training sessions.
How should business deal with society's increasing demands for ethical and social responsibility? In plain language this book considers these and other ethical questions of direct relevance to business in the 1990s. It discusses the nature of ethics, ethical reasoning, the use of stakeholder analysis, and other central concepts used in business ethics. Using mainly, but not exclusively, Australian cases and specific examples, the book covers issues such as fairness in business dealings, advertising ethics, discrimination, and codes of ethics.
This book sets out in plain language ethical questions of direct relevance to business today. This new edition expands the range of issues covered and includes a chapter on international business ethics, drawing extensively from Asian examples.
We make two major comments. First, negative reinforcement contingencies may generate some apparent “drug-like” aspects of money motivation, and the operant account, properly construed, is both a tool and drug theory. Second, according to Lea & Webley (L&W), one might expect that “near-money,” such as frequent-flyer miles, should have a stronger drug and a weaker tool aspect than regular money. Available evidence agrees with this prediction. (Published Online April 5 2006).
Jean Baudrillard is a pivotal figure in contemporary cultural theory. Without doubt one of the foremost European thinkers of the last fifty years, his work has provoked debate and controversy across a number of disciplines, yet his significance has so far been largely ignored by feminist theorists.
Challenges of interpersonal harm for a theology of freedom and grace -- Karl Rahner's theological anthropology -- The role of freedom and grace in the construction of the human self -- The vulnerable self and loss of agency -- Trauma theory and the challenge to a Rahnerian theology of freedom and grace -- The fragmented self and constrained agency -- Feminist theories as correctives to a Rahnerian anthropology -- Response to the challenge -- Rahner's theology revisited -- Ethical directions -- Implications of a revised theology of freedom and grace.
In The Grace and the Severity of the Ideal, Victor Kestenbaum swims against the current of Dewey scholarship. He declares for and gives close articulation to the importance of transcendence in the philosophy of John Dewey. The guiding thread of the book is "the proposal that Dewey never outgrew his idealistic period. His philosophical achievement is not to be located in his naturalism but in the frontiers along which the natural and the transcendental touch" (137). Kestenbaum does not argue that Dewey defends a supernatural sense of transcendence; instead, he documents the modes of transcendence that, for Dewey, reveal themselves within the flow of experience. This is a learned and carefully developed book, one that will provoke pragmatists to think carefully about how growth, self-revision, and...
1. To be is to be-in-relation -- 2. Cosmic being as relation -- 3. Human being as relation -- 4. Divine being as relation -- 5. Divine and cosmic being in relation -- 6. Creation as relation in an evolving cosmos -- 7. Incarnation as relation in an evolving cosmos -- 8. Grace as relation in an evolving cosmos -- 9. Living in trinitarian relation.
For the first time in book format, the sociology of grace (or enchantment) is explained and explored in some detail. Grace is a central concept of theology, while the term also has a wide range of meanings in many fields. The results of this study are fascinating. The author's writings on this topic take the reader on an intriguing journey which traverses subjects ranging from theology, through the history of art, archaeology and mythology, to anthropology. As such, this volume will interest academics across a wide range of disciplines apart from sociology.
Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And, we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case of this sort of thing. One of the take-home messages will be that AI ought to redouble its efforts to understand concepts.
Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, and to date they have all been equally plausible and equally successful. And, just to be explicit, I do not mean “IQ” by “intelligence.” I am using “intelligence” in the way AI uses it: as a semi-technical term referring to a general property of all intelligent systems, animal (including humans) or machine alike.)
Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity’s arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are stopped, humanity -- you and I -- will soon be nothing but numbers and algorithms locked away on magnetic tape.
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of: the nature of AI; the potential social benefit of applied AI; the importance of basic AI research; the role of logic and the methodology of rational construction; the interplay of applied and basic AI research; and the importance of funding basic AI. These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.
This paper considers the impact of the AI R&D programme on human society and the individual human being on the assumption that a full realisation of the engineering objective of AI, namely, construction of human-level, domain-independent intelligent entities, is possible. Our assumption is essentially identical to the maximum progress scenario of the Office of Technology Assessment, US Congress.
25 years ago, when AI & Society was launched, the emphasis was, and still is, on dehumanisation and the effects of technology on human life, including reliance on technology. What we forgot to take into account was another very great danger to humans. The pervasiveness of computer technology, without appropriate security safeguards, dehumanises us by allowing criminals to steal not just our money but also our confidential and private data at will. Also, denial-of-service attacks prevent us from accessing the information we need when we want it. We are being dehumanised not by the technology but by criminals who use the ubiquity of the technology and its lack of security to steal from us and prevent us from doing what we want. What is more interesting is that this malevolent use of the technology doesn’t come from monolithic corporate structures eager to control our lives but mainly from individuals keen to demonstrate their knowledge of the technology for social networking purposes. The aim of this paper is to turn the clock back 25 years and present an alternative perspective: the single, biggest threat of dehumanisation is not the pervasiveness and ubiquity of computers but the lack of ensuring that humans are provided with the basic security they need for using the technology safely and securely. Cyberspace is not a safe space to be. This was something that even far-sighted researcher colleagues in the 1970s and 1980s overlooked. The paper will explore where we went wrong 25 years ago in our predictions and concerns. We will also present a scenario that allows future generations to have a safer cyberworld.
In its forty years of existence, Artificial Intelligence has suffered both from the exaggerated claims of those who saw it as the definitive solution of an ancestral dream, that of constructing an intelligent machine, and from its detractors, who described it as the latest fad worthy of quacks. Yet AI is still alive, well and blossoming, and has left a legacy of tools and applications almost unequalled by any other field, probably because, as the heir of Renaissance thought, it represents a possible bridge between the humanities and the natural sciences, philosophy and neurophysiology, psychology and integrated circuits, including systems that today are taken for granted, such as the computer interface with mouse pointer and windows. This writing describes a few results of AI that have modified the scientific world, as well as the way a layman sees computers: the technology of programming languages, such as LISP (witness the unique excellence of the academic departments that have contributed to them); the computing workstations, of which our modern PC is but a vulgarised descendant; the applications to the educational field, e.g., the realisation of some ideas of genetic epistemology; interdisciplinary philosophy, such as Hofstadter's associations between the arts and mathematics; and the use of AI techniques in music and musicology. All this has led to a generalisation of AI towards Negrotti's overall Theory of the Artificial, which encompasses further specialisations such as artificial reality, artificial life, and applications of neural networks, among others.
The industrial society in Japan is now entering into a new era of an advanced information society or a network society. AI as a knowledge information processing technology is becoming an integral part of the society. This emerging era is being supported by the information industry.
John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the implementation of them cannot suffice for semantics. Perhaps our world is a world in which any implementation of the right computer program will create a system with intrinsic intentionality, in which case Searle’s Chinese Room Scenario is empirically (nomically) impossible. But, indeed, perhaps our world is a world in which Searle’s Chinese Room Scenario is empirically (nomically) possible and the silicon basis of modern-day computers is one kind of material unsuited to give you intrinsic intentionality. The metaphysical question turns out to be a question of what kind of world we are in, and I argue that in this respect we do not know our modal address. The Modal Address Argument does not ensure that strong AI will succeed, but it shows that Searle’s challenge to the research program of strong AI fails in its objectives.
I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new “essentialist” reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.
Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest an alternative account which, however, cannot play a role in a Searle-type argument, and argue that Searle gives no good reason for favoring his account, which allows the abstract argument to work, over the alternative, which doesn't. This response to Searle's abstract argument also, incidentally, enables the Robot Reply to the Chinese Room to defend itself against objections Searle makes to it.
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents, which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents, though not for conscious agents.
Heidegger's reflections on grace culminate in the years 1949-54, where grace names a figure for the ineluctable exposure of existence. Heidegger rethinks the relationship between what exists and the world in which it is found as one that is always open to grace. For Heidegger, this world is what he terms the “dimension” between earth and sky. The relationship is only possible where existence is no longer construed as a self-contained presence but instead is thought as something between presence and absence. In this essay, Heidegger's references to grace in five contexts are considered: the 1949 Bremen lectures, the 1951 essay “... Poetically Dwells Man...,” the 1953 “Dialogue on Language,” the 1951 lecture on “Language,” and the 1954 speech at his nephew's ordination.
I analyze the frame problem and its relation to other epistemological problems for artificial intelligence, such as the problem of induction, the qualification problem and the "general" AI problem. I dispute the claim that extensions to logic (default logic and circumscriptive logic) will ever offer a viable way out of the problem. In the discussion it will become clear that the original frame problem is really a fairy tale: as originally presented, and as tools for its solution are circumscribed by Pat Hayes, the problem is entertaining, but incapable of resolution. The solution to the frame problem becomes available, and even apparent, when we remove artificial restrictions on its treatment and understand the interrelation between the frame problem and the many other problems for artificial epistemology. I present the solution to the frame problem: an adequate theory and method for the machine induction of causal structure. Whereas this solution is clearly satisfactory in principle, and in practice real progress has been made in recent years in its application, its ultimate implementation is in prospect only for future generations of AI researchers.
This paper deals with the rationalist assumptions behind research on artificial intelligence (AI) on the basis of Hubert Dreyfus’s critique. Dreyfus is a leading American philosopher known for his rigorous critique of the underlying assumptions of the field of artificial intelligence. Artificial intelligence specialists, especially those whose view is commonly dubbed “classical AI,” assume that creating a thinking machine like the human brain is not a far-off project because they believe that human intelligence works on the basis of formalized rules of logic. In contradistinction to classical AI specialists, Dreyfus contends that it is impossible to create intelligent computer programs analogous to the human brain because the workings of human intelligence are entirely different from those of computing machines. For Dreyfus, the human mind functions intuitively and not formally. Following Dreyfus, this paper aims to pinpoint the major flaws classical AI suffers from. The author of this paper believes that pinpointing these flaws would inform inquiries on and about artificial intelligence. Over and beyond this, this paper contributes something indisputably original. It strongly argues that classical AI research programs have, though inadvertently, falsified an entire epistemological enterprise of the rationalists not in theory, as philosophers do, but in practice. When AI workers were trying hard to produce a machine that can think like human minds, they have in a way been testing—and testing it up to the last point—the rationalist assumption that the workings of the human mind depend on logical rules. Result: no computers actually function like the human mind. Reason: the human mind does not depend on the formal or logical rules ascribed to computers. Thus, symbolic AI research has falsified the rationalist assumption that ‘the human mind reaches certainty by functioning formally’ by virtue of its failure to create a thinking machine.
Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In later years, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and BIO-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers’s hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is an important claim because if true it would demystify the privileged access problem of first-person consciousness and cast it as an empirical problem of science and not a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument that precludes the implementation of H-consciousness in an organic or inorganic machine provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process. That is, in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc’s conclusion, while plausible and convincing, comes at a very high price; the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for his conclusion and offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.
Against those who dismiss Kant's project in the "Religion" because it provides a Pelagian understanding of salvation, this paper offers an analysis of the deep structure of Kant's views on divine justice and grace, showing them not to conflict with an authentically Christian understanding of these concepts. The first part of the paper argues that Kant's analysis of these concepts helps us to understand the necessary conditions of the Christian understanding of grace: unfolding them uncovers intrinsic relations holding between God's justice and grace. Parts two and three provide an analysis of two concepts of grace used by Kant. Getting clear on their differences is the key to understanding why Kant's account is not Pelagian.
Creativity has a special role in enabling humans to develop beyond the fulfilment of simple primary functions. This factor is significant for Artificial Intelligence (AI) developers who take replication to be the primary goal, since moves toward creating autonomous artificial beings raise questions about their potential for creativity. Using Wittgenstein’s remarks on rule-following and language-games, I argue that although some AI programs appear creative, to call these programmed acts creative in our terms is to misunderstand the use of this word in language. I conclude that replication is not the best way forward for AI development in matters of creativity.
Searle (1980) constructed the Chinese Room (CR) to argue against what he called "Strong AI": the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to Searle. In particular, I use the insight of the English Reply to contend that Searle's Chinese Room cannot argue against what I call the claim of "Weak Strong AI": there are some cases of understanding that a computer can achieve solely by virtue of that computer running a program. I refute several objections to my defense of the Weak Strong AI thesis.
How is it possible for a physical thing--a person, an animal, a robot--to extract knowledge of the world from perception and then exploit that knowledge in the guidance of successful action? That is a question with which philosophers have grappled for generations, but it could also be taken to be one of the defining questions of Artificial Intelligence. AI is, in large measure, philosophy. It is often directly concerned with instantly recognizable philosophical questions: What is mind? What is meaning? What is reasoning, and rationality? What are the necessary conditions for the recognition of objects in perception? How are decisions made and justified?
R. M. Adams’s essay, “Must God Create the Best?” can be interpreted as offering a theodicy for God’s creating morally less perfect beings than he could have created. By creating these morally less perfect beings, God is bestowing grace upon them, which is an unmerited or undeserved benefit. He does so, however, in advance of the free moral misdeeds that render them undeserving. This requires that God have middle knowledge, pace Adams’s version of the Free Will Theodicy, of what would result from his actualization of possible free persons. It is argued that God’s possession of such middle knowledge negates the freedom of created beings, since God completely determines every action of every created person. And since they are not free, they cannot qualify as morally unmeritorious or undeserving. And, with that, Adams’s theodicy of grace-in-advance collapses.
In this article, I present a software architecture for intelligent agents. The essence of AI is complex information processing. It is impossible, in principle, to process complex information as a whole. We need some partial processing strategy that is still somehow connected to the whole. We also need flexible processing that can adapt to changes in the environment. One of the candidates for both of these is situated reasoning, which makes use of the fact that an agent is in a situation, so it only processes some of the information – the part that is relevant to that situation. The combination of situated reasoning and context reflection leads to the idea of organic programming, which introduces a new building block of programs called a cell. Cells contain situated programs and the combination of cells is controlled by those programs.
For many people, consciousness is one of the defining characteristics of mental states. Thus, it is quite surprising that consciousness has, until quite recently, had very little role to play in the cognitive sciences. Three very popular multi-authored overviews of cognitive science, Stillings et al., Posner, and Osherson et al., do not have a single reference to consciousness in their indexes. One reason this seems surprising is that the cognitive revolution was, in large part, a repudiation of behaviorism's proscription against appealing to inner mental events. When researchers turned to consider inner mental events, one might have expected them to turn to conscious states of mind. But in fact the appeals were to postulated inner events of information processing. The model for many researchers of such information processing is the kind of transformation of symbolic structures that occurs in a digital computer. By positing procedures for performing such transformation of incoming information, cognitive scientists could hope to account for the performance of cognitive agents. Artificial intelligence, as a central discipline of cognitive science, has seemed to impose some of the toughest tests on the ability to develop information processing accounts of cognition: it required its researchers to develop running programs whose performance one could compare with that of our usual standard for cognitive agents, human beings. As a result of this focus, for AI researchers to succeed, at least in their primary task, they did not need to attend to consciousness; they simply had to design programs that behaved appropriately (no small task in itself!). This is not to say that consciousness was totally ignored by artificial intelligence researchers. Some aspects of our conscious experience seemed critical to the success of any information processing model. For example, conscious agents exhibit selective attention.
Some information received through their senses is attended to; much else is ignored.
Within AI and the cognitively related disciplines, there exist a multiplicity of uses of belief. On the face of it, these differing uses reflect differing views about the nature of an objective phenomenon called belief. In this paper I distinguish six distinct ways in which belief is used in AI. I shall argue that not all these uses reflect a difference of opinion about an objective feature of reality. Rather, in some cases, the differing uses reflect differing concerns with special AI applications. In other cases, however, genuine differences exist about the nature of what we pre-theoretically call belief. To an extent the multiplicity of opinions about, and uses of, belief echoes the discrepant motivations of AI researchers. The relevance of this discussion for cognitive scientists and philosophers arises from the fact that (a) many regard theoretical research within AI as a branch of cognitive science, and (b) even if theoretical AI is not cognitive science, trends within AI influence theories developed within cognitive science. It should be beneficial, therefore, to unravel the distinct uses and motivations surrounding belief, in order to discover which usages merely reflect differing pragmatic concerns, and which usages genuinely reflect divergent views about reality.
A computational theory of induction must be able to identify the projectible predicates, that is, to distinguish the predicates that can be used in inductive inferences from those that cannot. The problems of projectibility are introduced by reviewing some of the stumbling blocks for the theory of induction that was developed by the logical empiricists. My diagnosis of these problems is that the traditional theory of induction, which started from a given (observational) language in relation to which all inductive rules are formulated, does not go deep enough in representing the kind of information used in inductive inferences. As an interlude, I argue that the problem of induction, like so many other problems within AI, is a problem of knowledge representation. To the extent that AI systems are based on linguistic representations of knowledge, these systems will face basically the same problems as did the logical empiricists over induction. In a more constructive mode, I then outline a non-linguistic knowledge representation based on conceptual spaces. The fundamental units of these spaces are "quality dimensions". In relation to such a representation it is possible to define "natural" properties which can be used for inductive projections. I argue that this approach evades most of the traditional problems.
Reasoning about causation in fact is an essential element of attributing legal responsibility. Therefore, the automation of the attribution of legal responsibility requires a modelling effort aimed at the following: a thorough understanding of the relation between the legal concepts of responsibility and of causation in fact; a thorough understanding of the relation between causation in fact and the common sense concept of causation; and, finally, the specification of an ontology of the concepts that are minimally required for (automatic) common sense reasoning about causation. This article offers a worked-out example of the indicated analysis. This example consists of: a definition of the legal concept of responsibility (in terms of liability and accountability); a definition of the legal concept of causation in fact (in terms of the initiation of physical processes by an agent and of the provision of reasons and/or opportunities to other agents); and CausatiOnt, an AI-like ontology of the common sense (causal) concepts that are minimally needed for reasoning about the legal concept of causation in fact (in particular, the concepts of category, dimension, object, agent, process, event and act).
Gravity and Grace was the first-ever publication by the remarkable thinker and activist Simone Weil. In it Gustave Thibon, the philosopher to whom she had entrusted her notebooks before her untimely death, compiled in one volume a compendium of her writings that has become a source of spiritual guidance and wisdom for countless individuals.
We should eventually understand how exactly first-person phenomenal consciousness is generated. When we do, we should be able to engineer one for robots. This is the engineering thesis in machine consciousness.
Probability plays an essential role in many branches of AI, where it is typically assumed that we have a complete probability distribution when addressing a problem. But this is unrealistic for problems of real-world complexity. Statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability — nomic probability — there are principles of “probable probabilities” that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r,s:a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge) then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r,s:a).
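The estimate described in the preceding abstract can be made concrete. A minimal sketch, assuming Q and R are statistically independent conditional on P and on not-P (the independence assumption the abstract defends): under that assumption, Bayes' theorem yields a closed form for prob(P/Q&R) from r = prob(P/Q), s = prob(P/R) and a = prob(P/U). The function name `y` and the closed form below are derived from that assumption, not taken from the paper, so whether this is exactly Pollock's Y(r,s:a) should be checked against his definition.

```python
# Sketch: estimate prob(P/Q&R) from r = prob(P/Q), s = prob(P/R), and the
# base rate a = prob(P/U), ASSUMING Q and R are independent conditional on
# P and on not-P. The closed form follows from Bayes' theorem under that
# assumption; it is illustrative, not necessarily Pollock's exact Y(r,s:a).

def y(r: float, s: float, a: float) -> float:
    """Estimate prob(P/Q&R) under the conditional-independence assumption."""
    num = r * s * (1 - a)
    return num / (num + a * (1 - r) * (1 - s))

# If neither Q nor R moves P off its base rate, neither does their conjunction:
print(y(0.5, 0.5, 0.5))  # 0.5
# If each of Q and R raises P above the base rate, the conjunction raises it further:
print(y(0.7, 0.8, 0.5))
```

Real code would guard against the degenerate case where the denominator vanishes (e.g. r = s = 0 with a = 0).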
The nihilists are right, admits philosopher Loyal Rue. The universe is blind and aimless, indifferent to us and void of meaning. There are no absolute truths and no objective values. There is no right or wrong way to live, only alternative ways. There is no correct reading of a text or a picture or a dance. God is dead, nihilism reigns. But, Rue adds, nihilism is a truth inconsistent with personal happiness and social coherence. What we need instead is a new myth, a noble lie. Only a noble lie can save us from the psychological and social chaos now threatened by the spread of skepticism about the meaning of life and the universe. In By the Grace of Guile, Loyal Rue offers a wide-ranging look at the importance of deception in nature and in human society, concluding with an argument for a noble lie to replace the religious beliefs rejected by modern thought. Most of the book is a provocative apology for deception, illuminating its role in the shaping of history, evolution, personality, and society. Ranging from the Bible and Greek philosophy, to Saint Augustine and Montaigne, to Galileo, Kierkegaard, and Freud, Rue shows that it may be more accurate to describe the history of our culture as a flight from deception than as a quest for truth. He turns then to the natural world to reveal how deception works at every level of life, ranging from plants that mimic dung, carrion, or prey to lure insects that then spread pollen, to a remarkable African insect (Acanthaspis petax) that bedecks itself with dead ants and enters the ant colony undetected to binge at will.
Moreover, he points out that psychological research has shown that strategies of deception and self-deception are essential to our personal well-being, that we sometimes shore up our self-esteem by deceptive means, by leaving others in a state of ignorance, by manipulating others into a state of false belief, by suppressing information from consciousness, and by fabricating or distorting our own sense of reality. And he argues that social coherence is achievable only within certain optimal limits of deception--the social fabric would be threatened by an overabundance of lies and false promises, of course, but it would also collapse if everyone were perfectly honest all the time. Finally, he argues that society is caught up in a Kulturkampf with nihilists promoting intellectual and moral relativism and realists defending objective and universal truths. The noble lie, says Rue, would introduce a third voice, one which first agrees with the nihilists that universal myths are pretentious lies, but then insists, against the nihilists, that without such lies humanity cannot survive. The challenge, he concludes, is ultimately an aesthetic one: it remains for the artists, poets, novelists, musicians, filmmakers, and other masters of illusion to seduce us into an embrace with a noble lie. We need a new myth that tells us where we have come from, what our nature is, and how we should live together--a story with the courage and presumption to say how things really are and what really matters.
I propose a semi-eliminative reduction of Fodor's concept of module to the concept of attractor basin which is used in Cognitive Dynamic Systems Theory (DST). I show how attractor basins perform the same explanatory function as modules in several DST-based research programs. Attractor basins in some organic dynamic systems have even been able to perform cognitive functions which are equivalent to the If/Then/Else loop in the computer language LISP. I suggest directions for future research programs which could find similar equivalencies between organic dynamic systems and other cognitive functions. This type of research could help us discover how (and/or if) it is possible to use Dynamic Systems Theory to more accurately model the cognitive functions that are now being modeled by subroutines in Symbolic AI computer models. If such a reduction of subroutines to basins of attraction is possible, it could free AI from the limitations that prompted Fodor to say that it was impossible to model certain higher-level cognitive functions.
Artificial Intelligence (AI) is a core area of Cognitive Science, yet today few AI researchers attend the Cognitive Science Society meetings. This essay examines why, how AI has changed over the last 30 years, and some emerging areas of potential interest where AI and the Society can go together in the next 30 years, if they choose.
It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium AI is not an empirical science at all. Depending on how artificial computing systems are categorized, it is either an a priori science like mathematics, or a branch of engineering.
Concern over the nature of AI is, for the tastes of many AI scientists, probably overdone. In this they are like all other scientists. Working scientists worry about experiments, data, and theories, not foundational issues such as what their work is really about or whether their discipline is methodologically healthy. However, most scientists aren't in a field that is approximately fifty years old. Even relatively new fields such as nonlinear dynamics or branches of biochemistry are in fact advances in older established sciences and are therefore much more settled. Of course, by stretching things, AI can be said to have a history reaching back to Charles Babbage, and possibly back beyond that to Leibniz. However, all of that is best viewed as prelude. AI's history is punctuated with the invention of the computer (and, if one wants to stretch our history back to the 1930s, the development of the notion of computation by Turing, Church, and others). Hence, AI really began (or began in earnest) sometime in the late 1940s or early 1950s (some mark the conference at Dartmouth in the summer of 1956 as the moment of our birth). And since those years we simply have not had time to settle into a routine science attacking reasonably well understood questions (for example, many of the questions some of us regard as supreme are regarded by others as inconsequential or mere excursions).
Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements of human care, they demand that we clarify what is at stake, develop more comprehensive criteria for good care, and rethink existing practices of care. In response to these challenges, I propose a (modified) capabilities approach to care and emphasize the inherent social dimension of care. I also discuss the demand for real care by introducing the ‘Care Experience Machine’ thought experiment. I conclude that if we set the standards of care too high when evaluating the introduction of AI assistive technologies in health care, we have to reject many of our existing, low-tech health care practices.
AI is about a "robot" boy who is "programmed" to love his adoptive human mother but is discriminated against because he is just a robot. I put both "robot" and "programmed" in scare quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)
This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair artificial/natural, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy.
Kant’s discussion of radical evil and moral regeneration in Religion Within the Bounds of Reason Alone raises numerous moral and metaphysical problems. If the ground of one’s disposition does not lie in time, as Kant argues, how can it be reformed, as the moral law commands? If divine aid is necessary for this impossible reformation, how does this not destroy a person’s moral personality by bypassing her freedom? This paper argues that these problems can be resolved by showing how Kant can conceive the moral law itself as a kind of grace which, willed properly, makes moral regeneration possible without destroying the autonomy of the individual.
C I Lewis showed up Down Under in 2005, in e-mails initiated by Allen Hazen of Melbourne. Their topic was the system Hazen called FL (a Funny Logic), axiomatized in passing in Lewis 1921. I show that FL is the system MEN of material equivalence with negation. But negation plays no special role in MEN. Symbolizing equivalence with → and defining ∼A inferentially as A→f, the theorems of MEN are just those of the underlying theory ME of pure material equivalence. This accords with the treatment of negation in the Abelian l-group logic A of Meyer and Slaney (Abelian logic. Abstract, Journal of Symbolic Logic 46, 425–426, 1981), which also defines ∼A inferentially with no special conditions on f. The paper then concentrates on the pure implicational part AI of A, the simple logic of Abelian groups. The integers Z were known to be characteristic for AI, with every non-theorem B refutable mod some Zn for finite n. Noted here is that AI is pre-tabular, having the Scroggs property that every proper extension SI of AI, closed under substitution and detachment, has some finite Zn as its characteristic matrix. In particular FL is the extension for which n = 2 (Lewis, The structure of logic and its relation to other systems. The Journal of Philosophy 18, 505–516, 1921; Meyer and Slaney, Abelian logic. Abstract. Journal of Symbolic Logic 46, 425–426, 1981; this is an abstract of the much longer paper finally published in 1989 in G. Priest, R. Routley and J. Norman, eds., Paraconsistent Logic: Essays on the Inconsistent, Philosophia Verlag, Munich, pp. 245–288).
Eleonore Stump has recently articulated an account of grace which is neither deterministic nor Pelagian. Drawing on resources from Aquinas’s moral psychology, Stump’s account of grace affords the quiescence of the will a significant role in an individual’s coming to saving faith. In the present paper, I first outline Stump’s account and then raise a worry for that account. I conclude by suggesting a metaphysic that provides a way of resolving this worry. The resulting view allows one to maintain both (i) that divine grace is the efficient cause of saving faith and (ii) that humans control whether or not they come to saving faith.
Rowe argues that if for every good world there is a better, then God is not morally perfect since no matter what world God were to create he could have done better than he did. I contend that Rowe’s argument doesn’t do justice to the role grace plays in the theist’s doctrine of creation, and respond to five new criticisms of my position that Rowe offers in Can God be Free?
It is widely held that the methods of AI are the appropriate methods for cognitive science. Fodor, however, has argued that AI bears the same relation to psychology as Disneyland does to physics. This claim is examined in light of the widespread but paradoxical acceptance of the Turing Test--a behavioral criterion of intelligence--among advocates of cognitivism. It is argued that, given the recalcitrance of certain deep conceptual problems in psychology, and disagreements concerning psychology's basic vocabulary, it is unlikely that AI will prove to be very psychologically enlightening until after some consensus on ontological issues in psychology is achieved.
"An Example for Natural Language Understanding and the AI Problems It Raises." I think this 1976 memorandum is of 1996 interest. The problems it raises haven't been solved or even substantially reformulated.
The management of ethics within organisations typically occurs within a problem-solving frame of reference. This often results in a reactive, problem-based and externally induced approach to managing ethics. Although basing ethics management interventions on dealing with and preventing current and possible future unethical behaviour is often effective in that it ensures compliance with rules and regulations, the approach is not necessarily conducive to the creation of sustained ethical cultures. Nor does the approach afford (mainly internal) stakeholders the opportunity to be co-designers of the organisation's ethical future. The aim of this paper is to present Appreciative Inquiry (AI) as an alternative approach for developing a shared meaning of ethics within an organisation, with a view to embracing and entrenching ethics, thereby creating a foundation for the development of an ethical culture over time. A descriptive case study based on an application of AI is used to illustrate the utility of AI as a way of thinking and doing to precede and complement problem-based ethics management systems and interventions.
This essay investigates Bonaventure’s account of the original state of human nature and his reasons for holding the theory that God created human beings without grace in an actual, historical moment. Bonaventure argues that positing a historical moment before grace is more congruent with the divine order, precisely because it emphasizes the distinction between nature and grace and delays the conferral of grace until man’s desire is elicited and his willingness to cooperate in the divine plan made clear. Bonaventure incorporates Aristotle’s teleological view of nature into his thought while managing to avoid a view of nature as autonomous. He grounds nature’s heteronomy in the exigencies of natural desires, which dispose our nature to remain radically and intrinsically orderable to a good that transcends those natural powers (albeit not actually so ordered). Bonaventure’s theory thus affirms the integrity of nature, while also emphasizing the total gratuity of grace. He thinks human nature is suspended between its own finitude and a radical capacity for the transcendent that waits upon divine agency.
This paper looks at Sartre's 1957 papers on Jacopo Tintoretto to examine his reading of action and space in Tintoretto's St George and the Dragon. I suggest that Sartre offers an idea of grace which, far from shoring up a sense of decisive resolution to the action depicted in the painting, speaks instead of an abandonment in the subjective situation. This notion of abandonment appears through the erasure of a conclusive causal point, the disappearance of which lies at the heart of Sartre's reading. Once freed from causal moorings, existence is not loosened but rather becomes weighed down in its very situation. Taking support from the work of Levinas, this paper considers how Sartre follows the cursive lines of this burdened subjectivity within the deceptive play of Tintoretto's painting.
Apoptosis proteins play an essential role in regulating the balance between cell proliferation and death. Successful prediction of the subcellular localization of apoptosis proteins directly from primary sequence greatly benefits the understanding of programmed cell death and drug discovery. In this paper, using Chou’s pseudo amino acid composition (PseAAC), a total of 317 apoptosis proteins are predicted by support vector machine (SVM). Jackknife cross-validation is applied to test the predictive capability of the proposed method. The results show an overall prediction accuracy of 91.1%, higher than that of previous methods. Furthermore, another dataset containing 98 apoptosis proteins is examined by the proposed method; the overall success rate is 92.9%.
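For readers unfamiliar with the PseAAC representation the abstract relies on, the sketch below computes a feature vector in Chou's style: the classic 20-component amino acid composition augmented with λ sequence-order correlation factors. It is illustrative only, assuming a single property scale (Kyte-Doolittle hydrophobicity) where the published method combines several normalized physicochemical properties; the function name and parameter defaults are ours, not the paper's.

```python
# Minimal PseAAC sketch. ASSUMPTION: one property scale (Kyte-Doolittle
# hydrophobicity) stands in for the combined normalized properties used in
# Chou's published formulation; `pseaac`, lam=5 and w=0.05 are illustrative.

KD = {  # Kyte-Doolittle hydrophobicity values per amino acid
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}
AA = sorted(KD)  # fixed ordering of the 20 standard amino acids

def pseaac(seq: str, lam: int = 5, w: float = 0.05) -> list:
    """Return the (20 + lam)-dimensional PseAAC feature vector of `seq`."""
    if len(seq) <= lam:
        raise ValueError("sequence must be longer than lambda")
    # sequence-order correlation factors theta_1 .. theta_lam:
    # mean squared property difference between residues k positions apart
    thetas = []
    for k in range(1, lam + 1):
        diffs = [(KD[seq[i]] - KD[seq[i + k]]) ** 2 for i in range(len(seq) - k)]
        thetas.append(sum(diffs) / len(diffs))
    # classic amino acid composition f_1 .. f_20
    comp = [seq.count(a) / len(seq) for a in AA]
    denom = 1.0 + w * sum(thetas)
    return [f / denom for f in comp] + [w * t / denom for t in thetas]

features = pseaac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", lam=5)
print(len(features))  # 25 features: 20 composition + 5 correlation
```

An SVM trained on these 20 + λ vectors, evaluated with jackknife (leave-one-out) cross-validation, would mirror the pipeline the abstract describes.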
People often complain that AI is not developing as well as expected. They say, "Progress was quick in the early years of AI, but now it is not growing so fast." I find this funny, because people have been saying the same thing as long as I can remember. In fact we are still rapidly developing new useful systems for recognizing patterns and for supervising processes. Furthermore, modern hardware is so fast and reliable that we can employ almost any programs we can create. Good new systems appear every year, for different "expert" applications.
Reading through Mechanical Intelligence, volume III of Alan Turing's Collected Works, one begins to appreciate just how propitious Turing's timing was. If Turing's major accomplishment in ‘On Computable Numbers’ was to expose the epistemological premises built into formalism, his main achievement in the 1940s was to recognize the extent to which this outlook both harmonized with and extended contemporary psychological thought. Turing sought to synthesize these diverse mathematical and psychological elements so as to forge a union between ‘embodied rules’ and ‘learning programs’. Through their joint service in the Mechanist Thesis each would validate the other, and the frameworks from which each derived. In this paper I will try to show how Turing's psychological thesis forces us to reassess the consequences of establishing AI on the epistemological foundation that underlies behaviourism.
This paper, along with the following paper by John McCarthy, introduces some of the topics to be discussed at the IJCAI95 event `A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy.' Philosophy needs AI in order to make progress with many difficult questions about the nature of mind, and AI needs philosophy in order to help clarify goals, methods, and concepts and to help with several specific (...) technical problems. Whilst philosophical attacks on AI continue to be welcomed by a significant subset of the general public, AI defenders need to learn how to avoid philosophically naive rebuttals. (shrink)
From the standpoint of the moral theologian, perhaps the most influential aspect of Karl Rahner’s theology is the thesis of the fundamental option, that is, the claim that the individual’s status before God is determined by a basic, freely chosen and prethematic orientation of openness towards, or rejection of, God which takes place at the level of core or transcendental freedom. This paper argues that this notion of the fundamental option is problematic because it is not concrete enough to provide (...) an adequate interpretation of our actual experience. Yet this problem cannot be addressed through reviving the traditional account of mortal and venial sins, which is equally problematic, albeit in a somewhat different way. The second half of the paper explores the alternative offered by Aquinas’s account of charity, which, it is argued, does provide us with an account of grace sufficiently rich and concrete to illuminate human experience. However, this alternative is likewise problematic, most notably in its commitment to the view that charity is lost through one mortal sin. Yet Aquinas’s account of charity provides resources for an internal critique and revision on this point, as can be seen through a consideration of cases of “sinful saints.” (shrink)
Ai Ssu-ch'i is a little-known but very important figure in the introduction of Marxism-Leninism into China. This first article provides a brief biography of Ai Ssu-ch'i as well as a detailed account of his activities as teacher, author and propagandist. Among his other services to the cause of Marxism-Leninism in China, one has to stress Ai Ssu-ch'i's systematic opposition to Yeh Ch'ing and to the non-Communist interpretation of Dr. Sun Yat-sen's Three Principles of the People. (cf. SST 10 (1970), 138–166.)
This article describes recent jurisprudential accounts of analogical legal reasoning and compares them in detail to the computational model of case-based legal argument in CATO. The jurisprudential models provide a theory of relevance based on low-level legal principles generated in a process of case-comparing reflective adjustment. The jurisprudential critique focuses on the problems of assigning weights to competing principles and dealing with erroneously decided precedents. CATO, a computerized instructional environment, employs Artificial Intelligence techniques to teach law students how to make basic legal arguments with cases. The computational model helps students test legal hypotheses against a database of (...) legal cases, draws analogies to problem scenarios from the database, and composes arguments by analogy with a set of argument moves. The CATO model accounts for a number of the important features of the jurisprudential accounts, including implementing a kind of reflective adjustment. It also avoids some of the problems identified in the critique; for instance, it deals with weights in a non-numeric, context-sensitive manner. The article concludes by describing the contributions AI research can make to jurisprudential investigations of complex cognitive phenomena of legal reasoning. For instance, unlike the jurisprudential models, CATO provides a detailed account of how to generate multiple interpretations of a cited case, downplaying or emphasizing the legal significance of distinctions in terms of the purposes of the law as the argument context demands. (shrink)
Apocalyptic AI, the hope that we might one day upload our minds into machines and live forever in cyberspace, is a surprisingly widespread and influential idea, affecting everything from the world view of online gamers to government research funding and philosophical thought. In Apocalyptic AI, Robert Geraci offers the first serious account of this "cyber-theology" and the people who promote it, drawing on interviews with roboticists and AI researchers and even devotees of the online game Second Life. He points out that (...) the rhetoric of Apocalyptic AI is strikingly similar to that of the apocalyptic traditions of Judaism and Christianity--in both systems the believer is trapped in a dualistic universe and expects a resolution in which he or she will be translated to a transcendent new world and live forever in a glorified new body. Geraci also shows how this worldview exerts significant influence by promoting certain types of research in robotics and artificial intelligence, and has also had an impact on philosophers of mind, theologians, and even legal scholars. (shrink)
Traditional views of grace assert that God owes us nothing. Grace is undeserved, supererogatory and free. In this paper I argue that while this is an accurate characterization of creating grace, it is not true of saving grace. We have no right to be created as spiritual beings whose true good is found in relationship with God. But once we exist as spiritual beings, God does owe us a genuine offer of the salvation that constitutes our (...) highest fulfillment. Creating grace is undeserved. Saving grace is deserved (being based on our inherent worth and vital interests as spiritual beings) but unearned (it is not based on anything we have done). (shrink)