Search results for 'STRONG AI'

1000+ found
  1. Silin Ai (2011). Ai Silin Lun Wen Xuan [Selected Papers of Ai Silin]. Zhonghua Shu Ju [Zhonghua Book Company].
  2. Stephan Zelewski (1991). Die Starke KI-These [The Strong AI-Thesis]. Journal for General Philosophy of Science 22 (2):337-348.
    The controversy about the strong AI-thesis was recently revived by two interrelated contributions stemming from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other hand. It is shown that the strong AI-thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's ‘proof’ is not conclusive. Though it may be reconstructed (...)
  3. Andrew Melnyk (1996). Searle's Abstract Argument Against Strong AI. Synthese 108 (3):391-419.
    Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest (...)
  4. Steffen Borge (2007). A Modal Defence of Strong AI. In Dermot Moran & Stephen Voss (eds.), Epistemology: The Proceedings of the Twenty-First World Congress of Philosophy, Vol. 6. The Philosophical Society of Turkey. 127-131.
    John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the (...)
  5. Jerome C. Wakefield (2003). The Chinese Room Argument Reconsidered: Essentialism, Indeterminacy, and Strong AI. [REVIEW] Minds and Machines 13 (2):285-319.
    I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new “essentialist” reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about (...)
  6. Ronald L. Chrisley, Weak Strong AI: An Elaboration of the English Reply to the Chinese Room.
    Searle (1980) constructed the Chinese Room (CR) to argue against what he called “Strong AI”: the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to (...)
  7. Blake H. Dournaee (2010). Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”. Minds and Machines 20 (2):303-309.
    In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers' hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of (...)
  8. R. Michael Perry (2006). Consciousness as Computation: A Defense of Strong AI Based on Quantum-State Functionalism. In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
  9. Nicholas Agar (2012). On the Irrationality of Mind-Uploading: A Reply to Neil Levy. [REVIEW] AI and Society 27 (4):431-436.
    In a paper in this journal, Neil Levy challenges Nicholas Agar’s argument for the irrationality of mind-uploading. Mind-uploading is a futuristic process that involves scanning brains and recording relevant information which is then transferred into a computer. Its advocates suppose that mind-uploading transfers both human minds and identities from biological brains into computers. According to Agar’s original argument, mind-uploading is prudentially irrational. Success relies on the soundness of the program of Strong AI—the view that it may someday be possible (...)
  10. Philip Cam (1990). Searle on Strong AI. Australasian Journal of Philosophy 68 (1):103-8.
  11. Aaron Sloman (1986). Did Searle Attack Strong Strong AI or Weak Strong AI? In Artificial Intelligence and its Applications. Chichester.
  12. Karl Pfeifer (1992). Searle, Strong AI, and Two Ways of Sorting Cucumbers. Journal of Philosophical Research 17:347-50.
    This paper defends Searle against the misconstrual of a key claim of “Minds, Brains, and Programs” and goes on to explain why an attempt to turn the tables by using the Chinese Room to argue for intentionality in computers fails.
  13. Roland Puccetti (1980). The Chess Room: Further Demythologizing of Strong AI. Behavioral and Brain Sciences 3 (3):441.
  14. Stephan Zelewski (1991). Die Starke KI-These. Journal for General Philosophy of Science 22 (2):337-348.
    The Strong AI-Thesis. The controversy about the strong AI-thesis was recently revived by two interrelated contributions stemming from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other hand. It is shown that the strong AI-thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's 'proof' is not conclusive. Though it (...)
  15. Gerd Gigerenzer (1990). Strong AI and the Problem of “Second-Order” Algorithms. Behavioral and Brain Sciences 13 (4):663-664.
  16. J. M. Bishop (2000). Redcar Rocks: Strong AI and Panpsychism. Consciousness and Cognition 9 (2):S35.
  17. M. Gams (1997). "Strong AI": An Adolescent Disorder. In Matjaz Gams (ed.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press. 43--1.
  18. Georges Rey (2003). Searle's Misunderstandings of Functionalism and Strong AI. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. 201-225.
  19. Burton Voorhees (1999). Gödel's Theorem and Strong AI: Is Reason Blind? In S. Smets, J. P. Van Bendegem & G. C. Cornelis (eds.), Metadebates on Science. VUB-Press and Kluwer. 6-43.
  20. Robert I. Damper (2006). The Logic of Searle's Chinese Room Argument. Minds and Machines 16 (2):163-183.
    John Searle’s Chinese room argument (CRA) is a celebrated thought experiment designed to refute the hypothesis, popular among artificial intelligence (AI) scientists and philosophers of mind, that “the appropriately programmed computer really is a mind”. Since its publication in 1980, the CRA has evoked an enormous amount of debate about its implications for machine intelligence, the functionalist philosophy of mind, theories of consciousness, etc. Although the general consensus among commentators is that the CRA is flawed, and notwithstanding the popularity (...)
  21. Ricardo Restrepo Echavarria (2009). Russell's Structuralism and the Supposed Death of Computational Cognitive Science. Minds and Machines 19 (2):181-197.
    John Searle believes that computational properties are purely formal and that consequently, computational properties are not intrinsic, empirically discoverable, nor causal; and therefore, that an entity’s having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell’s structuralism; one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle’s problem with computational (...)
  22. Bernard Molyneux (2012). How the Problem of Consciousness Could Emerge in Robots. Minds and Machines 22 (4):277-297.
    I show how a robot with what looks like a hard problem of consciousness might emerge from the earnest attempt to make a robot that is smart and self-reflective. This problem arises independently of any assumption to the effect that the robot is conscious, but deserves to be thought of as related to the human problem in virtue of the fact that (1) the problem is one the robot encounters when it tries to naturalistically reduce its own subjective states (2) (...)
  23. Ricardo Restrepo Echavarria (2009). Russell's Structuralism and the Supposed Death of Computational Cognitive Science. Minds and Machines 19 (2):181-197.
    John Searle believes that computational properties are purely formal and that consequently, computational properties are not intrinsic, empirically discoverable, nor causal; and therefore, that an entity’s having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell’s structuralism; one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle’s problem with computational (...)
  24. Slawomir J. Nasuto, John Mark Bishop, Etienne B. Roesch & Matthew C. Spencer (forthcoming). Zombie Mouse in a Chinese Room. Philosophy and Technology:1-15.
    John Searle’s Chinese Room Argument (CRA) purports to demonstrate that syntax is not sufficient for semantics, and, hence, because computation cannot yield understanding, the computational theory of mind, which equates the mind to an information processing system based on formal computations, fails. In this paper, we use the CRA, and the debate that emerged from it, to develop a philosophical critique of recent advances in robotics and neuroscience. We describe results from a body of work that contributes to blurring the (...)
  25. Ricardo Restrepo (2009). Russell's Structuralism and the Supposed Death of Computational Cognitive Science. Minds and Machines 19 (2):181-197.
    John Searle believes that computational properties are purely formal and that consequently, computational properties are not intrinsic, empirically discoverable, nor causal; and therefore, that an entity’s having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell’s structuralism; one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle’s problem with computational (...)
  26. Raffaela Giovagnoli (2013). Representation, Analytic Pragmatism and AI. In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. 161-169.
    Our contribution aims to identify a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the need to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as an explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak (...)
  27. Barbara Warnick (2004). Rehabilitating AI: Argument Loci and the Case for Artificial Intelligence. [REVIEW] Argumentation 18 (2):149-170.
    This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (...)
  28. André Kukla (1994). Medium AI and Experimental Science. Philosophical Psychology 7 (4):493-5012.
    It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium (...)
  29. Setargew Kenaw (2008). Hubert L. Dreyfus's Critique of Classical AI and its Rationalist Assumptions. Minds and Machines 18 (2):227-238.
    This paper deals with the rationalist assumptions behind research in artificial intelligence (AI) on the basis of Hubert Dreyfus’s critique. Dreyfus is a leading American philosopher known for his rigorous critique of the underlying assumptions of the field of artificial intelligence. Artificial intelligence specialists, especially those whose view is commonly dubbed “classical AI,” assume that creating a thinking machine like the human brain is not a far-off project because they believe that human intelligence works on the basis (...)
  30. Matjaz Gams (ed.) (1997). Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.
  31. Stuart Armstrong, Anders Sandberg & Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. [REVIEW] Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act (...)
  32. Denis L. Baggi (2000). The Intelligence Left in AI. AI and Society 14 (3-4):348-378.
    In its forty years of existence, Artificial Intelligence has suffered both from the exaggerated claims of those who saw it as the definitive solution of an ancestral dream — that of constructing an intelligent machine — and from its detractors, who described it as the latest fad worthy of quacks. Yet AI is still alive, well and blossoming, and has left a legacy of tools and applications almost unequalled by any other field — probably because, as the heir of Renaissance thought, it represents a (...)
  33. Liu Feng (1989). The AI Elephant. AI and Society 3 (4):336-345.
    The paper presents a Chinese philosophical point of view on AI and proposes a novel AI machine system. There are two basic relations or contradictions which drive computer development forward. One is between software and hardware and the other is between data structure and system organization. It is suggested that a description of a future AI system should primarily start from these contradictions.
  34. Toshiyuki Furukawa (1990). AI in Medicine: A Japanese Perspective. [REVIEW] AI and Society 4 (3):196-213.
    This article is concerned with the history and current state of research into medical expert systems (MES) in Japan. A brief review of expert systems work over the last ten years is provided, together with a discussion of future directions for artificial intelligence (AI) applications in medicine, which we expect the Japanese AI community in medicine (AIM) to undertake.
  35. Keizo Sato (1991). From AI to Cybernetics. AI and Society 5 (2):155-161.
    Well-known critics of AI such as Hubert Dreyfus and Michael Polanyi tend to confuse cybernetics with AI. Such a confusion is quite misleading and should not be overlooked. In the first place, cybernetics is not vulnerable to criticism of AI as cognitivistic and behaviouristic. In the second place, AI researchers are recommended to consider the cybernetics approach as a way of overcoming the limitations of cognitivism and behaviourism.
  36. Ronald Stamper (1988). Pathologies of AI: Responsible Use of Artificial Intelligence in Professional Work. [REVIEW] AI and Society 2 (1):3-16.
    Although the AI paradigm is useful for building knowledge-based systems for the applied natural sciences, there are dangers when it is extended into the domains of business, law and other social systems. It is misleading to treat knowledge as a commodity that can be separated from the context in which it is regularly used. Especially when it relates to social behaviour, knowledge should be treated as socially constructed, interpreted and maintained through its practical use in context. The meanings of terms (...)
  37. Berit Brogaard (2010). Strong Representationalism and Centered Content. Philosophical Studies 151 (3):373-392.
    I argue that strong representationalism, the view that for a perceptual experience to have a certain phenomenal character just is for it to have a certain representational content (perhaps represented in the right sort of way), encounters two problems: the dual looks problem and the duplication problem. The dual looks problem is this: strong representationalism predicts that how things phenomenally look to the subject reflects the content of the experience. But some objects phenomenally look to both have and (...)
  38. Jeff Kochan (2010). Contrastive Explanation and the 'Strong Programme' in the Sociology of Scientific Knowledge. Social Studies of Science 40 (1):127-44.
    In this essay, I address a novel criticism recently levelled at the Strong Programme by Nick Tosh and Tim Lewens. Tosh and Lewens paint Strong Programme theorists as trading on a contrastive form of explanation. With this, they throw valuable new light on the explanatory methods employed by the Strong Programme. However, as I shall argue, Tosh and Lewens run into trouble when they accuse Strong Programme theorists of unduly restricting the contrast space in which legitimate (...)
  39. Y. J. Erden (2010). Could a Created Being Ever Be Creative? Some Philosophical Remarks on Creativity and AI Development. Minds and Machines 20 (3):349-362.
    Creativity has a special role in enabling humans to develop beyond the fulfilment of simple primary functions. This factor is significant for Artificial Intelligence (AI) developers who take replication to be the primary goal, since moves toward creating autonomous artificial beings raise questions about their potential for creativity. Using Wittgenstein’s remarks on rule-following and language-games, I argue that although some AI programs appear creative, to call these programmed acts creative in our terms is to misunderstand the use of this word in (...)
  40. Ben Fraser (2011). Explaining Strong Reciprocity: Cooperation, Competition, and Partner Choice. [REVIEW] Biological Theory 6 (2):113-119.
    Paul Seabright argues that strong reciprocity was crucial in the evolution of large-scale cooperation. He identifies three potential evolutionary explanations for strong reciprocity. Drawing (like Seabright) on experimental economics, I identify and elaborate a fourth explanation for strong reciprocity, which proceeds in terms of partner choice, costly signaling, and competitive altruism.
  41. Dina Goldin & Peter Wegner (2008). The Interactive Nature of Computing: Refuting the Strong Church–Turing Thesis. [REVIEW] Minds and Machines 18 (1):17-38.
    The classical view of computing positions computation as a closed-box transformation of inputs (rational numbers or finite strings) to outputs. According to the interactive view of computing, computation is an ongoing interactive process rather than a function-based transformation of an input to an output. Specifically, communication with the outside world happens during the computation, not before or after it. This approach radically changes our understanding of what is computation and how it is modeled. The acceptance of interaction as a new (...)
  42. Mark Coeckelbergh (2010). Health Care, Capabilities, and AI Assistive Technologies. Ethical Theory and Moral Practice 13 (2):181-190.
    Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements (...)
  43. Norihiro Kamide (2003). Normal Modal Substructural Logics with Strong Negation. Journal of Philosophical Logic 32 (6):589-612.
    We introduce modal propositional substructural logics with strong negation, and prove the completeness theorems (with respect to Kripke models) for these logics.
  44. Colin Beardon (1994). Computers, Postmodernism and the Culture of the Artificial. AI and Society 8 (1):1-16.
    The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...)
  45. Motohiko Mouri & Norihiro Kamide (2008). Strong Normalizability of Typed Lambda-Calculi for Substructural Logics. Logica Universalis 2 (2):189-207.
    The strong normalization theorem is uniformly proved for typed λ-calculi for a wide range of substructural logics with or without strong negation.
  46. M. Spinks & R. Veroff (2008). Constructive Logic with Strong Negation is a Substructural Logic. II. Studia Logica 89 (3):401-425.
    The goal of this two-part series of papers is to show that constructive logic with strong negation N is definitionally equivalent to a certain axiomatic extension NFLew of the substructural logic FLew. The main result of Part I of this series [41] shows that the equivalent variety semantics of N (namely, the variety of Nelson algebras) and the equivalent variety semantics of NFLew (namely, a certain variety of FLew-algebras) are term equivalent. In (...)
  47. Jinglin Li (2009). On the Creativity and Innateness of the “Strong, Moving Vital Force”: A Discussion of Feng Youlan's “Explanation of Mencius' Chapter on the 'Strong, Moving Vital Force'”. Frontiers of Philosophy in China 4 (2):198-210.
    Feng Youlan emphasizes the concept of “creativity” in his article “Explanation of Mencius’ Chapter on Strong, Moving Vital Force”, in particular highlighting the problem of whether the “strong, moving vital force” is “innate” or “acquired”. Cheng Hao and Zhu Xi believed the “strong, moving vital force” was endowed by Heaven and was therefore innate; “nourishment” cleared the fog and allowed one to “recover one’s original nature”. Mencius’ theory of “the good of human nature” is illustrated in (...)
  48. Norihiro Kamide (2006). Phase Semantics and Petri Net Interpretation for Resource-Sensitive Strong Negation. Journal of Logic, Language and Information 15 (4):371-401.
    Wansing’s extended intuitionistic linear logic with strong negation, called WILL, is regarded as a resource-conscious refinement of Nelson’s constructive logics with strong negation. In this paper, (1) the completeness theorem with respect to phase semantics is proved for WILL using a method that simultaneously derives the cut-elimination theorem, (2) a simple correspondence between the class of Petri nets with inhibitor arcs and a fragment of WILL is obtained using a Kripke semantics, (3) a cut-free sequent calculus for WILL, (...)
  49. L. J. van Vuuren & F. Crous (2005). Utilising Appreciative Inquiry (AI) in Creating a Shared Meaning of Ethics in Organisations. Journal of Business Ethics 57 (4):399-412.
    The management of ethics within organisations typically occurs within a problem-solving frame of reference. This often results in a reactive, problem-based and externally induced approach to managing ethics. Although basing ethics management interventions on dealing with and preventing current and possible future unethical behaviour is often effective in that it ensures compliance with rules and regulations, the approach is not necessarily conducive to the creation of sustained ethical cultures. Nor does the approach afford (mainly internal) stakeholders the opportunity to (...)
  50. Anthony Rudd (forthcoming). “Strong” Narrativity—a Response to Hutto. Phenomenology and the Cognitive Sciences:1-7.
    This paper responds to Dan Hutto’s paper, ‘Narrative Self-Shaping: a Modest Proposal’. Hutto there attacks the “strong” narrativism defended in my recent book, ‘Self, Value and Narrative’ and in recent work by Marya Schechtman. I rebut Hutto’s argument that non-narrative forms of evaluative self-shaping can plausibly be conceived, and defend the notion of implicit narrative against his criticisms. I conclude by briefly indicating some difficulties that arise for the “modest” form of narrativism that Hutto defends.
Results 1 - 50 of 1000