About this topic
Summary: Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for the investigation of consciousness and of other aspects of cognition that contribute to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and to provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, again with deep implications for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by increasingly intelligent machines. This alone raises many moral and ethical concerns: for example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values rather than as directly programmed by human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These and many other issues are central to the ethics of AI. 
Key works: Bostrom (manuscript); Müller 2014
Introductions: Müller 2013; White 2015; Gunkel 2012

Material to categorize
  1. Consensus and Authenticity in Representation: Simulation as Participative Theatre. [REVIEW]Michael T. Black - 1993 - AI and Society 7 (1):40-51.
    Representation was invented as an issue during the 17th century in response to specific developments in the technology of simulation. It remains an issue of central importance today in the design of information systems and approaches to artificial intelligence. Our cultural legacy of thought about representation is enormous but as inhibiting as it is productive. The challenge to designers of representative technology is to reshape this legacy by enlarging the politics rather than the technics of simulation.
  2. Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate.Marcello Guarini - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge Univ. Press.
  3. Design, Development, and Evaluation of an Interactive Simulator for Engineering Ethics Education (SEEE).Christopher A. Chung & Michael Alfred - 2009 - Science and Engineering Ethics 15 (2):189-199.
    Societal pressures, accreditation organizations, and licensing agencies are emphasizing the importance of ethics in the engineering curriculum. Traditionally, this subject has been taught using dogma, heuristics, and case study approaches. Most recently a number of organizations have sought to increase the utility of these approaches by utilizing the Internet. Resources from these organizations include on-line courses and tests, videos, and DVDs. While these individual approaches provide a foundation on which to base engineering ethics, they may be limited in developing a (...)
  4. The Energetic Dimension of Emotions: An Evolution-Based Computer Simulation with General Implications.Luc Ciompi & Martin Baatz - 2008 - Biological Theory 3 (1):42-50.
    Viewed from an evolutionary standpoint, emotions can be understood as situation-specific patterns of energy consumption related to behaviors that have been selected by evolution for their survival value, such as environmental exploration, flight or fight, and socialization. In the present article, the energy linked with emotions is investigated by a strictly energy-based simulation of the evolution of simple autonomous agents provided with random cognitive and motor capacities and operating among food and predators. Emotions are translated into evolving patterns of energy (...)
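A minimal sketch, assuming a toy world of food and predators and a single agent with an energy budget, of the general kind of energy-based agent simulation the abstract describes; the class names, costs, and probabilities below are illustrative assumptions, not Ciompi and Baatz's model.

```python
# Illustrative sketch only: a toy energy-budget agent among food and predators.
# All costs and probabilities are assumptions, not Ciompi and Baatz's parameters.
import random

class World:
    def __init__(self, food_density=0.4, predator_density=0.1):
        self.food_density = food_density
        self.predator_density = predator_density

    def food_nearby(self):
        return random.random() < self.food_density

    def predator_nearby(self):
        return random.random() < self.predator_density

class Agent:
    def __init__(self, energy=10.0):
        self.energy = energy

    def act(self, world):
        # Each behaviour is a situation-specific pattern of energy expenditure.
        if world.predator_nearby():
            self.energy -= 2.0      # flight: costly, but avoids being eaten
        elif world.food_nearby():
            self.energy += 2.5      # feeding: net energy gain
        else:
            self.energy -= 1.0      # exploration: moderate, ongoing cost

    def alive(self):
        return self.energy > 0.0

world, agent = World(), Agent()
steps = 0
while agent.alive() and steps < 100:
    agent.act(world)
    steps += 1
print(f"agent survived {steps} steps, {agent.energy:.1f} energy remaining")
```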
  5. Linguistic Anchors in the Sea of Thought?Andy Clark - 1996 - Pragmatics and Cognition 4 (1):93-103.
    Andy Clark is currently Professor of Philosophy and Director of the Philosophy/Neuroscience/Psychology program at Washington University in St. Louis, Missouri. He is the author of two books, MICROCOGNITION (MIT Press/Bradford Books, 1989) and ASSOCIATIVE ENGINES (MIT Press/Bradford Books, 1993), as well as numerous papers and four edited volumes. He is an ex-committee member of the British Society for the Philosophy of Science and of the Society for Artificial Intelligence and the Simulation of Behavior. Awards include a visiting Fellowship at (...)
  6. Dialogues in Natural Language with Guru, a Psychologic Inference Engine.Kenneth M. Colby, Peter M. Colby & Robert J. Stoller - 1990 - Philosophical Psychology 3 (2 & 3):171 – 186.
    The aim of this project was to explore the possibility of constructing a psychologic inference engine that might enhance introspective self-awareness by delivering inferences about a user based on what he said in interactive dialogues about his closest opposite-sex relation. To implement this aim, we developed a computer program (guru) with the capacity to simulate human conversation in colloquial natural language. The psychologic inferences offered represent the authors' simulations of their commonsense psychology responses to expected user-input expressions. The heuristics of (...)
  7. DARES: Documents Annotation and Recombining System—Application to the European Law. [REVIEW]Fady Farah & François Rousselot - 2007 - Artificial Intelligence and Law 15 (2):83-102.
    Accessing legislation via the Internet is more and more frequent. As a result, systems that allow consultation of law texts are becoming more and more powerful. This paper presents DARES, a generic system which can be adapted to any domain to handle documents production needs. It is based on an annotation engine which allows obtaining XML documents inputs as required by the system, and on an XML fragments recombining system. The latter operates using a fragment manipulation functions toolbox to generate (...)
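A minimal sketch, using Python's standard xml.etree library, of the general idea of annotating document fragments and recombining selected fragments into a new XML document; the tags, attributes, and selection rule here are hypothetical and are not DARES's actual annotation scheme.

```python
# Illustrative sketch only: recombining annotated XML fragments into a new document.
# The element names and the topic attribute are invented for this example.
import xml.etree.ElementTree as ET

source = ET.fromstring("""
<law>
  <article id="a1" topic="data-protection">Personal data shall be processed lawfully.</article>
  <article id="a2" topic="liability">Producers are liable for defective products.</article>
  <article id="a3" topic="data-protection">Data subjects may request erasure.</article>
</law>
""")

def recombine(root, topic):
    """Build a new document from the fragments annotated with the given topic."""
    doc = ET.Element("compilation", attrib={"topic": topic})
    for fragment in root.findall("article"):
        if fragment.get("topic") == topic:
            doc.append(fragment)
    return doc

print(ET.tostring(recombine(source, "data-protection"), encoding="unicode"))
```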
  8. On the Role of AI in the Ongoing Paradigm Shift Within the Cognitive Sciences.Tom Froese - 2007 - In M. Lungarella (ed.), 50 Years of AI. Springer Verlag.
    This paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of (...)
  9. On the Re-Materialization of the Virtual.Ismo Kantola - 2013 - AI and Society 28 (2):189-198.
    The so-called new economy based on the global network of digitalized communication was welcomed as a platform of innovations and as a vehicle of advancement of democracy. The concept of virtuality captures the essence of the new economy: efficiency and free access. In practice, the new economy has developed into a heterogeneous entity dominated by practices such as propagation of trust and commitment to standards and standard-like technological solutions; entrenchment of locally strategic subsystems; surveillance of unwanted behavior. Five empirical cases (...)
  10. Special Issue on Social Impact of AI: Killer Robots or Friendly Fridges. [REVIEW]Greg Michaelson & Ruth Aylett - 2011 - AI and Society 26 (4):317-318.
  11. Evolution: The Computer Systems Engineer Designing Minds.Aaron Sloman - 2011 - Avant: Trends in Interdisciplinary Studies 2 (2):45–69.
    What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery which (...)
  12. Modelling Consciousness-Dependent Expertise in Machine Medical Moral Agents.Steve Torrance & Ron Chrisley - unknown
    It is suggested that some limitations of current designs for medical AI systems stem from the failure of those designs to address issues of artificial consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, autonomous weighting of options in diagnosis; planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, (...)
  13. Analog Simulation.Russell Trenholme - 1994 - Philosophy of Science 61 (1):115-131.
    The distinction between analog and digital representation is reexamined; it emerges that a more fundamental distinction is that between symbolic and analog simulation. Analog simulation is analyzed in terms of a (near) isomorphism of causal structures between a simulating and a simulated process. It is then argued that a core concept, naturalistic analog simulation, may play a role in a bottom-up theory of adaptive behavior which provides an alternative to representational analyses. The appendix discusses some formal conditions for naturalistic analog (...)
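One conventional way to state the isomorphism condition the abstract invokes, offered as an illustrative formalization under assumed notation rather than Trenholme's own definitions:

```latex
% Illustrative formalization only (not Trenholme's notation): a causal structure as
% a set of states with a causal transition relation, and analog simulation as a
% structure-preserving map from the simulating process A to the simulated process B.
\[
  \mathcal{A} = (S_A, \rightsquigarrow_A), \qquad \mathcal{B} = (S_B, \rightsquigarrow_B)
\]
\[
  f : S_A \to S_B \ \text{ is an analog simulation iff } \
  s \rightsquigarrow_A s' \ \Longleftrightarrow \ f(s) \rightsquigarrow_B f(s')
  \quad \text{for all } s, s' \in S_A ,
\]
% with "near" isomorphism obtained by requiring the equivalence to hold only
% approximately or on a restricted subset of states.
```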
  14. Ethics and Aesthetics of Technologies.Arun Kumar Tripathi - 2010 - AI and Society 25 (1):5-9.
  15. Erratum To: Ethics and Aesthetics of Technologies. [REVIEW]Arun Kumar Tripathi - 2010 - AI and Society 25 (1):139-139.
  16. Inventive Machine: Second Generation.Valery M. Tsourikov - 1993 - AI and Society 7 (1):62-77.
    The Inventive Machine project is the matter of discussion. The project aims to develop a family of AI systems for intelligent support of all stages of engineering design. Peculiarities of the IM project: a deep and comprehensive knowledge base, the theory of inventive problem solving (TIPS); solving complex problems at the level of inventions; application in any area of engineering; and structural prediction of engineering system development. The systems of the second generation are described in detail.
  17. Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character.Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  18. Thinking Machines and the Philosophy of Computer Science: Concepts and Principles.Jordi Vallverdú (ed.) - 2010 - Information Science Reference.
    "This book offers a high interdisciplinary exchange of ideas pertaining to the philosophy of computer science, from philosophical and mathematical logic to epistemology, engineering, ethics or neuroscience experts and outlines new problems ...
  19. The Human Role in the Age of Information.Tibor Vámos - 2014 - AI and Society 29 (2):277-282.
    The age of automation entails freedom from most common working roles. Signs are changes in employment, unemployment, professional structures, the relevance of services, the entertainment industry, working hours, and the nature of social relations. Warnings are given against voluntaristic interventions and against the neglect of social and historical relations. New approaches are required in the fields of lifelong education and the education of socially disadvantaged people. The changes in evolutionarily inherited motivations and lifestyles are critical challenges to mankind. Open society and lessons of (...)
  20. Juricas: Legal Computer Advice Systems.J. G. L. Van der Weeks - 1992 - Artificial Intelligence and Law 1 (4):275-290.
  21. Machine Medical Ethics.Simon Peter van Rysewyk & Matthijs Pontier (eds.) - 2014 - Springer.
    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What (...)
  22. Picturing Mind Machines, An Adaptation by Janneke van Leeuwen.Simon van Rysewyk & Janneke van Leeuwen - 2014 - In Simon Peter van Rysewyk & Matthijs Pontier (eds.), Machine Medical Ethics. Springer.
  23. Utilising Appreciative Inquiry (AI) in Creating a Shared Meaning of Ethics in Organisations.L. J. van Vuuren & F. Crous - 2005 - Journal of Business Ethics 57 (4):399-412.
    The management of ethics within organisations typically occurs within a problem-solving frame of reference. This often results in a reactive, problem-based and externally induced approach to managing ethics. Although basing ethics management interventions on dealing with and preventing current and possible future unethical behaviour is often effective in that it ensures compliance with rules and regulations, the approach is not necessarily conducive to the creation of sustained ethical cultures. Nor does the approach afford (mainly internal) stakeholders the opportunity to (...)
  24. Framework for M&S with Agents in Regard to Agent Simulations in Social Sciences: Emulation and Simulation.Franck Varenne - 2010 - In Alexandre Muzy, David R. C. Hill & Bernard P. Zeigler (eds.), Activity-Based Modeling and Simulation. Presses Universitaires Blaise-Pascal.
    The aim of this paper is to discuss the “Framework for M&S with Agents” (FMSA) proposed by Zeigler et al. [2000, 2009] in regard to the diverse epistemological aims of agent simulations in social sciences. We first show that there surely are great similitudes, hence that the aim to emulate a universal “automated modeler agent” opens new ways of interactions between these two domains of M&S with agents. E.g., it can be shown that the multi-level conception at the core of (...)
  25. La simulation conçue comme expérience concrète.Franck Varenne - 2003 - In Jean-Pierre Müller (ed.), Le statut épistémologique de la simulation. Editions de l'ENST.
    Proceeding by objections and replies, we first review some of the arguments for and against the empirical character of computer simulation. At the end of this clarificatory path, we offer arguments in favour of the concrete character of the objects simulated in science, which legitimates speaking of them in terms of an experiment, more specifically a concrete experiment of the second kind.
  26. What Does a Computer Simulation Prove? The Case of Plant Modeling at CIRAD.Franck Varenne - 2001 - In N. Giambiasi & C. Frydman (eds.), Simulation in industry - ESS 2001, Proc. of the 13th European Simulation Symposium. Society for Computer Simulation (SCS).
    The credibility of digital computer simulations has always been a problem. Today, through the debate on verification and validation, it has become a key issue. I will review the existing theses on that question. I will show that, due to the role of epistemological beliefs in science, no general agreement can be found on this matter. Hence, the complexity of the construction of sciences must be acknowledged. I illustrate these claims with a recent historical example. Finally, I temper this diversity (...)
  27. Advocating an Ethical Memory Model for Artificial Companions From a Human-Centred Perspective.Patricia A. Vargas, Ylva Fernaeus, Mei Yii Lim, Sibylle Enz, Wan Ching Ho, Mattias Jacobsson & Ruth Aylett - 2011 - AI and Society 26 (4):329-337.
    This paper considers the ethical implications of applying three major ethical theories to the memory structure of an artificial companion that might have different embodiments such as a physical robot or a graphical character on a hand-held device. We start by proposing an ethical memory model and then make use of an action-centric framework to evaluate its ethical implications. The case that we discuss is that of digital artefacts that autonomously record and store user data, where these data are used (...)
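A minimal sketch of one concrete design choice such an ethical memory model might involve: storing only items the user has consented to and forgetting them after a retention window. The fields, policy, and thresholds below are hypothetical assumptions, not the authors' model.

```python
# Illustrative sketch only: a toy companion memory gated by consent and retention.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class MemoryItem:
    content: str
    recorded_at: datetime
    consented: bool            # did the user agree that this may be stored?

class CompanionMemory:
    def __init__(self, retention: timedelta = timedelta(days=30)):
        self.retention = retention
        self.items: List[MemoryItem] = []

    def remember(self, item: MemoryItem) -> None:
        # Store only what the user has explicitly consented to.
        if item.consented:
            self.items.append(item)

    def recall(self, now: Optional[datetime] = None) -> List[MemoryItem]:
        # Forget anything older than the retention window before recalling.
        now = now or datetime.now()
        self.items = [i for i in self.items if now - i.recorded_at <= self.retention]
        return list(self.items)

memory = CompanionMemory()
memory.remember(MemoryItem("prefers morning reminders", datetime.now(), consented=True))
memory.remember(MemoryItem("overheard family argument", datetime.now(), consented=False))
print([i.content for i in memory.recall()])   # only the consented item remains
```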
  28. Design for a Common World: On Ethical Agency and Cognitive Justice. [REVIEW]Maja van der Velden - 2009 - Ethics and Information Technology 11 (1):37-47.
    The paper discusses two answers to the question, How to address the harmful effects of technology? The first response proposes a complete separation of science from culture, religion, and ethics. The second response finds harm in the logic and method of science itself. The paper deploys a feminist technoscience approach to overcome these accounts of neutral or deterministic technological agency. In this technoscience perspective, agency is not an attribute of autonomous human users alone but enacted and performed in socio-material configurations (...)
  29. A Real‐World Rational Agent: Unifying Old and New AI.Paul F. M. J. Verschure & Philipp Althaus - 2003 - Cognitive Science 27 (4):561-590.
  30. Can Robots Be Moral?Laszlo Versenyi - 1974 - Ethics 84 (3):248-259.
  31. Fine-Tuning, Quantum Mechanics and Cosmological Artificial Selection.Clément Vidal - 2012 - Foundations of Science 17 (1):29-38.
    Jan Greben criticized fine-tuning by taking seriously the idea that “nature is quantum mechanical”. I argue that this quantum view is limited, and that fine-tuning is real, in the sense that our current physical models require fine-tuning. Second, I examine and clarify many difficult and fundamental issues raised by Rüdiger Vaas’ comments on Cosmological Artificial Selection.
  32. From the Ethics of Technology Towards an Ethics of Knowledge Policy.René von Schomberg - 2007 - AI and Society.
    My analysis takes as its point of departure the controversial assumption that contemporary ethical theories cannot capture adequately the ethical and social challenges of scientific and technological development. This assumption is rooted in the argument that classical ethical theory invariably addresses the issue of ethical responsibility in terms of whether and how intentional actions of individuals can be justified. Scientific and technological developments, however, have produced unintentional consequences and side-consequences. These consequences very often result from collective decisions concerning the way (...)
  33. Utilising Appreciative Inquiry (AI) in Creating a Shared Meaning of Ethics in Organisations.L. J. Van Vuuren & F. Crous - 2004 - Journal of Business Ethics 57 (4):399 - 412.
    The management of ethics within organisations typically occurs within a problem-solving frame of reference. This often results in a reactive, problem-based and externally induced approach to managing ethics. Although basing ethics management interventions on dealing with and preventing current and possible future unethical behaviour is often effective in that it ensures compliance with rules and regulations, the approach is not necessarily conducive to the creation of sustained ethical cultures. Nor does the approach afford (mainly internal) stakeholders the opportunity to be (...)
  34. Automation of Legal Reasoning: A Study on Artificial Intelligence and Law.Peter Wahlgren - 1992 - Kluwer Law and Taxation Publishers.
  35. A Framework for the Extraction and Modeling of Fact-Finding Reasoning From Legal Decisions: Lessons From the Vaccine/Injury Project Corpus. [REVIEW]Vern R. Walker, Nathaniel Carie, Courtney C. DeWitt & Eric Lesh - 2011 - Artificial Intelligence and Law 19 (4):291-331.
    This article describes the Vaccine/Injury Project Corpus, a collection of legal decisions awarding or denying compensation for health injuries allegedly due to vaccinations, together with models of the logical structure of the reasoning of the factfinders in those cases. This unique corpus provides useful data for formal and informal logic theory, for natural-language research in linguistics, and for artificial intelligence research. More importantly, the article discusses lessons learned from developing protocols for manually extracting the logical structure and generating the logic (...)
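A minimal sketch of the kind of tree structure one might use to model fact-finding reasoning (a conclusion supported by subordinate findings of fact); the node fields, the aggregation rule, and the example findings are hypothetical and are not the project's actual annotation protocol.

```python
# Illustrative sketch only: a toy tree of findings of fact supporting a conclusion.
# Field names, the aggregation rule, and the example are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    statement: str                        # a factual proposition asserted in the decision
    supported: bool = False               # the factfinder's evaluation, for leaf findings
    children: List["Finding"] = field(default_factory=list)

    def evaluate(self) -> bool:
        # A leaf holds iff the factfinder found it supported; a parent holds iff
        # all of its subordinate findings hold (a deliberately simple rule).
        if not self.children:
            return self.supported
        return all(child.evaluate() for child in self.children)

claim = Finding("the vaccine caused the alleged injury", children=[
    Finding("a plausible causal mechanism was shown", supported=True),
    Finding("onset occurred within a medically acceptable interval", supported=True),
])
print(claim.evaluate())   # -> True
```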
  36. New Mathematical Foundations for AI and Alife: Are the Necessary Conditions for Animal Consciousness Sufficient for the Design of Intelligent Machines?Rodrick Wallace - manuscript
    Rodney Brooks' call for 'new mathematics' to revitalize the disciplines of artificial intelligence and artificial life can be answered by adaptation of what Adams has called 'the informational turn in philosophy', aided by the novel perspectives that program gives regarding empirical studies of animal cognition and consciousness. Going backward from the necessary conditions communication theory imposes on animal cognition and consciousness to sufficient conditions for machine design is, however, an extraordinarily difficult engineering task. The most likely use of the first (...)
  37. Implementing Moral Decision Making Faculties in Computers and Robots.Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  38. Machine Morality: Bottom-Up and Top-Down Approaches for Modelling Human Moral Faculties. [REVIEW]Wendell Wallach, Colin Allen & Iva Smit - 2008 - AI and Society 22 (4):565-582.
    The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of (...)
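A minimal sketch contrasting a top-down rule check with a bottom-up learned-style score, plus a simple hybrid that filters with the former and ranks with the latter; the rules, features, and weights are hypothetical, and this is not Wallach, Allen, and Smit's architecture.

```python
# Illustrative sketch only: top-down rules vs. a bottom-up score, combined in a hybrid.
FORBIDDEN = {"deceive_user", "withhold_treatment"}   # top-down: explicitly imposed rules

def top_down_permissible(action: str) -> bool:
    """Top-down: reject outright any action that violates an imposed rule."""
    return action not in FORBIDDEN

def bottom_up_score(features: dict) -> float:
    """Bottom-up: a score standing in for preferences acquired from experience
    (here a fixed weighted sum; in practice this would be learned)."""
    weights = {"expected_harm": -2.0, "expected_benefit": 1.0, "user_consent": 0.5}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

def hybrid_choice(candidates):
    """Filter with the top-down rules, then rank the remainder bottom-up."""
    permitted = [(a, f) for a, f in candidates if top_down_permissible(a)]
    if not permitted:
        return None
    return max(permitted, key=lambda af: bottom_up_score(af[1]))[0]

candidates = [
    ("deceive_user", {"expected_benefit": 1.0}),
    ("disclose_risk", {"expected_harm": 0.2, "expected_benefit": 0.8, "user_consent": 1.0}),
]
print(hybrid_choice(candidates))   # -> "disclose_risk"
```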
  39. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
  40. On the Moral Equality of Artificial Agents.Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
    Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account of the ethical criteria for the (...)
  41. The Dilemma of Artificial Love: The Ethics of Love and Recognition in AI-Artificial Intelligence.L. Werner - 2005 - Film and Philosophy 9:44.
  42. Cognition in Context: Phenomenology, Situated Robotics and the Frame Problem.Michael Wheeler - 2008 - International Journal of Philosophical Studies 16 (3):323 – 349.
    The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property (...)
  43. Computing Machinery and Morality.Blay Whitby - 2008 - AI and Society 22 (4):551-563.
    Artificial Intelligence (AI) is a technology widely used to support human decision-making. Current areas of application include financial services, engineering, and management. A number of attempts to introduce AI decision support systems into areas which more obviously include moral judgement have been made. These include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals. Responding to these developments raises a complex set of moral questions. This paper proposes a clearer replacement question (...)
  44. Autonomous Reboot: The Challenges of Artificial Moral Agency and the Ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  45. Rethinking Machine Ethics in the Era of Ubiquitous Technology.Jeffrey White (ed.) - 2015 - IGI.
  46. Manufacturing Morality: A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model.Jeffrey White - 2013 - In Floares (ed.), Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes (...)
  47. Understanding and Augmenting Human Morality: The ACTWith Model of Conscience.Jeffrey White - 2009 - In L. Magnani (ed.), Computational Intelligence.
    Recent developments, both in the cognitive sciences and in world events, bring special emphasis to the study of morality. The cognitive sciences, spanning neurology, psychology, and computational intelligence, offer substantial advances in understanding the origins and purposes of morality. Meanwhile, world events urge the timely synthesis of these insights with traditional accounts that can be easily assimilated and practically employed to augment moral judgment, both to solve current problems and to direct future action. The object of the (...)
  48. Technology to Facilitate Ethical Action: A Proposed Design. [REVIEW]Douglas H. Wightman, Lucas G. Jurkovic & Yolande E. Chan - 2005 - AI and Society 19 (3):250-264.
    As emerging technologies support new ways in which people relate, ethical discourse is important to help guide designers of new technologies. This article endeavors to do just that by presenting an ethical analysis and design of technology intended to gather and act upon information on behalf of its users. The article elaborates on socio-technological factors that affect the development of technology to support ethical action. Research and practice implications are outlined.
  49. Knowledge-Based Systems and Issues of Integration: A Commercial Perspective. [REVIEW]Karl M. Wiig - 1988 - AI and Society 2 (3):209-233.
    Commercial applications of knowledge-based systems are changing from an embryonic to a growth business. Knowledge is classified by levels and types to differentiate various knowledge-based systems. Applications are categorized by size, generic types, and degree of intelligence to establish a framework for discussion of progress and implications. A few significant commercial applications are identified and perspectives and implications of these and other systems are discussed. Perspectives relate to development paths, delivery modes, types of integration, and resource requirements. Discussion includes organizational (...)
  50. Simulation, Theory, and the Frame Problem: The Interpretive Moment.William S. Wilkerson - 2001 - Philosophical Psychology 14 (2):141-153.
    The theory-theory claims that the explanation and prediction of behavior works via the application of a theory, while the simulation theory claims that explanation works by putting ourselves in others' places and noting what we would do. On either account, in order to develop a prediction or explanation of another person's behavior, one first needs to have a characterization of that person's current or recent actions. Simulation requires that I have some grasp of the other person's behavior to project myself (...)