Successful application of heuristics depends on how a problem is mentally represented. Moral imagination is a good technique for reflecting on, and sharing, mental representations of ethical dilemmas, including those involving emerging technologies. Future research on moral heuristics should use more ecologically valid problems and combine quantitative and qualitative methods.
As computing technologies become ubiquitous and at least partly autonomous, they will have increasing impact on societies, in both the developed and developing worlds. This article outlines a framework for guiding emerging technologies in directions that promise social as well as technical progress. Multiple stakeholders will have to be engaged in dialogues over new technological directions, forming trading zones in which knowledge and resources are exchanged. Such discussions will have to incorporate cultural and individual values.
The societal and ethical impacts of emerging technological and business systems cannot entirely be foreseen; therefore, management of these innovations will require at least some ethicists to work closely with researchers. This is particularly critical in the development of new systems because the maximum degrees of freedom for changing technological direction occur at or just after the point of breakthrough; that is also the point where the long-term implications are hardest to visualize. Recent work on shared expertise in Science & Technology Studies (STS) can help create productive collaborations among scientists, engineers, ethicists, and other stakeholders as these new systems are designed and implemented. But collaboration across these disciplines will be successful only if scientists, engineers, and ethicists can communicate meaningfully with each other. The establishment of a trading zone coupled with moral imagination presents one method for such collaborative communication.
This review of Gigerenzer, Todd, and the ABC Research Group's Simple heuristics that make us smart focuses on the role of heuristics in discovery, invention, and hypothesis-testing and concludes with a comment on the role of heuristics in population growth.
Dienes & Perner's theoretical framework should be applicable to two related areas: technological innovation and the psychology of scientific reasoning. For the former, this commentary focuses on the example of nuclear weapon design, and on the decision to launch the space shuttle Challenger. For the latter, this commentary focuses on Klayman and Ha's positive test heuristic and the invention of the telephone.
Patrick Toner has recently criticized accounts of substance provided by Kit Fine, E. J. Lowe, and the author, accounts which say (to a first approximation) that substances cannot depend on things other than their own parts. On Toner’s analysis, the inclusion of this parts exception results in a disjunctive definition of substance rather than a unified account. In this paper (speaking only for myself, but in a way that would, I believe, support the other authors that Toner discusses), I first make clear what Toner’s criticism is, and then I respond to it. Including the parts exception is not the adding of a second condition but instead the creation of a new single condition. Since it is not the adding of a condition, the result is not disjunctive. Therefore, the objection fails.
There is no consensus on how to define substance, but one popular view is that substances are entities that are independent in some sense or other. E. J. Lowe’s version of this approach stresses that substances are not dependent on other particulars for their identity. I develop the meaning of this proposal, defend it against some criticisms, and then show that other criticisms do require that the theory be modified.
Subjectivism about normativity (SN) is the view that norms are never intrinsic to things but are instead always imposed from without. After clarifying what SN is, I argue against it on the basis of its implications concerning intentionality. Intentional states with the mind-to-world direction of fit are essentially norm-subservient, i.e., essentially subject to norms such as truth, coherence, and the like. SN implies that nothing is intrinsically an intentional state of the mind-to-world sort: its being such a state is only a status relative to the imposition of a norm. If one rejects this view of mind-to-world states, then one has grounds for rejecting SN itself. If one accepts it, an infinite regress arises that makes it impossible for norms to be imposed, which means that SN has undermined itself.
According to Christian belief, Jesus Christ is a divine person who became “incarnate,” i.e., who became human. A key event in the second act of the drama of creation and redemption, the incarnation could not have failed to interest Aquinas, and he discusses it in a number of places. A proper understanding of what he thought about it is thus part of any complete understanding of his work. It is, furthermore, a window into his ideas on a variety of other topics: God, human nature, language, substance, and so on. Finally, it forces us to come to grips with what is at stake in acknowledging that Aquinas was not only a philosopher but a theologian as well.
According to authoritative Christian teaching, Jesus Christ is a single person existing in two natures, divinity and humanity. In attempting to understand this claim, the high-scholastic theologians often asked whether there was more than one existence in Christ. John Duns Scotus answers the question with a clear and strongly formulated yes, and Thomists have sometimes suspected that his answer leads in a heretical direction. But before we can ask whether Scotus’s answer is acceptable or not, we have to come to a clear understanding of what his answer is. And before we can ask what his answer is, we have to come to a clear understanding of what question or questions he is trying to answer. In this paper I begin by explaining that the question about Christ’s existence is ambiguous, i.e., that there are actually two questions hidden behind one formulation. Next I look at Scotus’s writings on the topic in order to determine which question he is really trying to answer, and I argue that he is trying to answer both of them, even though he does not make this clear. Third, I provide an initial look at the answers that he gives. Fourth, I explain why these answers might seem problematic, especially from a Thomistic perspective. Fifth, I explain Scotus’s answers in more detail and show that they are not problematic in the way that some Thomists have held. Indeed, at least some of Scotus’s ideas are the very same ideas that Thomas spells out in one of his works.
In spite of the wide variety of approaches to ethics training, it is still debatable which approach has the highest potential to enhance professionals’ integrity. The current effort assesses a novel curriculum that focuses on metacognitive reasoning strategies researchers use when making sense of day-to-day professional practices that have ethical implications. The training’s effectiveness was assessed by examining five key sensemaking processes, namely framing, emotion regulation, forecasting, self-reflection, and information integration, that experts and novices apply in ethical decision-making. Mental models of trained and untrained graduate students, as well as faculty, working in the field of physical sciences were compared using a think-aloud protocol 6 months following the ethics training. Evaluation and comparison of the mental models of participants provided further validation evidence for sensemaking training. Specifically, it was found that trained students applied metacognitive reasoning strategies learned during training in their ethical decision-making, which resulted in complex mental models focused on the objective assessment of the situation. Mental models of faculty and untrained students were externally driven, with a heavy focus on autobiographical processes. The study shows that sensemaking training has the potential to induce shifts in researchers’ mental models by making them more cognitively complex via the use of metacognitive reasoning strategies. Furthermore, field experts may benefit from sensemaking training to improve their ethical decision-making framework in highly complex, novel, and ambiguous situations.
In recent years, several minimalist accounts of joint action have been offered (e.g. Tollefsen Philosophy of the Social Sciences 35:75–97, 2005; Sebanz et al. Journal of Experimental Psychology: Human Perception and Performance, 31(6): 234–1246, 2006; Vesper et al. Neural Networks 23 (8/9): 998–1003, 2010), which seek to address some of the shortcomings of classical accounts. Minimalist accounts seek to reduce the cognitive complexity demanded by classical accounts either by leaving out shared intentions or by characterizing them in a way that does not demand common knowledge of complex, interconnected structures of intentions. Moreover, they propose models of the actual factors facilitating online coordination of movements. The present proposal aims to enrich a minimalist framework by showing how shared emotions can facilitate coordination without presupposing common knowledge of complex, interconnected structures of intentions. Shared emotions are defined for the purposes of this paper as affective states that fulfill two minimal criteria: (a) they are expressed (verbally or otherwise) by one person; and (b) the expression is perceived (consciously or unconsciously) by another person. Various ways in which the fulfillment of (a) and (b) can lead to effects that function as coordinating factors in joint action are distinguished and discussed.
In recent years, a number of theorists have developed approaches to social cognition that highlight the centrality of social interaction as opposed to mindreading (e.g. Gallagher and Zahavi 2008; Gallagher 2001, 2007, 2008; Hobson 2002; Reddy 2008; Hutto 2004; De Jaegher 2009; De Jaegher and Di Paolo 2007; Fuchs and De Jaegher 2009; De Jaegher et al. 2010). There are important differences among these approaches, as I will discuss, but they are united by their commitment to the claim that various embodied and extended processes sustain social understanding and interaction in the absence of mindreading and thus make mindreading superfluous. In this paper, I consider various ways of articulating and defending this claim. I will argue that the options that have been offered either fail to present an alternative to mindreading or commit one to a radical enactivist position that I will give reasons for being skeptical about. I will then present an alternative and moderate version of interactionism, according to which the embodied and extended processes that interactionists emphasize actually complement mindreading and may even contribute as an input to mindreading.
E. Beltrami in 1868 did not intend to prove the consistency of non-euclidean plane geometry nor the independence of the euclidean parallel postulate. His approach would have been unsuccessful if so intended. J. Hoüel in 1870 described the relevance of Beltrami's work to the issue of the independence of the euclidean parallel postulate. Hoüel's method is different from the independence proofs using reinterpretation of terms deployed by Peano about 1890, chiefly in using a fixed interpretation for non-logical terms. Comparing the work of Beltrami and Hoüel with the treatment of non-euclidean geometry after the development of the axiomatic method in the 1890s indicates an important shift in mathematicians' attitudes towards mathematical theories.
Simulation as an epistemic tool between theory and practice: A Comparison of the Relationship between Theory and Simulation in Science and in Folk Psychology
In this paper I explore the concept of simulation that is employed by proponents of the so-called simulation theory within the debate about the nature and scientific status of folk psychology. According to simulation theory, folk psychology is not a sort of theory that postulates theoretical entities (mental states and processes) and general laws, but a practice whereby we put ourselves into others’ shoes and simulate their situation from our own perspective. On the basis of this sort of simulation, we supposedly know how we would act or think or feel, and then expect the same of others. A closer look at the concept of simulation reveals some problems with this view, but also helps to clarify the insight motivating simulation theory. Specifically, I defend the thesis that the analogy to simulations in science shows us how theoretical elements in folk psychology can be complemented by (i.e. not replaced by) the central idea of simulation theory – namely that our own cognitive habits and dispositions provide us with a resource that is distinct from propositional knowledge in folk psychology. I also discuss the idea that our use of simulations during cognitive development enables us to imitate the people around us and thereby to become more similar to them, which in turn makes simulation an increasingly effective epistemic strategy. Insofar as theoretical elements – such as the distinctions, relations, and entities referred to in folk psychological discourse – play a role in imitative learning, they are causally embedded in our cognitive development, so we have good reason to regard them as being among the real causes of our behavior.