An important and supposedly impactful form of clinical ethics support is moral case deliberation (MCD). Empirical evidence, however, is limited with regard to its actual impact. With this literature review, we aim to investigate the empirical evidence of MCD, thereby a) informing the practice, and b) providing a focus for further research on and development of MCD in healthcare settings. A systematic literature search was conducted in the electronic databases PubMed, CINAHL and Web of Science. Both the data collection and the qualitative data analysis followed a stepwise approach, including continuous peer review and careful documentation of our decisions. The qualitative analysis was supported by ATLAS.ti. Based on a qualitative analysis of 25 empirical papers, we identified four clusters of themes: 1) facilitators and barriers in the preparation and context of MCD, i.e., a safe and open atmosphere created by a facilitator, a concrete case, commitment of participants, a focus on the moral dimension, and a supportive organization; 2) changes that are brought about on a personal and inter-professional level, with regard to professionals' feelings of relief, relatedness and confidence; understanding of the perspectives of colleagues, one's own perspective and the moral issue at stake; and awareness of the moral dimension of one's work and of the importance of reflection; 3) changes that are brought about in caring for patients and families; and 4) changes that are brought about on an organizational level. This review shows that MCD brings about changes in practice, mostly for professionals in inter-professional interactions. Most reported changes are considered positive, although challenges, frustrations and absence of change were also reported. Empirical evidence of a concrete impact on the quality of patient care is limited and is mostly based on self-reports. With patient-focused and methodologically sound qualitative research, the practice and the value of MCD in healthcare settings can be better understood, thus making a stronger case for this kind of ethics support.
The design of collaborative robotics, such as driver-assisted operations, engineers a potential automation of decision-making predicated on unobtrusive data gathering from human users. This form of ‘somatic surveillance’ increasingly relies on behavioural biometrics and sensory algorithms to verify the physiology of bodies in cabin interiors. Such processes secure cyber-physical space, but also register user capabilities for control that yield data as insured risk. In this technical re-formation of human–machine interactions for control and communication, a ‘dissonance of attribution’ (https://doi.org/10.1073/pnas.1805770115) is created between perceptions of phenomena, materials and decision-making. This reconfigures relations not only between humans and machines, objects and subjects, but possibly disrupts attributive functions in the social system of Law. What this requires is shifting legal accountability for action from a sovereignty of the human to a new materialist account based on a ‘cognitive assemblage’ between physiological data, computation and algorithmic sensing. This paper investigates the function of law as a guidance system to acknowledge this account of sensory and algorithmic computation as autonomous ‘sensing agents’ that may be accountable in situations of risk. This assemblage of robotic computation and sensory determination requires a clearer legal differentiation across the current static terminologies of person, property, liability and rights that maintain strict separations of object from subject. If this is neglected, we argue, law will impute attributions of error solely to humans despite evidence of operation via mutual control.
According to Bayesian epistemology, rational learning from experience is consistent learning, that is, learning should incorporate new information consistently into one's old system of beliefs. Simon M. Huttegger argues that this core idea can be transferred to situations where the learner's informational inputs are much more limited than Bayesianism assumes, thereby significantly expanding the reach of a Bayesian type of epistemology. What results from this is a unified account of probabilistic learning in the tradition of Richard Jeffrey's 'radical probabilism'. Along the way, Huttegger addresses a number of debates in epistemology and the philosophy of science, including the status of prior probabilities, whether Bayes' rule is the only legitimate form of learning from experience, and whether rational agents can have sustained disagreements. His book will be of interest to students and scholars of epistemology, of game and decision theory, and of cognitive, economic, and computer sciences.
Signaling games provide basic insights into some fundamental questions concerning the explanation of meaning. They can be analyzed in terms of rational choice theory and in terms of evolutionary game theory. It is argued that an evolutionary approach provides better explanations for the emergence of simple communication systems. To substantiate these arguments, I will look at models similar to those of Skyrms (2000) and Komarova and Niyogi (2004) and study their dynamical properties. My results will lend partial support to the thesis that evolution leads to communication. In general, states of partial communication may evolve with positive probability under standard evolutionary dynamics. However, unlike states of perfect communication, they are unstable relative to neutral drift.
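The evolutionary approach described above can be illustrated with a minimal simulation. What follows is a sketch, not the paper's actual model: a discrete-time, two-population replicator dynamics for the simplest Lewis signaling game (two equiprobable states, two signals, two acts, payoff 1 on a match); the strategy encoding, seed, and step count are illustrative assumptions.

```python
import random

# Pure strategies for the minimal Lewis signaling game.
SENDERS = [(0, 0), (0, 1), (1, 0), (1, 1)]    # maps state -> signal
RECEIVERS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # maps signal -> act

def success(i, j):
    # Probability that receiver j's act matches the state, given sender i
    # (states equiprobable, payoff 1 on a match).
    return sum(RECEIVERS[j][SENDERS[i][s]] == s for s in (0, 1)) / 2

def replicator(steps=2000, seed=0):
    rng = random.Random(seed)
    x = [rng.random() + 0.1 for _ in range(4)]  # sender population shares
    y = [rng.random() + 0.1 for _ in range(4)]  # receiver population shares
    x = [v / sum(x) for v in x]
    y = [v / sum(y) for v in y]
    for _ in range(steps):
        fx = [sum(success(i, j) * y[j] for j in range(4)) for i in range(4)]
        fy = [sum(success(i, j) * x[i] for i in range(4)) for j in range(4)]
        mx = sum(x[i] * fx[i] for i in range(4))
        my = sum(y[j] * fy[j] for j in range(4))
        x = [x[i] * fx[i] / mx for i in range(4)]  # discrete replicator update
        y = [y[j] * fy[j] / my for j in range(4)]
    # Average communicative success in the population.
    return sum(x[i] * y[j] * success(i, j) for i in range(4) for j in range(4))

print(round(replicator(), 3))
```

Depending on the starting point, such runs settle near a signaling system (success 1) or near a pooling outcome (success 0.5), which is the dichotomy between perfect and partial communication that the abstract describes.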
We explore the question of whether sustained rational disagreement is possible from a broadly Bayesian perspective. The setting is one where agents update on the same information, with special consideration being given to the case of uncertain information. The classical merging of opinions theorem of Blackwell and Dubins shows under what conditions updated beliefs come and stay close together under Bayesian conditioning. We extend this result to a type of Jeffrey conditioning where agents update on evidence that is uncertain but solid. However, merging of beliefs does not generally hold for Jeffrey conditioning on evidence that is fluid. Several theorems on the asymptotic behavior of subjective probabilities are proven. Taken together they show that while a consensus nearly always emerges in important special cases, sustained rational disagreement can be expected in many other situations.
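For readers unfamiliar with Jeffrey conditioning, here is a toy version of the general update rule the abstract builds on (not the paper's solid/fluid distinction itself): credences are revised by shifting the probability over an evidence partition to new values q while holding each conditional P(H | Ei) fixed. The numbers are purely illustrative.

```python
def jeffrey(joint, q):
    # joint[h][e] is the prior joint probability of hypothesis h and
    # evidence cell e; q is the new probability over the evidence partition.
    p_e = [joint[0][e] + joint[1][e] for e in (0, 1)]         # prior P(Ei)
    return [sum(joint[h][e] / p_e[e] * q[e] for e in (0, 1))  # sum_i P(H|Ei) q_i
            for h in (0, 1)]

joint = [[0.4, 0.1],   # P(H1 & E1), P(H1 & E2)
         [0.1, 0.4]]   # P(H2 & E1), P(H2 & E2)

print([round(v, 2) for v in jeffrey(joint, [1.0, 0.0])])  # strict conditioning on E1
print([round(v, 2) for v in jeffrey(joint, [0.7, 0.3])])  # uncertain evidence
```

Setting q = (1, 0) recovers ordinary Bayesian conditioning on E1, which is why Jeffrey's rule is the natural generalization for uncertain evidence.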
Many animals will invent new behaviour patterns, adjust established behaviours to a novel context, or respond to stresses in an appropriate and novel manner. This is the first ever book on the topic of 'animal innovation'. Bringing together leading scientific authorities on animal and human innovation, this book will put the topic of animal innovation on the map, and heighten awareness of this developing field.
Recently there has been some interest in studying the explanation of meaning by using signaling games. I shall argue that the meaning of signals in signaling games remains sufficiently unclear to motivate further investigation. In particular, the possibility of distinguishing imperatives and indicatives at a fundamental level will be explored. Thereby I am trying to preserve the generality of the signaling games framework while bringing it closer to human languages. A number of convergence results for the evolutionary dynamics of our models will be proved.
I discuss two ways of justifying reflection principles. First, I propose that an undogmatic reading of dynamic Dutch book arguments provides them with a sound foundation. Second, I show also that minimizing expected inaccuracy leads to a novel argument for reflection principles. The required inaccuracy measures comprise a natural class of functions that can be derived from a generalization of a condition known as propriety or immodesty. This shows that reflection principles are an essential feature not just of consistent degrees of belief but also of degrees of belief that approximate truth.
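The propriety (immodesty) condition mentioned above can be checked numerically for the Brier score, a standard inaccuracy measure: the announcement that minimizes your expected inaccuracy is your own credence. This is a small illustrative grid search, not the paper's derivation.

```python
def expected_brier(p, q):
    # Expected Brier inaccuracy of announcing credence q when your own
    # credence that the proposition is true is p.
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.3
grid = [i / 100 for i in range(101)]
best = min(grid, key=lambda q: expected_brier(p, q))
print(best)  # 0.3: by your own lights, announcing your own credence is optimal
```

Scoring rules with this property are called (strictly) proper; the abstract's point is that a natural generalization of this condition yields the inaccuracy measures needed for the reflection argument.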
Game theory has a prominent role in evolutionary biology, in particular in the ecological study of various phenomena ranging from conflict behaviour to altruism to signalling and beyond. The two central methodological tools in biological game theory are the concepts of Nash equilibrium and evolutionarily stable strategy. While both were inspired by a dynamic conception of evolution, these concepts are essentially static—they only show that a population is uninvadable, but not that a population is likely to evolve. In this article, we argue that a static methodology can lead to misleading views about dynamic evolutionary processes. We advocate, instead, a more pluralistic methodology, which includes both static and dynamic game theoretic tools. Such an approach provides a more complete picture of the evolution of strategic behaviour.
Over the past few years the use of stimulants such as methylphenidate and modafinil among the student population has attracted considerable debate in the pages of bioethics journals. Under the rubric of cognitive enhancement, bioethicists have discussed this use of stimulants—along with future technologies of enhancement—and have launched a sometimes forceful debate of such practices. In the following paper, it is argued that even if we focus solely upon current practices, the term cognitive enhancement encompasses a wide range of ethical considerations that can usefully be addressed without the need for speculation. In taking this position it is suggested that we divide cognitive enhancement into a series of empirically-constructed frameworks—medical risks and benefits, self-medication and under-prescription, prescription drug abuse and over-medication, and finally, the intention to cognitively enhance. These are not mutually exclusive frameworks, but provide a way in which to identify the scope of the issue at hand and particular ethical and medical questions that may be relevant to enhancement. By a process of elimination it is suggested that we can indeed talk of cognitive enhancement as an observable set of practices. However, in doing so we should be aware of how academic commentaries and discussion may be seen as both capturing reality and reifying cognitive enhancement as an entity.
It is well known that Rudolf Carnap's original system of inductive logic failed to provide an adequate account of analogical reasoning. Since this problem was identified, there has been no shortage of proposals for how to incorporate analogy into inductive inference. Most alternatives to Carnap's system, unlike his original one, have not been derived from first principles; this makes it to some extent unclear what the epistemic situations are to which they apply. This paper derives a new analogical inductive logic from a set of axioms which extend Carnap's postulates in a natural way. The key insights come from Bruno de Finetti's ideas about analogy. The axioms of the new system capture epistemic conditions that call for a strong kind of analogical reasoning. The new system has a number of merits, but is also subject to limitations. I shall discuss both, together with some possible ways to generalize the approach taken in this paper.
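For context, Carnap's original λ-continuum and its lack of analogy effects can be shown in a few lines. This is a sketch of the textbook predictive rule with illustrative counts, not the paper's new system: the probability of observing category i next blends observed frequency with a symmetric prior.

```python
def carnap(counts, i, lam):
    # Carnap's lambda-continuum: P(next observation is category i) given
    # observed counts, with k categories and inductive parameter lam > 0.
    n, k = sum(counts), len(counts)
    return (counts[i] + lam / k) / (n + lam)

# 3 of 4 observations fell in category 0, with lam = 2 and k = 2:
print(round(carnap([3, 1], 0, 2.0), 3))  # 0.667

# The failure of analogy: the prediction for category 0 depends only on its
# own count, not on which *other* categories the remaining data fell into.
print(carnap([3, 1, 0], 0, 3.0) == carnap([3, 0, 1], 0, 3.0))  # True
```

The second print illustrates exactly the inadequacy the abstract mentions: intuitively, observations in a category similar to 0 should raise the prediction for 0 more than observations in a dissimilar category, but in Carnap's system they cannot.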
One of the main contributions of Richard Bradley’s book is an elegant extension of Jeffrey’s Logic of Decision that countenances the evaluation of conditional prospects. This extension offers a promising new setting in which to model dynamic choice. In Bradley’s framework, plans can be understood as conditionals of an appropriate sort, while dynamic consistency can be viewed as providing a constraint on the evaluation of conditionals across time. In this paper, we study connections between planning conditionals and dynamic consistency.
Generalized probabilistic learning takes place in a black-box where present probabilities lead to future probabilities by way of a hidden learning process. The idea that generalized learning can be partially characterized by saying that it doesn’t foreseeably lead to harmful decisions is explored. It is shown that a martingale principle follows for finite probability spaces.
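The martingale idea can be illustrated with ordinary Bayesian conditioning, a standard special case rather than the paper's general black-box setting: computed with today's probabilities, the expectation of tomorrow's credence equals today's credence. Exact arithmetic with fractions keeps the identity clean; the joint distribution is illustrative.

```python
from fractions import Fraction as F

# An illustrative joint distribution over a hypothesis H and evidence e1/e2.
joint = {("H", "e1"): F(3, 10), ("H", "e2"): F(1, 5),
         ("~H", "e1"): F(1, 10), ("~H", "e2"): F(2, 5)}

prior_H = joint[("H", "e1")] + joint[("H", "e2")]                  # today's credence
p_e = {e: joint[("H", e)] + joint[("~H", e)] for e in ("e1", "e2")}
posterior = {e: joint[("H", e)] / p_e[e] for e in ("e1", "e2")}    # P(H | e)

# Expectation of tomorrow's credence, taken with today's probabilities.
expected = sum(p_e[e] * posterior[e] for e in ("e1", "e2"))
print(expected == prior_H)  # True: credence is a martingale under conditioning
```

That no foreseeable evidence shifts your expected credence is one way of cashing out the idea that learning does not foreseeably lead to harmful decisions.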
We study the handicap principle in terms of the Sir Philip Sidney game. The handicap principle asserts that cost is required to allow for honest signalling in the face of conflicts of interest. We show that the significance of the handicap principle can be challenged from two new directions. Firstly, both the costly signalling equilibrium and certain states of no communication are stable under the replicator dynamics; however, the latter states are more likely in cases where honest signalling should apply. Secondly, we prove the existence and stability of polymorphisms where players mix between being honest and being deceptive and where signalling costs can be very low. Neither the polymorphisms nor the states of no communication are evolutionarily stable, but they turn out to be more important for standard evolutionary dynamics than the costly signalling equilibrium.
The Bayesian theorem on convergence to the truth states that a rational inquirer believes with certainty that her degrees of belief capture the truth about a large swath of hypotheses with increasing evidence. This result has been criticized as showcasing a problematic kind of epistemic immodesty when applied to infinite hypotheses that can never be approximated by finite evidence. The central point at issue—that certain hypotheses may forever be beyond the reach of a finite investigation no matter how large one's reservoir of evidence—cannot be captured adequately within standard probability theory. As an alternative, I propose a nonstandard probabilistic framework that, by using arbitrarily small and large numbers, makes room for the type of fine-grained conceptual distinctions appropriate for a deeper analysis of convergence to the truth. This framework allows for the right kind of modesty about attaining truth in the limit.
In a recent paper, Belot argues that Bayesians are epistemologically flawed because they believe with probability 1 that they will learn the truth about observational propositions in the limit. While Belot’s considerations suggest that this result should be interpreted with some care, the concerns he raises can largely be defused by putting convergence to the truth in the context of learning from an arbitrarily large but finite number of observations.
We study a low-rationality learning dynamics called probe and adjust. Our emphasis is on its properties in games of information transfer such as the Lewis signaling game or the Bala-Goyal network game. These games fall into the class of weakly better reply games, in which, starting from any action profile, there is a weakly better reply path to a strict Nash equilibrium. We prove that probe and adjust will be close to strict Nash equilibria in this class of games with arbitrarily high probability. In addition, we compare these asymptotic properties to short-run behavior.
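As a rough illustration of the kind of rule at issue, here is a minimal probe-and-adjust sketch in a two-player pure coordination game rather than the signaling or network games analyzed in the paper; the probe probability, round count, and the exact revert rule are illustrative assumptions. Each player occasionally probes a random action and reverts whenever the probe pays worse than the previous round.

```python
import random

def probe_and_adjust(rounds=4000, probe_prob=0.05, seed=3):
    rng = random.Random(seed)
    acts = [0, 1]               # start miscoordinated (payoff 1 only on a match)
    bench = [0, 0]              # each player's benchmark payoff
    coordinated = 0
    for _ in range(rounds):
        old = acts[:]
        probing = [rng.random() < probe_prob for _ in (0, 1)]
        for i in (0, 1):
            if probing[i]:
                acts[i] = rng.randrange(2)     # try a random action
        pay = 1 if acts[0] == acts[1] else 0
        coordinated += pay
        for i in (0, 1):
            if probing[i] and pay < bench[i]:
                acts[i] = old[i]               # failed probe: go back
            else:
                bench[i] = pay                 # keep the action, rebenchmark
    return coordinated / rounds                # fraction of coordinated rounds

print(probe_and_adjust())
```

Despite the players knowing nothing about the game or each other, the pair spends most rounds at a strict Nash equilibrium, which is the flavor of the asymptotic result described above.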
The handicap principle is one of the most influential ideas in evolutionary biology. It asserts that when there is conflict of interest in a signaling interaction, signals must be costly in order to be reliable. While in evolutionary biology it is a common practice to distinguish between indexes and fakable signals, we argue that this dichotomy is an artifact of existing popular signaling models. Once this distinction is abandoned, we show one cannot adequately understand signaling behavior by focusing solely on cost. Under our reframing, cost becomes one (and probably not the most important) of a collection of factors preventing deception.
The spontaneous emergence of signaling has already been studied in terms of standard evolutionary dynamics of signaling games. Standard evolutionary dynamics is given by the replicator equations. Thus, it is not clear whether the results for standard evolutionary dynamics depend crucially on the functional form of the replicator equations. In this paper I show that the basic results for the replicator dynamics of signaling games carry over to a number of other evolutionary dynamics.
Anthropological insights into the use of race/ethnicity to explore genetic contributions to disparities in health were developed using in-depth qualitative interviews with editorial staff from nineteen genetics journals, focusing on the methodological and conceptual mechanisms required to make race/ethnicity a genetic variable. As such, these analyses explore how and why race/ethnicity comes to be used in the context of genetic research, set against the background of continuing critiques from anthropology and related human sciences that focus on the social construction, structural correlates and limited genetic validity of racial/ethnic categories. The analyses demonstrate how these critiques have failed to engage geneticists, and how geneticists use a range of essentially cultural devices to protect and separate their use of race/ethnicity as a genetic construct from its use as a societal and social science resource. Given its multidisciplinary, biosocial nature and the cultural gaze of its ethnographic methodologies, anthropology is well placed to explore the cultural separation of science and society, and of natural and social science disciplines. Anthropological insights into the use of race/ethnicity to explore disparities in health suggest that moving beyond genetic explanations of innate difference might benefit from a more even-handed critique of how both the natural and social sciences tend to essentialize selective elements of race/ethnicity. Drawing on the example of HIV/AIDS, this paper demonstrates how public health has been undermined by the use of race/ethnicity as an analytical variable, both as a cipher for innate genetic differences in susceptibility and response to treatment, and in its use to identify those at greater risk of becoming infected and infecting others. Clearly, a tendency for biological reductionism can place many biomedical issues beyond the scope of public health interventions, while socio-cultural essentialization has tended to stigmatize the communities where these risks are more prevalent.
The work of Bruno Latour has animated debates in sociology, anthropology and philosophy over several decades, while attracting criticisms of the ontological, epistemological and political implications of his focus on networks. This article takes a particular in-depth example – the case of the genetic condition of sickle cell – and, drawing upon anthropological, archaeological and sociological evidence of the sickle cell body in history, appraises both early and later Latourian ideas. The article concludes that while methodologically useful in drawing attention to the complicated links of humans, animals and things, concerns remain about Latourian ontological claims. Limitations include an empiricist failure to account for absence; an insufficiently robust conception of emergence; an unwarranted curtailment of counterfactual human knowledge; a lack of concern for serial ‘undeserving losers’; a tendency to accord excessive freedoms to human actors; and a lack of a conception of how things may be considered as agents rather than actants.
This article surveys the main philosophical and formal ideas revolving around the conventionality of language from the perspective of game theory. For very basic situations, this leads to a coherent view of conventions that offers interesting insights. Although many open problems remain, this article argues, by outlining partial solution attempts, that there is no principled reason for not applying game-theoretic methods to them.
Evolutionary developmental biology (“evo-devo”) may provide insights and new methods for studies of cognition and cultural evolution. For example, I propose using cultural selection and individual learning to examine constraints on cultural evolution. Modularity, the idea that traits vary independently, can facilitate evolution (increase “evolvability”), because evolution can act on one trait without disrupting another. I explore links between cognitive modularity, evolutionary modularity, and cultural evolvability. (Published Online November 9 2006).
We consider the Stag Hunt in terms of Maynard Smith's famous Haystack model. In the Stag Hunt, contrary to the Prisoner's Dilemma, there is a cooperative equilibrium besides the equilibrium where every player defects. This implies that in the Haystack model, where a population is partitioned into groups, groups playing the cooperative equilibrium tend to grow faster than those at the non-cooperative equilibrium. We determine under what conditions this leads to the takeover of the population by cooperators. Moreover, we compare our results to the case of an unstructured population and to the case of the Prisoner's Dilemma. Finally, we point to some implications our findings have for three distinct ideas: Ken Binmore's group selection argument in favor of the evolution of efficient social contracts, Sewall Wright's Shifting Balance theory, and the equilibrium selection problem of game theory.
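The structural contrast drawn above (a cooperative equilibrium in the Stag Hunt but not in the Prisoner's Dilemma) can be checked mechanically with a best-reply test; the payoff numbers below are illustrative, not taken from the paper.

```python
def strict_symmetric_equilibria(A):
    # For a symmetric 2x2 game with row-player payoff matrix A, action i
    # played by both is a strict symmetric equilibrium when deviating to
    # the other action against i pays strictly less.
    return [i for i in range(2) if A[i][i] > A[1 - i][i]]

stag_hunt = [[4, 0],   # rows/cols: Stag, Hare
             [3, 3]]
pd        = [[3, 0],   # rows/cols: Cooperate, Defect
             [4, 1]]

print(strict_symmetric_equilibria(stag_hunt))  # [0, 1]: both-Stag and both-Hare
print(strict_symmetric_equilibria(pd))         # [1]: mutual defection only
```

It is the existence of the payoff-dominant both-Stag equilibrium, alongside the risk-dominant both-Hare equilibrium, that lets groups at the cooperative equilibrium outgrow the others in the Haystack setting.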
How can players reach a Nash equilibrium? I offer one possible explanation in terms of a low-rationality learning method called probe and adjust by proving that it converges to strict Nash equilibria in an important class of games. This demonstrates that decidedly limited learning methods can support Nash equilibrium play.
Numerous studies have documented individual differences in exploratory tendencies and other phenomena related to search, and these differences have been linked to fitness. Here, I discuss the origins of these differences, focusing on how experience shapes animal search and exploration. The origin of individual differences will also depend upon the alternatives to exploration that are available. Given that search and exploration frequently carry significant costs, we might expect individuals to utilize cues indicating the potential net payoffs of exploration versus the exploitation of known acts. Informative cues could arise from both recent and early-life experiences, from both the social and physical environment. Open questions are the extent to which an individual's exploratory tendencies are fixed throughout life versus being flexibly adjusted according to prevailing conditions and the actions of other individuals, and the extent to which individual differences in exploration extend across domains and are independent of other processes.
We study a simple game theoretic model of information transfer which we consider to be a baseline model for capturing strategic aspects of epistemological questions. In particular, we focus on the question whether simple learning rules lead to an efficient transfer of information. We find that reinforcement learning, which is based exclusively on payoff experiences, is inadequate to generate efficient networks of information transfer. Fictitious play, the game theoretic counterpart to Carnapian inductive logic and a more sophisticated kind of learning, suffices to produce efficiency in information transfer.
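A bare-bones version of the payoff-based learning described above can be sketched as follows. This is Roth–Erev style propensity updating in a two-act toy problem, not the paper's network model: each act's choice propensity grows by the payoff it just earned. The payoffs, round count, and seed are illustrative assumptions (here only act 0 ever pays).

```python
import random

def reinforcement(rounds=3000, seed=2):
    rng = random.Random(seed)
    weights = [1.0, 1.0]    # initial propensities for the two acts
    payoffs = [1.0, 0.0]    # only act 0 is ever rewarded
    for _ in range(rounds):
        # Choose an act with probability proportional to its propensity.
        pick = 0 if rng.random() < weights[0] / sum(weights) else 1
        weights[pick] += payoffs[pick]   # reinforce with the payoff received
    return weights[0] / sum(weights)     # current probability of the paying act

print(round(reinforcement(), 2))
```

Because the rule sees only its own payoff history and nothing about other players' choices, it is exactly the kind of minimal learner the abstract contrasts with fictitious play, which additionally tracks the empirical frequencies of opponents' actions.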
Evolutionary questions require specialized approaches, part of which are comparisons between close relatives. However, to understand the origins of human tool behavior, comparisons solely with chimpanzees are insufficient, lacking the power to identify derived traits. Moreover, tool use is unlikely to be a unitary phenomenon. Large-scale comparative analyses provide an alternative and suggest that tool use co-evolves with a suite of cognitive traits.
The concept of fitness is central to evolutionary biology. Models of evolutionary change typically use some quantity called “fitness” which measures an organism’s reproductive success. But what exactly does it mean that fitness is such a measure? In what follows, we look at the interplay between abstract evolutionary models and quantitative measures of fitness and develop a measurement-theoretic perspective on fitness in order to explore what makes certain measures of fitness significant.
Guilt and shame are self-conscious emotions with implications for mental health, social and occupational functioning, and the effectiveness of sports practice. To date, the assessment and role of athlete-specific guilt and shame has been under-researched. Reporting data from 174 junior elite cricketers, the present study utilized exploratory factor analysis in validating the Athletic Perceptions of Performance Scale (APPS), assessing three distinct and statistically reliable factors: athletic shame-proneness, guilt-proneness, and no-concern. Conditional process analysis indicated that APPS shame-proneness mediated the relationship between general and athlete-specific distress. While APPS domains of guilt-proneness and no-concern were not significant mediators, they exhibited correlations in the expected direction with indices of psychological distress and well-being. The APPS may assist coaches and support staff in identifying players who may benefit from targeted interventions to reduce the likelihood of experiencing shame-prone states.
Behavioral innovations induced by the social or physical environment are likely to be of great functional and evolutionary importance, and thus warrant serious attention. Innovation provides a process by which animals can adjust to changed environments. Despite this apparent adaptive advantage, it is not known whether innovative propensities are adaptive specializations. Furthermore, the varied psychological processes underlying innovation remain poorly understood.