A fundamental assumption of theories of decision-making is that we detect mismatches between intention and outcome, adjust our behavior in the face of error, and adapt to changing circumstances. Is this always the case? We investigated the relation between intention, choice, and introspection. Participants made choices between presented face pairs on the basis of attractiveness, while we covertly manipulated the relationship between choice and outcome that they experienced. Participants failed to notice conspicuous mismatches between their intended choice and the outcome they were presented with, while nevertheless offering introspectively derived reasons for why they chose the way they did. We call this effect choice blindness.
Psychosis is associated with distorted perceptions and deficient bottom-up learning, such as classical fear conditioning. This has been interpreted as reflecting imprecise priors in low-level predictive coding systems. Paradoxically, overly strong beliefs, such as overvalued beliefs and delusions, are also present in psychosis-associated states. In line with this, research has suggested that patients with psychosis and associated phenotypes rely more on higher-order priors to interpret perceptual input. In this behavioural and fMRI study we investigated two types of fear learning, i.e., instructed fear learning, mediated by verbal suggestions about fear contingencies, and classical fear conditioning, mediated by low-level associative learning, in delusion proneness, a trait in healthy individuals linked to psychotic disorders. Subjects were shown four faces, two of which were paired with aversive stimulation in a fear conditioning procedure while two were not. Before the conditioning, subjects were informed about the contingencies for two of the faces (one of each type), while no information was given for the two other faces. We could thereby study the effects of both classical fear conditioning and instructed fear learning. Our main outcome variable was the evaluative rating of the faces. Simultaneously, fMRI measurements were performed to study the underlying mechanisms. We postulated that instructed fear learning, measured with evaluative ratings, is stronger in psychosis-related phenotypes, in contrast to classical fear conditioning, which has repeatedly been shown to be weaker in these groups. In line with our hypothesis, we observed significantly stronger instructed fear learning at the behavioural level in delusion-prone individuals compared to non-delusion-prone subjects. Instructed fear learning was associated with bilateral activation of the lateral orbitofrontal cortex that did not differ significantly between groups. However, delusion-prone subjects showed stronger functional connectivity between the right lateral orbitofrontal cortex and regions processing fear and pain. Our results suggest that psychosis-related states are associated with strong instructed fear learning in addition to the previously reported weak classical fear conditioning. Given the similarity between nocebo paradigms and instructed fear learning, our results also bear on why nocebo effects differ between individuals.
We propose that moral behaviour of artificial agents could be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.
Mitchell et al. propose that associative learning in humans and other animals requires the formation of propositions by means of conscious and controlled reasoning. This approach neglects important aspects of current thinking in evolutionary biology and neuroscience that support the claim that learning, here exemplified by fear learning, need be neither conscious nor controlled.