
1 Introduction

On March 16, 2021, Peter Wakker turned 65. We considered this an outstanding opportunity to honor his work with a special issue of Theory and Decision and approached some of what we consider the world’s most prominent decision theorists, asking whether they would like to contribute to it. Our invitations were accepted with an enthusiasm that touched us and that reflects the enormous esteem in which Peter and his work are held in the decision theory community. As one of them wrote: “Peter Wakker is not only a brilliant researcher, but by far the greatest scholar in our field.”

We had hoped to be able to compile a single issue, but in the end, the enthusiasm was such that we happily went for a double issue. This special issue reflects the breadth of Peter’s contributions, ranging from advanced theoretical results to insightful experimentation to deep, more philosophical contributions on the methodology of decision theory. We are very grateful to all the authors for their contributions and for sticking to the rather tight deadline that we imposed.

Ever since reading de Finetti (1937) as a young mathematics student, Peter Wakker has dedicated his life to science and to decision theory in particular. We are extremely lucky to have worked closely with him over the years and to have witnessed his approach to science. Peter takes models seriously: few things can upset him more than a model with improperly specified primitives. He has a very personal view of decision theory, as Daniel Kahneman puts it nicely on the back of Peter’s famous book on Prospect Theory: a view to which he adheres, but which, at the same time, he is always open to discuss and, occasionally, willing to reconsider. Peter has contributed immensely to the field of decision theory, not only through his many articles, presentations, and two books, but also through his famous annotated bibliography, a new version of which was updated each year on his birthday, through the people he trained, through his role as a journal editor and as an incredibly helpful and constructive referee, and through his comments on many, many articles. Few articles appear in decision theory that do not acknowledge his “extremely helpful” comments. Peter is exceptionally generous in helping others to improve their research. Although a manuscript full of his typical scribbled notes in pencil may feel overwhelming at first, his deep thoughts and careful comments always lead, in the end, to fundamental improvements.

We are very grateful to Peter for all the support we have personally received and for his advancement of the field of decision theory. This special issue is a small token of appreciation. Thank you, Peter, for being such a great researcher, source of inspiration, and wonderful human being.

2 Preview of the Special Issue

We will now provide a brief overview of the papers in this special issue and their relation to Peter’s work. The papers are ordered alphabetically by the name of the first author. The introduction to each paper was written by the editor who handled it.

In empirical studies that focus on utility measurement, the trade-off method (Wakker & Deneffe, 1996) is particularly useful. Its formulation in terms of preference-based midpoints presents a simple way to carry out comparative statics while bypassing deviations from expected utility attributed to distortion of probabilities or to ambiguity (Baillon et al., 2012). In Alon and Schmeidler (2014), the trade-off method was used to derive foundations for the multiple-prior or maxmin expected utility model. The paper by Shiri Alon in this special issue extends the derivation of Alon and Schmeidler by formulating weaker principles. The focus is on 50:50 mixtures on the outcome scale, which are related to utility midpoints once this cardinal outcome measure has been identified. Under some standard assumptions on preferences, a binary version of comonotonic trade-off consistency can be used to pin down cardinal utility within a biseparable preference representation. Next, certainty independence and uncertainty aversion can be invoked to obtain the maxmin multiple-prior model in a purely uncertainty-based set-up. Alon shows that the latter two properties can be formulated to hold just for 50:50 mixtures. This is a nice theoretical result, in particular for those who intend to bring the theory to the data: a test of the certainty independence and uncertainty aversion principles is obviously simpler in the 50:50 formulation of these characteristic properties.

The trade-off consistency property for preferences that Peter developed in the 1980s (Wakker, 1984, 1989a) is nowadays a well-known and widely applied tool. Originally, Peter developed trade-off consistency for the identification of cardinal utility in expected utility. Since the 1990s, the tool has been extended further to derive prospect theory (Wakker & Tversky, 1993), other ambiguity models (Köbberling & Wakker, 2003), and regret theory (Diecidue & Somasundaram, 2017). The theoretical application of Peter’s trade-off method to the probability scale started with Abdellaoui (2002) and was extended to uncertainty in Abdellaoui and Wakker (2005). Following the ideas of Wakker and Deneffe (1996), the probability version of trade-off consistency has been used to measure probability weighting in prospect theory (Abdellaoui, 2000; Bleichrodt & Pinto, 2000) and, complementing Chateauneuf and Wakker (1999), to obtain a general version of prospect theory for risk (Werner & Zank, 2019).
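
To recall how one step of the method works (a minimal sketch in our own notation, not taken from any of the papers cited above): fix two gauge outcomes \(R > r\), an event \(E\), and a starting outcome \(x_0 > R\), and elicit a standard sequence \(x_1, x_2, \ldots\) from the indifferences

\[
(x_1 \text{ if } E,\; r \text{ otherwise}) \sim (x_0 \text{ if } E,\; R \text{ otherwise}), \qquad
(x_2 \text{ if } E,\; r \text{ otherwise}) \sim (x_1 \text{ if } E,\; R \text{ otherwise}).
\]

Under a biseparable evaluation \(W(E)\,u(x) + (1-W(E))\,u(y)\) of binary acts with \(x \ge y\), each indifference gives \(W(E)\,[u(x_{i+1})-u(x_i)] = (1-W(E))\,[u(R)-u(r)]\), so that \(u(x_2)-u(x_1) = u(x_1)-u(x_0)\): the elicited outcomes are equally spaced in utility units, while the (possibly distorted or ambiguity-dependent) weight \(W(E)\) cancels out.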

The trade-off method has also been used in decision contexts where the decision-maker does not have as well-specified a structure over the state space as in subjective expected utility. In such contexts, decision-makers may face a choice situation that reminds them of similar decision situations experienced in the past, and they can call on that memory. The trade-off consistency property can be formulated for such a setting to derive preference foundations for utility and similarity functions (Gilboa et al., 2002) or to measure aspects of case-based decision theory (Bleichrodt et al., 2017). If a decision-maker has faced the same decision problem in the past and perfectly recalls the action and the result, this feedback can be used to inform decision-making. It has been argued that the availability of such feedback may influence choice behavior.

The research team at Bocconi has without any doubt contributed most (at least quantitatively) to this special issue. We asked them whether they would like to contribute a paper and in the end they sent us three! The paper by Pierpaolo Battigalli, Simone Cerreia Vioglio, Fabio Maccheroni, Massimo Marinacci, and Thomas Sargent studies the interesting interplay between decision under uncertainty, game theory, and macroeconomic policy. They consider policy makers who are unsure about the data-generating model. The paper shows that even patient policy makers who are willing to collect a lot of data and who update their beliefs rationally may make suboptimal decisions based on incorrect beliefs about the data-generating model. The paper illustrates these ideas using the trade-off between inflation and unemployment (the famous Phillips curve), which dominated macroeconomic policy in the 1970s. They show that observing unemployment and inflation in the long run will not resolve the uncertainty about the multiplier effect of policy on unemployment, thereby leaving room for debate about the optimality of (Keynesian) monetary policy. The paper does not include ambiguity aversion, Peter’s favorite topic, but at the end, the authors give interesting ideas about how modern ambiguity models, including prospect theory, may help to extend their results.

The second paper of the Bocconi team, by Simone Cerreia Vioglio, Fabio Maccheroni, Massimo Marinacci, and Aldo Rustichini, is on revealed preference, which Peter believes should be the foundation of any theory of decision-making and to which he has made important contributions (e.g., Peters & Wakker, 1991, 1994; Wakker, 1989a). The paper studies to what extent the law of demand for normal goods, arguably the main result of consumer theory, continues to hold when choices are stochastic (as is most likely to happen in real-world applications). The paper formulates a consistency requirement, a variation on the classic choice axiom of Luce (1959), and shows that it is the stochastic counterpart of the weak axiom of revealed preference. Traditional consumer theory continues to hold when choices are stochastic, provided that dominated alternatives are excluded from consideration. It is really nice that the paper by Simone, Fabio, Massimo, and Aldo builds on the work of Luce, whom Peter held in high esteem and whose Foundations of Measurement (Krantz et al., 1971) has greatly influenced Peter’s research (e.g., Abdellaoui & Wakker, 2005, 2020; Wakker, 1988, 1991).
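
As a reminder (in our own notation, not the paper’s), Luce’s (1959) choice axiom implies the familiar ratio form for choice probabilities,

\[
P(x \mid A) \;=\; \frac{w(x)}{\sum_{y \in A} w(y)} \qquad \text{for } x \in A,
\]

for some positive scale values \(w(\cdot)\); the consistency requirement formulated by Simone, Fabio, Massimo, and Aldo is a variation on this classic axiom.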

How to derive additive separability for rank-ordered sets of outcomes has been central in Peter’s work (Chateauneuf & Wakker, 1993). In the 1990s, Peter worked with Chew Soo Hong on characterizing rank-dependent theories by restricting Savage’s sure-thing principle to comonotonic acts (Chew & Wakker, 1996). Chew’s work on exchangeability (together with Jacob Sagi), which characterizes probabilistic sophistication within sources of ambiguity (Chew & Sagi, 2006, 2008), forms the basis of Peter’s current work on ambiguity (e.g., Abdellaoui et al., 2011). The paper by Chew, Robin Chark, Songfa Zhong, Shui Ying Tsang, Chiea Chuen Khor, Richard P. Ebstein, and Hong Xue in this special issue reflects Chew’s current research on relating economic behavior to variation in genes. It studies whether a well-known effect in decision under ambiguity, familiarity bias, can be related to genes that are known to be involved in processing anxiety and fear. The paper indeed finds evidence of this relation, giving new insights into the origins of source preference, a topic Peter has been working on for many years (e.g., Li et al., 2018; Tversky & Wakker, 1995).

Alain Chateauneuf and Michèle Cohen, two long-time friends of Peter’s, jointly with Mina Mostoufi, provide a model-free characterization of risk aversion in the sense of aversion to left-monotone increases in risk. Left-monotone spreads in risk are a special case of mean-preserving spreads in risk in which the additional risk involves lower outcomes. In Chateauneuf et al. (2004), it was shown that left-monotone spreads in risk can be obtained by repeated applications of “Pigou–Dalton transfers” whereby, in a lottery, a worse outcome is reduced and a better outcome improved, such that the new lottery has the same expected value as the original one. For an expected utility maximizer, aversion to left-monotone spreads is reflected in a globally concave utility and is, therefore, equivalent to strong risk aversion (aversion to mean-preserving spreads) and to weak risk aversion (preference for the expected value over the lottery). As such, studying these notions of risk aversion in a model-free context is warranted.
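
For concreteness, here is a minimal numerical illustration of one such transfer (our own example, not taken from the paper). Starting from the lottery \(X = (\tfrac12: 10,\ \tfrac12: 30)\), reducing the worse outcome by 4 and improving the better outcome by 4 yields \(X' = (\tfrac12: 6,\ \tfrac12: 34)\). Both lotteries have expected value 20, yet

\[
\tfrac12\, u(6) + \tfrac12\, u(34) \;\le\; \tfrac12\, u(10) + \tfrac12\, u(30)
\]

for every concave utility function \(u\), so an expected utility maximizer with concave utility weakly prefers \(X\) to \(X'\). Aversion to (repeated applications of) such transfers is exactly what global concavity of utility captures under expected utility.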

The paper by Chateauneuf, Cohen, and Mostoufi builds on Gollier and Schlesinger (1996), who proved that under strong risk aversion the optimality of deductible insurance does not depend on expected utility preferences, and on Vergnaud (1997), who showed that a deductible insurance policy is optimal for a decision-maker who is averse to left-monotone increases in risk. Chateauneuf et al. prove that the converse is also true: a decision-maker who regards a deductible insurance policy as optimal must be averse to left-monotone spreads in risk (Theorem 3). Additionally, they show that in the context of the dual theory of Yaari (1987), the level of the deductible can easily be derived (Theorem 4). In the dual theory, aversion to left-monotone increases in risk implies that the probability weighting function is star-shaped at certainty (Chateauneuf et al., 2004). Using an example of a two-parameter probability weighting function from Wakker (2010, p. 208), Chateauneuf, Cohen, and Mostoufi demonstrate under which conditions a dual theory decision-maker buys no insurance, full insurance, or a deductible insurance policy. In the late 1980s and early 1990s, Peter wrote several clarifying works on how additive separability across mutually exclusive events can be derived (Chateauneuf & Wakker, 1993; Wakker, 1989a, 1991, 1993). These important results served as a basis for deriving preference models with parametric utility (Wakker & Zank, 2002) or, when applied to the probability scale, parametric probability weighting functions (Abdellaoui et al., 2010; Diecidue et al., 2009; Webb & Zank, 2011). Constraints on those parameters can be imposed, for instance to capture the optimism and pessimism (Wakker, 1994) that are often found in experiments and that can explain simultaneous gambling and purchase of insurance (Wakker, 2010, Exercise 7.2.3).

The paper by Ido Erev, a good friend of Peter’s from whom Peter claims to have learned how to do experiments, Ofir Yakobi, Nathaniel Ashby, and Nick Chater provides an empirical analysis of how “decision by past experience” and “decision by prospective experience” trigger particular reactions by subjects. Prospective experience means that the decision-maker can repeatedly sample and obtain feedback without having to internalize the monetary outcomes. Erev et al. provide two experimental studies which they use to support the face-or-cue hypothesis. This model describes how subjects react to the information provided through sampling. Some information appears to trigger a “face strategy”, which implies a focus on the differences in the value of the monetary outcomes faced when sampling. By contrast, in the “cue strategy”, sampling activates memories of similar sample outcomes from earlier trials. For both types of strategies, the assumption of reliance on small samples of outcomes appears to be crucial. These results are important for judging the value of repeated choices in experiments that use such repetitions to verify whether subjects choose consistently. Alternatively, preventing subjects from turning to such a face-or-cue heuristic may be important in experiments that involve a large number of similar choice situations. In that case, adding “filler choices” (e.g., as in Wakker et al., 1994) may help to avoid the impact of heuristics.

The paper by Simon Gaechter, Eric Johnson, and Andreas Herrmann is a classic that has been circulating for many years as a working paper. It is a paper for which Peter has repeatedly expressed his great admiration. We are happy that the authors agreed to publish their paper in this special issue and we hope that doing so will draw even more attention to it. Gaechter, Johnson, and Herrmann have collected an impressive dataset of 660 randomly selected customers of a German car manufacturer. They measure the customers’ loss aversion both in a riskless task [using willingness to pay (WTP) and willingness to accept (WTA)] and in a risky task using mixed prospects. What is impressive, besides their dataset, is that the authors find evidence of a WTP–WTA disparity that can be attributed to loss aversion not only between subjects, but even within subjects. The strong correlation between loss aversion in the WTP–WTA task and loss aversion under risk suggests that loss aversion is a stable personality trait and not context-dependent. Loss aversion is one of the main insights from behavioral economics and Peter has written many papers on it, defining it theoretically and showing its importance in explaining data (e.g., Köbberling & Wakker, 2005; Li et al., 2018). Simon, Eric, and Andreas’s paper is one of the most convincing demonstrations of loss aversion and convincingly rejects challenges to it and to related concepts like the endowment effect (e.g., Plott & Zeiler, 2005).

In the 1980s, Itzhak Gilboa and Peter both worked as Ph.D. students under the supervision of David Schmeidler to develop axiomatizations of Choquet expected utility in Savage’s framework. This resulted in two great papers (Gilboa, 1987; Wakker, 1989b). Peter expresses his admiration for Gilboa’s paper in Wakker (2020, p. 388), where he writes: “it continues to be one of the strongest papers in our field.” That the admiration is mutual is evident from the opening words of the paper Tzachi wrote for the special issue, jointly with Larry Samuelson. In their paper, they address an intriguing problem: in the presence of uncertainty, Pareto-improving trades may not be desirable when agents have different beliefs (in which case trades may simply reflect betting). Gilboa et al. (2014) introduced the concept of no-betting Pareto dominance, which requires that a trade be rationalizable by some common probabilities. However, this concept essentially requires agents to be Bayesians, which may be too restrictive given the widespread evidence of ambiguity aversion. The paper by Gilboa and Samuelson extends no-betting Pareto dominance to ambiguity aversion by requiring that the trade can be rationalized by what they call “common ambiguity averse beliefs.”

Another classic that has been floating around for many years and that Peter appreciates a lot is the paper by Richard Gonzalez and George Wu. We are very pleased that Rich and George also agreed to publish their paper in the special issue. Their paper compares the predictive performance of original prospect theory (Kahneman & Tversky, 1979) and new or cumulative prospect theory (Tversky & Kahneman, 1992). It is well known that Peter prefers cumulative prospect theory (e.g., Diecidue & Wakker, 2001; Fennema & Wakker, 1997). Gonzalez and Wu estimate prospect theory’s weighting and value functions for two-outcome gambles, a domain where original and new prospect theory agree, and use these to predict cash equivalents (CEs) for three-outcome gambles, a domain where the two versions of prospect theory differ. They find that new prospect theory tends to underpredict CEs, whereas original prospect theory overpredicts CEs. An interesting finding of their study is that original prospect theory can predict a CE that exceeds the highest outcome of a gamble. Overall, the findings of their study seem to confirm Peter’s belief that the new version of prospect theory is not only theoretically but also empirically preferable, even though decision weights do not reflect as much rank-dependence as cumulative prospect theory implies.

One of the central themes of Peter’s research has been how to measure the parameters of decision theories, be they utility (e.g., Wakker & Deneffe, 1996), beliefs (e.g., Kothiyal et al., 2010; Offerman et al., 2009), probability and decision weighting (e.g., Diecidue et al., 2007; Van De Kuilen & Wakker, 2011), ambiguity attitudes (e.g., Baillon et al., 2018, 2021; Dimmock et al., 2016), time preference (e.g., Attema et al., 2010, 2016), or quality of life (Attema et al., 2012; Van Osch et al., 2004). The paper by Edi Karni in this special issue is on a different kind of elicitation. The paper takes as a starting point an incomplete preference relation. It is well known that preferences over risky prospects are then (at best) represented by a set of utility functions (e.g., Dubra et al., 2004; Galaabaatar & Karni, 2013). How decision-makers choose from this set is unclear. Karni assumes that they have subjective probabilities over this set of utilities and proposes an incentive-compatible method to measure these subjective probabilities and the set of utilities. Edi Karni’s paper is a nice step towards solving the problem of how to apply models of incomplete preferences to practical decision problems and towards opening the door to using these models in policy.

Peter has thought and written much about the question of how ambiguity attitudes can be appropriately measured. In several papers, he has advocated doing so using matching probabilities (e.g., Baillon et al., 2018; Dimmock et al., 2016; Li et al., 2018, 2020). Matching probabilities have the advantage that they filter out risk attitudes, allowing a precise measurement of ambiguity attitudes. The paper by Fabio Maccheroni, Massimo Marinacci, and Peter’s former PhD student Jingni Yang offers a different way to isolate ambiguity attitudes from risk attitudes. They use the biseparable model, which was axiomatized by Ghirardato and Marinacci (2001) and which Peter has also used repeatedly (e.g., Miyamoto & Wakker, 1996), because it has the important advantage that for acts with two outcomes, many ambiguity models are special cases of it. Ghirardato and Marinacci (2002) argued that to compare the ambiguity attitudes of two decision-makers within the biseparable model, their utilities should be equivalent so as to correct for risk attitudes, and they proposed a trade-off condition that has this effect. The paper by Fabio, Massimo, and Jingni uses a much simpler condition based on willingness to bet to characterize the cardinal equivalence of two biseparable preferences. Their willingness-to-bet condition is easy to observe and thereby greatly simplifies the comparison of ambiguity attitudes. Their result is in the spirit of Peter’s work: throughout his career, he has aimed to facilitate the measurement of (sometimes complex) decision models.
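
To recall what a matching probability is (a minimal sketch in our own notation): the matching probability \(m(E)\) of an ambiguous event \(E\) is the objective probability at which the decision-maker is indifferent between betting a prize \(x\) on \(E\) and receiving \(x\) with that known probability,

\[
(x \text{ if } E,\; 0 \text{ otherwise}) \;\sim\; (x \text{ with probability } m(E),\; 0 \text{ otherwise}).
\]

Because the same prize \(x\) and the same zero outcome appear on both sides of the indifference, the utility of \(x\) plays no role in the comparison, and patterns such as \(m(E) + m(E^{c}) < 1\) for complementary events can be read as ambiguity aversion without having to measure utility curvature.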

Of all the contributors to this special issue, Hans Peters is Peter’s oldest friend (in the sense of friendship duration). Their joint history goes back to the time Peter was studying mathematics at the Catholic (now Radboud) University of Nijmegen. Hans and Peter collaborated on several projects in the 1980s and 1990s, particularly related to revealed preference and bargaining (e.g., de Koster et al., 1983; Peters & Wakker, 1986, 1996; Wakker et al., 1987), with an occasional excursion into quality-of-life measurement (Miyamoto et al., 1998). The paper Hans contributed to the special issue builds on results from these papers. It introduces the notion of risk aversion for losses, which generalizes the concept of linear loss aversion used, for example, by Shalev (2000) and characterized by Peters (2012). The paper introduces risk aversion for losses into the Nash bargaining model and derives the Nash bargaining solution based on this concept. The paper concludes by giving a preference foundation for risk aversion for losses in the spirit of Yaari (1969), a paper Peter holds in high esteem.

In “Production under uncertainty and choice under uncertainty in the emergence of generalized expected utility theory,” John Quiggin illustrates how his thinking has been influenced by the duality between models of choice and models of production. In fact, he traces his interest in models generalizing expected utility, such as the one he invented, rank-dependent utility (Quiggin, 1982; Quiggin & Wakker, 1994), back to developments in the modeling of production under uncertainty. His paper reconstructs these developments and clarifies the duality between models of decision under uncertainty and models of production under uncertainty. This reconstruction highlights the pioneering role Peter has played in the author’s thinking.

The paper “How we decide shapes what we choose: Decision modes track consumer decisions that help decarbonize electricity generation” by Crystal Reek and Elke Weber examines, in a battery of experimental studies, how different decision modes (affective, computational, or role-based) affect choice in the environmental domain. The paper shows that decision-makers tend to adopt a computational approach, which actually reduces environmentally friendly choices. The result has significant implications for policy-making (e.g., the adoption of environmentally friendly electricity plans) and provides useful insights combining decision analysis and psychology, a domain to which Peter has contributed throughout his career.

David Schmeidler was Peter’s Ph.D. supervisor and has had a huge influence on Peter’s thinking. Peter considers David’s 1989 paper, in which he linked the Choquet integral with the Ellsberg paradox (Schmeidler, 1989), one of the highlights of decision theory, the real starting point of modeling decision under ambiguity, and worthy of a Nobel prize. In Wakker (2020, p. 387), Peter describes David Schmeidler as “the biggest and most creative innovator of our field”. Many of Peter’s later papers were inspired by Schmeidler (1989) (e.g., Sarin & Wakker, 1997; Trautmann & Wakker, 2018; Wakker, 1989b, 1994). For the special issue, David has written a deep philosophical piece on rationality and uncertainty, two concepts that permeate Peter’s work. David compares these to pieces of art in the sense that they do not have concrete manifestations and that we see them through “distorted framed glasses” that change with experience. Schmeidler’s observations on axiomatizations are beautiful; according to him, they basically serve two purposes: rhetoric and rationality. He also provides an additional argument supporting Peter’s recent criticism of extreme ergodic economics (Doctor et al., 2020): people are not particles and often act out of strategic considerations. Schmeidler ends with the hope that in the decades to come humanity may succeed in making abstract concepts like rationality and uncertainty concrete, e.g., through brain research, much like dancing, which received its concrete manifestation only with the invention of the motion picture. If that were to invalidate economic theory, it would be a small price to pay for such an important advancement.

Bleichrodt et al. (2001, BPW) propose a method to elicit utility as a way of dealing with inconsistent responses to stated preference surveys. In “Debiasing or regularisation? Two interpretations of the concept of ‘true preference’ in behavioral economics,” Robert Sugden contrasts BPW’s approach with more recent applications that aim to help individuals avoid supposed mistakes in their private choices. The author elaborates that BPW’s approach is non-paternalistic and strives to extract from inconsistent survey responses information that is relevant for sound policy-making. The author convincingly argues for the idea of regularisation of preferences by means of a regularisation function that best represents the individual’s preferences, consistent with normative standards of rationality that are appropriate for public decision-making.