Elsevier

Cognition

Volume 203, October 2020, 104334

Framing context effects with reference points

https://doi.org/10.1016/j.cognition.2020.104334

Abstract

Research on reference points highlights how alternatives outside the choice set can alter the perceived value of available alternatives, arguably framing the choice scenario. The present work utilizes reference points to study the effects of framing in preferential choice, using the similarity and attraction context effects as performance measures. We specifically test the predictions of Multialternative Decision by Sampling (MDbS; Noguchi & Stewart, 2018), a recent preferential choice model that can account for both reference points and context effects. In Experiment 1, consistent with predictions by MDbS, we find a standard similarity effect when no reference point is given; the effect increases when both dimensions are framed negatively and decreases when both dimensions are framed positively. Contrary to predictions by MDbS, when the two dimensions are framed as tradeoffs, participants prefer whichever alternative performs best in the negatively framed dimension. Performance of MDbS was improved by the addition of a frame-based global attention allocation mechanism. Experiment 2 extends these results to a “by-dimension” presentation format in an attempt to bring participant behavior in line with MDbS assumptions. The empirical and modeling results replicated those of Experiment 1. Experiment 3 used the attraction effect to test the effects of framing when the best-performing alternative on each dimension was identical across target conditions, therefore reducing the potential effects of a global attention allocation mechanism. The effects of framing were indeed greatly reduced, and the performance of MDbS was markedly improved. The results extend framing to the context effects literature, provide new benchmarks for models and theories of context effects, and point to the need for a global attention mechanism.

Introduction

Preferential choice involves selecting the alternative with the highest perceived value out of a set of available options, such as selecting a favorite apartment from a realtor's listings. Context effects, such as the similarity (Tversky, 1972) and attraction (Huber, Payne, & Puto, 1982) effects, demonstrate that the perceived value of a given alternative may depend, in part, on the values of the other alternatives in the set. Often, however, there is an additional alternative that may also be relevant to the decision process: the alternative that the decision-maker currently owns and is seeking to replace, which may act as a reference point. Intuitively, replacing a high-value alternative constitutes a qualitatively different choice scenario than replacing a low-value alternative; in other words, the value of the reference point frames the choice scenario. Specifically, a value is positively or negatively framed if it is higher or lower than the corresponding value of the reference point. In the present study, we test the influence of framing on preferential choice behavior with a novel paradigm in which participants are presented with information regarding a hypothetical “current” alternative along with a set of potential replacement alternatives, using context effects as performance measures.

The paper proceeds as follows. We first review the literature on unavailable alternatives as reference points in preferential choice. We then present the similarity effect as a method of eliciting predictable choice behavior in order to better quantify the effects of reference points. The Multialternative Decision by Sampling (MDbS) model, developed by Noguchi and Stewart (2018), is then introduced as an ideal framework for testing the effects of reference points on context effects. We outline the specific predictions made by the model and test them in three experiments. Experiment 1 tests the influence of framing on preferential choice using multi-alternative, multi-attribute stimuli designed to elicit the similarity effect. To preview the results, we successfully replicated the similarity effect when no framing was used, i.e., when no explicit reference point was present. As predicted by MDbS, the similarity effect increased marginally when both dimensions of the alternatives were framed negatively but decreased when both dimensions were framed positively, suggesting that framing can moderate behavior even when congruent across dimensions. When the dimensions were framed as tradeoffs, i.e., one positive and one negative, participants overwhelmingly selected the alternative that rated best on the negatively framed dimension, resulting in a large dimensional bias not predicted by MDbS that overpowered the traditional effect of the decoy. Although the baseline version of MDbS fails to fully account for the effects of framing, the addition of framing-specific attention weights markedly improves performance. In Experiments 2 and 3, we extend both the experimental and modeling results to another test of the similarity effect and to the attraction effect, providing further support that framing influences preferential choice via a dimension-level attention mechanism. Other potential models are also briefly considered, though none ultimately outperform the modified MDbS.

Decades of research have shown that it is common to rely on a reference point to evaluate available alternatives and that this can have a strong effect on choice. Such behavior has been formally studied through a range of phenomena. In the endowment effect (Thaler, 1980), ownership of an object increases its perceived value relative to objects outside of the “endowment”, which otherwise may have been valued equally. Similarly, the status quo bias (Samuelson & Zeckhauser, 1988) refers to a preference to stick with a previous choice or currently-owned alternative, i.e., the “status quo”, rather than forfeiting it in favor of a new alternative. The improvements vs. tradeoffs effect (Tversky & Kahneman, 1991) is the observation that decision-makers are more inclined to trade for an alternative that confers an improvement on a given dimension rather than a trade-off between dimensions. Lastly, in the phantom decoy effect (Pratkanis & Farquhar, 1992), decision-makers are shown to prefer whichever alternative out of a set is most similar to a highly attractive but unavailable alternative, which can serve as a reference point. Each of these effects implies, to varying degrees, that a reference point has the capacity to reframe the dimension values of available alternatives. That is, the subjective value of each alternative in the choice set is calculated, in part, relative to the value of the reference point.

Previous work has focused largely on the influence of a small subset of reference points, such as dominating alternatives in the case of the phantom decoy effect and similar alternatives in the case of the status quo and improvements vs. tradeoff effects. Work by Malkoc, Hedgcock, and Hoeffler (2013) utilized a moderately extended range of reference points by including currently owned alternatives that were unilaterally better or worse than the available alternatives (see Experiments 3A and 3B). In the present study, we seek to extend this literature to investigate the influence of reference points across a wider range of each dimension in a choice scenario. In the standard two-dimension scenario, a reference point may be better or worse than the available alternatives on one or both dimensions; in other words, the available alternatives may be framed as gains, losses, or tradeoffs in comparison to the reference point. Thus, the present study aims to use reference points to further study the influence of framing in preferential choice, and, in particular, to determine the effect of reference points on context effects.

To quantify the effects of reference points on choice behavior, we utilize stimuli designed to elicit context effects, in particular, the similarity effect (Tversky, 1972). The similarity effect is a preferential choice phenomenon associated with specific qualitative and quantitative behavioral predictions, and consequently represents an ideal tool for examining the influence of experimental manipulations on decision making. To illustrate the similarity effect, consider the scenario of choosing between several apartments that vary in ratings of their size and location, as depicted in Fig. 1. The axes depict the dimensions and each labeled point provides the dimension values of an alternative. First, consider a choice between Apartments X and Y. Apartment X rates well on location but poorly on size, and Apartment Y rates poorly on location but well on size. Because of the dimension trade-offs, assuming equal dimension weights, these two apartments would be valued equally. Indeed, all alternatives on the diagonal indifference line will have equal value.

Now, suppose that a third apartment becomes available and there is a choice between the three apartments. The similarity effect (Tversky, 1972) is the finding that the addition of Apartment SX in Fig. 1, which is similar to Apartment Y and dissimilar to Apartment X, but still on the indifference line, increases the preference for Apartment X over Apartment Y. Similarly, the addition of Apartment SY, which is similar to Apartment X and dissimilar to Apartment Y, but still on the indifference line, increases the preference for Apartment Y over Apartment X. The added alternative is called the decoy. Note that the subscript on the decoy indicates the target, i.e., which alternative, X or Y, is expected to show increased choice share. The remaining alternative is the competitor. For example, in a choice between X, Y, and SY, X is the competitor, Y is the target, and SY is the decoy.

We measure the similarity effect as a comparison between two three-choice scenarios, for example, a choice between X, Y, and SX and a choice between X, Y, and SY in Fig. 1 (Wedell, 1991). Under this framework, the similarity effect is obtained if P(X | X, Y, SX) > P(X | X, Y, SY) and P(Y | X, Y, SY) > P(Y | X, Y, SX). Likewise, defining ΔPX = P(X | X, Y, SX) - P(X | X, Y, SY) and ΔPY = P(Y | X, Y, SY) - P(Y | X, Y, SX), the similarity effect is obtained if both ΔPX > 0 and ΔPY > 0. Note that a dimension bias, without a similarity effect, could result in either ΔPX > 0 or ΔPY > 0. In light of the potential impact of dimension bias shifts due to framing, where possible, we measure and report ΔPX and ΔPY separately.
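As a concrete illustration, the two difference scores can be computed directly from observed choice proportions. The following is a minimal sketch in Python; the proportions used are hypothetical, chosen only to show the bookkeeping.

```python
def similarity_effect(p_x_sx, p_x_sy, p_y_sy, p_y_sx):
    """Compute the two similarity-effect measures.

    p_x_sx = P(X | X, Y, SX), p_x_sy = P(X | X, Y, SY),
    p_y_sy = P(Y | X, Y, SY), p_y_sx = P(Y | X, Y, SX).
    The similarity effect is obtained when both returned values are positive.
    """
    delta_p_x = p_x_sx - p_x_sy
    delta_p_y = p_y_sy - p_y_sx
    return delta_p_x, delta_p_y

# Hypothetical choice proportions, for illustration only:
d_px, d_py = similarity_effect(0.45, 0.30, 0.45, 0.30)
print(d_px > 0 and d_py > 0)  # prints True: a similarity effect
```

Reporting the two measures separately, as in the text, makes it possible to distinguish a true similarity effect (both positive) from a mere dimension bias (only one positive).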

Tversky and Kahneman (1991) developed an early model of preferential choice to formally account for reference-dependent phenomena. In their model, each available alternative in a choice set is compared to a reference point. The valuation function includes loss aversion, in which a negative comparison to the reference point has greater influence than a positive comparison of the same size, and diminishing sensitivity, in which the influence of any given comparison diminishes with increased size. Although the model can account for reference-dependent phenomena, some authors have noted its inability to account for other behavioral effects, particularly context effects (Huber et al., 1982; Simonson, 1989; Tversky, 1972).

More recently, Noguchi and Stewart (2018) developed a sequential sampling model, Multialternative Decision by Sampling (MDbS), that can account for a wide variety of behavioral phenomena including both reference-dependent choice and context effects. MDbS assumes that each available alternative in a choice set is evaluated against the other alternatives in the set and any other alternative in working memory, such as reference points, through repeated pairwise comparisons within single dimensions. Thus, the model assumes that reference points are a natural component of the decision process. Further, as a dynamic model that also allows comparisons with other alternatives in the choice set, MDbS can additionally account for context effects and response times. Because MDbS constitutes a well-defined framework to investigate framing in preferential choice, we evaluate the experimental results through the lens of MDbS. Full details are included in Noguchi and Stewart (2018). In the sections below, we review the major components of the model and its predictions for the present experiments.

In MDbS, evidence accumulates over time in steps. Each step of the model consists of a pairwise comparison on one dimension, in which the probability of selecting an available Alternative J for comparison on Dimension i is proportional to its similarity on that dimension to the other alternatives, available or not, in the working memory set S:

$$p(\text{evaluate } J_i) \propto \sum_{K_i \in S_i,\ K_i \neq J_i} \exp\!\big(-\alpha\, D(J_i, K_i)\big) \tag{1}$$

According to Eq. (1), a value is more likely to be selected for comparison if it is similar to other values on a given dimension. Similarity in Eq. (1) is an exponentially decreasing function of distance, scaled by the parameter α. The distance between values is defined by the function D:

$$D(J_i, K_i) = \frac{|J_i - K_i|}{|K_i|}$$

Once selected for comparison, the probability of J_i being favored over K_i is determined by

$$p(\text{favor } J_i \text{ over } K_i) = \begin{cases} F\big(\beta_1\big(D(J_i, K_i) - \beta_0\big)\big) & \text{if } J_i > K_i \\ 0 & \text{otherwise,} \end{cases}$$

where F is a logistic sigmoid function, β0 determines the proportional difference at which J_i is favored with probability 0.5, and β1 controls the steepness of the sigmoid. Thus, the probability that a given value is favored increases with the proportional difference between the two values. As will be discussed below, it can be useful to think of β0 as defining the point at which the difference between values becomes subjectively negligible.
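The evaluation probability, distance function, and favor probability are compact enough to sketch in code. The following is an illustrative Python implementation, not the authors' own code; the parameter values for α, β0, and β1 are hypothetical placeholders, not the fitted values from the paper.

```python
import math

ALPHA = 5.0    # similarity scaling in the evaluation rule (hypothetical value)
BETA0 = 0.1    # proportional difference favored with p = 0.5 (hypothetical value)
BETA1 = 20.0   # steepness of the favoring sigmoid (hypothetical value)

def distance(j, k):
    """Proportional difference between two dimension values, D(J_i, K_i)."""
    return abs(j - k) / abs(k)

def p_evaluate(values, dim):
    """For each alternative, the probability of being selected for comparison
    on dimension `dim`, proportional to its summed similarity to the other
    alternatives in working memory. `values` maps names to value tuples."""
    weight = {
        j: sum(math.exp(-ALPHA * distance(values[j][dim], values[k][dim]))
               for k in values if k != j)
        for j in values
    }
    total = sum(weight.values())
    return {j: w / total for j, w in weight.items()}

def p_favor(j, k):
    """Probability that value j is favored over value k in a comparison."""
    if j <= k:
        return 0.0
    return 1.0 / (1.0 + math.exp(-BETA1 * (distance(j, k) - BETA0)))
```

Note that D is deliberately asymmetric (it normalizes by the comparison value K_i), and that a proportional advantage well below β0 yields a favoring probability far under 0.5, which is the discounting of small differences discussed below.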

Preference for an alternative accumulates in one-unit increments whenever one of its values is favored in a comparison. The preference accumulation rate is determined by the joint probability of J_i being first evaluated, then compared, then favored, summed over the dimensions in the set 𝒟:

$$p(\text{accumulate } J) = \sum_{i \in \mathcal{D}} p(\text{evaluate } J_i) \sum_{K_i \in S_i,\ K_i \neq J_i} p(\text{compare } J_i \text{ to } K_i)\, p(\text{favor } J_i \text{ over } K_i),$$

in which p(compare J_i to K_i) = 1/(|S| − 1). The accumulation process continues until preference for an available alternative exceeds the average of all available alternatives by θ. The greater the preference accumulation rate, the greater the probability that an alternative is chosen. Thus, MDbS predicts that an alternative is more likely to be chosen when it is similar to other alternatives, increasing its probability of being evaluated, and has dimension values that are sufficiently greater than those of at least some of its competitors, increasing its probability of being favored.
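Putting the pieces together, the accumulation race can be sketched as a simple Monte Carlo simulation. The following is an illustrative reconstruction, not the authors' implementation (their model code is available via the Supplementary material link); parameter values and apartment ratings are hypothetical.

```python
import math
import random

ALPHA, BETA0, BETA1, THETA = 5.0, 0.1, 20.0, 10.0  # hypothetical parameters

def distance(j, k):
    """Proportional difference D(J_i, K_i)."""
    return abs(j - k) / abs(k)

def p_favor(j, k):
    """Probability that value j is favored over value k."""
    if j <= k:
        return 0.0
    return 1.0 / (1.0 + math.exp(-BETA1 * (distance(j, k) - BETA0)))

def simulate_trial(available, reference=None, rng=random, max_steps=100000):
    """One simulated MDbS trial (illustrative sketch).

    `available` maps alternative names to (dim1, dim2) value tuples. An
    optional `reference` point sits in working memory and enters comparisons
    but cannot be chosen. Returns the name of the chosen alternative.
    """
    memory = dict(available)
    if reference is not None:
        memory["_ref"] = reference
    pref = {name: 0 for name in available}
    for _ in range(max_steps):
        # Select an available alternative and dimension for evaluation,
        # weighted by summed similarity to everything else in memory.
        pairs, weights = [], []
        for j in available:
            for dim in (0, 1):
                w = sum(math.exp(-ALPHA * distance(memory[j][dim], memory[k][dim]))
                        for k in memory if k != j)
                pairs.append((j, dim))
                weights.append(w)
        j, dim = rng.choices(pairs, weights=weights)[0]
        # Comparison partner: uniform over the rest of memory, p = 1/(|S| - 1).
        k = rng.choice([name for name in memory if name != j])
        if rng.random() < p_favor(memory[j][dim], memory[k][dim]):
            pref[j] += 1  # one-unit preference increment
        # Stop when a preference exceeds the mean of the available set by theta.
        leader = max(pref, key=pref.get)
        if pref[leader] - sum(pref.values()) / len(pref) >= THETA:
            return leader
    return max(pref, key=pref.get)
```

Repeating `simulate_trial` many times and tallying the winners yields estimated choice shares for a given choice set and reference point, from which the ΔP measures described above can be computed for any condition.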

MDbS (Noguchi & Stewart, 2018) accounts for the similarity effect as a function of discounting small comparisons. Consider again a choice between X, Y, and SY in Fig. 1. The alternatives X and SY are more similar to each other and therefore, according to MDbS, more likely to be evaluated on each dimension than Y. Recall, however, that the proportional advantage of a given alternative must be greater than β0 to incur an above-chance probability of being favored. The similarity effect occurs when β0 is greater than the differences between X and SY, rendering these differences subjectively negligible. Thus, each of these two alternatives is favored over the other only at chance rates, allowing Y to rise above them in preference.

MDbS can also account for the effect of reference points. The present experiments test the influence of four reference points on choice, as depicted in Fig. 1. Recall that the two dimensions are size (“S”) and location (“L”). A reference point that negatively or positively frames the remaining alternatives on a given dimension is marked with a “-” and “+” on that dimension, respectively. The first reference point (-S-L) is better than the available alternatives on both dimensions, resulting in an overall negative frame. The second reference point (+S+L) is worse than the available alternatives on both dimensions, resulting in an overall positive frame. The third reference point (-S+L) is better than the available alternatives on size but worse on location, resulting in a tradeoff frame. The final reference point (+S-L) is worse than the available alternatives on size but better on location, resulting in a second tradeoff frame.

Note that the reference points were placed such that, on each dimension, the smallest possible difference between the reference point and an available alternative is the same as the smallest possible difference between the available alternatives. For example, in Fig. 1, the difference between +S-L and SY on location is the same as the difference between SY and X on location. Thus, if differences between the competitor and decoy are subjectively negligible according to MDbS, so are differences between the reference point and decoy.

MDbS predicts the effect of each of these reference points on the similarity effect as follows. Reference Point -S-L is rated better than the available alternatives on both dimensions and therefore does not serve to increase their probability of being favored. It is, however, similar to each alternative on their dominant dimensions, consequently increasing their likelihood of being evaluated in cases where they perform well. In the choice between X, Y, and SY, -S-L increases the probability of X and SY being evaluated on location and the probability of Y being evaluated on size. Recall that, whereas both X and SY can be favored over Y on location, Y can be favored over both X and SY on size. Thus, adding -S-L to the choice set ultimately benefits Y more than X or SY because Y is favored over multiple alternatives on its dominant dimension. For this same reason, in a choice between X, Y, and SX, adding -S-L benefits X (which can be favored over both Y and SX on location) more than Y or SX (which can each only be reliably favored over X on size). MDbS therefore predicts that -S-L is most likely to benefit the alternative that is distant from, and therefore targeted by, the decoy in a similarity context, thereby increasing the similarity effect.

In contrast, Reference Point +S+L is most similar to the available alternatives on their non-dominant dimensions, consequently increasing the likelihood of an alternative being evaluated when it performs poorly. In the choice between X, Y, and SY, +S+L increases the probability of X and SY being evaluated on size and the probability of Y being evaluated on location. Critically, increasing the probability of Y being evaluated on location rather than size extinguishes the advantage that it traditionally gets from being favored over both X and SY on size. Further, due to its proximity to both +S+L and SY, X is more likely to be evaluated than Y, making it more likely to gain preference from favorable comparisons with +S+L. Thus, adding +S+L to the choice set ultimately benefits X more than Y or SY because, despite being evaluated more often on its non-dominant dimension, its proximity to multiple alternatives gives it a better chance of incurring favorable comparisons with the reference point. For this same reason, in a choice between X, Y, and SX, adding +S+L benefits Y (which is similar to both +S+L and SX) more than X (which is less likely to be evaluated) or SX (which ought to only be favored over +S+L at chance rates). MDbS therefore predicts that +S+L is most likely to benefit the alternative that is not targeted by, and therefore similar to, the decoy in a similarity context, thereby reducing and potentially reversing the similarity effect. Interestingly, the joint predictions of MDbS for the -S-L and +S+L conditions imply that framing can affect preference even when the framing is congruent across dimensions. To preview, the present experiments support this prediction.

Reference points -S+L and +S-L are somewhat more straightforward by virtue of being equally similar to the available alternatives on both dimensions. In the choice between X, Y, and SY, the reference point -S+L is similar to Y on both size and location, increasing its probability of being evaluated on both dimensions. Further, though the probability of being favored increases for both X and Y on location, where each has the potential to be reliably favored over -S+L, this advantage is most impactful for Y given its additional increased probability of being evaluated. Thus, MDbS predicts that Y will be most preferred not only because it is the target alternative but because of its similarity to -S+L. In the choice between X, Y, and SX, however, the latter benefit to Y is blunted due to the nearby decoy SX. That is, relative to the SY choice set, Y still has a high probability of being evaluated against -S+L, where it is likely to be favored, but an even higher probability of being evaluated against SX, where it will only be favored at chance rates. Thus, despite the boost in preference from -S+L, Y has a decreased probability of being favored overall, allowing X to rise in preference. Reference point +S-L is predicted to operate similarly: In the choice between X, Y, and SY, the boost in preference to X from +S-L is blunted by increased comparisons with SY, allowing Y to rise in preference. In the choice between X, Y, and SX, however, X is well preferred. MDbS therefore predicts a positive, but asymmetric, effect of the decoy in the tradeoff conditions, that is, a larger shift in preference is expected for Y than X in the -S+L condition and vice versa for the +S-L condition.

In summary, MDbS predicts that, compared to a condition with no reference point, Reference Point -S-L will increase the similarity effect, Reference Point +S+L will decrease or reverse the similarity effect, and Reference Points -S+L and +S-L will produce asymmetric effects.

Section snippets

Experiment 1: Reference points and the similarity effect

The goal of Experiment 1 is to systematically measure the effect of reference points on preferential choice, using the similarity effect as a performance measure. On each trial, the participant selects from three alternatives, X, Y, and either SX or SY from Fig. 1. The options are apartments that vary on two dimensions, size and location. There are five reference point conditions. The first is a frameless condition with no reference point, in which we predict a standard similarity effect. The

Experiment 2: A replication emphasizing within-dimension comparisons

The goal of Experiment 2 is to replicate Experiment 1 while providing participants with stimuli that encourage the within-dimension comparisons assumed by MDbS and other models of preferential choice. A within-dimension comparison would be, for example, comparing the sizes of different apartments. Experiment 2 therefore proceeds exactly as Experiment 1, with one difference: The stimuli are presented in what Cataldo and Cohen, 2018, Cataldo and Cohen, 2019 called a “by-dimension” presentation

Experiment 3: The attraction effect

Experiments 1 and 2 tested the effect of framing on the similarity effect. Experiment 3 extends and generalizes the results to the attraction effect (Huber et al., 1982). To illustrate the attraction effect, consider a choice between Apartments X and Y in Fig. 1. The attraction effect is the finding that the addition of Apartment AX, which is similar to but dominated by Apartment X, increases the choice share of Apartment X. Similarly, the addition of Apartment AY, which is similar to but

Summary of findings

Context effects such as the similarity (Tversky, 1972) and attraction (Huber et al., 1982) effects demonstrate how adding additional alternatives to a choice set can alter the perceived value of the original alternatives. Decades of research on reference points, however, highlight how alternatives outside the choice set can similarly influence preferences. Replacing a low-value alternative intuitively constitutes a qualitatively different choice scenario than replacing a high-value alternative,

Supplementary material

Data and model code can be accessed through the Open Science Foundation: https://osf.io/sdzg3/.

CRediT authorship contribution statement

Andrea M. Cataldo: Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Software, Visualization, Writing - original draft, Writing - review & editing. Andrew L. Cohen: Conceptualization, Formal analysis, Methodology, Resources, Software, Supervision, Writing - original draft, Writing - review & editing.

References (40)

  • S. Frederick et al.

    The limits of attraction

    Journal of Marketing Research

    (2014)
  • J. Huber et al.

    Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis

    Journal of Consumer Research

    (1982)
  • J. Huber et al.

    Let’s be honest about the attraction effect

    Journal of Marketing Research

    (2014)
  • J.K. Kruschke

    Doing Bayesian data analysis

    (2014)
  • S.X. Liew et al.

    The appropriacy of averaging in the study of context effects

    Psychonomic Bulletin & Review

    (2016)
  • J.C. Nash

    On best practice optimization methods in R

    Journal of Statistical Software

    (2014)
  • J.C. Nash et al.

    Unifying optimization algorithms to aid software system users: Optimx for R

    Journal of Statistical Software

    (2011)
  • T. Noguchi et al.

    Multialternative decision by sampling: A model of decision making constrained by process data

    Psychological Review

    (2018)
  • A. Parducci

    Category judgment: A range-frequency model

    Psychological Review

    (1965)
  • J.W. Payne et al.

    Adaptive strategy selection in decision making

    Journal of Experimental Psychology: Learning, Memory, and Cognition

    (1988)