Decisions in the environmental domain, and in the climate domain in particular, are burdened with uncertainty. Here, we focus on uncertainties faced by individuals when making decisions about environmental behavior, and we use the statistical sampling framework to develop a classification of the different sources of uncertainty they encounter. We then map these sources to different public policy strategies that aim to help individuals cope with uncertainty when making environmental decisions.
Models of intertemporal choice draw on three evaluation rules, which we compare in the restricted domain of choices between smaller sooner and larger later monetary outcomes. The hyperbolic discounting model proposes an alternative-based rule, in which options are evaluated separately. The interval discounting model proposes a hybrid rule, in which the outcomes are evaluated separately, but the delays to those outcomes are evaluated in comparison with one another. The tradeoff model proposes an attribute-based rule, in which both outcomes and delays are evaluated in comparison with one another: People consider both the intervals between the outcomes and the compensations received or paid over those intervals. We compare highly general parametric functional forms of these models by means of a Bayesian analysis, a method of analysis not previously used in intertemporal choice. We find that the hyperbolic discounting model is outperformed by the interval discounting model, which, in turn, is outperformed by the tradeoff model. Our cognitive modeling is among the first to offer quantitative evidence against the conventional view that people make intertemporal choices by discounting the value of future outcomes, and in favor of the view that they directly compare options along the time and outcome attributes.
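The contrast between an alternative-based and an attribute-based rule can be caricatured in a short sketch. Two caveats: the one-parameter hyperbolic form V = x/(1 + kt) is the standard hyperbolic discounting model, but the tradeoff rule below is a deliberately simplified linear comparison of compensation against interval (the actual tradeoff model uses nonlinear outcome and time-weighing functions), and the parameter values `k` and `time_weight` are arbitrary illustrations, not estimates from the paper.

```python
def hyperbolic_value(amount, delay, k=0.05):
    """Alternative-based rule: each option is valued separately by
    discounting its amount over its own delay, V = x / (1 + k*t)."""
    return amount / (1 + k * delay)

def choose_hyperbolic(ss, ll, k=0.05):
    """ss and ll are (amount, delay) pairs for the smaller-sooner and
    larger-later options; the option with the higher discounted value wins."""
    return "LL" if hyperbolic_value(*ll, k) > hyperbolic_value(*ss, k) else "SS"

def choose_tradeoff(ss, ll, time_weight=1.0):
    """Attribute-based rule (simplified, linear): compare the compensation
    (difference in outcomes) directly against the weighted interval
    (difference in delays), rather than valuing each option separately."""
    (x_s, t_s), (x_l, t_l) = ss, ll
    compensation = x_l - x_s
    interval = t_l - t_s
    return "LL" if compensation > time_weight * interval else "SS"

# Example choice: 50 now versus 70 in 30 days.
ss, ll = (50, 0), (70, 30)
print(choose_hyperbolic(ss, ll, k=0.05))   # steep discounting favors SS
print(choose_hyperbolic(ss, ll, k=0.01))   # shallow discounting favors LL
print(choose_tradeoff(ss, ll, time_weight=0.5))
```

Note how the two rules differ structurally: the hyperbolic rule never compares delays to each other, only each option's amount to its own delay, whereas the tradeoff rule operates entirely on the between-option differences.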
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key ‘‘sampling’’ assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
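The effect of the sampling assumption can be sketched in a minimal Bayesian generalization model. Assumptions to flag: interval hypotheses on a discrete one-dimensional stimulus scale with a uniform prior are an illustrative setup (in the spirit of size-principle models), not the authors' experimental design, and the mixture weight `theta`, the stimulus scale, and the observed data are arbitrary choices for demonstration.

```python
import math

def likelihood(x, lo, hi, theta):
    """Mixture likelihood of one observation x under interval hypothesis
    [lo, hi]. theta = 1 is strong sampling (the 'size principle': p = 1/|h|
    for consistent x); theta = 0 is weak sampling (p constant for
    consistent x). Inconsistent observations get zero either way."""
    if not (lo <= x <= hi):
        return 0.0
    return theta * (1.0 / (hi - lo)) + (1.0 - theta)

def generalization(y, data, theta, points):
    """P(y belongs to the concept | data): average over all interval
    hypotheses [lo, hi] with lo < hi on the discrete scale, under a
    uniform prior over hypotheses."""
    num = den = 0.0
    for lo in points:
        for hi in points:
            if hi <= lo:
                continue
            like = math.prod(likelihood(x, lo, hi, theta) for x in data)
            den += like
            if lo <= y <= hi:
                num += like
    return num / den

points = range(21)      # stimulus scale 0..20
data = [9, 10, 11]      # observed positive examples
strong = generalization(16, data, theta=1.0, points=points)
weak = generalization(16, data, theta=0.0, points=points)
# Strong sampling concentrates belief on tight intervals around the data,
# so generalization to a distant stimulus (16) is weaker than under weak
# sampling; intermediate theta interpolates between the two.
```

Varying `theta` between 0 and 1 traces out exactly the intermediate behavior the mixture account posits, which is what lets individual differences be expressed as differences in a single weight.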
The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.
Computational modeling of the brain holds great promise as a bridge from brain to behavior. To fulfill this promise, however, it is not enough for models to be 'biologically plausible': models must be structurally accurate. Here, we analyze what this entails for so-called psychobiological models, models that address behavior as well as brain function in some detail. Structural accuracy may be supported by (1) a model's a priori plausibility, which comes from a reliance on evidence-based assumptions, (2) fitting existing data, and (3) the derivation of new predictions. All three sources of support require modelers to be explicit about the ontology of the model, and require the existence of data constraining the modeling. For situations in which such data are only sparsely available, we suggest a new approach. If several models are constructed that together form a hierarchy of models, higher-level models can be constrained by lower-level models, and low-level models can be constrained by behavioral features of the higher-level models. Modeling the same substrate at different levels of representation, as proposed here, thus has benefits that exceed the merits of each model in the hierarchy on its own.