This paper analyzes individual probabilistic predictions of state outcomes in the 2008 U.S. presidential election. Employing an original survey of more than 19,000 respondents, we present the first study of electoral forecasting to involve multiple subnational predictions and to incorporate the influence of respondents’ home states. We relate a range of demographic, political, and cognitive variables to individual accuracy and predictions, as well as to how accuracy improved over time. We find strong support for wishful thinking bias in expectations, as Republicans gave higher probabilities to McCain victories and were worse at overall prediction. In addition, we find that respondents living in states with higher vote shares for Obama performed better at prediction and displayed less wishful thinking bias. We conclude by showing that suitable aggregations of our respondents’ predictions outperformed Intrade (a prediction market) and fivethirtyeight.com (a poll-based forecast) at most points in time.
Stochastic forecasts in complex environments can benefit from combining the estimates of large groups of forecasters (“judges”). But aggregating multiple opinions faces several challenges. First, human judges are notoriously incoherent when their forecasts involve logically complex events. Second, individual judges may have specialized knowledge, so different judges may produce forecasts for different events. Third, the credibility of individual judges might vary, and one would like to pay greater attention to more trustworthy forecasts. These considerations limit the value of simple aggregation methods like linear averaging. In this paper, a new algorithm is proposed for combining probabilistic assessments from a large pool of judges. Two measures of a judge’s likely credibility are introduced and used in the algorithm to determine the judge’s weight in aggregation. The algorithm was tested on a data set of nearly half a million probability estimates of events related to the 2008 U.S. presidential election (∼16,000 judges).
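As a rough illustration of the kind of credibility-weighted aggregation described above, the following minimal Python sketch combines several judges’ probability estimates for a single event; the function name, the generic weighted-mean rule, and the example figures are illustrative assumptions, not the algorithm or credibility measures proposed in the paper.

```python
import numpy as np

def aggregate_forecasts(estimates, credibilities):
    """Credibility-weighted average of judges' probability estimates for one
    event. The weighting rule here is a generic weighted mean, standing in
    for whatever credibility-based weights an aggregation scheme assigns."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(credibilities, dtype=float)
    if weights.sum() == 0:
        return float(estimates.mean())  # fall back to a simple linear average
    return float(np.dot(weights / weights.sum(), estimates))

# Three judges forecast the same event with differing credibility scores.
print(aggregate_forecasts([0.7, 0.4, 0.9], [2.0, 0.5, 1.0]))  # ~0.71
```

Giving more credible judges larger weights is what distinguishes such schemes from the simple linear averaging whose limitations the abstract notes.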
The Coherent Approximation Principle (CAP) is a method for aggregating forecasts of probability from a group of judges by enforcing coherence with minimal adjustment. This paper explores two methods to further improve the forecasting accuracy within the CAP framework and proposes practical algorithms that implement them. These methods allow flexibility to add fixed constraints to the coherentization process and compensate for the psychological bias present in probability estimates from human judges. The algorithms were tested on a data set of nearly half a million probability estimates of events related to the 2008 U.S. presidential election (from about 16,000 judges). The results show that both methods improve the stochastic accuracy of the aggregated forecasts compared to using simple CAP.
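To illustrate the idea of “coherence with minimal adjustment” in the simplest setting, the sketch below nudges a judge’s estimates for a partition of events so that they are nonnegative and sum to one; it is only an illustrative least-squares-style adjustment under assumed conditions, not the CAP algorithm itself or either of the paper’s extensions.

```python
import numpy as np

def coherentize_partition(estimates):
    """Minimally adjust probability estimates for a partition of events so
    they are coherent (nonnegative, summing to one). The uniform shift is
    the exact least-squares adjustment when no estimate goes negative; the
    clip-and-renormalize step is only a rough fix otherwise."""
    p = np.asarray(estimates, dtype=float)
    q = p - (p.sum() - 1.0) / len(p)   # spread the incoherence evenly
    q = np.clip(q, 0.0, None)
    return q / q.sum()

# A judge assigns P(A) = 0.7 and P(not A) = 0.5, which is incoherent.
print(coherentize_partition([0.7, 0.5]))  # -> [0.6, 0.4]
```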
The classical theory of preference among monetary bets represents people as expected utility maximizers with concave utility functions. Critics of this account often rely on assumptions about preferences over wide ranges of total wealth. We derive a prediction of the theory that bears on bets at any fixed level of wealth, and test the prediction behaviorally. Our results are discrepant with the classical account. Competing theories are also examined in light of our data.
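A minimal numeric illustration of the classical account referred to above, assuming log utility as one example of a concave utility function; the wealth and stake figures are arbitrary, and the snippet only restates the textbook Jensen’s-inequality argument, not the prediction derived or tested in the paper.

```python
import math

def expected_utility_of_bet(wealth, stake, p_win, utility=math.log):
    """Expected utility of a bet paying +stake with probability p_win and
    -stake otherwise, evaluated at a fixed wealth level. Log utility stands
    in for an arbitrary concave utility function."""
    return p_win * utility(wealth + stake) + (1 - p_win) * utility(wealth - stake)

w = 1000.0  # arbitrary fixed wealth level
# By Jensen's inequality, a concave-utility maximizer rejects any fair bet:
print(expected_utility_of_bet(w, stake=100.0, p_win=0.5) < math.log(w))  # True
```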
A criterion of adequacy is proposed for theories of relevant consequence. According to the criterion, scientists whose deductive reasoning is limited to some proposed subset of the standard consequence relation must not thereby suffer a reduction in scientific competence. A simple theory of relevant consequence is introduced and shown to satisfy the criterion with respect to a formally defined paradigm of empirical inquiry.
A paradigm of scientific discovery is defined within a first-order logical framework. It is shown that within this paradigm there exists a formal scientist that is Turing computable and universal in the sense that it solves every problem that any scientist can solve. It is also shown that no universal scientist exists for any regular logic that extends first-order logic and satisfies the Löwenheim-Skolem condition.
A model of idealized scientific inquiry is presented in which scientists are required to infer the nature of the structure that makes true the data they examine. A necessary and sufficient condition is presented for scientific success within this paradigm.
Alternative models of idealized scientific inquiry are investigated and compared. Particular attention is devoted to paradigms in which a scientist is required to determine the truth of a given sentence in the structure giving rise to his data.
This paper provides a mathematical model of scientific discovery. It is shown in the context of this model that any discovery problem that can be solved by a computable scientist can be solved by a computable scientist all of whose conjectures are finitely axiomatizable theories.