The gambler’s fallacy is the tendency to expect random processes to switch more often than they actually do—for example, to think that after a string of tails, a heads is more likely. It’s often taken to be evidence for irrationality. It isn’t. Rather, it’s to be expected from a group of Bayesians who begin with causal uncertainty, and then observe unbiased data from an (in fact) statistically independent process. Although they converge toward the truth, they do so in an asymmetric way—ruling out “streaky” hypotheses more quickly than “switchy” ones. As a result, the majority (and the average) exhibit the gambler’s fallacy. If they have limited memory, this tendency persists even with arbitrarily large amounts of data. Indeed, such Bayesians exhibit a variety of the empirical trends found in studies of the gambler’s fallacy: they expect switches after short streaks but continuations after long ones; these nonlinear expectations vary with their familiarity with the causal system; their predictions depend on the sequence they’ve just seen; they produce sequences that are too switchy; and they exhibit greater rates of gambler’s reasoning when forming binary predictions than when forming probability estimates. In short: what’s been thought to be evidence for irrationality may instead be rational responses to limited data and memory.
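The basic setup can be sketched with a toy simulation. This is an illustration of my own, not the paper’s actual model: the hypothesis set (a “switchy,” an independent, and a “sticky” process, each fixing the probability that the next flip differs from the last) and the 0.7/0.5/0.3 switch rates are assumptions made for concreteness. It shows a Bayesian with causal uncertainty updating on data and forming sequence-dependent predictions, and that on data from a genuinely independent process the posterior predictive converges to 0.5.

```python
import math
import random

# Hypothetical causal hypotheses: each fixes the probability that the
# next flip differs from the previous one. The rates are illustrative.
HYPOTHESES = {"switchy": 0.7, "independent": 0.5, "sticky": 0.3}

def posterior(seq):
    """Posterior over hypotheses (uniform prior) after a flip sequence,
    computed in log space to avoid underflow on long sequences."""
    logpost = {h: 0.0 for h in HYPOTHESES}
    for prev, cur in zip(seq, seq[1:]):
        for h, p_switch in HYPOTHESES.items():
            logpost[h] += math.log(p_switch if cur != prev else 1 - p_switch)
    m = max(logpost.values())
    unnorm = {h: math.exp(lp - m) for h, lp in logpost.items()}
    total = sum(unnorm.values())
    return {h: u / total for h, u in unnorm.items()}

def p_switch_next(seq):
    """Posterior predictive probability that the next flip differs
    from the most recent one."""
    post = posterior(seq)
    return sum(post[h] * HYPOTHESES[h] for h in post)

# Predictions depend on the sequence just seen: alternating data
# favors the switchy hypothesis, a long streak favors the sticky one.
print(p_switch_next("HTHTHTHT"))  # above 0.5: expect a switch
print(p_switch_next("HHHHHHHH"))  # below 0.5: expect a continuation

# On abundant data from a truly independent fair coin, the posterior
# concentrates on "independent" and the prediction approaches 0.5.
random.seed(0)
flips = [random.choice("HT") for _ in range(2000)]
print(posterior(flips)["independent"])
print(p_switch_next(flips))
```

With unlimited memory the agent eventually learns the truth, as the last two lines show; the paper’s point is about what happens along the way, and about agents whose memory of the sequence is bounded.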