Abstract
The main goal of my paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely value functions and search functions. I then argue that value functions both play the same role as intuition in humans and work in essentially the same way. However, as will become apparent, using an ordinary value function is just a simpler and less accurate form of relying on a database or lookup table. This raises our paradox, since reliance on intuition is usually considered to manifest intelligence, whereas usage of a lookup table is not. I therefore introduce another condition for intelligence that is related to data compression. This proposal allows that even reliance on a perfectly accurate lookup table can be nonintelligent, while retaining the claim that reliance on intuition can be highly intelligent. My account is not just theoretically plausible, but it also captures a crucial empirical constraint. This is because all systems with limited resources that solve complex problems—and hence, all cognitive systems—need to compress data.
Notes
The minimax principle was introduced by von Neumann (1928).
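As a toy illustration (mine, not from the original note), the minimax principle can be sketched as a recursion over a game tree; here the tree is represented as nested lists whose leaves are payoffs to the maximizing player:

```python
def minimax(node, maximizing):
    """Minimax value of a game tree given as nested lists.

    Leaves are numeric payoffs to the maximizing player; the two
    players alternate, each choosing the child that is best for them.
    """
    if not isinstance(node, list):
        return node  # leaf: payoff to the maximizing player
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

For the tree [[3, 5], [2, 9]], for instance, the maximizer moving first can guarantee a payoff of 3: whichever branch it picks, the minimizer then takes the smaller leaf.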
In what follows, I take value functions to map board positions to pairs of features and winning probabilities or expected utilities. This allows me to distinguish value functions that assign the same values to all positions.
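A minimal sketch of this way of individuating value functions (the position encoding, feature names, and logistic mapping are my illustrative assumptions, not taken from any engine):

```python
from typing import Dict, Tuple

Features = Dict[str, float]

def toy_value_function(position: str) -> Tuple[Features, float]:
    """Map a position (here crudely encoded as a string of piece
    letters) to a pair of features and a winning probability.

    Returning the features alongside the value makes it possible to
    distinguish two value functions that assign the same probabilities
    to every position but rely on different features to do so.
    """
    material = position.count("Q") - position.count("q")  # toy feature
    features: Features = {"material_balance": float(material)}
    win_prob = 1.0 / (1.0 + 10.0 ** (-material))  # logistic squashing
    return features, win_prob
```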
There has been some discussion of the relation between philosophical and other kinds of intuitions in the debate about experimental philosophy—cf., e.g., Weinberg et al. (2010).
The view that intuition cannot be implemented in a computer program is defended by Dreyfus and Dreyfus (1986).
It seems plausible that this issue is domain-dependent. For instance, expert intuition in chess and in Go is very likely rather complex. Otherwise, it would be hard to explain why it has turned out to be so difficult to transfer it to computer programs—cf. below.
To get a sense of how human intuition compares to a program’s value function, I tested the strongest traditional (not neural network based) program, Stockfish 9, under conditions in which the speed of its search is lowered to a level closer to that of humans. In the test, Stockfish 9, limited to searching 1000 positions per move, played ten games against a human with a FIDE rating of 2381 (which corresponds to an expected score of 5.5% against the highest-rated human player, Magnus Carlsen). The human player won 10–0, taking on average less than 3 s per move.
For instance, in a talk in 2017, he said this: “How can you find your way in this ocean of possibilities? And of course, how can a man fight a machine that could calculate tens and tens of millions of positions per second? Intuition, because it is all about the decision-making process. We never employ calculation as the main tool. It’s one percent of calculation or less, and 99 percent of our understanding, of our ability to find intuitive ways to comparing […] material versus quality, time versus material; intuition plays a key role” (“Kasparov Intuition” 2017).
A very similar point was already made by Shannon and McCarthy (1956, pp. v–vi).
The exceptions are exclusively among those very rare positions that also involve castling rights.
Li and Vitányi (2008) provide an overview of Kolmogorov complexity.
The Kolmogorov complexity of an object depends on the language(s) involved. However, as Solomonoff’s invariance theorem shows, the complexities relative to different languages differ only by a constant, whose relevance decreases with the length of the program (Solomonoff 1964).
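In standard notation (cf. Li and Vitányi 2008), the invariance theorem can be stated as follows, where $K_U$ and $K_V$ are the Kolmogorov complexities relative to two universal languages $U$ and $V$:

```latex
\forall x:\quad K_U(x) \le K_V(x) + c_{UV},
```

where the constant $c_{UV}$ depends only on $U$ and $V$, not on the string $x$; as the shortest programs for $x$ grow longer, this additive constant becomes negligible.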
One popular measure of accuracy is the Brier score, which corresponds to the mean squared error of probabilistic predictions.
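A minimal sketch of the Brier score for binary outcomes (the function name is mine):

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and binary
    outcomes (1 = the event occurred, 0 = it did not).

    Lower scores indicate more accurate probability estimates; a
    perfect forecaster scores 0, and always predicting 0.5 scores 0.25.
    """
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

For example, a forecaster who assigns probability 0.8 to two events, of which exactly one occurs, scores ((0.8 − 1)² + (0.8 − 0)²) / 2 = 0.34.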
While one might apply the term ‘compression’ only to the latter kind of process, I use the term here in a more liberal sense, such that any discarding of information counts as compression. Setting this terminological matter aside, the crucial point is that on an account of the type I suggest, in which intelligence depends on the complexity of the underlying process, systems that discard irrelevant information are evaluated favorably.
For an overview of the relevant research, cf. Kahneman and Klein (2009).
For an extensive overview, cf. Hernández-Orallo (2017).
Hernández-Orallo and Dowe (2010) criticize Legg & Hutter’s choice of environments.
Let me note that Legg and Hutter are aware of the issue raised by Block, but nevertheless prefer a behavioral account of intelligence (Legg and Hutter 2007, p. 425).
Strictly speaking, Dowe and Hajek’s argument only supports the claim that a program’s using a compressed algorithm is evidence that learning has occurred, and hence, that the program is intelligent. Accordingly, I am uncertain whether they suggest that using a compressed algorithm is a metaphysically necessary condition for intelligence, or whether they mean that it should be a practical requirement for passing an intelligence test, and thus, for being considered intelligent.
Scott Aaronson (2011, p. 14) also hints at a similar view—without, however, endorsing it. He notes that one might require an intelligent system to be polynomially bounded.
Legg and Hutter (2007, p. 435) also suggest that the notion of Levin complexity might be of use in an account of intelligence.
References
Aaronson, S. (2011). Why philosophers should care about computational complexity. arXiv:1108.1791.
Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90, 5–43.
Campbell, M., Hoane, A. J., Jr., & Hsu, F.-h. (2002). Deep Blue. Artificial Intelligence, 134, 57–83.
Chase, W., & Simon, H. (1973). The mind’s eye in chess. In W. G. Chase (Ed.), Visual information processing (pp. 215–281). New York: Academic Press.
ChessOK. (2012). Lomonosov tablebases. Retrieved from http://tb7.chessok.com/. Accessed 5 Feb 2019.
Dowe, D., & Hajek, A. (1997). A computational extension to the Turing test. Technical Report #97/322, Department of Computer Science, Monash University, Clayton 3168, Australia.
Dowe, D., Hernández-Orallo, J., & Das, P. (2011). Compression and intelligence: Social environments and communication. In J. Schmidhuber, K. R. Thorisson, & M. Looks (Eds.), Artificial general intelligence. AGI 2011 (pp. 204–211). Berlin: Springer.
Dreyfus, H., & Dreyfus, S. (1986). Mind over machine. The power of human intuition and expertise in the era of the computer. New York: The Free Press.
Epstein, S. (2008). Intuition from the perspective of cognitive-experiential self-theory. In H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment and decision making (pp. 23–37). New York: Lawrence Erlbaum Associates.
Gigerenzer, G. (2007). Gut feelings. The intelligence of the unconscious. New York: Viking.
Gigerenzer, G., & Goldstein, D. (1996). Reasoning the fast and frugal way. Models of bounded rationality. Psychological Review, 103(4), 650–669.
Glöckner, A. (2008). Does intuition beat fast and frugal heuristics? a systematic empirical analysis. In H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment and decision making (pp. 309–325). New York: Lawrence Erlbaum Associates.
Glöckner, A., & Betsch, T. (2008). Multiple reason decision making based on automatic processing. Journal of Experimental Psychology. Learning, Memory, and Cognition, 34(5), 1055–1075.
Gobet, F., & Simon, H. (1996). Templates in chess memory: A mechanism for recalling several boards. Cognitive Psychology, 31, 1–40.
Hernández-Orallo, J. (2000). Beyond the Turing test. Journal of Logic, Language and Information, 9(4), 447–466.
Hernández-Orallo, J. (2017). The measure of all minds. Cambridge: Cambridge University Press.
Hernández-Orallo, J., & Dowe, D. (2010). Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence, 174(18), 1508–1539.
Hernández-Orallo, J., & Minaya-Collado, N. (1998). A formal definition of intelligence based on an intensional variant of Kolmogorov complexity. In E. Alpaydin (Ed.), Proceedings of the international ICSC symposium on engineering of intelligent systems (pp. 146–163). Millet: ICSC Press.
Hutter, M. (2000). A theory of universal artificial intelligence based on algorithmic complexity. arXiv:cs.AI/0004001.
Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer.
Hutter, M. (2006). The human knowledge compression prize. http://prize.hutter1.net. Accessed 5 Feb 2019.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Klein, G. (2009). Conditions of intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526.
Kasanoff, B. (2017, February 21). Intuition is the highest form of intelligence. Forbes. Retrieved from https://www.forbes.com/sites/brucekasanoff/2017/02/21/intuition-is-the-highest-form-of-intelligence/#4d0739583860. Accessed 5 Feb 2019.
Kasparov: ‘Intuition Versus the Brute Force of Calculation’. (2003, February 10). CNN.com. Retrieved from http://edition.cnn.com/2003/TECH/fun.games/02/08/cnna.kasparov/. Accessed 5 Feb 2019.
Kasparov: ‘Let Your Intuition Guide You’. (2017, March 13). Goalcast.com. Retrieved from https://goalcast.com/2017/03/13/chess-grandmaster-garry-kasparov-let-intuition-guide-you/. Accessed 5 Feb 2019.
Klein, G., Calderwood, R., & Clinton-Cirocco, A. (1985). Rapid decision making on the fire ground. In Proceedings of the human factors and ergonomics society 30th annual meeting (Vol. 1, pp. 576–580). Norwood, NJ: Ablex.
Kolmogorov, A. (1963). On tables of random numbers. Sankhyā Series A, 25, 369–375.
Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines, 17, 391–444.
Levin, L. (1973). Universal sequential search problems. Problems of Information Transmission, 9(3), 265–266.
Li, M., & Vitányi, P. (2008). An introduction to Kolmogorov complexity and its applications (3rd ed.). New York: Springer.
Mahoney, M. (1999). Text compression as a test for artificial intelligence. In Proceedings of the sixteenth national conference on artificial intelligence (AAAI-99) (p. 970). Menlo Park: AAAI Press.
Myers, D. (2002). Intuition. Its powers and perils. New Haven: Yale University Press.
Okoli, J., Weller, G., & Watt, J. (2016). Information processing and intuitive decision-making on the fireground. Towards a model of expert intuition. Cognition, Technology & Work, 18(1), 89–103.
Pachur, T., & Marinello, G. (2013). Expert intuitions. How to model the decision strategies of airport customs officers? Acta Psychologica, 144, 97–103.
Poland, J., & Hutter, M. (2005). Asymptotics of discrete MDL for online prediction. IEEE Transactions on Information Theory, 51, 3780–3795.
Poland, J., & Hutter, M. (2006). MDL convergence speed for Bernoulli sequences. Statistics and Computing, 16(2), 161–175.
Shannon, C., & McCarthy, J. (1956). Preface. In C. Shannon & J. McCarthy (Eds.), Automata studies (Annals of Mathematics Studies, Vol. 34, pp. v–viii). Princeton: Princeton University Press.
Silver, D., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550, 354–359. https://doi.org/10.1038/nature24270.
Silver, D., et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140–1144.
Simon, H. (1995). Explaining the ineffable: AI on the topics of intuition, insight and inspiration. In C. Mellish (Ed.), IJCAI’95: Proceedings of the 14th international joint conference on artificial intelligence (Vol. 1, pp. 939–948). San Francisco: Morgan Kaufmann.
Solomonoff, R. (1964). A formal theory of inductive inference: Parts 1 and 2. Information and Control, 7, 1–22, 224–254.
Stockfish Evaluation Guide: Piece Value mg. (2018, January). Retrieved from https://hxim.github.io/Stockfish-Evaluation-Guide/. Accessed 5 Feb 2019.
Turing, A. (1953). Chess. In B. V. Bowden (Ed.), Faster than thought. London: Pitman & Sons.
Von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1), 295–320.
Weinberg, J., et al. (2010). Are philosophers expert intuiters? Philosophical Psychology, 23(3), 331–355.
Acknowledgements
I have presented versions of this article at the University of Rochester, the University of Cambridge, and the University of Osnabrück. I would like to thank the audiences on these occasions for helpful comments. I am especially grateful to Scott Aaronson, José Hernández-Orallo, Joachim Horvath, Zeynep Soysal, and two anonymous referees for this journal for their comments and discussions.
Cite this article
Kipper, J. Intuition, intelligence, data compression. Synthese 198 (Suppl 27), 6469–6489 (2021). https://doi.org/10.1007/s11229-019-02118-8