
On Potential Cognitive Abilities in the Machine Kingdom

Published: 01 May 2013

Abstract

Animals, including humans, are usually judged on what they could become rather than on what they are. Many physical and cognitive abilities in the 'animal kingdom' are only acquired (to a given degree) when the subject reaches a certain stage of development, a process that can be accelerated or spoilt by the environment, training or education. The term 'potential ability' usually refers to how quickly, and how likely, an ability can be attained. In principle, things should be no different for the 'machine kingdom'. While machines can be characterised by a set of cognitive abilities, and measuring them is already a major challenge, known as 'universal psychometrics', a more informative, and yet more challenging, goal would be to also determine the potential cognitive abilities of a machine. In this paper we investigate the notion of potential cognitive ability for machines, focussing especially on universality and intelligence. We consider several machine characterisations (non-interactive and interactive) and give definitions for each case, considering both permanent and temporal potentials. From these definitions, we analyse the relations between some potential abilities, bring out their dependency on the environment distribution, and suggest some ideas on how potential abilities can be measured. Finally, we analyse the potential of environments at different levels and briefly discuss whether machines should be designed to be intelligent or to be potentially intelligent.
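As a loose illustration of the kind of measurement the abstract alludes to (not the paper's formalism), the sketch below contrasts an agent's measured score with the score it reaches after a further interaction budget, averaging over a sampled set of environments as a stand-in for an environment distribution. Everything here is an assumption for illustration: the guess-a-hidden-bit environments, the LearningAgent policy, and the uniform sampling of environments are hypothetical and not taken from the paper.

```python
"""Toy sketch: 'actual' vs. 'after-development' score over sampled environments.
All names and interfaces here are illustrative assumptions, not the paper's definitions."""

import random
from statistics import mean


def make_environment(seed: int):
    """A trivial environment: the agent must guess a hidden bit fixed by the seed.
    Returns a reward function mapping an action (0 or 1) to 1.0 if correct, else 0.0."""
    rng = random.Random(seed)
    hidden = rng.randint(0, 1)
    return lambda action: 1.0 if action == hidden else 0.0


class LearningAgent:
    """A policy that improves with experience: it remembers which guess
    was rewarded in each environment it has already interacted with."""

    def __init__(self):
        self.memory = {}

    def act(self, env_id: int) -> int:
        # Use a remembered answer if available, otherwise guess at random.
        return self.memory.get(env_id, random.randint(0, 1))

    def observe(self, env_id: int, action: int, reward: float) -> None:
        # Store the action only if it was rewarded.
        if reward > 0:
            self.memory[env_id] = action


def expected_score(agent: LearningAgent, env_ids, episodes_per_env: int) -> float:
    """Average reward over a uniform sample of environments: a crude proxy
    for performance under an environment distribution."""
    scores = []
    for env_id in env_ids:
        reward_fn = make_environment(env_id)
        for _ in range(episodes_per_env):
            action = agent.act(env_id)
            reward = reward_fn(action)
            agent.observe(env_id, action, reward)
            scores.append(reward)
    return mean(scores)


if __name__ == "__main__":
    env_ids = list(range(50))
    agent = LearningAgent()
    # Measured score with no prior development: one episode per environment.
    actual = expected_score(agent, env_ids, episodes_per_env=1)
    # Average score over a further interaction budget with the same sample:
    # a crude proxy for what the agent can attain given time to develop.
    developed = expected_score(agent, env_ids, episodes_per_env=10)
    print(f"actual ~ {actual:.2f}, after development ~ {developed:.2f}")
```

In this toy setting the first figure hovers near chance while the second approaches the ceiling, which is the gap a notion of potential ability would try to capture; how quickly that gap closes, and under which environment distribution, is precisely what remains sensitive to the assumptions above.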

