What’s the Problem with the Frame Problem?

Review of Philosophy and Psychology

Abstract

The frame problem was originally a problem for Artificial Intelligence, but philosophers have interpreted it as an epistemological problem for human cognition. As a result of this reinterpretation, however, specifying the frame problem has become a difficult task. To get a better idea of what the frame problem is, how it gives rise to more general problems of relevance, and how deep these problems run, I expound six guises of the frame problem. I then assess some proposed heuristic solutions to the frame problem; I show that these proposals misunderstand, and fail to address, an important aspect of the frame problem. Finally, I argue that though human cognition does not solve the frame problem in its epistemological guise, human cognition avoids some of the epistemological worries.

Notes

  1. In defending Classical computationalism from the purported threat posed by the frame problem, Samuels (2010) also describes the frame problem as constituting a set of related problems. But I do not think his analysis goes deep enough. The analysis I provide here may be seen as an extension of Samuels’.

  2. It is obviously too demanding of any system to maintain a completely veridical belief-set. What is required, instead, is a belief-set that is suitably veridical, or reasonably accurate (cf. Samuels 2010).

  3. One technical problem that is often cited is the “Yale Shooting Problem.” In this scenario, a gun is loaded at time t, and shooting the gun at Fred at t + 1 is expected to kill Fred. However, the formalism of nonmonotonic logics cannot uniquely determine whether Fred is dead, because there are two ways the formalism can characterize what happens as time passes from t to t + 1. On the one hand, the formalism can treat the gun’s being loaded as persisting through time, and Fred’s being alive as changing as a result of his being shot with a loaded gun; this is the intuitive case. On the other hand, the formalism can treat Fred’s being alive as persisting through time, and the gun’s being loaded as changing, so that Fred survives even though the gun is fired at him. The problem is that the formalism has no way to determine which should persist between t and t + 1: Fred’s being alive or the gun’s being loaded. Hence, the formalism cannot predict whether Fred is alive or dead (Viger 2006a); a schematic reconstruction is given after these notes. We should note, however, that though the Yale Shooting Problem is a problem, it is not insurmountable, and indeed there are many solutions (Shanahan 2009).

  4. I assume a representational theory of mind, but nothing of what I provide in this paper hangs on this. Those who do not subscribe to any kind of representational theory of mind can simply substitute “information,” or something similar, for my talk of “representations.”

  5. What I am calling here the Epistemological Relevance Problem does not coincide with what Shanahan (2009) calls the epistemological frame problem. According to Shanahan, “The epistemological problem is this: How is it possible for holistic, open-ended, context-sensitive relevance to be captured by a set of propositional, language-like representations of the sort used in classical AI?” It appears that what Shanahan has in mind for the epistemological problem is something closer to the AI Frame Problem, but for humans in determining relevance.

  6. For further discussion see Carruthers (2006a, 2006b); Samuels (2005, 2006); Sperber and Wilson (1996).

  7. Of course, this does not mean that the system’s database will necessarily contain everything that is relevant to its tasks. And so it is possible that an encapsulated system can still have a problem with respect to considering what is relevant. But this poses a relevance problem of a different sort, which I will not be discussing here. In addition, even if an encapsulated system had access to all relevant information (but not only what is relevant), the system might still face the Epistemological Relevance Problem if its database were not organized in such a way that would facilitate determinations of relevance, or that would enable the system to consider (mostly) only what is relevant. (See below the discussion on the remaining guise of the frame problem.) Nevertheless, at the very least, a suitably encapsulated system will avoid the computational problem of tractably delimiting what gets considered in its tasks.

  8. Fodor (1983, 2000) argues that this implies that entire theories are the units of cognition. In Modularity of Mind, he refers to this as a Quinean property of central systems. See Samuels (2010) for arguments against this view.

  9. Carruthers believes that we may still speak of systems as informationally encapsulated if we construe encapsulation as a property that simply restricts informational access in processing. In this way, he believes that encapsulation can mean just that most information simply will not be considered in a system’s computations—a notion he calls “wide encapsulation” (Carruthers 2006a, 2006b). However, Samuels criticizes Carruthers’ view, claiming that “not only is it [wide encapsulation] different from what most theorists mean by ‘encapsulation,’ but it’s simply what you get by denying exhaustive search; and since virtually no one thinks exhaustive search is characteristic of human cognition, the present kind of ‘encapsulation’ is neither distinctive nor interesting” (Samuels 2006, p. 45).

  10. Roughly, the sleeping dog strategy is a rule to the effect: do not alter (update) a representation unless there is explicit indication of change, in accordance with the dictum “let sleeping dogs lie.” (A minimal illustration is sketched after these notes.)

  11. Gigerenzer likewise believes that the majority of human inference and decision-making involves heuristics (Gigerenzer and Todd 1999; see also Gigerenzer 2007). But Gigerenzer’s analysis extends to reasoning and inference qua central cognition, whereas Carruthers, in arguing for the massive modularity thesis (Sperber 1994), suggests that Gigerenzer-style heuristics can be deployed by the various modules that purportedly compose central cognition.

  12. This is in stark contrast to Gigerenzer’s assertion that his research on heuristics is the study of “the way that real people make the majority of their inferences and decisions” (Gigerenzer and Todd 1999, p. 15, my emphasis). This also undermines Carruthers’ suggestion that Gigerenzer’s fast and frugal heuristics can provide tractable processes for cognition tout court.

  13. More precisely, a satisficing procedure sets an aspiration level appropriate to the task and goals of the agent; once this aspiration level is met, processing stops. As conceived by Simon (e.g., 1979), satisficing is a method of delimiting the time and resources devoted to a search task by avoiding exhaustive search and forgoing an optimizing or maximizing aspiration level. (A minimal illustration is sketched after these notes.)

  14. “The moral is not that the sleeping-dog strategy is wrong; it is that the sleeping-dog strategy is empty unless we have, together with the strategy, some idea of what is to count as a fact for the purposes at hand” (Fodor 1987, p. 31).

  15. Notwithstanding that Carruthers believes that central cognition is massively modular.

  16. Notice that objective relevance also cannot be determined a priori; but rather than against a set of beliefs, it is determined against a background of facts and phenomena. The motions of terrestrial objects, for instance, are not objectively relevant per se; they are objectively relevant with respect to certain phenomena, such as planetary motion.

  17. Thanks to Richard Samuels for bringing these points to my attention.

  18. I do not claim that Cliff Hooker would endorse my arguments regarding objective relevance as a possible regulatory ideal. Whether he would, I suppose, depends on whether pursuing objective relevance is, in Hooker’s terminology, a degenerate idealization, which is a non-perspicuously represented deviation from the real behavior of real agents. I believe that pursuing objective relevance is not a degenerate idealization, but I am not prepared to argue the point here.

  19. By this I mean that we cannot solve the problem, given our limited cognitive wherewithal. I do not mean “unsolvable” in a sense that implies that the problem is not recursive.

  20. As a point of interest, this idea was intimated by Fodor in The Language of Thought, though his purposes for broaching the issue were slightly different. He there commented that “a fundamental and pervasive feature of higher cognitive processes [is] the intelligent management of internal representations” (Fodor 1975, p. 164, emphasis in original).

  21. I note in passing that, although I speak here of heuristic solutions generally, I take Samuels’ gesture toward web-search-engine-like techniques to be more informative than Carruthers’ gesture toward Gigerenzer’s fast and frugal heuristics. As discussed above, Gigerenzer’s fast and frugal heuristics do not seem to apply to the representative tasks of human cognition.
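
To make the technical points in notes 3, 10, and 13 more concrete, three minimal schematic sketches follow. They are illustrative reconstructions only; the notation, names, and numbers they use are my own assumptions and are not drawn from the works discussed above.

Yale Shooting Problem (note 3). In the standard situation-calculus presentation due to Hanks and McDermott (which interposes a wait action between loading and shooting), the scenario can be axiomatized roughly as follows, with fluents persisting by default unless abnormal:

    Holds(alive, S0)                                       (initially, Fred is alive)
    Holds(loaded, Result(load, s))                         (loading the gun makes it loaded)
    Holds(loaded, s) → ¬Holds(alive, Result(shoot, s))     (shooting a loaded gun kills Fred)
    Holds(f, s) ∧ ¬Ab(f, a, s) → Holds(f, Result(a, s))    (frame default: fluents normally persist)

Minimizing abnormality (Ab) over the sequence load; wait; shoot yields two minimal models: the intended one, in which loaded persists through the wait and alive ceases after the shot, and an anomalous one, in which loaded ceases during the wait and alive persists. Nothing in the formalism prefers the first model to the second, which is the indeterminacy described in note 3.

Sleeping dog strategy (note 10). A minimal Python sketch, assuming a belief set represented as a dictionary of propositions and an action whose explicit effects are given as a second dictionary; the proposition names are purely illustrative:

    # Sleeping-dog update: alter only those representations explicitly listed
    # among an action's effects; leave everything else untouched.
    def sleeping_dog_update(beliefs, explicit_effects):
        """beliefs: dict mapping propositions to truth values.
        explicit_effects: the propositions an action is flagged as changing."""
        updated = dict(beliefs)            # copy the old belief set
        updated.update(explicit_effects)   # change only what is explicitly flagged
        return updated                     # every other belief persists by default

    beliefs = {"gun_loaded": True, "fred_alive": True, "sky_is_blue": True}
    # Shooting is flagged as affecting only these two propositions; the rule never
    # asks whether anything else (e.g., the sky's colour) might also be relevant.
    beliefs = sleeping_dog_update(beliefs, {"fred_alive": False, "gun_loaded": False})

As Fodor’s remark quoted in note 14 indicates, such a rule does no work until something else has settled which propositions count as an action’s explicit effects.

Satisficing (note 13). A minimal Python sketch of aspiration-level search; the scoring function and threshold are placeholders:

    def satisfice(options, score, aspiration):
        """Return the first option whose score meets the aspiration level.
        Search stops as soon as the level is met; no exhaustive comparison of,
        or optimization over, the remaining options is performed."""
        for option in options:
            if score(option) >= aspiration:
                return option
        return None  # aspiration unmet; an agent might lower the level and try again

    ratings = {"diner": 3.1, "bistro": 4.2, "cafe": 3.9}
    choice = satisfice(list(ratings), score=ratings.get, aspiration=4.0)  # "bistro"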

References

  • Carruthers, P. 2003. On Fodor’s problem. Mind & Language 18: 502–523.

  • Carruthers, P. 2006a. The architecture of the mind: Massive modularity and the flexibility of thought. Oxford: Oxford University Press.

  • Carruthers, P. 2006b. Simple heuristics meet massive modularity. In The innate mind: Culture and cognition, ed. P. Carruthers, S. Laurence, and S. Stich, 181–198. Oxford: Oxford University Press.

  • Dennett, D.C. 1984. Cognitive wheels: The frame problem of AI. In Minds, machines, and evolution, ed. C. Hookway, 129–152. Cambridge: Cambridge University Press.

  • Evans, G. 1982. The varieties of reference. Oxford: Oxford University Press.

  • Fodor, J.A. 1968. The appeal to tacit knowledge in psychological explanation. The Journal of Philosophy 65: 627–640.

  • Fodor, J.A. 1975. The language of thought. Cambridge, MA: Harvard University Press.

  • Fodor, J.A. 1983. The modularity of mind. Cambridge, MA: The MIT Press.

  • Fodor, J.A. 1987. Modules, frames, fridgeons, sleeping dogs and the music of the spheres. In Modularity in knowledge, representation and natural-language understanding, ed. J.L. Garfield, 25–36. Cambridge, MA: The MIT Press. Reprinted from The robot’s dilemma: The frame problem in artificial intelligence, Z. Pylyshyn, ed., 1987, Norwood, NJ: Ablex.

  • Fodor, J.A. 2000. The mind doesn’t work that way: The scope and limits of computational psychology. Cambridge, MA: The MIT Press.

  • Fodor, J.A. 2008. LOT 2: The language of thought revisited. Oxford: Clarendon.

  • Gabbay, D., and J. Woods. 2003. A practical logic of cognitive systems, volume 1. Agenda relevance: A study in formal pragmatics. Amsterdam: Elsevier.

  • Gigerenzer, G. 2007. Gut feelings: The intelligence of the unconscious. New York: Viking.

  • Gigerenzer, G., and D.G. Goldstein. 1999. Betting on one good reason: The Take the Best heuristic. In Simple heuristics that make us smart, ed. G. Gigerenzer, P.M. Todd, and the ABC Research Group, 75–95. New York: Oxford University Press.

  • Gigerenzer, G., and P.M. Todd. 1999. Fast and frugal heuristics: The adaptive toolbox. In Simple heuristics that make us smart, ed. G. Gigerenzer, P.M. Todd, and the ABC Research Group, 3–34. New York: Oxford University Press.

  • Gigerenzer, G., P.M. Todd, and the ABC Research Group (eds.). 1999. Simple heuristics that make us smart. New York: Oxford University Press.

  • Haugeland, J. 1985. Artificial intelligence: The very idea. Cambridge, MA: The MIT Press.

  • Hooker, C.A. 1994. Idealisation, naturalism, and rationality: Some lessons from minimal rationality. Synthese 99: 181–231.

  • Hooker, C. 2011. Rationality as effective organisation of interaction and its naturalist framework. Axiomathes 21: 99–172.

  • Kahneman, D., A. Treisman, and B.J. Gibbs. 1992. The reviewing of object files: Object-specific integration of information. Cognitive Psychology 24: 175–219.

  • Kyburg Jr., H.E. 1996. Dennett’s beer. In The robot’s dilemma revisited: The frame problem in artificial intelligence, ed. K.M. Ford and Z.W. Pylyshyn, 49–60. Norwood, NJ: Ablex.

  • Lawlor, K. 2001. New thoughts about old things: Cognitive policies as the ground of singular concepts. New York: Garland Publishing.

  • Lormand, E. 1990. Framing the frame problem. Synthese 82: 353–374.

  • Lormand, E. 1996. The holorobophobe’s dilemma. In The robot’s dilemma revisited: The frame problem in artificial intelligence, ed. K.M. Ford and Z.W. Pylyshyn, 61–88. Norwood, NJ: Ablex.

  • McCarthy, J., and P. Hayes. 1969. Some philosophical problems from the standpoint of artificial intelligence. In Machine intelligence 4, ed. B. Meltzer and D. Michie, 463–502. Edinburgh: Edinburgh University Press.

  • McDermott, D. 1987. We’ve been framed: Or, why AI is innocent of the frame problem. In The robot’s dilemma: The frame problem in artificial intelligence, ed. Z.W. Pylyshyn, 113–122. Norwood, NJ: Ablex.

  • Noë, A. 2005. Against intellectualism. Analysis 65: 278–290.

  • Plantinga, A. 1993. Warrant and proper function. New York: Oxford University Press.

  • Pylyshyn, Z.W. 1996. The frame problem blues. Once more, with feeling. In The robot’s dilemma revisited: The frame problem in artificial intelligence, ed. K.M. Ford and Z.W. Pylyshyn, xi–xviii. Norwood, NJ: Ablex.

  • Pylyshyn, Z.W. 2003. Seeing and visualizing: It’s not what you think. Cambridge, MA: The MIT Press.

  • Récanati, F. 1993. Direct reference: From language to thought. Oxford: Blackwell Publishing.

  • Ryle, G. 1949. The concept of mind. London: Hutchinson.

  • Samuels, R. 2005. The complexity of cognition: Tractability arguments for massive modularity. In The innate mind: Structure and contents, ed. P. Carruthers, S. Laurence, and S. Stich, 107–121. Oxford: Oxford University Press.

  • Samuels, R. 2006. Is the human mind massively modular? In Contemporary debates in cognitive science, ed. R.J. Stainton, 37–56. Malden, MA: Blackwell.

  • Samuels, R. 2010. Classical computationalism and the many problems of cognitive relevance. Studies in History and Philosophy of Science 41: 280–293.

  • Shanahan, M. 2009. The frame problem. In The Stanford encyclopedia of philosophy (Winter 2009 ed.), ed. E.N. Zalta. http://plato.stanford.edu/archives/win2009/entries/frame-problem/

  • Simon, H.A. 1979. Models of thought. New Haven: Yale University Press.

  • Snowdon, P. 2004. Knowing how and knowing that: A distinction reconsidered. Proceedings of the Aristotelian Society 104: 1–29.

  • Sperber, D. 1994. The modularity of thought and the epidemiology of representations. In Mapping the mind: Domain specificity in cognition and culture, ed. L.A. Hirschfeld and S.A. Gelman, 39–67. Cambridge: Cambridge University Press.

  • Sperber, D., and D. Wilson. 1996. Fodor’s frame problem and relevance theory (reply to Chiappe & Kukla). The Behavioral and Brain Sciences 19: 530–532.

  • Stanley, J., and T. Williamson. 2001. Knowing how. The Journal of Philosophy 98: 411–444.

  • Sterelny, K. 2003. Thought in a hostile world. Malden, MA: Blackwell.

  • Sterelny, K. 2006. Cognitive load and human decision, or, three ways of rolling the rock uphill. In The innate mind: Culture and cognition, ed. P. Carruthers, S. Laurence, and S. Stich, 218–233. Oxford: Oxford University Press.

  • Treisman, A. 1982. Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance 8: 194–214.

  • Viger, C. 2006a. Frame problem. In Encyclopedia of language and linguistics, ed. K. Brown. Oxford: Elsevier.

  • Viger, C. 2006b. Is the aim of perception to provide accurate representations? A case for the “no” side. In Contemporary debates in cognitive science, ed. R.J. Stainton, 275–288. Malden, MA: Blackwell Publishing.

  • Viger, C. 2006c. Symbols: What cognition requires of representationalism. ProtoSociology: An International Journal of Interdisciplinary Research 22: 40–59.

Acknowledgments

Many thanks to Chris Viger for all his comments and suggestions. Thanks also to Richard Samuels and Rob Stainton for comments and input on earlier drafts. And thanks to an anonymous referee of this journal whose comments helped to improve the latter part of this paper.

Author information

Correspondence to Sheldon J. Chow.

About this article

Cite this article

Chow, S.J. What’s the Problem with the Frame Problem? Review of Philosophy and Psychology 4, 309–331 (2013). https://doi.org/10.1007/s13164-013-0137-4
