FRAME PROBLEM

MIT Encyclopedia of Cognitive Science

May 1998

Eric Lormand
Dept. of Philosophy
University of Michigan

From its humble origins as a label for a technical annoyance within a particular AI formalism, the term "frame problem" has grown to cover issues confronting broader research programs in AI.  In philosophy, the term has come to encompass allegedly fundamental, but merely superficially related, objections to computational models of mind in AI and beyond.

The original frame problem appears within the SITUATION CALCULUS for representing a changing world.  In such systems there are "axioms" about changes conditional on prior occurrences—that pressing a switch changes the illumination of a lamp, that selling the lamp changes who owns it, etc.  Unfortunately, since inferences are to be made solely by deduction, axioms are needed for purported nonchanges—that pressing the switch doesn't change the owner, that selling the lamp doesn't change its illumination, etc.  Without such "frame axioms," a system is unable strictly to deduce that any states persist.  The resulting problem is to do without huge numbers of frame axioms potentially relating each representable occurrence to each representable nonchange.
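
To make the contrast concrete, here is a minimal sketch in situation-calculus style; the predicate and action names (Lit, Owns, press, sell) are illustrative rather than drawn from any particular formulation.  An effect axiom records a change:

\[ Holds(Lit, do(press, s)) \]

while frame axioms record nonchanges:

\[ Holds(Owns(x), s) \rightarrow Holds(Owns(x), do(press, s)) \]
\[ Holds(Lit, s) \rightarrow Holds(Lit, do(sell(x), s)) \]

With m representable states and n representable occurrence types, a system may need on the order of m × n axioms of this second sort.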

A common response is to handle nonchanges implicitly, by allowing the system to assume by default that a state persists unless there is an axiom specifying that it is changed by an occurrence, given surrounding conditions.  Since such assumptions are not deducible from the axioms of change (even given surrounding conditions), and since conclusions licensed by default may have to be withdrawn as evidence accumulates, the frame problem helps motivate the development of special "NONMONOTONIC LOGICS" intended to minimize the assumptions that must be retracted given further evidence.  This is related to discussions of defeasibility and ceteris paribus reasoning in epistemology and philosophy of science (e.g., Harman, 1986).
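
For instance, in Reiter-style default logic (one nonmonotonic formalism among several; the schema below is an illustration, not a notation fixed by this entry), persistence can be captured by a single default rather than by many frame axioms:

\[ \frac{Holds(f, s) \;:\; Holds(f, do(a, s))}{Holds(f, do(a, s))} \]

Read: if f holds in situation s, and it is consistent with everything else believed that f still holds after occurrence a, then conclude that it still holds.  An axiom specifying that a changes f defeats the consistency condition, so the conclusion is withdrawn when that axiom applies; this retraction of previously licensed conclusions is the nonmonotonicity.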

A related challenge is to determine which assumptions to retract when retraction is necessary, as in the "Yale Shooting Problem" (Hanks and McDermott, 1986).  Let a system assume by default (i) that live creatures remain alive, and (ii) that loaded guns remain loaded.  Confront it with this information: Fred is alive, then a gun is loaded, then, after a delay, the gun is fired at Fred.  If assumption (ii) is in force through the delay, Fred probably violates (i).  But equally, if assumption (i) is in force after the shooting, the gun probably violates (ii).  Why is (ii) the more natural assumption to enforce?  Some favor (ii) because the delay occurs before the shooting (e.g., Shoham, 1988).  Others favor (ii) because there is no represented reason to believe it violated, while the shooting provides some reason for believing (i) violated (e.g., Morgenstern, 1996; cf. philosophical discussions of inference to the best EXPLANATION, e.g., Thagard, 1988).  Work continues in this vein, seeking to formalize the relevant temporal and rational notions, and to ensure that the strategies apply beyond the situation calculus.
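
The anomaly can be made concrete with a small simulation.  The sketch below is an illustrative encoding, not Hanks and McDermott's own logic: it simply minimizes unexplained change over a four-step timeline, then applies a chronological preference in the spirit of Shoham (1988).

    from itertools import product

    # Timeline: 0 --load--> 1 --wait--> 2 --shoot--> 3.
    # A model assigns the fluents "alive" and "loaded" a truth value
    # at each of the four timepoints.

    def consistent(alive, loaded):
        # Hard facts: Fred starts out alive; the gun starts out
        # unloaded and is loaded at 1 (the effect of loading); if it
        # is still loaded at 2, the shooting leaves Fred dead at 3.
        return (alive[0] and not loaded[0] and loaded[1]
                and not (loaded[2] and alive[3]))

    def changes(alive, loaded):
        # The (fluent, time) pairs at which a truth value flips.
        return {(name, t)
                for name, vals in (("alive", alive), ("loaded", loaded))
                for t in range(3) if vals[t] != vals[t + 1]}

    models = [(a, l)
              for a in product((True, False), repeat=4)
              for l in product((True, False), repeat=4)
              if consistent(a, l)]

    # Keep the models whose change sets are minimal under set inclusion.
    minimal = [m for m in models
               if not any(changes(*n) < changes(*m) for n in models)]

    for a, l in minimal:
        print("alive:", a, "loaded:", l, "changes:", sorted(changes(a, l)))
    # Several incomparable minimal models survive: the intended one
    # (the gun stays loaded; Fred dies) alongside anomalous ones in
    # which the gun mysteriously unloads during the delay, or Fred
    # dies for no represented reason.  Minimizing change does not by
    # itself enforce the natural assumption (ii).

    # Chronological preference: favor the model whose changes occur
    # as late as possible (lexicographic order on sorted change times).
    latest = max(minimal, key=lambda m: sorted(t for _, t in changes(*m)))
    print("chronologically preferred:", latest)

On this encoding the chronological preference selects the intended model, since its changes occur later than those of its rivals.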

Another approach to the frame problem seeks to remain within the strictures of classical (monotonic) logic (Reiter, 1991).  In most circumstances it avoids the use of huge numbers of axioms about nonchanges, but at the cost of using a few implausibly bold ones.  For example, it is assumed that all the possible causes of a certain kind of effect are known, or that all the actual events or actions operating on a given situation are known.
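
The flavor of this approach can be conveyed by a "successor state axiom" in roughly Reiter's style, again with illustrative vocabulary:

\[ Holds(Lit, do(a, s)) \;\equiv\; a = switchOn \,\lor\, (Holds(Lit, s) \land a \neq switchOff) \]

One biconditional per state-description replaces the many frame axioms.  But because it quantifies over all actions a, it assumes that switchOn and switchOff are the only occurrences bearing on the lamp's illumination; that is the bold closure assumption just described.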

Some philosophers of mind maintain that the original frame problem portends deeper problems for traditional AI, or at least for cognitive science more broadly.  (Unless otherwise mentioned, the relevant papers of the authors cited below may be found in Pylyshyn, 1987.)  Daniel Dennett wonders how to ignore information obviously irrelevant to one's goals, as one ignores many obvious nonchanges.  John Haugeland wonders how to keep track of salient side effects without constantly checking for them.  This includes the "ramification" and "qualification" problems of AI; see Morgenstern, 1996, for a survey.  Jerry Fodor wonders how to avoid the use of "kooky" concepts that render intuitive nonchanges as changes, such as "fridgeon," which applies to physical particles if and only if Fodor's fridge is on, so that Fodor can "change" the entire universe simply by unplugging his fridge.  AI researchers, including Drew McDermott and Pat Hayes, protest that these further issues are unconnected to the original frame problem.
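
Fodor's "fridgeon" can be rendered as an explicit definition (the formalization is a paraphrase, with illustrative predicate names):

\[ \forall x\, \forall t\, (Fridgeon(x, t) \leftrightarrow Particle(x) \land FridgeOn(t)) \]

Unplugging the fridge at t makes FridgeOn(t) false, thereby "changing" every particle from a fridgeon to a non-fridgeon; a system whose vocabulary admits such predicates faces a frame axiom (or a default) for each of them.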

Nevertheless, the philosophers' challenges must be met somehow if human cognition is to be understood in computational terms (see CAUSAL REASONING).  Exotic suggestions involve mental IMAGERY as opposed to a LANGUAGE OF THOUGHT (Haugeland, cf. Janlert in AI), nonrepresentational practical skills (Dreyfus and Dreyfus), and EMOTION-induced temporary modularity (de Sousa, 1987, ch. 7).  The authors of the Yale Shooting Problem argue, as well, against the hegemony of logical deduction—whether classical or nonmonotonic—in AI simulations of commonsense reasoning.  More conservative proposed solutions appeal to HEURISTIC SEARCH techniques and ideas about MEMORY long familiar in AI and cognitive psychology (Lormand, in Ford and Pylyshyn, 1996; Morgenstern, 1996 provides an especially keen survey of AI proposals).


de Sousa, R. (1987). The Rationality of Emotion. Cambridge: MIT Press.

Ford, K. and Z. Pylyshyn (Eds.). (1996). The Robot's Dilemma Revisited. Norwood, NJ: Ablex.

Hanks, S. and D. McDermott. (1986). Default reasoning, nonmonotonic logic, and the frame problem. Proceedings of the American Association for Artificial Intelligence, pp. 328-333.

Harman, G. (1986). Change in View. Cambridge: MIT Press.

Morgenstern, L. (1996). The problem with solutions to the frame problem. In K. Ford and Z. Pylyshyn (Eds.), pp. 99-133.

Pylyshyn, Z. (Ed.). (1987). The Robot's Dilemma. Norwood, NJ: Ablex.

Reiter, R. (1991). The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. In V. Lifschitz (Ed.), Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy. Boston: Academic Press, pp. 359-380.

Shoham, Y. (1988). Reasoning about Change. Cambridge: MIT Press.

Thagard, P. (1988). Computational Philosophy of Science. Cambridge: MIT Press.