Human beings think of themselves in terms of a privileged non-descriptive designator — a mental “I”. Such thoughts are called “de se” thoughts. The mind/body problem is the problem of deciding what kind of thing I am, and it can be regarded as arising from the fact that we think of ourselves non-descriptively. Why do we think of ourselves in this way? We investigate the functional role of “I” (and also “here” and “now”) in cognition, arguing that the use of such non-descriptive “reflexive” designators is essential for making sophisticated cognition work in a general-purpose cognitive agent. If we were to build a robot capable of cognitive tasks similar to those of humans, it would have to be equipped with such designators. Once we understand the functional role of reflexive designators in cognition, we will see that to make cognition work properly, an agent must use a de se designator in specific ways in its reasoning. Rather simple arguments based upon how “I” works in reasoning lead to the conclusion that it cannot designate the body or part of the body. If it designates anything, it must be something non-physical. However, for the purpose of making the reasoning work correctly, it makes no difference whether “I” actually designates anything. If we were to build a robot that more or less duplicated human cognition, we would not have to equip it with anything for “I” to designate, and general physicalist inclinations suggest that there would be nothing for “I” to designate in the robot. In particular, it cannot designate the physical contraption. So the robot would believe “I exist”, but it would be wrong. Why should we think we are any different?
This paper presents a challenge problem for decision-theoretic planners. State-space planners reason globally, building a map of the parts of the world relevant to the planning problem, and then attempt to distill a plan out of the map. A planning problem is constructed that humans find trivial, but no state-space planner can solve. Existing POCL planners cannot solve the problem either, but for a less fundamental reason.
This paper investigates decision-theoretic planning in sophisticated autonomous agents operating in environments of real-world complexity. An example might be a planetary rover exploring a largely unknown planet. It is argued that existing algorithms for decision-theoretic planning are based on a logically incorrect theory of rational decision making. Plans cannot be evaluated directly in terms of their expected values, because plans can be of different scopes, and they can interact with other previously adopted plans. Furthermore, in the real world, the search for optimal plans is completely intractable. An alternative theory of rational decision making is proposed, called “locally global planning”.
As a high school student, I rediscovered Hume’s problem of induction on my own. For a while, I was horrified. I thought, “We cannot know anything!” After a couple of weeks I calmed down and reasoned that there had to be something wrong with my thinking, and that led me quickly to the realization that good reasons need not be deductive, and to the discovery of defeasible reasoning. From there it was a short jump to a more general interest in how rational cognition works. I am interested in rational cognition in general. Epistemology is one constituent of rational cognition, practical cognition (rational decision making) another. Much of the work on rational cognition begins with the supposition that only ideal agents can be truly rational. Real agents have limited powers of reasoning and limited memory capacity. It is often supposed that such resource-bounded agents can only approximate rationality, and that as philosophers we should confine our attention to ideal agents. If one wishes, one can of course define “rationality” in this way, but this has never been what interested me. We come to philosophy wondering what we should believe, what we should do, and how we should go about deciding these matters. These are questions about ourselves, with all of our cognitive limitations. For example, it is often claimed that ideal agents, with unlimited cognitive powers, should believe all of the logical consequences of their beliefs. But we, as real resource-bounded agents, cannot do that, so that is not something we should do. What I want to know is how I, as a real agent, should go about deciding what to believe and what to do. Thus my topic is real rationality as opposed to ideal rationality. In the realm of practical decision making, I have explored this distinction at great length in my recent book (2006). Here I will focus on its implications for epistemology. For many years epistemology was derailed by the Gettier problem.
It is conjectured that MDP and POMDP planning will remain infeasible for complex domains, so some form of “classical” decision-theoretic planning is sought. However, local plans cannot be properly compared in terms of their expected values, because those values will be affected by the other plans the agent has adopted. Plans must instead be merged into a single “master-plan”, and new plans evaluated in terms of their contribution to the value of the master plan. To make both the construction and evaluation of plans feasible, it is proposed to evaluate plans and their interactions defeasibly.
Probability plays an essential role in many branches of AI, where it is typically assumed that we have a complete probability distribution when addressing a problem. But this is unrealistic for problems of real-world complexity. Statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability — nomic probability — there are principles of “probable probabilities” that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r,s:a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge) then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r,s:a).
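The abstract's claim that the probability calculus leaves prob(P/Q&R) unconstrained, given prob(P/Q) and prob(P/R), can be verified by construction. The following sketch (my own construction, not from the paper) exhibits two joint distributions that agree that prob(P/Q) = prob(P/R) = 1/2 yet give prob(P/Q&R) the extreme values 1 and 0; mixing the two distributions yields any value in between.

```python
# Two joint distributions over the truth values of P, Q, R (a toy
# demonstration, not from the paper): both satisfy prob(P/Q) = prob(P/R) = 1/2,
# but prob(P/Q&R) is 1 in one and 0 in the other.
from fractions import Fraction

def cond(dist, event, given):
    """prob(event | given) for dist: {(p, q, r): weight}."""
    num = sum(w for world, w in dist.items() if event(world) and given(world))
    den = sum(w for world, w in dist.items() if given(world))
    return Fraction(num, den)

third = Fraction(1, 3)
# Worlds are (P, Q, R) truth-value triples; unlisted worlds have weight 0.
d_high = {(1, 1, 1): third, (0, 1, 0): third, (0, 0, 1): third}
d_low = {(0, 1, 1): third, (1, 1, 0): third, (1, 0, 1): third}

P = lambda w: w[0]
Q = lambda w: w[1]
R = lambda w: w[2]
QandR = lambda w: w[1] and w[2]

for d in (d_high, d_low):
    print(cond(d, P, Q), cond(d, P, R), cond(d, P, QandR))
# Both agree on prob(P/Q) = prob(P/R) = 1/2, yet prob(P/Q&R) is 1 for d_high
# and 0 for d_low.
```

Since the conditional probabilities given Q alone and R alone place no constraint at all on the conjunction, any estimate of prob(P/Q&R) must come from somewhere beyond the probability calculus, which is the gap the paper's principles of probable probabilities are meant to fill.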
It’s morning. You sit down at your desk, cup of coffee in hand, and prepare to begin your day. First, you turn on your computer. Once it is running, you check your e-mail. Having decided it is all spam, you trash it. You close the window on your e-mail program, but leave the program running so that it will periodically check the mail server to see whether you have new mail. If it finds new mail it will alert you by playing a musical tone. Next you start your word processor. You have in mind to write a paper in moral philosophy about whether people who send spam deserve capital punishment. So you open a new window and type several paragraphs of text into it. You like what you wrote, so you save it, creating a file. Later, you have more thoughts about spam and capital punishment, so you open the file again and make some changes. Then it is time to go to class. You turn off your word processor, but leave your computer running so that your email program can collect your e-mail. This mundane sequence of events can seem philosophically puzzling when we think about it carefully. While in your word processor, you opened several windows, entered text into them, and created files. What sorts of things are these files, windows, and text? It might seem that windows are easy to understand. You can, after all, see windows. That is the whole point of them. You see a window by seeing a pattern on the surface of your monitor. Isn’t the window identical with that physical pattern? But that is too quick. First, you can turn your monitor off. The window is still open. You can type text into it, and if you turn the monitor back on you can verify that you made that change. Second, you can drag another window in front of the original window. The original window disappears from view, but it still exists. Things may be happening in it that you cannot see. For example, if it is an e-mail window, new messages may be listed in it as they are downloaded.
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in “What should I do if I were an ideal agent?”, but rather, “What should I do given that I am who I am, with all my actual cognitive limitations?” The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making. Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory. Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of “expected utility” is defined to be used in place of expected values. In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities.
Plans cannot be meaningfully compared in that way.
It is argued that we cannot build a sophisticated autonomous planetary rover just by implementing sophisticated planning algorithms. Planning must be based on information, and the agent must have the cognitive capability of acquiring new information about its environment. That requires the implementation of a sophisticated epistemology. Epistemological considerations indicate that the rover cannot be assumed to have a complete probability distribution at its disposal. Its planning must be based upon “thin” knowledge of probabilities, and that has important implications for what planning algorithms might be employed.
In the past, few mainstream epistemologists have endorsed Bayesian epistemology, feeling that it fails to capture the complex structure of epistemic cognition. The defenders of Bayesian epistemology have tended to be probability theorists rather than epistemologists, and I have always suspected they were more attracted by its mathematical elegance than its epistemological realism. But recently Bayesian epistemology has gained a following among younger mainstream epistemologists. I think it is time to rehearse some of the simpler but still quite devastating objections to Bayesian epistemology. Most of these objections are familiar, but have never been adequately addressed by the Bayesians.
Chisholm's ontological objective is the reductionist one of translating statements which appear to be about propositions and generic events into statements about states of affairs, denying the existence of concrete events altogether. The paper questions this program by criticising the notion of concretization on which Chisholm heavily relies. It is argued that there are no convincing arguments in favor of eliminative reductionism. Translatability of statements about one kind of entity into statements about another kind of entity has nothing to do with what exists.
In the Newcomb problem, the standard arguments for taking either one box or both boxes adduce what seem to be relevant considerations, but they are not complete arguments, and attempts to complete the arguments rely upon incorrect principles of rational decision making. It is argued that by considering how the predictor is making his prediction, we can generate a more complete argument, and this in turn supports a form of causal decision theory.
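For readers who want the standard arguments the abstract refers to in concrete form, here is the textbook expected-value arithmetic for the Newcomb problem. This is only the familiar framing, not the paper's own, more complete argument; the 0.99 predictor accuracy and the $1,000,000 and $1,000 payoffs are the conventional stock values, not figures from the paper.

```python
# Textbook Newcomb arithmetic (background only, not the paper's argument).
# A predictor fills the opaque box with $1,000,000 iff it predicts one-boxing;
# the transparent box always holds $1,000.
def evidential_ev(one_box, accuracy=0.99):
    # Evidential reasoning: condition on the act, so the predictor was
    # probably right about whatever you do.
    if one_box:
        return accuracy * 1_000_000
    return (1 - accuracy) * 1_000_000 + 1_000

def causal_ev(one_box, prob_box_filled):
    # Causal reasoning: the act cannot influence the already-fixed contents,
    # so hold the probability that the opaque box is filled constant.
    filled = prob_box_filled * 1_000_000
    return filled if one_box else filled + 1_000

print(evidential_ev(True), evidential_ev(False))    # one-boxing wins evidentially
print(causal_ev(True, 0.5), causal_ev(False, 0.5))  # two-boxing dominates causally
```

The two calculations pull in opposite directions whatever the probability that the box is filled, which is why each side's argument seems relevant but incomplete until one settles how the prediction is being made.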
The strategy of this paper is to throw light on rational cognition and epistemic justification by examining irrationality. Epistemic irrationality is possible because we are reflexive cognizers, able to reason about and redirect some aspects of our own cognition. One consequence of this is that one cannot give a theory of epistemic rationality or epistemic justification without simultaneously giving a theory of practical rationality. A further consequence is that practical irrationality can affect our epistemic cognition. I argue that practical irrationality derives from a general difficulty we have in overriding built-in shortcut modules aimed at making cognition more efficient, and all epistemic irrationality can be traced to this same source. A consequence of this account is that a theory of rationality is a descriptive theory, describing contingent features of a cognitive architecture, and it forms the core of a general theory of “voluntary” cognition — those aspects of cognition that are under voluntary control. It also follows that most of the so-called “rules for rationality” that philosophers have proposed are really just rules describing default (non-reflexive) cognition. It can be perfectly rational for a reflexive cognizer to break these rules. The “normativity” of rationality is a reflection of a built-in feature of reflexive cognition — when we detect violations of rationality, we have a tendency to desire to correct them. This is just another part of the descriptive theory of rationality. Although theories of rationality are descriptive, the structure of reflexive cognition gives philosophers, as human cognizers, privileged access to certain aspects of rational cognition. Philosophical theories of rationality are really scientific theories, based on inference to the best explanation, that take contingent introspective data as the evidence to be explained.
When your word processor or email program is running on your computer, this creates a "virtual machine” that manipulates windows, files, text, etc. What is this virtual machine, and what are the virtual objects it manipulates? Many standard arguments in the philosophy of mind have exact analogues for virtual machines and virtual objects, but we do not want to draw the wild metaphysical conclusions that have sometimes tempted philosophers in the philosophy of mind. A computer file is not made of epiphenomenal ectoplasm. I argue instead that virtual objects are "supervenient objects". The stereotypical example of supervenient objects is the statue and the lump of clay. To this end I propose a theory of supervenient objects. Then I turn to persons and mental states. I argue that my mental states are virtual states of a cognitive virtual machine implemented on my body, and a person is a supervenient object supervening on his cognitive virtual machine.
Imagine yourself sitting on your front porch, sipping your morning coffee and admiring the scene before you. You see trees, houses, people, automobiles; you see a cat running across the road, and a bee buzzing among the flowers. You see that the flowers are yellow, and blowing in the wind. You see that the people are moving about, many of them on bicycles. You see that the houses are painted different colors, mostly earth tones, and most are one-story but a few are two-story. It is a beautiful morning. Thus the world interfaces with your mind through your senses. There is a strong intuition that we are not disconnected from the world. We and the other things we see around us are part of a continuous whole, and we have direct access to them through vision, touch, etc. However, the philosophical tradition tries to drive a wedge between us and the world by insisting that the information we get from perception is the result of inference from indirect evidence that is about how things look and feel to us. The philosophical problem of perception is then to explain what justifies these inferences. We will focus on visual perception. Figure one presents a crude diagram of the cognitive system of an agent capable of forming beliefs on the basis of visual perception. Cognition begins with the stimulation of the rods and cones on the retina. From that physical input, some kind of visual processing produces an introspectible visual image. In response to the production of the visual image, the cognizer forms beliefs about his or her surroundings. Some beliefs — the perceptual beliefs — are formed as direct responses to the visual input, and other beliefs are inferred from the perceptual beliefs. The perceptual beliefs are, at the very least, caused or causally influenced by having the image. This is signified by the dashed arrow marked with a large question mark. We will refer to this as the mystery link.
Figure one makes it apparent that in order to fully understand how knowledge is based on perception, we need three different theories.
Examples growing out of the Newcomb problem have convinced many people that decision theory should proceed in terms of some kind of causal probability. I endorse this view and define and investigate a variety of causal probability. My definition is related to Skyrms' definition, but proceeds in terms of objective probabilities rather than subjective probabilities and avoids taking causal dependence as a primitive concept.
Counterexamples are constructed for the theory of rational choice that results from a direct application of classical decision theory to ordinary actions. These counterexamples turn on the fact that an agent may be unable to perform an action, and may even be unable to try to perform an action. An alternative theory of rational choice is proposed that evaluates actions using a more complex measure, and then it is shown that this is equivalent to applying classical decision theory to "conditional policies" rather than ordinary actions.
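The general phenomenon behind such counterexamples can be illustrated with a toy calculation (my own numbers, not the paper's examples): scoring an act by its value if performed can reverse once the chance that an attempt at the act succeeds is taken into account.

```python
# Toy illustration (not from the paper): naive expected value scores an act
# as if it could always be performed; weighting by the chance that an attempt
# succeeds can reverse the ranking of two acts.
def naive_ev(value_if_performed):
    # Classical evaluation: ignores whether the agent can actually do it.
    return value_if_performed

def attempt_utility(value_if_performed, p_success, value_if_fail=0.0):
    # Evaluate the attempt instead: success is not guaranteed.
    return p_success * value_if_performed + (1 - p_success) * value_if_fail

risky, safe = 100.0, 60.0
print(naive_ev(risky) > naive_ev(safe))                          # risky looks better
print(attempt_utility(risky, 0.4) < attempt_utility(safe, 0.9))  # ranking reverses
```

This only gestures at the structure of the problem; the paper's own counterexamples and its more complex measure are what motivate the move to conditional policies.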
Cognitive agents form beliefs representing the world, evaluate the world as represented, form plans for making the world more to their liking, and perform actions executing the plans. Then the cycle repeats. This is the doxastic-conative loop, diagrammed in figure one. Both human beings and the autonomous rational agents envisaged in AI are cognitive agents in this sense. The cognition of a cognitive agent can be subdivided into two parts. Epistemic cognition is that kind of cognition responsible for producing and maintaining beliefs. Practical cognition evaluates the world, adopts plans, and initiates action. There is a massive literature both in philosophy and artificial intelligence concerning various aspects of epistemic cognition, and large parts of it are well understood. Practical cognition is less well understood. We can usefully divide practical cognition into five parts: (1) the evaluation of the world as represented by the agent’s beliefs, (2) the adoption of goals for changing it, (3) the construction of plans for achieving goals, (4) the adoption of plans, and (5) the execution of plans. There is a substantial literature in AI concerning the construction and execution of plans, and I will say nothing further about those topics here. This paper will focus on the evaluative aspects of practical cognition. Evaluation plays an essential role in both goal selection and plan adoption. My concern here is the investigation of evaluation as a cognitive enterprise performed by cognitive agents. I am interested both in how it is performed in human beings and how it might be performed in artificial rational agents.
I argue here that sophisticated AI systems, with the exception of those aimed at the psychological modeling of human cognition, must be based on general philosophical theories of rationality and, conversely, philosophical theories of rationality should be tested by implementing them in AI systems. So the philosophy and the AI go hand in hand. I compare human and generic rationality within a broad philosophy of AI and conclude by suggesting that ultimately, virtually all familiar philosophical problems will turn out to be at least indirectly relevant to the task of building an autonomous rational agent, and conversely, the AI enterprise has the potential to throw light at least indirectly on most philosophical problems.
Postulational approaches attempt to understand the dynamics of belief revision by appealing to no more than the set of beliefs held by an agent and the logical relations between them. It is argued that such an approach cannot work. A proper account of belief revision must also appeal to the arguments supporting beliefs, and recognize that those arguments can be defeasible. If we begin with a mature epistemological theory that accommodates this, it can be seen that the belief revision operators on which the postulational theories are based are ill-defined. It is further argued that there is no way to repair the definitions so as to retain the spirit of those theories. Belief revision is better studied from within an independently motivated epistemological theory.
Stuart Russell describes rational agents as “those that do the right thing”. The problem of designing a rational agent then becomes the problem of figuring out what the right thing is. There are two approaches to the latter problem, depending upon the kind of agent we want to build. On the one hand, anthropomorphic agents are those that can help human beings rather directly in their intellectual endeavors. These endeavors consist of decision making and data processing. An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Anthropomorphic agents can be contrasted with goal-oriented agents — those that can carry out certain narrowly-defined tasks in the world. Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal.
The objective of the OSCAR Project is twofold. On the one hand, it is to construct a general theory of rational cognition. On the other hand, it is to construct an artificial rational agent (an "artilect") implementing that theory. This is a joint project in philosophy and AI.
Practical reasoning aims at deciding what actions to perform in light of the goals a rational agent possesses. This has been a topic of interest in both philosophy and artificial intelligence, but these two disciplines have produced very different models of practical reasoning. The purpose of this paper is to examine each model in light of the other and produce a unified model adequate for the purposes of both disciplines and superior to the standard models employed by either. The philosophical (decision-theoretic) model directs activity by evaluating acts one at a time in terms of their expected utilities. It is argued that, except in certain special cases, this constitutes an inadequate theory of practical reasoning leading to intuitively incorrect action prescriptions. Acts must be viewed as parts of plans, and plans evaluated as coherent units rather than piecemeal in terms of the acts comprising them. Rationality dictates choosing acts by first choosing the plans prescribing them. Plans, in turn, are compared by looking at their expected values. However, because plans can be embedded in one another, we cannot select plans just by maximizing expected values. Instead, we must employ a more complex criterion here named coextendability.
To summarize, in order for rational agents to be able to engage in the sophisticated kinds of reasoning exemplified by human beings, they must be able to introspect much of their cognition. The problem of other minds and the problem of knowing the mental states of others will arise automatically for any rational agent that is able to introspect its own cognition. The most that a rational agent can reasonably believe about other rational agents is that they have rational architectures similar to its own, and that they have thoughts related to their rational architectures in certain ways. This leads to Rational Functionalism as an account of what it is to be a cognizer having mental states. That in turn entails that a computer can be a person in precisely the same sense as my next door neighbor if it can appropriately mimic my rational architecture. There is nothing I could know about my neighbor that I could not believe with equal justification about the computer. Rational Functionalism also makes it reasonable to define the narrow content of a thought to be its overall place in the agent's rational architecture, that is, its conceptual role. This is, however, a very narrow notion of content. For practical purposes, we are not interested in knowing the narrow contents of other people's thoughts. We are only interested in rather general properties of those narrow contents. This is what is expressed by the use of that-clauses in public language.
This article sketches a theory of objective probability focusing on nomic probability, which is supposed to be the kind of probability figuring in statistical laws of nature. The theory is based upon a strengthened probability calculus and some epistemological principles that formulate a precise version of the statistical syllogism. It is shown that from this rather minimal basis it is possible to derive theorems comprising (1) a theory of direct inference, and (2) a theory of induction. The theory of induction is not of the familiar Bayesian variety, but consists of a precise version of the traditional Nicod Principle and its statistical analogues.
Probabilities are important in belief updating, but probabilistic reasoning does not subsume everything else (as the Bayesian would have it). On the contrary, Bayesian reasoning presupposes knowledge that cannot itself be obtained by Bayesian reasoning, making generic Bayesianism an incoherent theory of belief updating. Instead, it is indefinite probabilities that are of principal importance in belief updating. Knowledge of such indefinite probabilities is obtained by some form of statistical induction, and inferences to non-probabilistic conclusions are carried out in accordance with the statistical syllogism. Such inferences have been the focus of much attention in the nonmonotonic reasoning literature, but the logical complexity of such inference has not been adequately appreciated.
An argument is self-defeating when it contains defeaters for some of its own defeasible lines. It is shown that the obvious rules for defeat among arguments do not handle self-defeating arguments correctly. It turns out that they constitute a pervasive phenomenon that threatens to cripple defeasible reasoning, leading to almost all defeasible reasoning being defeated by unexpected interactions with self-defeating arguments. This leads to some important changes in the general theory of defeasible reasoning.
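The way a self-defeating argument can cripple otherwise healthy reasoning can be sketched in a few lines. The sketch below uses Dung-style grounded labelling from abstract argumentation rather than the paper's own defeat-status computation, and the argument names are made up; it shows only the basic phenomenon: an argument that defeats itself ends up unsettled, and it leaves unsettled any innocent argument it happens to defeat.

```python
# Toy defeat-status computation via grounded labelling (a Dung-style sketch,
# not the paper's own rules). "S" defeats both itself and the innocent
# argument "B"; "A" is untouched by any defeater.
def grounded(arguments, defeats):
    """Label each argument 'in', 'out', or 'undec' under a defeat relation."""
    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            defeaters = [x for (x, target) in defeats if target == a]
            if all(label.get(x) == "out" for x in defeaters):
                label[a] = "in"       # every defeater is itself defeated
                changed = True
            elif any(label.get(x) == "in" for x in defeaters):
                label[a] = "out"      # some undefeated defeater remains
                changed = True
    for a in arguments:
        label.setdefault(a, "undec")  # caught in an unresolved defeat cycle
    return label

print(grounded(["A", "S", "B"], {("S", "S"), ("S", "B")}))
# The self-defeater S comes out unsettled, and it leaves B unsettled too,
# even though B has no independent problem; only A survives as 'in'.
```

Multiplied across a large inference graph, interactions like S-versus-B are what threaten to leave almost everything undefended, which is why the paper argues that the defeat rules themselves need revision.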
The author poses a question: when a person has a thought, what is it that determines what thought he is having? Looking for an answer, he sketches some general aspects of the problems involved in answering this question, like the mind/body problem, for example. His conclusion is that the posed question should be set against the background assumption that thoughts are just internal physical occurrences, and that thoughts are categorized in the same way other mental occurrences are categorized. So the author categorizes them in terms of their introspectible characteristics and in terms of "that" clauses. These classifications can be understood as describing thoughts in terms of their place in the rational architecture.