This new edition of the classic Contemporary Theories of Knowledge has been significantly updated to include analyses of the recent literature in epistemology.
"A sequel to Pollock's How to Build a Person, this volume builds upon that theoretical groundwork for the implementation of rationality through artificial ...
There was a long tradition in philosophy according to which good reasoning had to be deductively valid. That tradition began to be questioned in the 1960s, however, and is now thoroughly discredited. What caused its downfall was the recognition that many familiar kinds of reasoning are not deductively valid, yet clearly confer justification on their conclusions. Here are some simple examples.
In his groundbreaking new book, John Pollock establishes an outpost at the crossroads where artificial intelligence meets philosophy. Specifically, he proposes a general theory of rationality and then describes its implementation in OSCAR, an architecture for an autonomous rational agent he claims is the "first AI system capable of performing reasoning that philosophers would regard as epistemically sophisticated." A sequel to Pollock's How to Build a Person, this volume builds upon that theoretical groundwork for the implementation of rationality through artificial intelligence. Pollock argues that progress in AI has stalled because of its creators' reliance upon unformulated intuitions about rationality. Instead, he bases the OSCAR architecture upon an explicit philosophical theory of rationality, encompassing principles of practical cognition, epistemic cognition, and defeasible reasoning. One of the results is the world's first automated defeasible reasoner capable of reasoning in a rich, logical environment. Underlying Pollock's thesis is a conviction that the tenets of artificial intelligence and those of philosophy can be complementary and mutually beneficial. And, while members of both camps have in recent years grown skeptical of the very possibility of "symbol processing" AI, Cognitive Carpentry establishes that such an approach to AI can be successful. A Bradford Book.
In this book Pollock deals with the subject of probabilistic reasoning, making general philosophical sense of objective probabilities and exploring their ...
Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.
Pollock argues that theories of ideal rationality are largely irrelevant to the decision making of real agents. Thinking about Acting aims to provide a theory of "real rationality."
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in “What should I do if I were an ideal agent?”, but rather, “What should I do given that I am who I am, with all my actual cognitive limitations?” The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making. Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory. Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of “expected utility” is defined to be used in place of expected values. In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities.
Plans cannot be meaningfully compared in that way.
The contributions in this volume make an important effort to resurrect a rather old-fashioned form of foundationalism. They defend the position that there are some beliefs that are justified, and are not themselves justified by any further beliefs. This epistemic foundationalism has been the subject of rigorous attack by a wide range of theorists in recent years, leading to the impression that foundationalism is a thing of the past. DePaul argues that it is precisely the volume and virulence of the assaults that point directly to the strength and coherence of the position.
Reliabilist theories propose to analyse epistemic justification in terms of reliability. This paper argues that if we pay attention to the details of probability theory we find that there is no concept of reliability that can possibly play the role required by reliabilist theories. A distinction is drawn between the general reliability of a process and the single-case reliability of an individual belief, and it is argued that neither notion can serve the reliabilist adequately.
Probability is sometimes regarded as a universal panacea for epistemology. It has been supposed that the rationality of belief is almost entirely a matter of probabilities. Unfortunately, those philosophers who have thought about this most extensively have tended to be probability theorists first, and epistemologists only secondarily. In my estimation, this has tended to make them insensitive to the complexities exhibited by epistemic justification. In this paper I propose to turn the tables. I begin by laying out some rather simple and uncontroversial features of the structure of epistemic justification, and then go on to ask what we can conclude about the connection between epistemology and probability in the light of those features. My conclusion is that probability plays no central role in epistemology. This is not to say that probability plays no role at all. In the course of the investigation, I defend a pair of probabilistic acceptance rules which enable us, under some circumstances, to arrive at justified belief on the basis of high probability. But these rules are of quite limited scope. The effect of there being such rules is merely that probability provides one source for justified belief, on a par with perception, memory, etc. There is no way probability can provide a universal cure for all our epistemological ills.
The question addressed in this paper is how the degree of justification of a belief is determined. A conclusion may be supported by several different arguments, the arguments typically being defeasible, and there may also be arguments of varying strengths for defeaters for some of the supporting arguments. What is sought is a way of computing the “on sum” degree of justification of a conclusion in terms of the degrees of justification of all relevant premises and the strengths of all relevant reasons.
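The shape of this computation can be illustrated with a deliberately simplified model. The sketch below combines the weakest-link idea (an argument is no stronger than its weakest premise or reason) with a crude treatment of defeaters, and takes the overall degree of justification to be the strength of the best surviving argument. This is only an illustrative sketch of the problem, not the paper's actual theory; all function names and the defeat rule are hypothetical.

```python
# Illustrative sketch only: degrees of justification via a weakest-link
# principle, with defeaters nullifying arguments they outweigh.
# This is NOT the paper's account; names and the defeat rule are invented.

def argument_strength(premise_strengths, reason_strengths):
    """Weakest-link idea: an argument is only as strong as its weakest
    premise or reason."""
    return min(premise_strengths + reason_strengths)

def on_sum_justification(argument_strengths, defeater_strengths):
    """argument_strengths: list of strengths, one per argument for the
    conclusion. defeater_strengths: dict mapping argument index to the
    strength of its strongest defeater (absent means undefeated).
    An argument survives only if it is stronger than its defeater; the
    conclusion's degree of justification is the best surviving strength."""
    surviving = []
    for i, s in enumerate(argument_strengths):
        d = defeater_strengths.get(i, 0.0)
        surviving.append(s if s > d else 0.0)
    return max(surviving, default=0.0)
```

For instance, a conclusion supported by a strong argument (0.9) that is defeated by a stronger defeater (0.95) and by a weaker undefeated argument (0.6) comes out justified to degree 0.6 on this toy rule.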
In a number of recent papers I have been developing the theory of "nomic probability," which is supposed to be the kind of probability involved in statistical laws of nature. One of the main principles of this theory is an acceptance rule explicitly designed to handle the lottery paradox. This paper shows that the rule can also handle the paradox of the preface. The solution proceeds in part by pointing out a surprising connection between the paradox of the preface and the gambler's fallacy.
Imagine yourself sitting on your front porch, sipping your morning coffee and admiring the scene before you. You see trees, houses, people, automobiles; you see a cat running across the road, and a bee buzzing among the flowers. You see that the flowers are yellow, and blowing in the wind. You see that the people are moving about, many of them on bicycles. You see that the houses are painted different colors, mostly earth tones, and most are one-story but a few are two-story. It is a beautiful morning. Thus the world interfaces with your mind through your senses. There is a strong intuition that we are not disconnected from the world. We and the other things we see around us are part of a continuous whole, and we have direct access to them through vision, touch, etc. However, the philosophical tradition tries to drive a wedge between us and the world by insisting that the information we get from perception is the result of inference from indirect evidence that is about how things look and feel to us. The philosophical problem of perception is then to explain what justifies these inferences. We will focus on visual perception. Figure one presents a crude diagram of the cognitive system of an agent capable of forming beliefs on the basis of visual perception. Cognition begins with the stimulation of the rods and cones on the retina. From that physical input, some kind of visual processing produces an introspectible visual image. In response to the production of the visual image, the cognizer forms beliefs about his or her surroundings. Some beliefs, the perceptual beliefs, are formed as direct responses to the visual input, and other beliefs are inferred from the perceptual beliefs. The perceptual beliefs are, at the very least, caused or causally influenced by having the image. This is signified by the dashed arrow marked with a large question mark. We will refer to this as the mystery link.
Figure one makes it apparent that in order to fully understand how knowledge is based on perception, we need three different theories.
Internalism in epistemology is the view that all the factors relevant to the justification of a belief are importantly internal to the believer, while externalism is the view that at least some of those factors are external. This extremely modest first approximation cries out for refinement (which we undertake below), but is enough to orient us in the right direction, namely that the debate between internalism and externalism is bound up with the controversy over the correct account of the distinction between justified beliefs and unjustified beliefs. Understanding that distinction has occasionally been obscured by attention to the analysis of knowledge and to the Gettier problem, but our view is that these problems, while interesting, should not completely seduce philosophers away from central questions about epistemic justification. A plausible starting point in the discussion of justification is that the distinction between justified beliefs and unjustified beliefs is not the same as the distinction between true beliefs and false beliefs. This follows from the mundane observation that it is possible to rationally believe...
Counterexamples are constructed for the theory of rational choice that results from a direct application of classical decision theory to ordinary actions. These counterexamples turn on the fact that an agent may be unable to perform an action, and may even be unable to try to perform an action. An alternative theory of rational choice is proposed that evaluates actions using a more complex measure, and then it is shown that this is equivalent to applying classical decision theory to "conditional policies" rather than ordinary actions.
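The failure mode can be made vivid with a toy numerical sketch. Below, one evaluation rule ignores whether the agent can actually perform the action, while a second, invented-for-illustration rule discounts an action's value by the probability that an attempt at it succeeds. The numbers, names, and the particular discounting rule are all hypothetical; the point is only that the two rules can rank the same options differently.

```python
# Hypothetical illustration (numbers and rule invented): classical expected
# value vs. an evaluation that discounts by the chance the agent can
# actually perform the action, with a fallback utility otherwise.

def classical_ev(outcomes):
    """outcomes: list of (probability, utility) pairs for the action's
    results, conditional on the action being performed."""
    return sum(p * u for p, u in outcomes)

def performance_weighted_ev(outcomes, p_perform, fallback_utility):
    """Discount by the probability that trying actually issues in the
    action; otherwise the agent ends up in a fallback state."""
    return p_perform * classical_ev(outcomes) + (1 - p_perform) * fallback_utility

# Action A dominates classically (10 vs. 6 if performed), but the agent
# can rarely pull A off (30%) and almost always manages B (90%), so the
# performance-weighted ranking reverses.
ev_a = performance_weighted_ev([(1.0, 10.0)], 0.3, 0.0)  # about 3.0
ev_b = performance_weighted_ev([(1.0, 6.0)], 0.9, 0.0)   # about 5.4
```

On the classical rule A wins; on the discounted rule B does, which is the kind of divergence the counterexamples exploit.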
Postulational approaches attempt to understand the dynamics of belief revision by appealing to no more than the set of beliefs held by an agent and the logical relations between them. It is argued that such an approach cannot work. A proper account of belief revision must also appeal to the arguments supporting beliefs, and recognize that those arguments can be defeasible. If we begin with a mature epistemological theory that accommodates this, it can be seen that the belief revision operators on which the postulational theories are based are ill-defined. It is further argued that there is no way to repair the definitions so as to retain the spirit of those theories. Belief revision is better studied from within an independently motivated epistemological theory.
An argument is self-defeating when it contains defeaters for some of its own defeasible lines. It is shown that the obvious rules for defeat among arguments do not handle self-defeating arguments correctly. Such arguments turn out to constitute a pervasive phenomenon that threatens to cripple defeasible reasoning, with almost all defeasible reasoning being defeated by unexpected interactions with self-defeating arguments. This leads to some important changes in the general theory of defeasible reasoning.
Philosophy and AI presents invited contributions that focus on the different perspectives and techniques that philosophy and AI bring to the theory of ...
A theory of rational choice is a theory of how an agent should, rationally, go about deciding what actions to perform at any given time. For example, I may want to decide whether to go to a movie this evening or stay home and read a book. The actions between which we want to choose are perfectly ordinary actions, and the presumption is that to make such a decision we should attend to the likely consequences of our decision. It is assumed that these decisions must be made in the face of uncertainty regarding both the agent’s initial situation and the consequences of his actions.
One of the most striking characteristics of human beings is their ability to function successfully in complex environments about which they know very little. In light of our pervasive ignorance, we cannot get around in the world just reasoning deductively from our prior beliefs together with new perceptual input. As our conclusions are not guaranteed to be true, we must countenance the possibility that new information will lead us to change our minds, withdrawing previously adopted beliefs. In this sense, our reasoning is “defeasible”. The question arises how defeasible reasoning works, or ought to work. In particular, we need rules governing what a cognizer ought to believe given a set of interacting arguments, some of which defeat others. That is what is called a “semantics” for defeasible reasoning, and this chapter will propose a new semantics that avoids certain clear counterexamples to all existing semantics.
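To see concretely what a "semantics" assigns, consider a toy fixpoint computation over a defeat graph: an argument is undefeated when every argument defeating it is itself defeated, and arguments caught in unresolved cycles (e.g. mutual defeaters) remain unsettled. This is a standard grounded-style labelling from abstract argumentation, offered purely as background illustration; it is not the new semantics the chapter proposes, and the names are hypothetical.

```python
# Toy grounded-style labelling of a defeat graph. An argument is
# undefeated iff all of its defeaters are defeated; defeated iff some
# defeater is undefeated; otherwise it stays provisional (e.g. in a
# cycle of mutual defeaters). Standard construction, NOT the chapter's
# proposed semantics.

def defeat_statuses(arguments, defeats):
    """arguments: iterable of argument ids.
    defeats: set of (attacker, target) pairs.
    Returns a dict: id -> 'undefeated' | 'defeated' | 'provisional'."""
    status = {a: None for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if status[a] is not None:
                continue
            attackers = [x for (x, t) in defeats if t == a]
            if all(status[x] == 'defeated' for x in attackers):
                status[a] = 'undefeated'   # vacuously true if unattacked
                changed = True
            elif any(status[x] == 'undefeated' for x in attackers):
                status[a] = 'defeated'
                changed = True
    return {a: (s if s is not None else 'provisional') for a, s in status.items()}
```

With a chain C defeats B defeats A, the labelling reinstates A (C is undefeated, so B is defeated, so A survives); with A and B defeating each other, both stay provisional, which is exactly the kind of case where rival semantics diverge.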
Counterexamples are constructed for classical decision theory, turning on the fact that actions must often be chosen in groups rather than individually, i.e., the objects of rational choice are plans. It is argued that there is no way to define optimality for plans that makes the finding of optimal plans the desideratum of rational decision-making. An alternative called “locally global planning” is proposed as a replacement for classical decision theory. Decision-making becomes a non-terminating process without a precise target rather than a terminating search for an optimal solution.
Examples growing out of the Newcomb problem have convinced many people that decision theory should proceed in terms of some kind of causal probability. I endorse this view and define and investigate a variety of causal probability. My definition is related to Skyrms' definition, but proceeds in terms of objective probabilities rather than subjective probabilities and avoids taking causal dependence as a primitive concept.
Practical reasoning aims at deciding what actions to perform in light of the goals a rational agent possesses. This has been a topic of interest in both philosophy and artificial intelligence, but these two disciplines have produced very different models of practical reasoning. The purpose of this paper is to examine each model in light of the other and produce a unified model adequate for the purposes of both disciplines and superior to the standard models employed by either. The philosophical (decision-theoretic) model directs activity by evaluating acts one at a time in terms of their expected utilities. It is argued that, except in certain special cases, this constitutes an inadequate theory of practical reasoning leading to intuitively incorrect action prescriptions. Acts must be viewed as parts of plans, and plans evaluated as coherent units rather than piecemeal in terms of the acts comprising them. Rationality dictates choosing acts by first choosing the plans prescribing them. Plans, in turn, are compared by looking at their expected values. However, because plans can be embedded in one another, we cannot select plans just by maximizing expected values. Instead, we must employ a more complex criterion here named coextendability.
In concrete applications of probability, statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability—nomic probability—there are principles of "probable probabilities" that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r, s, a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge), then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r, s, a). Numerous other defeasible inferences are licensed by similar principles of probable probabilities.
This has the potential to greatly enhance the usefulness of probabilities in practical application.
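The abstract does not state the form of Y(r, s, a). As a hedged illustration only, the sketch below uses one odds-style combination with the sanity properties such a function should have: it is symmetric in r and s, and conditioning on information no better than the background (s = a) leaves the estimate at r. The particular formula is an assumption made for the example, not a report of the paper's result.

```python
# Hedged sketch: ONE candidate combination rule for estimating
# prob(P/Q&R) from r = prob(P/Q), s = prob(P/R), and the background
# probability a = prob(P/U). The exact form is an illustrative
# assumption, not a statement of the paper's Y function.

def Y(r, s, a):
    """Odds-style combination: multiply the evidential 'lifts' of Q and
    R relative to the background a. Requires 0 < a < 1 and that r, s
    are not both at an extreme making the denominator zero."""
    num = r * s * (1 - a)
    den = r * s * (1 - a) + (1 - r) * (1 - s) * a
    return num / den
```

On this rule, Y(0.7, 0.5, 0.5) = 0.7 (learning R, whose conditional probability merely matches the background, changes nothing), and Y(r, s, a) = Y(s, r, a) as one would expect of a symmetric evidence-combination estimate.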