There was a long tradition in philosophy according to which good reasoning had to be deductively valid. However, that tradition began to be questioned in the 1960s, and is now thoroughly discredited. What caused its downfall was the recognition that many familiar kinds of reasoning are not deductively valid, but clearly confer justification on their conclusions. Here are some simple examples.
The contributions in this volume make an important effort to resurrect a rather old-fashioned form of foundationalism. They defend the position that there are some beliefs that are justified, and are not themselves justified by any further beliefs. This epistemic foundationalism has been the subject of rigorous attack by a wide range of theorists in recent years, leading to the impression that foundationalism is a thing of the past. DePaul argues that it is precisely the volume and virulence of the assaults that point directly to the strength and coherence of the position.
The question addressed in this paper is how the degree of justification of a belief is determined. A conclusion may be supported by several different arguments, the arguments typically being defeasible, and there may also be arguments of varying strengths for defeaters for some of the supporting arguments. What is sought is a way of computing the “on sum” degree of justification of a conclusion in terms of the degrees of justification of all relevant premises and the strengths of all relevant reasons.
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in “What should I do if I were an ideal agent?”, but rather, “What should I do given that I am who I am, with all my actual cognitive limitations?” The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making. Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory. Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of “expected utility” is defined to be used in place of expected values. In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities.
Plans cannot be meaningfully compared in that way.
Counterexamples are constructed for the theory of rational choice that results from a direct application of classical decision theory to ordinary actions. These counterexamples turn on the fact that an agent may be unable to perform an action, and may even be unable to try to perform an action. An alternative theory of rational choice is proposed that evaluates actions using a more complex measure, and then it is shown that this is equivalent to applying classical decision theory to "conditional policies" rather than ordinary actions.
Imagine yourself sitting on your front porch, sipping your morning coffee and admiring the scene before you. You see trees, houses, people, automobiles; you see a cat running across the road, and a bee buzzing among the flowers. You see that the flowers are yellow, and blowing in the wind. You see that the people are moving about, many of them on bicycles. You see that the houses are painted different colors, mostly earth tones, and most are one-story but a few are two-story. It is a beautiful morning. Thus the world interfaces with your mind through your senses. There is a strong intuition that we are not disconnected from the world. We and the other things we see around us are part of a continuous whole, and we have direct access to them through vision, touch, etc. However, the philosophical tradition tries to drive a wedge between us and the world by insisting that the information we get from perception is the result of inference from indirect evidence about how things look and feel to us. The philosophical problem of perception is then to explain what justifies these inferences. We will focus on visual perception. Figure one presents a crude diagram of the cognitive system of an agent capable of forming beliefs on the basis of visual perception. Cognition begins with the stimulation of the rods and cones on the retina. From that physical input, some kind of visual processing produces an introspectible visual image. In response to the production of the visual image, the cognizer forms beliefs about his or her surroundings. Some beliefs (the perceptual beliefs) are formed as direct responses to the visual input, and other beliefs are inferred from the perceptual beliefs. The perceptual beliefs are, at the very least, caused or causally influenced by having the image. This is signified by the dashed arrow marked with a large question mark. We will refer to this as the mystery link.
Figure one makes it apparent that in order to fully understand how knowledge is based on perception, we need three different theories.
Internalism in epistemology is the view that all the factors relevant to the justification of a belief are importantly internal to the believer, while externalism is the view that at least some of those factors are external. This extremely modest first approximation cries out for refinement (which we undertake below), but is enough to orient us in the right direction, namely that the debate between internalism and externalism is bound up with the controversy over the correct account of the distinction between justified beliefs and unjustified beliefs. Understanding that distinction has occasionally been obscured by attention to the analysis of knowledge and to the Gettier problem, but our view is that these problems, while interesting, should not completely seduce philosophers away from central questions about epistemic justification. A plausible starting point in the discussion of justification is that the distinction between justified beliefs and unjustified beliefs is not the same as the distinction between true beliefs and false beliefs. This follows from the mundane observation that it is possible to rationally believe...
Reliabilist theories propose to analyse epistemic justification in terms of reliability. This paper argues that if we pay attention to the details of probability theory we find that there is no concept of reliability that can possibly play the role required by reliabilist theories. A distinction is drawn between the general reliability of a process and the single case reliability of an individual belief, and it is argued that neither notion can serve the reliabilist adequately.
In the Newcomb problem, the standard arguments for taking either one box or both boxes adduce what seem to be relevant considerations, but they are not complete arguments, and attempts to complete the arguments rely upon incorrect principles of rational decision making. It is argued that by considering how the predictor is making his prediction, we can generate a more complete argument, and this in turn supports a form of causal decision theory.
Probability is sometimes regarded as a universal panacea for epistemology. It has been supposed that the rationality of belief is almost entirely a matter of probabilities. Unfortunately, those philosophers who have thought about this most extensively have tended to be probability theorists first, and epistemologists only secondarily. In my estimation, this has tended to make them insensitive to the complexities exhibited by epistemic justification. In this paper I propose to turn the tables. I begin by laying out some rather simple and uncontroversial features of the structure of epistemic justification, and then go on to ask what we can conclude about the connection between epistemology and probability in the light of those features. My conclusion is that probability plays no central role in epistemology. This is not to say that probability plays no role at all. In the course of the investigation, I defend a pair of probabilistic acceptance rules which enable us, under some circumstances, to arrive at justified belief on the basis of high probability. But these rules are of quite limited scope. The effect of there being such rules is merely that probability provides one source for justified belief, on a par with perception, memory, etc. There is no way probability can provide a universal cure for all our epistemological ills.
In concrete applications of probability, statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability—nomic probability—there are principles of "probable probabilities" that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r, s, a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge) then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r, s, a). Numerous other defeasible inferences are licensed by similar principles of probable probabilities.
This has the potential to greatly enhance the usefulness of probabilities in practical application.
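The claim that the probability calculus leaves prob(P/Q&R) entirely unconstrained can be checked numerically. The following is a minimal sketch (the two toy models and all their weights are made up for illustration, not taken from the paper): both models agree that prob(P/Q) = prob(P/R) = 0.5, yet one puts prob(P/Q&R) at 1 and the other at 0.

```python
def cond(model, event, given):
    """prob(event | given) over a dict mapping atoms to probability weights."""
    num = sum(w for atom, w in model.items() if event(atom) and given(atom))
    den = sum(w for atom, w in model.items() if given(atom))
    return num / den

# Atoms are (P, Q, R) truth-value triples; omitted cells have weight 0.
P = lambda a: a[0]
Q = lambda a: a[1]
R = lambda a: a[2]
QR = lambda a: a[1] and a[2]

model_a = {  # the Q&R region lies entirely inside P
    (True, True, True): 0.1,
    (True, True, False): 0.1, (False, True, False): 0.2,
    (True, False, True): 0.1, (False, False, True): 0.2,
    (False, False, False): 0.3,
}
model_b = {  # the Q&R region lies entirely outside P
    (False, True, True): 0.1,
    (True, True, False): 0.2, (False, True, False): 0.1,
    (True, False, True): 0.2, (False, False, True): 0.1,
    (False, False, False): 0.3,
}

for m in (model_a, model_b):
    print(cond(m, P, Q), cond(m, P, R), cond(m, P, QR))
```

Intermediate values of prob(P/Q&R) can be produced the same way by splitting the weight of the Q&R region between the P and not-P cells, so nothing short of a principle like the paper's Y-function singles out a reasonable estimate.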
Counterexamples are constructed for classical decision theory, turning on the fact that actions must often be chosen in groups rather than individually, i.e., the objects of rational choice are plans. It is argued that there is no way to define optimality for plans that makes the finding of optimal plans the desideratum of rational decision-making. An alternative called “locally global planning” is proposed as a replacement for classical decision theory. Decision-making becomes a non-terminating process without a precise target rather than a terminating search for an optimal solution.
In the past, few mainstream epistemologists have endorsed Bayesian epistemology, feeling that it fails to capture the complex structure of epistemic cognition. The defenders of Bayesian epistemology have tended to be probability theorists rather than epistemologists, and I have always suspected they were more attracted by its mathematical elegance than its epistemological realism. But recently Bayesian epistemology has gained a following among younger mainstream epistemologists. I think it is time to rehearse some of the simpler but still quite devastating objections to Bayesian epistemology. Most of these objections are familiar, but have never been adequately addressed by the Bayesians.
An argument is self-defeating when it contains defeaters for some of its own defeasible lines. It is shown that the obvious rules for defeat among arguments do not handle self-defeating arguments correctly. Self-defeating arguments turn out to constitute a pervasive phenomenon that threatens to cripple defeasible reasoning, leading to almost all defeasible reasoning being defeated by unexpected interactions with self-defeating arguments. This leads to some important changes in the general theory of defeasible reasoning.
Postulational approaches attempt to understand the dynamics of belief revision by appealing to no more than the set of beliefs held by an agent and the logical relations between them. It is argued that such an approach cannot work. A proper account of belief revision must also appeal to the arguments supporting beliefs, and recognize that those arguments can be defeasible. If we begin with a mature epistemological theory that accommodates this, it can be seen that the belief revision operators on which the postulational theories are based are ill-defined. It is further argued that there is no way to repair the definitions so as to retain the spirit of those theories. Belief revision is better studied from within an independently motivated epistemological theory.
Human beings think of themselves in terms of a privileged non-descriptive designator — a mental “I”. Such thoughts are called “_de se_” thoughts. The mind/body problem is the problem of deciding what kind of thing I am, and it can be regarded as arising from the fact that we think of ourselves non-descriptively. Why do we think of ourselves in this way? We investigate the functional role of “I” (and also “here” and “now”) in cognition, arguing that the use of such non-descriptive “reflexive” designators is essential for making sophisticated cognition work in a general-purpose cognitive agent. If we were to build a robot capable of cognitive tasks similar to those humans perform, it would have to be equipped with such designators.
In a number of recent papers I have been developing the theory of "nomic probability," which is supposed to be the kind of probability involved in statistical laws of nature. One of the main principles of this theory is an acceptance rule explicitly designed to handle the lottery paradox. This paper shows that the rule can also handle the paradox of the preface. The solution proceeds in part by pointing out a surprising connection between the paradox of the preface and the gambler's fallacy.
This article sketches a theory of objective probability focusing on nomic probability, which is supposed to be the kind of probability figuring in statistical laws of nature. The theory is based upon a strengthened probability calculus and some epistemological principles that formulate a precise version of the statistical syllogism. It is shown that from this rather minimal basis it is possible to derive theorems comprising (1) a theory of direct inference, and (2) a theory of induction. The theory of induction is not of the familiar Bayesian variety, but consists of a precise version of the traditional Nicod Principle and its statistical analogues.
Practical reasoning aims at deciding what actions to perform in light of the goals a rational agent possesses. This has been a topic of interest in both philosophy and artificial intelligence, but these two disciplines have produced very different models of practical reasoning. The purpose of this paper is to examine each model in light of the other and produce a unified model adequate for the purposes of both disciplines and superior to the standard models employed by either. The philosophical (decision-theoretic) model directs activity by evaluating acts one at a time in terms of their expected utilities. It is argued that, except in certain special cases, this constitutes an inadequate theory of practical reasoning, leading to intuitively incorrect action prescriptions. Acts must be viewed as parts of plans, and plans evaluated as coherent units rather than piecemeal in terms of the acts comprising them. Rationality dictates choosing acts by first choosing the plans prescribing them. Plans, in turn, are compared by looking at their expected values. However, because plans can be embedded in one another, we cannot select plans just by maximizing expected values. Instead, we must employ a more complex criterion, here named coextendability.
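The decision-theoretic model criticized here evaluates each act by its expected utility: the probability-weighted sum of the utilities of its possible outcomes. As a point of reference, that act-by-act computation can be sketched as follows (the acts, probabilities, and utilities are made-up numbers, not from the paper):

```python
def expected_utility(outcomes):
    """Expected utility of an act: sum of prob(outcome) * utility(outcome)
    over its possible outcomes, given as (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two hypothetical acts, each with its possible outcomes:
act_1 = [(0.5, 10.0), (0.3, 0.0), (0.2, -5.0)]   # EU = 5.0 + 0.0 - 1.0 = 4.0
act_2 = [(0.9, 3.0), (0.1, 8.0)]                  # EU = 2.7 + 0.8 = 3.5

# The classical model picks whichever single act has the higher EU:
best = max((act_1, act_2), key=expected_utility)
```

The paper's point is that this piecemeal comparison is exactly what fails once acts only make sense as parts of plans: the unit being scored here is a single act, with no way to register that its value depends on the other acts in the plan containing it.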
Examples growing out of the Newcomb problem have convinced many people that decision theory should proceed in terms of some kind of causal probability. I endorse this view and define and investigate a variety of causal probability. My definition is related to Skyrms' definition, but proceeds in terms of objective probabilities rather than subjective probabilities and avoids taking causal dependence as a primitive concept.
When your word processor or email program is running on your computer, this creates a “virtual machine” that manipulates windows, files, text, etc. What is this virtual machine, and what are the virtual objects it manipulates? Many standard arguments in the philosophy of mind have exact analogues for virtual machines and virtual objects, but we do not want to draw the wild metaphysical conclusions that have sometimes tempted philosophers in the philosophy of mind. A computer file is not made of epiphenomenal ectoplasm. I argue instead that virtual objects are “supervenient objects”; the stereotypical example of supervenient objects is the statue and the lump of clay. To this end I propose a theory of supervenient objects. Then I turn to persons and mental states. I argue that my mental states are virtual states of a cognitive virtual machine implemented on my body, and that a person is a supervenient object supervening on his cognitive virtual machine.