The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, there are two equally legitimate definitions of expected inaccuracy. And, while each gives rise to the same synchronic norm for partial beliefs, they give rise to different, inconsistent diachronic norms. I conclude that both norms are rationally permissible. En passant, I note that this entails that both Halfer and Thirder solutions to the well-known Sleeping Beauty puzzle are rationally permissible.
Dr. Evil learns that a duplicate of Dr. Evil has been created. Upon learning this, how seriously should he take the hypothesis that he himself is that duplicate? I answer: very seriously. I defend a principle of indifference for self-locating belief which entails that after Dr. Evil learns that a duplicate has been created, he ought to have exactly the same degree of belief that he is Dr. Evil as that he is the duplicate. More generally, the principle shows that there is a sharp distinction between ordinary skeptical hypotheses and self-locating skeptical hypotheses.
Philosophical interest in the role of self-locating information in the confirmation of hypotheses has intensified in virtue of the Sleeping Beauty problem. If the correct solution to that problem is 1/3, various attractive views on confirmation and probabilistic reasoning appear to be undermined; and some writers have used the problem as a basis for rejecting some of those views. My interest here is in two such views. One of them is the thesis that self-locating information cannot be evidentially relevant to a non-self-locating hypothesis. The other, a basic tenet of Bayesian confirmation theory, is the thesis that an ideally rational agent updates her credence in a non-self-locating hypothesis in response to new information only by conditionalization. I argue that we can disprove these two theses by way of cases that are much less puzzling than Sleeping Beauty. I present two such cases in this paper.
This article defends the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. It argues that all four problems have the same structure, and it gives a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. The article argues that the troublesome feature of all these cases is not self-location but selection effects.
How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well understood answer – we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need some new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can only change by conditionalization. My method is to give detailed analyses of the puzzles which threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found. I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
Can self-locating beliefs be relevant to non-self-locating claims? Traditional Bayesian modeling techniques have trouble answering this question because their updating rule fails when applied to situations involving context-sensitivity. This essay develops a fully general framework for modeling stories involving context-sensitive claims. The key innovations are a revised conditionalization rule and a principle relating models of the same story with different modeling languages. The essay then applies the modeling framework to the Sleeping Beauty Problem, showing that when Beauty awakens her degree of belief in heads should be one-third. This demonstrates that it can be rational for an agent who gains only self-locating beliefs between two times to alter her degree of belief in a non-self-locating claim.
In addition to being uncertain about what the world is like, one can also be uncertain about one’s own spatial or temporal location in the world. My aim is to pose a problem arising from the interaction between these two sorts of uncertainty, solve the problem, and draw two lessons from the solution.
1. How big is the smallest fish in the pond? You take your wide-meshed fishing net and catch one hundred fishes, every one of which is greater than six inches long. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your wide-meshed net can’t actually catch smaller fish.
Current cosmological theories say that the world is so big that all possible observations are in fact made. But then, how can such theories be tested? What could count as negative evidence? To answer that, we need to consider observation selection effects.
One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of an individual is a belief about where or when that temporal part is located). We analyze the Sleeping Beauty problem, the duplication version of the Sleeping Beauty problem, and various related problems.
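The quadratic inaccuracy measure described in this abstract can be sketched in a few lines. The following is an illustrative example, not code from the paper itself; the function names are my own. It shows the standard fact that expected quadratic inaccuracy is uniquely minimized by setting one's credence equal to the probability of the proposition.

```python
# Illustrative sketch (not from the paper): quadratic (Brier) inaccuracy,
# inaccuracy = (truth value - credence)^2, and its expectation under a
# probability p that the proposition is true.

def inaccuracy(truth_value: int, credence: float) -> float:
    """Squared difference between the truth value (1 or 0) and the credence."""
    return (truth_value - credence) ** 2

def expected_inaccuracy(p: float, credence: float) -> float:
    """Expected inaccuracy when the proposition is true with probability p."""
    return p * inaccuracy(1, credence) + (1 - p) * inaccuracy(0, credence)

# Expected inaccuracy is minimized by matching credence to probability:
p = 0.5  # e.g. the proposition that a fair coin landed heads
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda c: expected_inaccuracy(p, c))
assert best == p  # the minimizing credence equals the probability
```

Algebraically, expected inaccuracy equals (credence - p)^2 + p(1 - p), which makes the minimum at credence = p immediate; the disagreement the abstract addresses concerns how to define the expectation when self-locating propositions are in play.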
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
A plea: If you're going to propose a Bayesian framework for updating self-locating degrees of belief, please read this piece first. I've tried to survey all the extant formalisms, group them by their general approach, then describe challenges faced by every formalism employing a given approach. Hopefully this survey will prevent further instances of authors' re-inventing updating rules already proposed elsewhere in the literature.
This paper offers an epistemological discussion of self-validating belief systems and the recurrence of "epistemic defense mechanisms" and "immunizing strategies" across widely different domains of knowledge. We challenge the idea that typical "weird" belief systems are inherently fragile, and we argue that, instead, they exhibit a surprising degree of resilience in the face of adverse evidence and criticism. Borrowing from the psychological research on belief perseverance, rationalization and motivated reasoning, we argue that the human mind is particularly susceptible to belief systems that are structurally self-validating. On this cognitive-psychological basis, we construct an epidemiology of beliefs, arguing that the apparent convenience of escape clauses and other defensive "tactics" used by believers may well derive not from conscious deliberation on their part, but from more subtle mechanisms of cultural selection.
How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counter-examples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and draw some general conclusions about their impact on eternal beliefs.
One of the most common views about self-deception ascribes contradictory beliefs to the self-deceiver. In this paper it is argued that this view (the contradiction strategy) is inconsistent with plausible common-sense principles of belief attribution. Other dubious assumptions made by contradiction strategists are also examined. It is concluded that the contradiction strategy is an inadequate account of self-deception. Two other well-known views — those of Robert Audi and Alfred Mele — are investigated and found wanting. A new theory of self-deception relying on an extension of Mark Johnston's subintentional mental tropisms is proposed and defended.
In Part I, I consider the normal contexts of assertions of belief and declarations of intentions, arguing that many action-guiding beliefs are accepted uncritically and even pre-consciously. I analyze the function of avowals as expressions of attempts at self-transformation. It is because assertions of beliefs are used to perform a wide range of speech acts besides that of speaking the truth, and because there is a large area of indeterminacy in such assertions, that self-deception is possible. In Part II, I analyze the conditions of self-deception, and discuss the grounds on which it is regarded as irrational, even when particular instances may be beneficial. I consider some of the classical analyses of the motives for self-deception, and attempt to give an account of the occasions in which it is likely to occur. In the final section, I discuss the complex organization of the self that is presupposed by the phenomena of self-deception.
In this note I argue that although Rorty's programme (Inquiry, Vol. 15, No. 4) to bring into focus the role that belief plays in self-deception is a salutary one, her actual claims obscure that role. It is also contended that Rorty fails to de-mythologize self-deception, since her account is either paradox-ridden or else describes a concept recognizably distinct from the concept of self-deception.
In addressing the metaphysical question of what colours are, a consideration that is commonly appealed to is how colours are represented—typically in perceptual experiences, but also in beliefs and linguistic utterances. Although representations need not accurately reflect the nature of what they represent—indeed, they need not represent anything that actually exists at all—the way colours are represented is often taken to provide at least a defeasible guide to the metaphysics: all else being equal, it seems we should prefer a theory of what colours are that is consistent with the way that they appear; otherwise, our theory of the nature of colour entails a potentially unattractive error theory about ordinary colour ascriptions.
How can self-locating propositions be integrated into normal patterns of belief revision? Puzzles such as Sleeping Beauty seem to show that such propositions lead to violation of ordinary principles for reasoning with subjective probability, such as Conditionalization and Reflection. I show that sophisticated forms of Conditionalization and Reflection are not only compatible with self-locating propositions, but also indispensable in understanding how they can function as evidence in Sleeping Beauty and similar cases.
How should our beliefs change over time? Much has been written about how our beliefs should change in the light of new evidence. But that is not the question I’m asking. Sometimes our beliefs change without new evidence. I previously believed it was Sunday. I now believe it’s Monday. In this paper I discuss the implications of such beliefs for philosophy of language. I will argue that we need to allow for ‘dynamic’ beliefs, that we need new norms of belief change to model how they function, and that this gives Perry’s (1977) two tier account the advantage over Lewis’s (1979) theory.
Two lines of investigation into the nature of mental content have proceeded in parallel until now. The first looks at thoughts that are attributable to collectives, such as bands' beliefs and teams' desires. So far, philosophers who have written on collective belief, collective intentionality, etc. have primarily focused on third-personal attributions of thoughts to collectives. The second looks at de se, or self-locating, thoughts, such as beliefs and desires that are essentially about oneself. So far, philosophers who have written on the de se have primarily focused on de se thoughts of individuals. This paper looks at where these two lines of investigations intersect: collective de se thoughts, such as bands' and teams' beliefs and desires that are essentially about themselves. There is a surprising problem at this intersection: the most prominent framework for modeling de se thoughts, the framework of centered worlds, cannot model a special class of collective de se thoughts. A brief survey of this problem's solution space shows that collective de se thoughts pose a new challenge for modeling mental content.
The Simulation Argument and the Doomsday Argument share certain structural similarities, and hence are often discussed together (Bostrom 2003, Aranyosi 2004, Richmond 2008, Bostrom and Kulczycki 2011). Both are cases where reflecting on one’s location among a set of possibilities yields a counter-intuitive conclusion—in one case that the end of humankind is closer than you initially thought, and in the second case that it is more likely than you initially thought that you are living in a computer simulation. Indeed, the two arguments do share strong structural similarities. But there are also some disanalogies between the two arguments, and I argue that these disanalogies mean that the Simulation Argument succeeds and the Doomsday Argument fails.
Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.
On the self-locating response to the knowledge argument. Daniel Stoljar, Philosophy Program, Research School of Social Sciences, The Australian National University, Canberra. Philosophical Studies. DOI 10.1007/s11098-010-9612-2.
Moore's paradox arises from the logical oddity of sentences of the form 'P and I do not believe that P' or 'P and I believe that not-P'. This kind of sentence is logically peculiar because it is absurd to assert it, although it is not a logical contradiction. In this paper I offer a new proposal. I argue that Moore's paradox arises because there is a default procedure for evaluating a self-ascribed belief sentence and one is presumptively justified in believing that one believes a sentence when one sincerely assents to it.
This paper considers two accounts of the way that colours are represented in perception, thought, and language that are consistent with relationalist theories of colour: Jonathan Cohen’s contextualist semantics for colour ascriptions, and Andy Egan’s suggestion that colour ascriptions have self-locating contents. I argue that colours are not represented in perception, thought, or language as mind-dependent relational properties.
David Lewis’s property-centered account of belief falls prey to the problem of egocentric omniscience: In self-ascribing the property of being an eye doctor, an agent is thereby self-ascribing the property of being an oculist. It is argued that the problem of egocentric omniscience can be made palatable for Lewis’s property-centered account of belief, at least for the case of linguistic beliefs. Roughly, my solution is as follows: An agent can believe that he or she has the property of being an eye doctor/oculist under the description ‘eye doctor’ without believing that he or she has this property under the description ‘oculist’. Believing that one has a property P under a description D involves the additional self-ascription of the propositional property of inhabiting a world with respect to which that description denotes the property P. This is not the same sort of solution as the one proposed for singular beliefs by Nathan Salmon. Unlike Salmon’s account, belief on the account I am defending is regarded as a two-place relation rather than a three-place relation. Since, on Lewis’s account, self-ascriptive belief subsumes de dicto belief, my solution also sheds light on the problem of logical omniscience.
How does a subject who is competent to detect the irrationality of a belief that p form her belief against weighty or even conclusive evidence to the contrary? The phenomenon of self-deception threatens a widely shared view of beliefs according to which they do not regularly correspond to emotions and evaluative attitudes. Accordingly, the most popular answer to this question is that the belief formed in self-deception is caused by an intention to form that belief. On this view, the state of self-deception is taken to be a calculated outcome involving a person's intentional manipulation of her own thoughts. I argue that this answer is false and forms an impediment towards making sense of self-deception. I show that, contrary to philosophical prejudice, emotions and desires exert vast and systematic effects on the formation of beliefs. In this, and other, sections of the article, the results of experimental work are brought forward. Self-deception is portrayed here as resembling numerous instances of belief formation which are regularly affected by motivational factors. I argue that self-deceptive beliefs are direct expressions of the subject's wishes, fears and hopes. Qua beliefs which mostly correspond to such factors (rather than to evidence), self-deceptive states are a kind of fantasy.
Mental content and the problem of De Se belief -- Cognitive attitudes and content -- The doctrine of propositions -- The problem of De Se belief -- The property theory of content -- In favor of the property theory -- Perry's messy shopper and the argument from explanation -- Lewis's case of the two Gods -- Arguments from internalism and physicalism -- An inference to the best explanation -- Alternatives to the property theory -- The triadic view of belief -- How the property theory and the triadic view are rivals -- Dyadic propositionalism reconsidered -- Arguments against the property theory -- Self-ascription and self-awareness -- Nonexistence and impossible contents -- Stalnaker's argument -- Propositionalist arguments from inference -- The property theory and De Re belief -- Lewis's account of De Re belief -- McKay's objection to Lewis -- Mistaken identity and the case of the shy secret admirer -- Some other worries and concluding remarks -- The property theory, rationality, and Kripke's puzzle about belief -- Kripke's puzzle about belief -- The puzzle argument -- A solution to the puzzle -- Puzzles with empty names and kind terms -- The property theory, twin earth, and belief about kinds -- Twin earth and two kinds of internalism -- The twin earth argument -- An internalist response (stage one) -- An internalist response (stage two) -- Self-ascription and belief about kinds.
Words such as selfish and altruistic that describe conduct toward self and others are notoriously ambiguous in everyday language. I argue that the ambiguity is caused, in part, by the coexistence of multiple belief systems that use the same words in different ways. Each belief system is a relatively coherent linguistic entity that provides a guide for human behavior. It is therefore a functional entity with design features that dictate specific word meaning. Since different belief systems guide human behavior in different directions, specific word meanings cannot be maintained across belief systems. Other sources of linguistic ambiguity include i) functional ambiguity that increases the effectiveness of a belief system, ii) ambiguity between belief systems that are functionally identical but historically distinct, and iii) active interference between belief systems. I illustrate these points with a natural history study of the word selfish and related words in everyday language. In general, language and the thought that it represents should be studied in the same way that ecologists study multi-species communities.
Stubborn belief, like self-deception, is a species of motivated irrationality. The nature of stubborn belief, however, has not been investigated by philosophers, and it is something that poses a challenge to some prominent accounts of self-deception. In this paper, I argue that the case of stubborn belief constitutes a counterexample to Alfred Mele’s proposed set of sufficient conditions for self-deception, and I attempt to distinguish between the two. The recognition of this phenomenon should force an amendment in this account, and should also make a Mele-style deflationist think more carefully about the kinds of motivational factors operating in self-deception.
In this paper, I argue that the method of transparency --determining whether I believe that p by considering whether p -- does not explain our privileged access to our own beliefs. Looking outward to determine whether one believes that p leads to the formation of a judgment about whether p, which one can then self-attribute. But use of this process does not constitute genuine privileged access to whether one judges that p. And looking outward will not provide for access to dispositional beliefs, which are arguably more central examples of belief than occurrent judgments. First, one’s dispositional beliefs as to whether p may diverge from the occurrent judgments generated by the method of transparency. Second, even in cases where these are reliably linked — e.g., in which one’s judgment that p derives from one’s dispositional belief that p — using the judgment to self-attribute the dispositional belief requires an ‘inward’ gaze.
The Sleeping Beauty problem is a touchstone for theories about self-locating belief, i.e. theories about how we should reason when data or theories contain indexical information. Opinion on this problem is split between two camps, those who defend the “1/2 view” and those who advocate the “1/3 view”. I argue that both these positions are mistaken. Instead, I propose a new “hybrid” model, which avoids the faults of the standard views while retaining their attractive properties. This model appears to violate Bayesian conditionalization, but I argue that this is not the case. By paying close attention to the details of conditionalization in contexts where indexical information is relevant, we discover that the hybrid model is in fact consistent with Bayesian kinematics. If the proposed model is correct, there are important lessons for the study of self-location, observation selection theory, and anthropic reasoning.
Self-deception poses tantalizing conceptual conundrums and provides fertile ground for empirical research. Recent interdisciplinary volumes on the topic feature essays by biologists, philosophers, psychiatrists, and psychologists (Lockard & Paulhus 1988, Martin 1985). Self-deception's location at the intersection of these disciplines is explained by its significance for questions of abiding interdisciplinary interest. To what extent is our mental life present--or even accessible--to consciousness? How rational are we? How is motivated irrationality to be explained? To what extent are our beliefs subject to our control? What are the determinants of belief, and how does motivation bear upon belief? In what measure are widely shared psychological propensities products of evolution?
Self-deception is a special kind of motivational dominance in belief-formation. We develop criteria which set paradigmatic self-deception apart from related phenomena of automanipulation such as pretense and motivational bias. In self-deception rational subjects defend or develop beliefs of high subjective importance in response to strong counterevidence. Self-deceivers make or keep these beliefs tenable by putting prima-facie rational defense-strategies to work against their established standards of rational evaluation. In paradigmatic self-deception, target-beliefs are made tenable via reorganizations of those belief-sets that relate relevant data to target-beliefs. This manipulation of the evidential value of relevant data goes beyond phenomena of motivated perception of data. In self-deception belief-defense is pseudo-rational. Self-deceivers will typically apply a dual standard of evaluation that remains intransparent to the subject. The developed model of self-deception as pseudo-rational belief-defense is empirically anchored.
The self-notion is an essential constituent of any self-belief or self-knowledge. But what is the self-notion? In this paper, I tie together several themes from the philosophy of John Perry to explain how he answers this question. The self-notion is not just any notion that happens to be about the person in whose mind that notion appears, because it's possible to have ways of thinking about oneself that one doesn't realize are about oneself. Characterizing the self-notion properly (and hence self-belief and self-knowledge) requires understanding the role of that notion in tracking agent-relative information and motivating normally self-effecting ways of acting. [Note: the file here is an uncorrected proof. Please see the final book, now called _Identity, Language, and Mind_, for citation purposes.]
I raise the question of what cognitive attitude self-deception brings about. That is: what is the product of self-deception? Robert Audi and Georges Rey have argued that self-deception does not bring about belief in the usual sense, but rather “avowal” or “avowed belief.” That means a tendency to affirm verbally (both privately and publicly) that lacks normal belief-like connections to non-verbal actions. I contest their view by discussing cases in which the product of self-deception is implicated in action in a way that exemplifies the motivational role of belief. Furthermore, by applying independent criteria of what it is for a mental state to be a belief, I defend the more intuitive view that being self-deceived that p entails believing that p. Beliefs (i) are the default for action relative to other cognitive attitudes (such as imagining and hypothesis) and (ii) have cognitive governance over the other cognitive attitudes. I explicate these two relations and argue that they obtain for the product of self-deception.
The paper defends the view that there is a constitutive relation between believing something and believing that one believes it. This view is supported by the incoherence of affirming something while denying that one believes it, and by the role awareness of the contents of one’s belief system plays in the rational regulation of that system. Not all standing beliefs are accompanied by higher-order beliefs that self-ascribe them; those that are so accompanied are ones that are “available” in the sense that their subjects are poised to assent to their contents, to use them as premises in reasoning, and to be guided by them in their behavior. The account is compatible with the possibility of negative self-deception—mistakenly believing that one does not believe something—but the closest thing to positive self-deception it allows is believing falsely that a belief with a certain content is one’s dominant belief on a certain matter through failure to realize that one has a stronger belief that contradicts it. The view has implications about Moore’s paradox that contradict widely held views. On this view self-ascriptions of beliefs can be warranted and grounded on reasons—but the reasons are not phenomenally conscious mental states (as held by Christopher Peacocke) but rather available beliefs.
Among recent theories of the nature of self-knowledge, the rationalistic view, according to which self-knowledge is not a cognitive achievement—perceptual or inferential—has been prominent. Upon this kind of view, however, self-knowledge becomes a bit of a mystery. Although the rationalistic conception is defended in this article, it is argued that it has to be supplemented by an account of the transparency of belief: the question whether to believe that P is settled when one asks oneself whether P.
This paper is about what is distinctive about first-person beliefs. I discuss several sets of puzzling cases of first-person belief. The first set focuses on the relation between belief and action, while the second focuses on the relation of belief to subjectivity. I argue that in the absence of an explanation of the dispositional difference, individuating such beliefs more finely than truth conditions merely marks the difference. I argue that the puzzles reveal a difference in the ways that I am disposed to revise my beliefs about myself. This point develops the insight that Anscombe and others had that those of an agent's beliefs about himself that manifest that special self-consciousness are not based on observation, testimony or inference. The puzzles show that this kind of self-consciousness involves, not a special kind of belief or even a special kind of self-reference, but a special kind of belief revision policy.
A major worry in self-deception research has been the implication that people can hold a belief that something is true and false at the same time: a logical as well as a psychological impossibility. However, if beliefs are held with imperfect confidence, voluntary self-deception in the sense of seeking evidence to reject an unpleasant belief becomes entirely plausible and demonstrably real.
Although the extent to which motivational factors are involved in the production and sustaining of biased or 'irrational' beliefs continues to be a controversial issue in social psychology, even those who urge that such beliefs are often explained by non-motivational tendencies admit that biased beliefs sometimes have motivational sources. Sometimes we are influenced by motivational pressures in ways proscribed by principles that we accept for belief-acquisition or belief-revision ('doxastic' principles). Many garden-variety instances of self-deception are cases in point. We are not always helpless victims of those pressures, however. This paper examines the nature of doxastic self-control (roughly, a capacity to counteract motivational pressures that incline us to acquire or retain beliefs that would violate our doxastic principles) and explores our prospects for avoiding motivationally biased believing by exercising self-control.
Moral judgement stage in 69 adult students was investigated in relation to the cognitive articulation and content of their moral belief systems, the content and structure of their self-identity systems, and perceived favouritism by their parents in child-rearing. Articulation of the moral belief system was not related to moral stage; however, belief content was related to stage, with both pre-conventional and post-conventional subjects tending to reject orthodox moral values. The study failed to confirm earlier claims for greater self-ideal disparity with increasing moral maturity, and cross-cultural comparisons with an Irish sample suggest that such progression was an artefact of the a priori measures used. Pre- and post-conventional subjects shared strong patterns of identification with siblings, with no distinctive pattern for conventional subjects. Finally, moral stage, in interaction with sex, was related to perceived differences in favouritism of like- and cross-sex parents, making sense of a number of reported anomalies in the moral development literature.