Most forms of virtue ethics are characterized by two attractive features. The first is that proponents of virtue ethics acknowledge the need to describe how moral agents acquire or develop the traits and abilities necessary to become morally capable agents. The second attractive feature of most forms of virtue ethics is that they are forms of moral realism. The two features come together in the attempt to describe virtue as a personal ability to distinguish morally good reasons for action. It follows from the general picture of virtue ethics presented here that we cannot evaluate ethical judgment independently of the viewpoint of the ideal of a virtuous person. We will examine how this ideal unfolds in the realistic form of virtue ethics advanced by John McDowell. McDowell offers a compelling description of virtue as a natural ability grounded in human nature, while at the same time insisting that we cannot understand the judgment resulting from virtue without drawing on that very perspective. However, McDowell’s focus on the passive taking in of reasons in ethical experience and his idea of the silencing of wrong reasons lead to three related problems. The first is that he cannot account for certain features of the phenomenology of such experience; the second is that he cannot provide any relevant epistemological criteria for correct moral judgment; and the third is that he gives a morally objectionable characterization of the ideal of being a virtuous person. All of these problems arise because McDowell does not take into account the particular nature of ethical experience. If we try to resolve these problems by dropping McDowell’s idea of silencing, we then have to offer another substantial description of our ideal of a virtuous person, one that includes active and interpersonal ways of evaluating concrete judgments. Proponents of virtue ethics still have to take up this task and develop a position that does not limit ethical experience to the passive intake of reasons.
Most commentators working on Wittgenstein’s remarks on ethics note that he rejects the very possibility of traditional normative ethics, that is, a philosophically justified normative guide for right conduct. In this article, Wittgenstein’s view of ethical reflection as presented in his notebooks from 1936 to 1938 is investigated, and the question of whether it involves ethical guidance is addressed. In Wittgenstein’s remarks, we can identify three requirements inherent in ethical reflection. The first two are revealed in the realisation that ethical reflection presupposes both a clear understanding of oneself and a normative ideal of how one ought to live and reason. The third source of normativity springs from the fact that ethical reflection involves a relationship with the other, not as judge, but as example and addressee. In this way, ethical reflection is essentially relational. In the article, we unfold how these three normative sources figure in Wittgenstein’s remarks, especially how the third requirement, the relationship with the other, reveals both a point of convergence and a difference between his view of ethics and religious faith. It will also be argued that even if Wittgenstein thus presents ethical reflection as a normatively guided activity, the content of the guidance is personal, springing solely from the reflecting individual.
Many still seem confident that the kind of semantic theory Putnam once proposed for natural kind terms is right. This paper seeks to show that this confidence is misplaced because the general idea underlying the theory is incoherent. Consequently, the theory must be rejected prior to any consideration of its epistemological, ontological or metaphysical acceptability. Part I sets the stage by showing that falsehoods, indeed absurdities, follow from the theory when one deliberately suspends certain devices Putnam built into it, presumably in order to block such entailments. Part II then raises the decisive issue of at what cost these devices do the job they need to do. It argues that - apart from possessing no other motivation than their capacity to block the consequences derived in Part I - they only fulfil this blocking function if they render the theory unable to deal with fiction and related 'make-believe' activities. Part III indicates the affinity Putnam's account has with the classically 'denotative' view of meaning, and thus how its weaknesses may be seen as a variant of the classical weakness of 'denotative' approaches. It concludes that the theory is a conceptual muddle.
Donald Campbell has long advocated a naturalist epistemology based on a general selection theory, with the scope of knowledge restricted to vicarious adaptive processes. But being a vicariant is problematic because it involves an unexplained epistemic relation. We argue that this relation is to be explicated organizationally in terms of the regulation of behavior and internal state by the vicariant, but that Campbell's selectionist approach can give no satisfactory account of it because it is opaque to organization. We show how organizational constraints and capacities are crucial to understanding both evolution and cognition and conclude with a proposal for an enriched, generalized model of evolutionary epistemology that places high-order regulatory organization at the center.
Jarrett Leplin’s paper is multifaceted; it’s rich with ideas, and I won’t even try to touch on all of them. Instead, I’d like to raise three questions about the paper: one about its definition of reliable method, one about its solution to the generality problem, and one about its answer to clairvoyance-type objections.
The basal and reciprocal models of the relationship between androgen secretion and dominance are not mutually exclusive. Individuals may differ in basal levels of androgen secretion, reactivity to experiences, and androgen sensitivity. Early experiences might affect any of these parameters.
Formally-inclined epistemologists often theorize about ideally rational agents--agents who exemplify rational ideals, such as probabilistic coherence, that human beings could never fully realize. This approach can be defended against the well-known worry that abstracting from human cognitive imperfections deprives the approach of interest. But a different worry arises when we ask what an ideal agent should believe about her own cognitive perfection (even an agent who is in fact cognitively perfect might, it would seem, be uncertain of this fact). Consideration of this question reveals an interesting feature of the structure of our epistemic ideals: for agents with limited information, our epistemic ideals turn out to conflict with one another.
Responding rationally to the information that others disagree with one’s beliefs requires assessing the epistemic credentials of the opposing beliefs. Conciliatory accounts of disagreement flow in part from holding that these assessments must be independent from one’s own initial reasoning on the disputed matter. I argue that this claim, properly understood, does not have the untoward consequences some have worried about. Moreover, some of the difficulties it does engender must be faced by many less conciliatory accounts of disagreement (and, more generally, by accounts of rationally responding to evidence of one’s epistemic malfunction).
Much contemporary epistemology is informed by a kind of confirmational holism, and a consequent rejection of the assumption that all confirmation rests on experiential certainties. Another prominent theme is that belief comes in degrees, and that rationality requires apportioning one's degrees of belief reasonably. Bayesian confirmation models based on Jeffrey Conditionalization attempt to bring together these two appealing strands. I argue, however, that these models cannot account for a certain aspect of confirmation that would be accounted for in any adequate holistic confirmation theory. I then survey the prospects for constructing a formal epistemology that better accommodates holistic insights.
Formal theories, as in logic and mathematics, are sets of sentences closed under logical consequence. Philosophical theories, like scientific theories, are often far less formal. There are many axiomatic theories of the truth predicate for certain formal languages; on analogy with these, some philosophers (most notably Paul Horwich) have proposed axiomatic theories of the property of truth. Though in many ways similar to logical theories, axiomatic theories of truth must be different in several nontrivial ways. I explore what an axiomatic theory of truth would look like. Because Horwich’s is the most prominent, I examine his theory and argue that it fails as a theory of truth. Such a theory is adequate if, given a suitable base theory, every fact about truth is a consequence of the axioms of the theory. I show, using an argument analogous to Gödel’s incompleteness proofs, that no axiomatic theory of truth could ever be adequate. I also argue that a certain class of generalizations cannot be consequences of the theory.
According to Hubert L. Dreyfus, Heidegger's central innovation is his rejection of the idea that intentional activity and directedness are always and only a matter of having representational mental states. This paper examines the central passages to which Dreyfus appeals in order to motivate this claim. It shows that Dreyfus misconstrues these passages significantly and that he has no grounds for reading Heidegger as anticipating contemporary anti-representationalism in the philosophy of mind. The misunderstanding derives from a lack of sensitivity to Heidegger's own intellectual context. The otherwise laudable strategy of reading Heidegger as a philosopher of mind becomes an exercise in finding a niche for Heidegger in Dreyfus's own unquestioned present. Heidegger is thereby mapped on to an intellectual context which, given its naturalistic commitments, is foreign to him. The paper concludes by indicating the direction in which a more historically sensitive, and thus accurate, interpretation of Heidegger must move.
What role, if any, does formal logic play in characterizing epistemically rational belief? Traditionally, belief is seen in a binary way - either one believes a proposition, or one doesn't. Given this picture, it is attractive to impose certain deductive constraints on rational belief: that one's beliefs be logically consistent, and that one believe the logical consequences of one's beliefs. A less popular picture sees belief as a graded phenomenon.
Sterelny’s Thought in a Hostile World ([2003]) presents a complex, systematically structured theory of the evolution of cognition centered on a concept of decoupled representation. Taking Godfrey-Smith’s ([1996]) analysis of the evolution of behavioral flexibility as a framework, the theory describes increasingly complex grades of representation beginning with simple detection and culminating with decoupled representation, said to be belief-like, and it characterizes selection forces that drive evolutionary transformations in these forms of representation. Sterelny’s ultimate explanatory target is the evolution of human agency. This paper develops a detailed analysis of the main cognitive aspects. It is argued that some of the major claims are not correct: decoupled representation as defined doesn’t capture belief-like representation, and, properly understood, decoupled representation turns out to be ubiquitous among multicellular animals. However, some of the key ideas are right, or along the right lines, and suggestions are made for modifying and expanding the conceptual framework.
It is obvious that we would not want to demand that an agent's beliefs at different times exhibit the same sort of consistency that we demand from an agent's simultaneous beliefs; there's nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
Heidegger's central concern is the question of being (Seinsfrage). The paper reconstructs this question at least for the young (pre-Kehre) Heidegger in the light of two interconnected hypotheses: (1) the substantial content of the question of being can be identified by seeing it as a response to (Marburg) neo-Kantianism; and (2) this content centres around the claim that, pace the neo-Kantians, 'epistemological' concerns are grounded in 'ontological' ones, for which reason 'ontology' must precede 'epistemology' as a form of philosophical inquiry. In section I the general position of (Marburg) neo-Kantianism is sketched. In section II the implications of the neo-Kantian position for the concepts of truth and reality, reason, and experience, are outlined; significant similarities to Sellars, Davidson, and Brandom are revealed. Finally, in section III Heidegger's analysis of everydayness is shown to yield a distinct critique of the neo-Kantian relativization of the concept of the real to the theoretically knowable. From this critique it emerges why Heidegger thinks that 'ontology' precedes 'epistemology'. The project of fundamental ontology marked by the question of being thus shows itself to be at least in part a response to the aporia of Marburg neo-Kantianism.
I propose that an adequate name for a proposition will be (1) rigid, in Kripke’s sense of referring to the same thing in every world in which it exists, and (2) transparent, which means that it would be possible, if one knows the name, to know which object the name refers to. I then argue that the Standard Way of naming propositions—prefixing the word ‘that’ to a declarative sentence—does not allow for transparent names of every proposition, and that no alternative naming convention does better. I explore the implications of this failure for deflationism about truth, arguing that any theory that requires the T biconditional to be a priori cannot succeed.
Lei Zhong (2012, 'Counterfactuals, regularity and the autonomy approach', Analysis 72: 75–85) argues that non-reductive physicalists cannot establish the autonomy of mental causation by adopting a counterfactual theory of causation since such a theory supports a so-called downward causation argument which rules out mental-to-mental causation. We respond that non-reductive physicalists can consistently resist Zhong's downward causation argument as it equivocates between two familiar notions of a physical realizer.
At least phenomenologically the way communicative acts reveal intentions is different from the way non-communicative acts do this: the former have an "addressed" character which the latter do not. The paper argues that this difference is a real one, reflecting the irreducibly "conventional" character of human communication. It attempts to show this through a critical analysis of the Gricean programme and its methodologically individualist attempt to explain the "conventional" as derivative from the "non-conventional". It is shown how in order to eliminate certain counterexamples the Gricean analysis of utterer's meaning must be made self-referential. It is then shown how this in turn admits an "ontological difference" which undercuts all methodological individualism: meaning something by an utterance must then have a certain intrinsic, irreducible "conventionality" and "intersubjectivity". Objections to this claim are raised and dealt with. It is suggested that any problem of origin might be resolvable by rejecting the semantic reductionism of Grice's programme. An internal relation between self-consciousness, intersubjectivity and language is suggested. The paper ends by speculating that the self-conscious subject is intrinsically embodied and related to other subjects in that for it its body is essentially a medium of signs with which to express its "inner states" to others.
The main appeal of the currently popular "bootstrap" account of confirmation developed by Clark Glymour is that it seems to provide an account of evidential relevance. This account has, however, had severe problems; and Glymour has revised his original account in an attempt to solve them. I argue that this attempt fails completely, and that any similar modifications must also fail. If the problems can be solved, it will only be by radical revisions which involve jettisoning bootstrapping's basic approach to theories. Finally, I argue that there is little reason to think that even such drastic modifications will lead to a satisfactory account of relevance.
It is commonly acknowledged that, in order to test a theoretical hypothesis, one must, in Duhem's phrase, rely on a "theoretical scaffolding" to connect the hypothesis with something measurable. Hypothesis-confirmation, on this view, becomes a three-place relation: evidence E will confirm hypothesis H only relative to some such scaffolding B. Thus the two leading logical approaches to qualitative confirmation--the hypothetico-deductive (H-D) account and Clark Glymour's bootstrap account--analyze confirmation in relative terms. But this raises questions about the philosophical interpretation of the technical conditions these accounts describe. What does it mean to say that E confirms H "relative to B"? How should we interpret the relation we are trying to analyze?
Glymour's "bootstrap" account of confirmation is designed to provide an analysis of evidential relevance, which has been a serious problem for hypothetico-deductivism. As set out in Theory and Evidence, however, the "bootstrap" condition allows confirmation in clear cases of evidential irrelevance. The difficulties with Glymour's account seem to be due to a basic feature which it shares with hypothetico-deductive accounts, and which may explain why neither can give a satisfactory analysis of evidential relevance.
This paper outlines an original interactivist-constructivist (I-C) approach to modelling intelligence and learning as a dynamical embodied form of adaptiveness and explores some applications of I-C to understanding the way cognitive learning is realized in the brain. Two key ideas for conceptualizing intelligence within this framework are developed. These are: (1) intelligence is centrally concerned with the capacity for coherent, context-sensitive, self-directed management of interaction; and (2) the primary model for cognitive learning is anticipative skill construction. Self-directedness is a capacity for integrative process modulation which allows a system to "steer" itself through its world by anticipatively matching its own viability requirements to interaction with its environment. Because the adaptive interaction processes required of intelligent systems are too complex for effective action to be prespecified (e.g. genetically) learning is an important component of intelligence. A model of self-directed anticipative learning (SDAL) is formulated based on interactive skill construction, and argued to constitute a central constructivist process involved in cognitive development. SDAL illuminates the capacity of intelligent learners to start with the vague, poorly defined problems typically posed in realistic learning situations and progressively refine them, transforming them into problems with sufficient structure to guide the construction of a solution. Finally, some of the implications of I-C for modelling of the neuronal basis of intelligence and learning are explored; in particular, Quartz and Sejnowski's recent neural constructivism paradigm, enriched by Montague and Sejnowski's dopaminergic model of anticipative-predictive neural learning, is assessed as a promising, but incomplete, contribution to this approach. The paper concludes with a fourfold reflection on the divergence in cognitive modelling philosophy between the I-C and the traditional computational information processing approaches.
For the purpose of contributing to a clarification of the term process, different kinds of musical processes are investigated: A rule-determined phase shifting process in Steve Reich's Piano Phase (1966), a model for an indeterminate composition process in John Cage's Variations II (1961), a number of evolution processes in György Ligeti's In zart fliessender Bewegung (1976), and a generative process of fractal nature in Per Nørgård's Second Symphony (1970). In conclusion I propose that six process categories should be included in a typology of processes: Rule-determined, goal-directed and indeterminate transformation processes, and rule-determined, goal-directed and indeterminate generative processes.
Current interpretations of Heidegger's notion of das Man are caught in a dilemma: either they cannot accommodate the ontological status Heidegger accords it or they cannot explain his negative evaluation of it, in which it is treated as ontic. This paper uses Simmel's agonistic account of human sociality to integrate the ontological and the ontic, indeed pejorative aspects of Heidegger's account. Section I introduces the general problem, breaks the exclusive link of Heidegger's account to Kierkegaard and delineates the general form of a solution. Section II then sketches Simmel's conception of sociology and sociality. Section III determines what Heidegger is trying to do in Chapter Four of Division I in Being and Time in order to formulate a strictly ontological account of das Man. Section IV uses Simmel's account of sociality to build into this ontological account an inherent tendency to display the negative features Heidegger ascribes to das Man. In conclusion, section V points to how the proposed account of das Man intimates the character of fundamental ontology as nascently a form of critical theory. It also explains the extent to which Heidegger's pejorative characterisations of das Man and the Man-selbst are legitimate.
The general structure of Steels & Belpaeme's (S&B's) central premise is appealing. Theoretical stances that focus on one type of mechanism miss the fact that multiple mechanisms acting in concert can provide convergent constraints for a more robust capacity than any individual mechanism might achieve acting in isolation. However, highlighting the significance of complex constraint interactions raises the possibility that some of the relevant constraints may have been left out of S&B's own models. Although abstract modeling can help clarify issues, it also runs the risk of oversimplification and misframing. A more subtle implication of the significance of interacting constraints is that it calls for a close relationship between theoretical and empirical research.
Ethics research literature often uses Rest’s Four Component Model of ethical behavior as a framework to teach business and accounting ethics. Moral motivation, including resolve to have moral courage, is the third component of the model and is the least-tested component in ethics research. Using a quasi-experimental design with pretest and posttest measurements, we compare the effectiveness of several methods (traditional, exhortation, reflection, moral exemplar) for developing resolve to have moral courage in 211 accounting students during one semester. Results show that the traditional, reflection, and moral exemplar methods increased resolve to have moral courage, and that the reflection and moral exemplar methods were more effective than the other methods.
This book draws upon the phenomenological tradition of Husserl and Heidegger to provide an alternative elaboration of John McDowell’s thesis that in order to understand how self-conscious subjectivity relates to the world, perception must be understood as a genuine unity of spontaneity (‘concept’) and receptivity (‘intuition’). Thereby it clarifies McDowell’s critique of Donald Davidson and develops an alternative conception of perceptual experience which gives sense to McDowell’s claim that self-conscious subjectivity is so inherently in touch with its world that scepticism about the latter must be incoherent. It also develops a more accurate, historically oriented critique of the metaphysics constraining one to construe perceptual experience in ways which misrepresent how self-conscious subjectivity bears upon the world. It shows that many of McDowell’s meta-philosophical views are implicitly Husserlian and that had McDowell developed them further, he would have avoided the paradoxical meta-philosophy he adopts from Wittgenstein. In conclusion, it intimates the central weakness in Husserl’s position which takes one from Husserl to Heidegger. The book is written in terms accessible to analytic philosophers and will thus enable them to see the central differences between analytic and phenomenological approaches to intentionality and self-consciousness.
Morten S. Thaning (Department of Philosophy, Politics, and Management, Copenhagen Business School), review of Carleton B. Christensen, Self and World: From Analytic Philosophy to Phenomenology. Husserl Studies 26(3). DOI: 10.1007/s10743-010-9078-2.
Tyler Burge defends the idea that memory preserves beliefs with their justifications, so that memory's role in inference adds no new justificatory demands. Against Burge's view, Christensen and Kornblith argue that memory is reconstructive and so introduces an element of a posteriori justification into every inference. I argue that Burge is right: memory does preserve content, but to defend this view we need to specify a preservative mechanism. Toward that end, I develop the idea that there is something worth calling anaphoric thinking, which preserves content in Burge's sense of "content preservation." I provide a model on which anaphoric thought is a fundamental feature of cognitive architecture, consequently rejecting the idea that there are mental pronouns in a Language of Thought. Since preservative memory is a matter of anaphoric thinking, there are limits on the analogy of memory and testimony.
In Belief and the Will, van Fraassen employed a diachronic Dutch Book argument to support a counterintuitive principle called Reflection. There and subsequently van Fraassen has put forth Reflection as a linchpin for his views in epistemology and the philosophy of science, and for the voluntarism (first-person reports of subjective probability are undertakings of commitments) that he espouses as an alternative to descriptivism (first-person reports of subjective probability are merely self-descriptions). Christensen and others have attacked Reflection, taking it to have unpalatable consequences. We prescind from the question of the cogency of diachronic Dutch Book arguments, and focus on Reflection's proper interpretation. We argue that Reflection is not as counterintuitive as it appears — that once interpreted properly the status of the counterexamples given by Christensen and others is left open. We show also that descriptivism can make sense of Reflection, while voluntarism is not especially well suited to do so.
Some philosophers believe that when epistemic peers disagree, each has an obligation to accord the other's assessment the same weight as her own. I first make the antecedent of this Equal-Weight View more precise, and then I motivate the View by describing cases in which it gives the intuitively correct verdict. Next I introduce some apparent counterexamples – cases of apparent peer disagreement in which, intuitively, one should not give equal weight to the other party's assessment. To defuse these apparent counterexamples, an advocate of the View might try to explain how they are not genuine cases of peer disagreement. I examine David Christensen's and Adam Elga's explanations and find them wanting. I then offer a novel explanation, which turns on a distinction between knowledge from reports and knowledge from direct acquaintance. Finally, I extend my explanation to provide a handy and satisfying response to the charge of self-defeat.
In a recent article, David Christensen casts aspersions on a restricted version of van Fraassen's Reflection principle, which he dubs 'Self-Respect' (SR). Rejecting two possible arguments for SR, he concludes that the principle does not constitute a requirement of rationality. In this paper we argue that not only has Christensen failed to make a case against the aforementioned arguments, but that considerations pertaining to Moore's paradox indicate that SR, or at the very least a mild weakening thereof, is indeed a plausible normative principle.
This paper explores how the Bayesian program benefits from allowing for objective chance as well as subjective degree of belief. It applies David Lewis’s Principal Principle and David Christensen’s principle of informed preference to defend Howard Raiffa’s appeal to preferences between reference lotteries and scaling lotteries to represent degrees of belief. It goes on to outline the role of objective lotteries in an application of rationality axioms equivalent to the existence of a utility assignment to represent preferences in Savage’s famous omelet example of a rational choice problem. An example motivating causal decision theory illustrates the need for representing subjunctive dependencies to do justice to intuitive examples where epistemic and causal independence come apart. We argue for extending Lewis’s account of chance as a guide to epistemic probability to include De Finetti’s convergence results. We explore diachronic Dutch book arguments as illustrating commitments for treating transitions as learning experiences. Finally, we explore implications of Martingale convergence results for motivating commitment to objective chances.
An amended version of bootstrapping can avoid Christensen's counterexamples. Earman and Edidin argue that Christensen's counterexamples to bootstrapping rely on his failure to analyze background knowledge. I add an additional condition to bootstrapping, motivated by Glymour's remarks on variety of evidence, and argue that it avoids the problems that the examples raise. I defend the modification against the charges that it is holistic and that it collapses into Bayesianism.
S. Adams, W. Ambrose, A. Andretta, H. Becker, R. Camerlo, C. Champetier, J.P.R. Christensen, D.E. Cohen, A. Connes, C. Dellacherie, R. Dougherty, R.H. Farrell, F. Feldman, A. Furman, D. Gaboriau, S. Gao, V. Ya. Golodets, P. Hahn, P. de la Harpe, G. Hjorth, S. Jackson, S. Kahane, A.S. Kechris, A. Louveau, R. Lyons, P.-A. Meyer, C.C. Moore, M.G. Nadkarni, C. Nebbia, A.L.T. Patterson, U. Krengel, A.J. Kuntz, J.-P. Serre, S.D. Sinel'shchikov, T. Slaman, Solecki, R. Spatzier, J. Steel, D. Sullivan, S. Thomas, A. Valette, V.S. Varadarajan, B. Velickovic, B. Weiss, J.D.M. Wright, R.J. Zimmer.
Christensen [Philosophy of Science, 50: 471–481, 1983] and [Philosophy of Science, 57: 644–662, 1990] provides two sets of counter-examples to the versions of bootstrap confirmation for standard first-order languages presented in Glymour [Theory and Evidence, Princeton University Press, Princeton, 1980] and [Philosophy of Science, 50: 626–629, 1983]. This paper responds to the counter-examples of Christensen [Philosophy of Science, 50: 471–481, 1983] by utilizing a new notion of content introduced in Gemes [Journal of Philosophical Logic, 26: 449–476, 1997]. It is claimed that this response is better motivated and more effective than that presented in Glymour [Philosophy of Science, 50: 626–629, 1983]. It is then argued that while this response meets some of the counter-examples of Christensen [Philosophy of Science, 57: 644–662, 1990], two of those counter-examples, though not unanswerable, suggest the need for a substantial reformulation of the formal versions of bootstrapping. The essay proceeds with such a reformulation, arguing that this new formulation better fits the philosophical insights that originally motivated bootstrapping than do Glymour’s earlier formulations. In the concluding sections some alternative solutions to the problem posed by the Christensen counter-examples are discussed.
My goal in this paper is to generalize Kirsh and Maglio's (1994) distinction between pragmatic and epistemic actions from the level of individuals to the level of groups. I use the concept of a collective epistemic action to refer to the ways in which groups of people actively change the structure of their social organization, with the epistemic goal of reshaping and augmenting their cognitive performance as integrated collectivities. By placing a renewed emphasis on the interactions between people, rather than between people and their tools, I hope to reconnect the cognitive-scientifically-driven "extended mind" thesis (Clark and Chalmers 1998; Clark 2008) with complementary areas of social-scientific research in which groups are analyzed as the seats of action and cognition in their own right. In particular, the literatures to which I aim to build a bridge in this paper are certain segments of social and organizational psychology on the one hand (Larsen and Christensen 1993; Hinsz et al. 1997; Mohammed and Dumville 2001), and theories of collective and institutional action on the other (Ostrom 1990; List and Pettit 2011).
Since Christensen refuted the Bootstrap theory of confirmation in 1990, there have been several attempts to improve the Hypothetico-Deductive theory of confirmation. Following these attempts, Gemes (1998) declared that his revised version completely overcame the difficulties of Hypothetico-Deductivism without generating any new ones. In this paper, I argue that Gemes's revised version encounters new difficulties of its own, so it cannot be a true alternative either to the Bootstrap theory of confirmation or to classical Hypothetico-Deductivism. I further argue that, in principle, such new difficulties cannot be overcome by any approach that relies on formal logic alone.
Slide #0 (Title). Before I get underway, I'd like to quickly thank a few people. First, Jonathan Vogel and John MacFarlane for working behind the scenes to make this thing happen. And, of course, David Christensen for chairing, and Patrick Maher and Jim Joyce for participating. I especially want to thank Patrick for his terrific feedback on my work this term, which has helped me to get much clearer on my project. Before we get started, does everyone have a handout? The handout contains all the slides I will be going through. That's almost everything I'm going to say. The script from which I am reading today will only occasionally embellish what's written on the slides (like right now, for instance). OK, now onto today's agenda.
We present a family of counter-examples to David Christensen's Independence Criterion, which is central to the epistemology of disagreement. Roughly, independence requires that, when you assess whether to revise your credence in P upon discovering that someone disagrees with you, you shouldn't rely on the reasoning that led you to your initial credence in P. To do so would beg the question against your interlocutor. Our counter-examples involve questions where, in the course of your reasoning, you almost fall for an easy-to-miss trick. We argue that you can use the step in your reasoning where you (barely) caught the trick as evidence that someone of your general competence level (your interlocutor) likely fell for it. Our cases show that it is permissible to use your reasoning about disputed matters to disregard an interlocutor's disagreement, so long as that reasoning is embedded in the right sort of explanation of why she finds the disputed conclusion plausible, even though it is false.
The “puzzle of the unmarked clock” derives from a conflict between the following: (1) a plausible principle of epistemic modesty, and (2) “Rational Reflection”, a principle saying how one’s beliefs about what it is rational to believe constrain the rest of one’s beliefs. An independently motivated improvement to Rational Reflection preserves its spirit while resolving the conflict.
I defend and revise the systematic account of normative functions (teleofunctions), as recently developed by Gerhard Schlosser and by W. D. Christensen and M. H. Bickhard. This account proposes that teleofunctions are had by structures that play certain kinds of roles in complex systems. This theory is an alternative to the historical etiological account of teleofunctions, developed by Ruth Millikan and others. The historical etiological account is susceptible to a general ontological problem that has been under-appreciated, and that offers important reasons to adopt the systematic account. However, the systematic account must be revised to allow for two distinct kinds of teleofunctions in order to avoid another ontological problem.