Sorensen here offers a unified solution to a large family of philosophical puzzles and paradoxes through a study of "blindspots": consistent propositions that cannot be rationally accepted by certain individuals even though they might be true.
Sorensen presents a general theory of thought experiments: what they are, how they work, and what their virtues and vices are. On Sorensen's view, philosophy differs from science in degree, but not in kind. For this reason, he claims, it is possible to understand philosophical thought experiments by concentrating on their resemblance to their scientific relatives. Lessons learned about scientific experimentation carry over to thought experiments, and vice versa. Sorensen also assesses the hazards and pseudo-hazards of thought experiments. Although he grants that there are interesting ways in which the method leads us astray, he attacks most scepticism about thought experiments as arbitrary. They should be used, he says, as they generally are used--as part of a diversified portfolio of techniques. All of these devices are individually susceptible to abuse, fallacy, and error. Collectively, however, they provide a network of cross-checks that make for impressive reliability.
In this book, Sorensen presents the first general theory of the thought experiment. He analyses a wide variety of thought experiments, ranging from aesthetics to zoology, and explores what thought experiments are, how they work, and what their positive and negative aspects are. Sorensen also sets his theory within an evolutionary framework and integrates recent advances in experimental psychology and the history of science.
The aim of this paper is to show how thought experiments help us learn about laws. After providing examples of this kind of nomic illumination in the first section, I canvass explanations of our modal knowledge and opt for an evolutionary account. The basic application is that the laws of nature have led us to develop rough-and-ready intuitions of physical possibility, which are then exploited by thought experimenters to reveal some of the very laws responsible for those intuitions. The good news is that natural selection ensures a degree of reliability for the intuitions. The bad news is that the evolutionary account seems to limit the range of reliable thought experiments to highly practical and concrete contexts. In the fifth section, I provide reasons for thinking that we are not as slavishly limited as a pessimistic construal of natural selection suggests. Nevertheless, I promote the idea that biology is a promising source of predictions and diagnoses of thought experiment failures.
This is a defense and extension of Stephen Yablo's claim that self-reference is completely inessential to the liar paradox. An infinite sequence of sentences of the form 'None of these subsequent sentences are true' generates the same instability in assigning truth values. I argue that Yablo's technique of substituting infinity for self-reference applies to all so-called 'self-referential' paradoxes. A representative sample is provided which includes counterparts of the preface paradox, Pseudo-Scotus's validity paradox, the Knower, and other enigmas of the genre. I rebut objections that Yablo's paradox is not a genuine liar by constructing a sequence of liars that blend into Yablo's paradox. I rebut objections that Yablo's liar has hidden self-reference with a distinction between attributive and referential self-reference and with appeals to Gregory Chaitin's algorithmic information theory. The paper concludes with comments on the mystique of self-reference.
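For illustration only (this schematic rendering is mine, not quoted from the paper), Yablo's sequence can be written as

    S_n:\quad \forall k > n,\ \neg\,\mathrm{True}(S_k) \qquad (n = 1, 2, 3, \ldots)

If some S_n were true, every later sentence would be untrue; but then S_{n+1} would correctly describe its own successors and so be true, a contradiction. If instead every S_n is untrue, then each S_n correctly describes its successors and so is true, again a contradiction. No sentence in the list refers to itself, yet no consistent assignment of truth values exists.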
The argument proceeds by exploiting the gradually decreasing vagueness of a certain sequence of predicates. The vagueness of 'vague' is then used to show that the thesis that all vague predicates are incoherent is self-defeating. A second casualty is the view that the problems of vagueness can be avoided by restricting the scope of logic to nonvague predicates.
Sorensen, R. A. (1984). Conditional blindspots and the knowledge squeeze: A solution to the prediction paradox. Australasian Journal of Philosophy, 62(2), 126-135.
Stereotypically, computation involves intrinsic changes to the medium of representation: writing new symbols, erasing old symbols, turning gears, flipping switches, sliding abacus beads. Perspectival computation leaves the original inscriptions untouched. The problem solver obtains the output by merely altering his orientation toward the input. There is no rewriting or copying of the input inscriptions; the output inscriptions are numerically identical to the input inscriptions. This suggests a loophole through some of the computational limits apparently imposed by physics. There can be symbol manipulation without inscription manipulation because symbols are complex objects that have manipulatable elements besides their inscriptions. Since a written symbol is an ordered pair consisting of a shape and the reader's orientation to that inscription, the symbol can be changed by changing the orientation rather than the inscription. Although there are the usual physical limits associated with reading the answer, the computation is itself instantaneous. This is true even when the sub-calculations are algorithmically complex, exponentially increasing, or even infinite.
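As a rough sketch of this ordered-pair picture of symbols (my own illustration in Python, with hypothetical names such as Symbol and compute_reversal; Sorensen gives no such code), a "computation" can leave the inscription untouched and operate only on the reader's orientation:

    from dataclasses import dataclass

    @dataclass
    class Symbol:
        inscription: str          # the marks themselves; never rewritten
        orientation: str = "LTR"  # the reader's stance: left-to-right or right-to-left

        def denotation(self) -> str:
            # What the symbol expresses depends on the pair (shape, orientation),
            # not on the inscription alone.
            return self.inscription if self.orientation == "LTR" else self.inscription[::-1]

    def compute_reversal(sym: Symbol) -> Symbol:
        # Perspectival "computation": the output inscription is numerically
        # identical to the input inscription; only the orientation changes.
        sym.orientation = "RTL" if sym.orientation == "LTR" else "LTR"
        return sym

    s = Symbol("1101")
    print(s.denotation())                    # 1101 -- the input reading
    print(compute_reversal(s).denotation())  # 1011 -- same marks, new reading

The inscription is never copied or erased; reversing the string is accomplished entirely by re-reading the same marks in the opposite direction.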
Vagueness theorists tend to think that evolutionary theory dissolves the riddle "Which came first, the chicken or the egg?". After all, 'chicken' is vague. The idea is that Charles Darwin demonstrated that the chicken was preceded by borderline chickens and so it is simply indeterminate as to where the pre-chickens end and the chickens begin.
Peter Slezak and William Boos have independently advanced a novel interpretation of Descartes's "cogito". The interpretation portrays the "cogito" as a diagonal deduction and emphasizes its resemblance to Gödel's theorem and the Liar. I object that this approach is flawed by the fact that it assigns 'Buridan sentences' a legitimate role in Descartes's philosophy. The paradoxical nature of these sentences would have the peculiar result of undermining Descartes's "cogito" while enabling him to "disprove" God's existence.
Stepping into the other guy's shoes works best when you resemble him. After all, the procedure is to use yourself as a model: in go hypothetical beliefs and desires, out come hypothetical actions and revised beliefs and desires. If you are structurally analogous to the empathee, then accurate inputs generate accurate outputs--just as with any other simulation. The greater the degree of isomorphism, the more dependable and precise the results. This sensitivity to degrees of resemblance suggests that the method of empathy works best for average people. The advantage of being a small but representative sample of the population will create a bootstrap effect. For as average people prosper, there will be more average descendants, and so the degree of resemblance in subsequent generations will snowball. Each increment in like-mindedness further enhances the reliability and validity of mental simulation. With each circuit along the spiral, there is tighter and tighter bunching and hence further empowerment of empathy. The method is self-strengthening and eventually molds a population of hyper-similar individuals--which partly solves the problem of other minds.
Drawing inspiration from the ethical pluralism of G. E. Moore's Principia Ethica, I contend that one empty world can be morally better than another. By 'empty' I mean that it is devoid of concrete entities (things that have a position in space or time). These worlds have no thickets or thimbles, no thinkers, no thoughts. Infinitely many of these worlds have laws of nature, abstract entities, and perhaps space and time. These non-concrete differences are enough to make some of them better than others.
My thesis is that ‘rational’ is an absolute concept like ‘flat’ and ‘clean’. Absolute concepts are best defined as absences. In the case of flatness, the absence of bumps, curves, and irregularities. In the case of cleanliness, the absence of dirt. Rationality, then, is the absence of irrationalities such as bias, circularity, dogmatism, and inconsistency.
In the twentieth century, philosophers tackled many of the philosophical problems of previous generations by dissolving them--attacking them as linguistic illusions and showing that the problems, when closely inspected, were not problems at all. Roy A. Sorensen takes the most important and interesting examples from one hundred years of analytic philosophy to consolidate a different theory of dissolution. Pseudo-Problems offers a fascinating alternative history of twentieth century analytic philosophy. It seeks to outline a unified account of dissolution that can consolidate the piecemeal insights of analytic philosophers. An accessible account of questionable questions, the book represents an important contribution to the debates about creativity and problem solving.
Poindexter points and asserts 'That is Clinton'. But it is vague whether he pointed at Clinton or at the more salient man, Gore. Since the vagueness occurs only at the level of reference fixing, the content of the identity proposition is precise. Indeed, it is either a necessary truth or a necessary falsehood. Since Poindexter's utterance has a hidden truth value by virtue of vagueness, it increases the plausibility of epistemicism. Epistemicism says that vague statements have hidden truth values. If a precise statement can have a hidden truth value conferred indirectly by vagueness, then a vague statement can have a hidden truth value conferred directly by its own vagueness.
This paper is devoted to a solution to Moore's problem. After explaining what Moore's problem is and considering the main approaches to solving it, I provide a definition of Moorean sentences in terms of pure Moorean propositions. My solution to Moore's problem essentially involves a description of how one can contradict oneself without uttering a contradiction, and a set of definitions that exactly determines which sentences are Moorean and which are close relatives of Moorean sentences.