G.E. Moore, more than either Bertrand Russell or Ludwig Wittgenstein, was chiefly responsible for the rise of the analytic method in twentieth-century philosophy. This selection of his writings shows Moore at his very best. The classic essays are crucial to major philosophical debates that still resonate today. Amongst those included are: "A Defense of Common Sense", "Certainty", "Sense-Data", "External and Internal Relations", "Hume's Theory Explained", "Is Existence a Predicate?", and "Proof of an External World". In addition, this collection also contains the key early papers in which Moore signals his break with idealism, and three important previously unpublished papers from his later work which illustrate his relationship with Wittgenstein.
One hypothesis concerning the human dorsal anterior cingulate cortex (ACC) is that it functions, in part, to signal the occurrence of conflicts in information processing, thereby triggering compensatory adjustments in cognitive control. Since this idea was first proposed, a great deal of relevant empirical evidence has accrued. This evidence has largely corroborated the conflict-monitoring hypothesis, and some very recent work has provided striking new support for the theory. At the same time, other findings have posed specific challenges, especially concerning the way the theory addresses the processing of errors. Recent research has also begun to shed light on the larger function of the ACC, suggesting some new possibilities concerning how conflict monitoring might fit into the cingulate's overall role in cognition and action.
Cognitive control has long been one of the most active areas of computational modeling work in cognitive science. The focus on computational models as a medium for specifying and developing theory predates the PDP books, and cognitive control was not one of the areas on which they focused. However, the framework they provided has injected work on cognitive control with new energy and new ideas. On the occasion of the books' anniversary, we review computational modeling in the study of cognitive control, with a focus on the influence that the PDP approach has brought to bear in this area. Rather than providing a comprehensive review, we offer a framework for thinking about past and future modeling efforts in this domain. We define control in terms of the optimal parameterization of task processing. From this vantage point, the development of control systems in the brain can be seen as responding to the structure of naturalistic tasks, through the filter of the brain systems with which control directly interfaces. This perspective lays open a set of fascinating but difficult research questions, which together define an important frontier for future computational research.
Is God's foreknowledge compatible with human freedom? One of the most attractive attempts to reconcile the two is the Ockhamistic view, which subscribes not only to human freedom and divine omniscience, but retains our most fundamental intuitions concerning God and time: that the past is immutable, that God exists and acts in time, and that there is no backward causation. In order to achieve all that, Ockhamists distinguish 'hard facts' about the past which cannot possibly be altered from 'soft facts' about the past which are alterable, and argue that God's prior beliefs about human actions are soft facts about the past.
An important contribution to the foundations of probability theory, statistics and statistical physics has been made by E. T. Jaynes. The recent publication of his collected works provides an appropriate opportunity to attempt an assessment of this contribution.
What is a natural kind? As we shall see, the concept of a natural kind has a long history. Many of the interesting doctrines can be detected in Aristotle, were revived by Locke and Leibniz, and have again become fashionable in recent years. Equally, there has been agreement about certain paradigm examples: the kinds oak, stickleback and gold are natural kinds, and the kinds table, nation and banknote are not. Sadly, agreement does not extend much further. It is impossible to discover a single consistent doctrine in the literature, and different discussions focus on different doctrines without writers or readers being aware of the fact. In this paper I shall attempt to find a defensible distinction between natural and non-natural kinds.
In this philosophy classic, which was first published in 1951, E. R. Dodds takes on the traditional view of Greek culture as a triumph of rationalism. Using the analytical tools of modern anthropology and psychology, Dodds asks, "Why should we attribute to the ancient Greeks an immunity from 'primitive' modes of thought which we do not find in any society open to our direct observation?" Praised by reviewers as "an event in modern Greek scholarship" and "a book which it would be difficult to over-praise," _The Greeks and the Irrational_ was Volume 25 of the Sather Classical Lectures series.
The strong weak truth table (sw) reducibility was suggested by Downey, Hirschfeldt, and LaForte as a measure of relative randomness, alternative to the Solovay reducibility. It also occurs naturally in proofs in classical computability theory as well as in the recent work of Soare, Nabutovsky, and Weinberger on applications of computability to differential geometry. We study the sw-degrees of c.e. reals and construct a c.e. real which has no random c.e. real (i.e., Ω number) sw-above it.
How could the self be a substance? There are various ways in which it could be, some familiar from the history of philosophy. I shall be rejecting these more familiar substantivalist approaches, but also the non-substantival theories traditionally opposed to them. I believe that the self is indeed a substance—in fact, that it is a simple or noncomposite substance—and, perhaps more remarkably still, that selves are, in a sense, self-creating substances. Of course, if one thinks of the notion of substance as an outmoded relic of prescientific metaphysics—as the notion of some kind of basic and perhaps ineffable stuff—then the suggestion that the self is a substance may appear derisory. Even what we ordinarily call 'stuffs'—gold and water and butter and the like—are, it seems, more properly conceived of as aggregates of molecules or atoms, while the latter are not appropriately to be thought of as being 'made' of any kind of 'stuff' at all. But this only goes to show that we need to think in terms of a more sophisticated notion of substance—one which may ultimately be traced back to Aristotle's conception of a 'primary substance' in the Categories, and whose heir in modern times is W. E. Johnson's notion of the 'continuant'. It is the notion, that is, of a concrete individual capable of persisting identically through qualitative change, a subject of alterable predicates that is not itself predicable of any further subject.
This text seeks to trace some aspects of Sartre's reconstruction of the relations between the individual and history, attempting to show that phenomenology and dialectical materialism both figure in this project of knowledge, and that it is the convergence of the two perspectives that makes it possible, by adequately taking account of both universality and singularity, to describe and dialectically comprehend the historical mode of production of individual identity.
The dynamical systems approach in cognitive science offers a potentially useful perspective on both brain and behavior. Indeed, the importation of formal tools from dynamical systems research has already paid off for our field in many ways. However, like some other theoretical perspectives in cognitive science, the dynamical systems approach comes in both moderate or pragmatic and "fundamentalist" varieties (Jones & Love, 2011). In the latter form, dynamical systems theory can rise to some stirring rhetorical heights. However, as argued here, it also triggers a number of serious and specific reservations.
In this paper I shall venture into an area with which I am not very familiar and in which I feel far from confident; namely, into phenomenology. My main motive is not to get away from standard, boring, methodological questions like those of induction and demarcation, but rather the conviction that a phenomenological account of the empirical basis forms a necessary complement to Popper's falsificationism. According to the latter, a scientific theory is a synthetic and universal, hence unverifiable proposition. In fact, in order to be technologically useful, a scientific hypothesis must refer to future states of affairs; it ought therefore to remain unverified. But in order to be empirical, a theory must bear some kind of relation to factual statements. According to Popper, such a relation can only be one of potential conflict. Thus a theory T will be termed scientific if and only if T is logically incompatible with a so-called basic statement b, where b is both empirically verifiable and empirically falsifiable. In other words: T is scientific if it entails ¬b, where b, hence also ¬b, is an empirically decidable proposition.
Why does the problem of free will seem so intractable? I surmise that in large measure it does so because the free will debate, at least in its modern form, is conducted in terms of a mistaken approach to causality in general. At the heart of this approach is the assumption that all causation is fundamentally event causation. Of course, it is well-known that some philosophers of action want to invoke in addition an irreducible notion of agent causation, applicable only in the sphere of intelligent agency. But such a view is generally dismissed as incompatible with the naturalism that has now become orthodoxy amongst mainstream analytical philosophers of mind. What I want to argue is that substances, not events, are the primary relata of causal relations and that agent causation should properly be conceived of as a species of substance causation. I shall try to show that by thus reconceiving the nature of causation and of agency, the problem of free will can be made more tractable. I shall also argue for a contention that may seem even less plausible at first sight, namely, that such a view of agency is perfectly compatible with a volitionist theory of action.
Brette contends that the neural coding metaphor is an invalid basis for theories of what the brain does. Here, we argue that it is an insufficient guide for building an artificial intelligence that learns to accomplish short- and long-term goals in a complex, changing environment.
If one is an egalitarian, what should one want to equalize? Opportunities or outcomes? Resources or welfare? These positions are usually conceived to be very different. I argue in this paper that the distinction is misconceived: the only coherent conception of resource equality implies welfare equality, in an appropriately abstract description of the problem. In this section, I motivate the program which the rest of the paper carries out.
E-Z Reader 7 is a processing model of eye-movement control. One constraint imposed on the model is that high-level cognitive processes do not influence eye movements unless normal reading processes are disturbed. I suggest that this constraint is unnecessary, and that the model provides a sensible architecture for explaining how both low- and high-level processes influence eye movements.
Hume's famous discussion of miracles in the Enquiry Concerning Human Understanding is curious both on account of the arguments he does deploy and on account of the arguments he does not deploy, but might have been expected to. The first and second parts of this paper will be devoted to examining, respectively, these two objects of curiosity. The second part I regard as the more important, because I shall there try to show that the fact that Hume does not deploy an argument that he might have been expected to deploy in fact reflects a weakness in the view of natural laws that has come to be associated with Hume's name. I shall argue, in fact, that it is a symptom of the defectiveness of the 'Humean' view of natural laws that on that view it is only too easy to rule out the possibility of a miracle ever occurring. In the third part of the paper, I shall show how another view of laws can overcome this problem.