Amongst philosophers and cognitive scientists, modularity remains a popular choice for an architecture of the human mind, primarily because of the supposed explanatory value of this approach. Modular architectures can vary both with respect to the strength of the notion of modularity and the scope of the modularity of mind. We propose a dilemma for modular architectures, no matter how these architectures vary along these two dimensions. First, if a modular architecture commits to the informational encapsulation of modules, as is the case for modularity theories of perception, then modules are on this account impenetrable. However, we argue that there are genuine cases of the cognitive penetrability of perception and that these cases challenge any strong, encapsulated modular architecture of perception. Second, many recent massive modularity theories weaken the strength of the notion of module, while broadening the scope of modularity. These theories do not require any robust informational encapsulation, and thus avoid the incompatibility with cognitive penetrability. However, the weakened commitment to informational encapsulation significantly weakens the explanatory force of the theory and, ultimately, is conceptually at odds with the core of modularity. We then propose a non-modular notion of functionally independent system that, we argue, achieves the explanatory force sought by modularity theorists.
Since social skills are highly significant to the evolutionary success of humans, we should expect these skills to be efficient and reliable. For many Evolutionary Psychologists, efficiency entails encapsulation: the only way to get an efficient system is via information encapsulation. But encapsulation reduces reliability in opaque epistemic domains. And the social domain is darkly opaque: people lie and cheat, and deliberately hide their intentions and deceptions. Modest modularity [Currie and Sterelny (2000) Philos Q 50:145–160] attempts to combine efficiency and reliability. Reliability is obtained by placing social skills in un-encapsulated central cognition; efficiency by having the social system sensitive to encapsulated socially tagged cues. In this paper, I argue that this approach fails. I focus on eye-gaze as a plausible example of a socially significant encapsulated cue. I demonstrate contra modest modularity that eye-gaze is subject to influence from central cognition.
Is vision informationally encapsulated from cognition or is it cognitively penetrated? I shall argue that intentions penetrate vision in the experience of visual spatial constancy: the world appears to be spatially stable despite our frequent eye movements. I explicate the nature of this experience and critically examine and extend current neurobiological accounts of spatial constancy, emphasizing the central role of motor signals in computing such constancy. I then provide a stringent condition for failure of informational encapsulation that emphasizes a computational condition for cognitive penetration: cognition must serve as an informational resource for visual computation. This requires proposals regarding semantic information transfer, a crucial issue in any model of informational encapsulation. I then argue that intention provides an informational resource for computation of visual spatial constancy. Hence, intention penetrates vision.
My aim in this paper is to defend the view that the processes underlying early vision are informationally encapsulated. Following Marr (1982) and Pylyshyn (1999) I take early vision to be a cognitive process that takes sensory information as its input and produces the so-called primal sketches or shallow visual outputs: informational states that represent visual objects in terms of their shape, location, size, colour and luminosity. Recently, some researchers (Schirillo 1999, Macpherson 2012) have attempted to undermine the idea of the informational encapsulation of early vision by referring to experiments that seem to show that colour recognition is affected by the subject's beliefs about the typical colour of objects. In my view, however, one can reconcile the results of these experiments with the position that early vision is informationally encapsulated. Namely, I put forth a hypothesis according to which the early vision system has access to a local database that I call the mental palette and define as a network of associative links whose nodes stand for shapes and colours. The function of the palette is to facilitate colour recognition without employing central processes. I also describe two experiments by which the mental palette hypothesis can be tested.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when---and only when---it increases the reliability of perception.
The quantum dynamics of a hydrogen molecule encapsulated inside the cage of a C60 fullerene molecule is investigated using inelastic neutron scattering (INS). The emphasis is on the temperature dependence of the INS spectra which were recorded using time-of-flight spectrometers. The hydrogen endofullerene system is highly quantum mechanical, exhibiting both translational and rotational quantization. The profound influence of the Pauli exclusion principle is revealed through nuclear spin isomerism. INS is shown to be exceptionally able to drive transitions between ortho-hydrogen and para-hydrogen which are spin-forbidden to photon spectroscopies. Spectra in the temperature range 1.6≤T≤280 K are presented, and examples are given which demonstrate how the temperature dependence of the INS peak amplitudes can provide an effective tool for assigning the transitions. It is also shown in a preliminary investigation how the temperature dependence may conceivably be used to probe crystal field effects and inter-fullerene interactions.
Churchland's paper "Perceptual Plasticity and Theoretical Neutrality" offers empirical, semantical and epistemological arguments intended to show that the cognitive impenetrability of perception "does not establish a theory-neutral foundation for knowledge" and that the psychological account of perceptual encapsulation that I set forth in The Modularity of Mind "[is] almost certainly false". The present paper considers these arguments in detail and dismisses them.
Language is at the core of the cognitive revolution that has transformed psychology over the last forty years or so, and it is also the central paradigm for the most prominent attempt to synthesise psychology and evolutionary theory. A single and distinctively modular view of language has emerged out of both these perspectives, one that encourages a certain idealisation. Linguistic competence is uniform, independent of other cognitive capacities, and with a developmental trajectory that is largely independent of environmental input (Pinker 1994; Pinker 1997). Thus language is seen as a paradigm of John Tooby and Leda Cosmides’ concept of “evoked culture”: linguistic experience serves only to select a specific item from a menu of innately available options (Tooby and Cosmides 1992). In explaining this concept, they appeal to the metaphor of a jukebox. The human genome pre-stores a set of options, and the different experiences provided by different cultures select different elements out of this option set. I think an appropriate evolutionary perspective on language substantially undercuts this idealisation and the evoked culture model of language. Variability between speakers; the sensitivity of linguistic development to environmental input; and the limits of encapsulation are not noise. They are central to language and its evolution.
Paul Churchland has recently argued that empirical evidence strongly suggests that perception is penetrable to the beliefs or theories held by individual perceivers (1988). While there has been much discussion of the sorts of psychological cases he presents, little has been said about his arguments from neurology. I offer a critical examination of his claim that certain efferents in the brain are evidence against perceptual encapsulation. I argue that his neurological evidence is inadequate to his philosophical goals, both by itself and taken in concert with his psychological evidence.
The direct reading emission spectrometer was developed during the 1940s. By substituting photo-multiplier tubes and electronics for photographic film spectrograms, the interpretation of spectral lines with a densitometer was avoided. Instead, the instrument provided the desired information concerning the percentage concentration of elements of interest directly on a dial. Such instruments `de-skill' the job of making such measurements. They do this by encapsulating in the instrument the skills previously employed by the analyst, by `skilling' the instrument. This paper presents a history of the development of the Dow Chemical/Baird Associates direct reader. This history is used to argue for a materialist conception of knowledge. The instrument is a material form of knowledge: knowledge of aspects of spectroscopy, analytical spectrochemistry, electronics, instrument design and construction, and metal production industry economics.
One major idea within the great epic of the Mahabharata is the concept of fate. Daiva, literally 'of the gods', could be said to direct or even manipulate every character and theme throughout the entire epic. The story of Nala and Damayanti offers us an opportunity for insight into Daiva within the epic as a whole. The short story, when placed in the Mahabharata, results in an interesting encapsulation of a love story, numerous metaphors and a tale of initial loss and eventual redemption. Through the investigation of each character's specific dharma, we will see that actions and consequences seemingly blend together, with an arguable disregard for the passage of time. Throughout the story of Nala and Damayanti, we will notice the overarching theme of fate. Human choice and divine authority are questioned as people and gods are unable to escape from what must be.
The view that moral cognition is subserved by a two-tiered architecture is defended: Moral reasoning is the result both of specialized, informationally encapsulated modules which automatically and effortlessly generate intuitions; and of general-purpose, cognitively penetrable mechanisms which enable moral judgment in the light of the agent's general fund of knowledge. This view is contrasted with rival architectures of social/moral cognition, such as Cosmides and Tooby's view that the mind is wholly modular, and it is argued that a two-tiered architecture is more plausible.
One of the most foundational and continually contested questions in the cognitive sciences is the degree to which the functional organization of the brain can be understood as modular. In its classic formulation, a module was defined as a cognitive sub-system with (all or most of) nine specific properties; the classic module is, among other things, domain specific, encapsulated (i.e. maintains proprietary representations to which other modules have no access), and implemented in dedicated neural substrates. Most of the examinations—and especially the criticisms—of the modularity thesis have focused on these properties individually, for instance by finding counterexamples in which otherwise good candidates for cognitive modules are shown to lack domain specificity or encapsulation. The current paper goes beyond the usual approach by asking what some of the broad architectural implications of the modularity thesis might be, and attempting to test for these. The evidence does not favor a modular architecture for the cortex. Moreover, the evidence suggests that the best way to approach the understanding of cognition is not by analyzing and modelling different functional domains (visual perception, attention, language, motor control, etc.) in isolation from the others, but rather by looking for points of overlap in their neural implementations, and exploiting these to guide the analysis and decomposition of the functions in question. This has significant implications for the question of how to approach the design and implementation of intelligent artifacts in general, and language-using robots in particular.
In Computer Science stepwise refinement of algebraic specifications is a well-known formal methodology for rigorous program development. This paper illustrates how techniques from Algebraic Logic, in particular that of interpretation, understood as a multifunction that preserves and reflects logical consequence, capture a number of relevant transformations in the context of software design, reuse, and adaptation, difficult to deal with in classical approaches. Examples include data encapsulation and the decomposition of operations into atomic transactions. But if interpretations open such a new research avenue in program refinement, (conceptual) tools are needed to reason about them. In this line, the paper’s main contribution is a study of the correspondence between logical interpretations and morphisms of a particular kind of coalgebras. This opens the way to the use of coalgebraic constructions, such as simulation and bisimulation, in the study of interpretations between (abstract) logics.
We respond to Farah (1994) by making some general remarks about information encapsulation and locality and asking how these are violated in her computational models. Our point is not that we disagree, but rather that Farah's treatment of the issues is not sufficiently rigorous to allow an evaluation of her claims.
Thomas & Karmiloff-Smith (T&K-S) raise the excellent and, in retrospect, obvious point that in a dynamic learning environment where feedback is possible, we should expect networks to adapt to damage by altering details of their behavior. We should therefore not expect that developmental disorders should result in “normal” modules. The implications of this point go much further, since interprocess dependency in the brain does not rely only on learned neural connections. This argues strongly against behavioral and process-related definitions, as opposed to structural and architecture-related definitions, of mental modularity.
According to Pylyshyn, the early visual system is able to categorize perceptual inputs into shape classes based on visual similarity criteria; it is also suggested that written words may be categorized within early vision. This speculation is contradicted by the fact that visually unrelated exemplars of a given letter (e.g., a/A) or word (e.g., read/READ) map onto common visual categories.
The target article argues for the modularity of language interpretive processes without the usual criterion that a module be informationally encapsulated. It is the encapsulation criterion, however, that gives modularity most of its testability. Without the criterion of encapsulation, testing whether relatively automatic comprehension processes use their own unique resource is a very tricky matter.
Inspired by the thinking of authors such as Andrew Feenberg, Tim Ingold and Richard Sennett, this article sets forth substantial criticism of the ‘social uprooting of technology’ paradigm, which deterministically considers modern technology an autonomous entity, independent and indifferent to the social world (practices, skills, experiences, cultures, etc.). In particular, the authors focus on demonstrating that the philosophy, methodology and experience linked to open source technological development represent an emblematic case of re-encapsulation of the technical code within social relations (reskilling practices). Open source is discussed as a practice, albeit not unique, of community empowerment aimed at the participatory and shared rehabilitation of technological production ex-ante. Furthermore, the article discusses the application of open source processes in the agro-biotechnological field, showing how they may support a more democratic endogenous development, capable of binding technological innovation to the objectives of social (reducing inequalities) and environmental sustainability to a greater degree.
Medical Humanities the journal started life in 2000 as a special edition of the JME. However, the intellectual taproots of the medical humanities as a field of enquiry can be traced to two developments: calls made in the 1920s for the development of multidisciplinary perspectives on the sciences that shed historical light on their assumptions, methods and practices; refusals to assimilate all medical phenomena to a biomedical worldview. Medical humanities the term stems from a desire to situate the significance of medicine as a product of culture. But despite growing usage over half a century the term defies a unifying encapsulation and continues to conjure up a multitude of discourse communities, including scholars working at the interfaces of health and humanities, arts and health, and medical education and bioethics. The field is intellectually capacious and polymorphous, forming and reforming around critical new research questions and teaching tasks spanning disciplines.
We report in-situ measurement of both (200) and (002) diffraction profiles (parallel and perpendicular to the tensile axis) and of the lattice mismatch of the AM1 superalloy during a tensile creep experiment (150 MPa; 1080°C). The measurements were made by high-resolution high-energy X-ray diffraction at the ID 15A beam line of the European Synchrotron Radiation Facility. Peak shape and lattice mismatch have well-defined non-monotonic behaviours clearly related to the evolution of the microstructure (rafting, ripening and encapsulation of the γ phase) and the different stages of the creep curve. Modelling of the raft microstructure as a multilayer gives a good description of the experimental parameters (peak shape and lattice mismatch) during stage II, and a qualitative explanation of their behaviour during stage III.
We have investigated the structure and nuclear magnetic resonance (NMR) spectroscopic properties of some dihydrogen endofullerene nitroxides by means of density-functional theory (DFT) calculations. Quantum versus classical roto-translational dynamics of H2 have been characterized and compared. Geometrical parameters and hyperfine couplings calculated by DFT have been input to the Solomon–Bloembergen equations to predict the enhancement of the NMR longitudinal relaxation of H2 due to coupling with the unpaired electron. Estimating the rotational correlation time via computed molecular volumes leads to a fair agreement with experiment for the simplest derivative; the estimate is considerably improved by recourse to the calculation of the diffusion tensor. For the other more flexible congeners, the agreement is less good, which may be due to an insufficient sampling of the conformational space. In all cases, relaxation by Fermi contact and Curie mechanisms is predicted to be negligible.
This paper presents use cases for modular development of ontologies using the OWL imports mechanism. Many of the methods are inspired by work in modular development in software engineering. The approach is aimed at developers of large ontologies covering multiple subdomains that make use of OWL reasoners for inference. Such ontologies are common in biomedical sciences, but nothing in the paper is specific to biomedicine. There are four groups of use cases: (i) organisation and factoring of ontologies; (ii) maintaining stable interfaces and bindings between ontologies and between ontologies and software; (iii) localization of ontologies to the requirements of specific sites and (iv) extension of ontologies and encapsulation of modifications. OWL's axiom-oriented import mechanism has many similarities with import mechanisms in object-oriented software but also important differences – in particular, the effects of OWL imports are global, and the order in which modules are imported is irrelevant. The advantages and disadvantages of OWL's axiom-oriented approach are discussed, and suggestions are made for extensions to allow axioms to be filtered out as well as added – a mechanism that we term “adaptation” to distinguish it from the standard import mechanism. Finally we discuss possible alternatives and practical experience with the approaches presented.
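The abstract's point that OWL imports are global and order-irrelevant can be pictured as a union of axiom sets over the import closure. The following is a minimal illustrative sketch in Python of that semantics, not the OWL API or the paper's own tooling; all ontology names and axiom strings here are hypothetical.

```python
# Toy model of OWL-style imports: an ontology is a set of axiom
# strings plus a list of imported ontology names. The effect of an
# import is global (the whole import closure is merged), so the
# resulting axiom set cannot depend on import order.

def import_closure(name: str, ontologies: dict) -> frozenset:
    """Collect the axioms of an ontology and of everything it imports."""
    seen, stack, axioms = set(), [name], set()
    while stack:
        current = stack.pop()
        if current in seen:
            continue  # each ontology is merged at most once
        seen.add(current)
        axioms |= ontologies[current]["axioms"]
        stack.extend(ontologies[current]["imports"])
    return frozenset(axioms)

# Two hypothetical modules and two top-level ontologies that import
# them in opposite orders.
ontologies = {
    "anatomy": {"axioms": {"Heart subClassOf Organ"}, "imports": []},
    "disease": {"axioms": {"Carditis subClassOf Disease"}, "imports": []},
    "top_ab":  {"axioms": set(), "imports": ["anatomy", "disease"]},
    "top_ba":  {"axioms": set(), "imports": ["disease", "anatomy"]},
}

# Import order is irrelevant: both closures contain the same axioms.
assert import_closure("top_ab", ontologies) == import_closure("top_ba", ontologies)
```

Because merging is a set union, the model also shows why filtering axioms out (the "adaptation" the paper proposes) requires a genuinely different mechanism: union alone can only ever add axioms.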
In the Philosophy of Cognitive Science, it is a commonly held view that the modularity hypothesis for cognitive mechanisms and the innateness hypothesis for mental contents are conceptually independent. In this paper I distinguish between substantial and deflationist modularity as well as between substantial and deflationist innatism, and I analyze whether the conceptual independence between substantial modularity and innatism holds. My conclusion will be that if what is taken into account are the essential properties of the substantial modules, i.e. domain specificity and informational encapsulation, then there seems to be such independence. However, if what is taken into account is the function of the substantial modules, then there seems to be a conceptual connection from modularity to substantial innateness.
Since polymeric micelles are promising and have potential in drug delivery systems, people have become more interested in studying the compatibility of polymeric carriers and drugs, which might help them to simplify the preparation method and increase the micellar stability. In this article, we report that cationic amphiphilic drugs can be easily encapsulated into PEGylated phospholipid (PEG–PE) micelles by self-assembly method and that they show high encapsulation efficiency, controllable drug release and better micellar stability than empty micelles. The representative drugs are doxorubicin and vinorelbine. However, gemcitabine and topotecan are not suitable for PEG–PE micelles due to lack of positive charge or hydrophobicity. Using a series of experiments and molecular modelling, we figured out the assembly mechanism, structure and stability of drug-loaded micelles, and the location of drugs in micelles. Integrating the above information, we explain the effect of the predominant force between drugs and polymers on the assembly mechanism and drug release behaviour. Furthermore, we discuss the importance of pKa in evaluating the compatibility of drugs with PEG–PE in the self-assembly preparation method. In summary, this work provides a scientific understanding for the rational design of PEG–PE micelle-based drug encapsulation and might enlighten the future study on drug–polymer compatibility for other polymeric micelles.
The broad objective of this paper is to examine the evolution of gendered aspects of livelihood strategies and their interaction with various development interventions. Central to this is an empirical analysis of gendered divisions of labor in the context of rapidly changing pastoralist livelihoods. The paper begins with a literature review on gender roles in pastoralist societies. Two important gaps in the existing literature are identified. First, studies on gender roles are too often studies on women’s roles as men’s roles are rarely included. Secondly, despite a recognition that pastoral livelihoods are rapidly changing, much of the research has ignored the gendered impacts of this change. The study area is Loitokitok Division, Kajiado District, Kenya. Field data were collected in an extensive household survey, key informant interviews, and group discussions held in two field seasons between 2001 and 2004. Results indicate that development interventions led to land use encapsulation, sedentarization, new ways of accessing dry season grazing areas, new land uses, new livestock breeds, and increased school enrollment. In the context of these livelihood changes and increasing drought, a fundamental shift in gendered roles in livestock production has occurred. Maasai women in the study area contribute more labor to livestock production than men do. Various efforts to modernize the livestock sector are leading to a loss of women’s control of milk resources. This finding has important implications for current and future development interventions in pastoralist communities and their ability to improve livelihoods of the most vulnerable sections of the population.
Abstract. In Dynamics of Reason Michael Friedman proposes a kind of synthesis between the neokantianism of Ernst Cassirer, the logical empiricism of Rudolf Carnap, and the historicism of Thomas Kuhn. Cassirer and Carnap are to take care of the Kantian legacy of modern philosophy of science, encapsulated in the concept of a relativized a priori and the globally rational or continuous evolution of scientific knowledge, while Kuhn's role is to ensure that the historicist character of scientific knowledge is taken seriously. More precisely, Carnapian linguistic frameworks guarantee that the evolution of science proceeds in a rational manner locally, while Cassirer's concept of an internally defined conceptual convergence of empirical theories provides the means to maintain the global continuity of scientific reason. In this paper it is argued that Friedman's neokantian account of scientific reason based on the concept of the relativized a priori underestimates the pragmatic aspects of the dynamics of scientific reason. To overcome this shortcoming, I propose to reconsider C.I. Lewis's account of a pragmatic a priori, recently modernized and elaborated by Hasok Chang. Keywords: Dynamics of reason, Paradigms, Logical Empiricism, Neokantianism, Pragmatism, Mathematics, Communicative Rationality.
Inferentialism claims that expressions are meaningful by virtue of rules governing their use. In particular, logical expressions are autonomous if given meaning by their introduction-rules, rules specifying the grounds for assertion of propositions containing them. If the elimination-rules do no more, and no less, than is justified by the introduction-rules, the rules satisfy what Prawitz, following Lorenzen, called an inversion principle. This connection between rules leads to a general form of elimination-rule, and when the rules have this form, they may be said to exhibit “general-elimination” harmony. Ge-harmony ensures that the meaning of a logical expression is clearly visible in its I-rule, and that the I- and E-rules are coherent, in encapsulating the same meaning. However, it does not ensure that the resulting logical system is normalizable, nor that it satisfies the conservative extension property, nor that it is consistent. Thus harmony should not be identified with any of these notions.
In my book How the Mind Works, I defended the theory that the human mind is a naturally selected system of organs of computation. Jerry Fodor claims that 'the mind doesn't work that way' (in a book with that title) because (1) Turing Machines cannot duplicate humans' ability to perform abduction (inference to the best explanation); (2) though a massively modular system could succeed at abduction, such a system is implausible on other grounds; and (3) evolution adds nothing to our understanding of the mind. In this review I show that these arguments are flawed. First, my claim that the mind is a computational system is different from the claim Fodor attacks (that the mind has the architecture of a Turing Machine); therefore the practical limitations of Turing Machines are irrelevant. Second, Fodor identifies abduction with the cumulative accomplishments of the scientific community over millennia. This is very different from the accomplishments of human common sense, so the supposed gap between human cognition and computational models may be illusory. Third, my claim about biological specialization, as seen in organ systems, is distinct from Fodor's own notion of encapsulated modules, so the limitations of the latter are irrelevant. Fourth, Fodor's arguments dismissing the relevance of evolution to psychology are unsound.
Variabilism is the view that proper names (like pronouns) are semantically represented as variables. Referential names, like referential pronouns, are assigned their referents by a contextual variable assignment (Kaplan 1989). The reference parameter (like the world of evaluation) may also be shifted by operators in the representation language. Indeed verbs that create hyperintensional contexts, like ‘think’, are treated as operators that simultaneously shift the world and assignment parameters. By contrast, metaphysical modal operators shift the world of assessment only. Names, being variables, refer rigidly in the latter merely intensional contexts, but may vary their reference in hyperintensional contexts. This conforms to the intuition that the content of attitude ascriptions encapsulates referential uncertainty. Furthermore, names in hyperintensional contexts are ambiguous between de re* and de dicto* interpretations. This fact is used to account for asymmetric mistaken identity attributions (for example, Biron thinks Katherine is Rosaline, but he doesn’t think Rosaline is Katherine). The variable theory compares favourably with its alternatives, including Millianism and descriptivism. Millians cannot account for the behaviour of names in hyperintensional contexts, while descriptivists cannot generate a necessary contrast between intensional and hyperintensional contexts. No other theory can capture the facts pertaining to the existentially bound use of names.
I present an argument that encapsulates the view that theory is underdetermined by evidence. I show that if we accept Williamson's equation of evidence and knowledge, then this argument is question-begging. I examine ways in which defenders of underdetermination may avoid this criticism. I also relate this argument and my critique to van Fraassen's constructive empiricism.
Chakravartty claims that science does not imply any specific metaphysical theory of the world. In this sense, science is consistent with both neo-Aristotelianism and neo-Humeanism. But, along with many others, he thinks that a neo-Aristotelian outlook best suits science. In other words, neo-Aristotelianism is supposed to win on the basis of an inference to the best explanation (IBE). I fail to see how IBE can be used to favour neo-Aristotelianism over neo-Humeanism. In this essay, I aim to do two things. Firstly, I explain why this failure is not idiosyncratic: it should be there even by Chakravartty's lights. Secondly, I raise some critical worries about Chakravartty's semirealism, especially in connection with the concept of a 'concrete structure' and the detection/auxiliary distinction. The essay ends with a dilemma: an exclusive disjunction encapsulated in its title.
The idea that there is a “Number Sense” (Dehaene, 1997) or “Core Knowledge” of number ensconced in a modular processing system (Carey, 2009) has gained popularity as the study of numerical cognition has matured. However, these claims are generally made with little, if any, detailed examination of which modular properties are instantiated in numerical processing. In this article, I aim to rectify this situation by detailing the modular properties on display in numerical cognitive processing. In the process, I review literature from across the cognitive sciences and describe how the evidence reported in these works supports the hypothesis that numerical cognitive processing is modular. I outline the properties that would suffice for deeming a certain processing system a modular processing system. Subsequently, I use behavioral, neuropsychological, philosophical, and anthropological evidence to show that the number module is domain specific, informationally encapsulated, neurally localizable, subject to specific pathological breakdowns, mandatory, fast, and inaccessible at the person level; in other words, I use the evidence to demonstrate that some of our numerical capacity is housed in modular casing.
The notion of the absolute time-constituting flow plays a central role in Edmund Husserl’s analysis of our consciousness of time. I offer a novel reading of Husserl’s remarks on the absolute flow, on which Husserl can be seen to be grappling with two key intuitions that are still at the centre of current debates about temporal experience. One of them is encapsulated by what is sometimes referred to as an intentionalist (as opposed to an extensionalist) approach to temporal experience. The other centres on the thought that temporal experience itself necessarily unfolds over time. I show how some of Husserl’s more enigmatic-sounding remarks about the absolute flow become intelligible if they are read as attempts to accommodate both these intuitions at the same time. However, I also question whether Husserl ultimately provides good reasons for preferring his intentionalist approach to a rival extensionalist one.
This paper explores Paul Feyerabend's (1924-1994) skeptical arguments for "anarchism" in his early writings between 1960 and 1975. Feyerabend's position is encapsulated by his well-known suggestion that the only principle for scientific method that can be defended under all circumstances is: "anything goes." I present Feyerabend's anarchism as a recommendation for pluralism that assumes a realist view of scientific theories. The aims of this paper are threefold: (1) to present a defensible view of Feyerabend's anarchism and its motivations, (2) to articulate the minimal form of realism that such a view presupposes, and (3) to consider the implications and limitations of such a perspective in contemporary philosophy of science.
It is widely accepted that the ethical supervenes on the natural, where this is roughly the claim that it is impossible for two circumstances to be identical in all natural respects, but different in their ethical respects. This chapter refines and defends the traditional thought that this fact poses a significant challenge to ethical non-naturalism, a view on which ethical properties are fundamentally different in kind from natural properties. The challenge can be encapsulated in three core claims which the chapter defends: that a defensible non-naturalism is committed to the supervenience of the ethical, that this commits the non-naturalist to a brute necessary connection between properties of distinct kinds, and that commitment to such brute connections counts against the non-naturalist’s view. Each of these claims has recently been challenged. Against Nicholas Sturgeon’s recent doubts about the dialectical force of supervenience, this chapter defends a supervenience thesis as deserving to be common ground among ethical realists. It is then argued that attempts to explain supervenience on behalf of the non-naturalist either fail as explanations, generate near-identical explanatory burdens elsewhere, or appeal to commitments that are inconsistent with core motivations for non-naturalism. The chapter concludes that, suitably refined, the traditional argument against non-naturalism from supervenience is alive and well.
What are the brain and cognitive systems that allow humans to play baseball, compute square roots, cook soufflés, or navigate the Tokyo subways? It may seem that studies of human infants and of non-human animals will tell us little about these abilities, because only educated, enculturated human adults engage in organized games, formal mathematics, gourmet cooking, or map-reading. In this chapter, we argue against this seemingly sensible conclusion. When human adults exhibit complex, uniquely human, culture-specific skills, they draw on a set of psychological and neural mechanisms with two distinctive properties: they evolved before humanity and thus are shared with other animals, and they emerge early in human development and thus are common to infants, children, and adults. These core knowledge systems form the building blocks for uniquely human skills. Without them we wouldn’t be able to learn about different kinds of games, mathematics, cooking, or maps. To understand what is special about human intelligence, therefore, we must study both the core knowledge systems on which it rests and the mechanisms by which these systems are orchestrated to permit new kinds of concepts and cognitive processes. What is core knowledge? A wealth of research on non-human primates and on human infants suggests that a system of core knowledge is characterized by four properties (Hauser, 2000; Spelke, 2000). First, it is domain-specific: each system functions to represent particular kinds of entities such as conspecific agents, manipulable objects, places in the environmental layout, and numerosities. Second, it is task-specific: each system uses its representations to address specific questions about the world, such as “who is this?” [face recognition], “what does this do?” [categorization of artifacts], “where am I?” [spatial orientation], and “how many are here?” [enumeration].
Third, it is relatively encapsulated: each uses only a subset of the information delivered by an animal’s input systems and sends information only to a subset of the animal’s output systems.
Many philosophers favour the simple knowledge account of assertion, which says you may assert something only if you know it. The simple account is true but importantly incomplete. I defend a more informative thesis, namely, that you may assert something only if your assertion expresses knowledge. I call this 'the express knowledge account of assertion', which I argue better handles a wider range of cases while at the same time explaining the simple knowledge account's appeal. §1 introduces some new data that a knowledge account of assertion well explains. §2 explains the simple knowledge account's advantage over two of its main competitors. §3 presents a problem for the simple account and offers a solution, which is to adopt the express knowledge account. §4 encapsulates the case for the express knowledge account, and offers a unifying vision for the epistemology of belief and assertion. §5 answers an objection. §6 briefly sums up. Even those who ultimately reject my conclusion can still benefit from the new data presented in §1, and learn an important lesson from the problem discussed in §3, which demonstrates a general constraint on an acceptable account of the norm of assertion.
Cognitive science is, more than anything else, a pursuit of cognitive mechanisms. To make headway towards a mechanistic account of any particular cognitive phenomenon, a researcher must choose among the many architectures available to guide and constrain the account. It is thus fitting that this volume on contemporary debates in cognitive science includes two issues of architecture, each articulated in the 1980s but still unresolved:
• Just how modular is the mind? (section 1) – a debate initially pitting encapsulated mechanisms (Fodorian modules that feed their ultimate outputs to a nonmodular central cognition) against highly interactive ones (e.g., connectionist networks that continuously feed streams of output to one another).
• Does the mind process language-like representations according to formal rules? (this section) – a debate initially pitting symbolic architectures (such as Chomsky’s generative grammar or Fodor’s language of thought) against less language-like architectures (such as connectionist or dynamical ones).
Our project here is to consider the second issue within the broader context of where cognitive science has been and where it is headed. The notion that cognition in general—not just language processing—involves rules operating on language-like representations actually predates cognitive science. In traditional philosophy of mind, mental life is construed as involving propositional attitudes—that is, such attitudes towards propositions as believing, fearing, and desiring that they be true—and logical inferences from them. On this view, if a person desires that a proposition be true and believes that if she performs a certain action it will become true, she will make the inference and (absent any overriding consideration) perform the action.
This is an age of naturalization projects. Much epistemological work has been done toward naturalizing theoretical reason. One might view Hume as seeking to naturalize reason in both the theoretical (roughly, epistemological) and the practical realms. I suggest that whatever else underlies the vitality of Hume's instrumentalism - encapsulated in his view that 'reason is and ought only to be the slave of the passions' - one incentive is the hope of naturalizing practical reason. This paper explores some broadly Humean versions of instrumentalism that are among the most plausible contenders to represent instrumentalism as a contemporary naturalistic position. It first offers a taxonomy of reasons for action and, in that light, formulates a plausible version of instrumentalism. It then raises difficulties for the view, some of them concerning the nature of desire. It also develops an epistemologically significant comparison of desires with beliefs. Given the magnitude of the difficulties, it outlines an alternative account of practical reason.
As part of the widespread turn to narrative in contemporary philosophy, several commentators have recently attempted to sign Kierkegaard up for the narrative cause, most notably in John Davenport and Anthony Rudd's recent collection Kierkegaard After MacIntyre: Essays on Freedom, Narrative and Virtue. I argue that the aesthetic and ethical existence-spheres in Either/Or cannot adequately be distinguished in terms of the MacIntyre-inspired notion of 'narrative unity'. Judge William's argument for the ethical life contains far more in the way of substantive normative content than can be encapsulated by the idea of 'narrative unity', and the related idea that narratives confer intelligibility will not enable us to distinguish Kierkegaardian aesthetes from Kierkegaardian ethicists. 'MacIntyrean Kierkegaardians' also take insufficient notice of further problems with MacIntyre's talk of 'narrative unity', such as his failure to distinguish between literary narratives and the 'enacted dramatic narratives' of which he claims our lives consist; the lack of clarity in the idea of a 'whole life'; and the threat of self-deception. Finally, against the connections that have been drawn between Kierkegaardian choice and Harry Frankfurt's work on volitional identification, I show something of the dangers involved in putting too much stress on unity and wholeheartedness.
In The Philosophy of Information, Luciano Floridi presents a theory of “strongly semantic information”, based on the idea that “information encapsulates truth” (the so-called “veridicality thesis”). Starting with Popper, philosophers of science have developed different explications of the notion of verisimilitude or truthlikeness, construed as a combination of truth and information. Thus, the theory of strongly semantic information and the theory of verisimilitude are intimately tied. Yet, with few exceptions, this link has virtually passed unnoticed. In this paper, we briefly survey both theories and offer a critical comparison of strongly semantic information and related notions, like truth, verisimilitude, and partial truth.
I’m not really sure what they were after when they asked me to talk to you about Augustine and the Platonists. Maybe they wanted me to talk about some specific Platonists, and the elements of Augustine’s views that he adopts or adapts. And no doubt I should at least mention a couple of names. There’s Plato himself, of course (428-348 BC). The thing is, it’s pretty clear that Augustine had never read Plato directly, whether in Greek (which Augustine couldn’t actually handle very well) or in Latin translation. The best he could do was to read what other people said about what Plato said. Then there were two followers of Plato whose work Augustine did read in Latin translation: Plotinus (204-270) and his student Porphyry (233-305). He probably read them in the translation of Marius Victorinus, who is discussed in Book 8 of the Confessions. There’s a lot of debate, though, about exactly what he read and exactly how it influenced him. I have a somewhat non-standard view about this. I call it the “Who cares what Augustine read?” view. My view is that even though Augustine read Plotinus and Porphyry rather than Plato, his version of Platonism is actually much closer to Plato himself than it is to Plotinus and Porphyry. So knowing the details of Plotinus and Porphyry doesn’t really matter much for understanding Augustine, because Augustine’s kind of Platonism doesn’t really depend on those details. In spirit, it’s much closer to the real Plato, because it adopts the overall outlook of Plato without a lot of the additions and complications of later Platonists. And that’s why I’m going to start with a story. I’m going to use this story to get across what I think is the essence of this Platonic outlook. Then I’ll show you how various Platonists put the insights that this story encapsulates to work in three different aspects of philosophy. After I’ve laid all that out, I’ll talk about how Augustine transforms this Platonic picture in the light of his Christian faith.
Education stands at the intersection of Noam Chomsky's two lives as scholar and social critic: As a linguist he is keenly interested in how children acquire language, and as a political activist he views the education system as an important lever of social change. Chomsky on Democracy and Education gathers for the first time his impressive range of writings on these subjects, some previously unpublished and not readily available to the general public. Raised in a progressive school where his father was principal, Chomsky outlines a philosophy of education steeped in the liberal tradition of John Dewey, more concerned with cultivating responsible citizens than feeding children facts. The goal of education, Chomsky argues, is to produce free human beings whose values are not accumulation and domination, but rather free association on terms of equality. Spanning issues of language, power, policy and method, this collection includes seminal theoretical works like Language and Freedom, a social analysis of the role of schools and universities in the American polity, and specific critiques of language instruction in America's classrooms today, along with new interviews conducted by Carlos Otero that serve to encapsulate Chomsky's views. Engaging and incisive, Chomsky on Democracy and Education makes accessible the key insights that have earned Chomsky such a committed following.