According to the knowledge argument, physicalism fails because when physically omniscient Mary first sees red, her gain in phenomenal knowledge involves a gain in factual knowledge. Thus not all facts are physical facts. According to the ability hypothesis, the knowledge argument fails because Mary only acquires abilities to imagine, remember and recognise redness, and not new factual knowledge. I argue that reducing Mary’s new knowledge to abilities does not affect the issue of whether she also learns factually: I show that gaining specific new phenomenal knowledge is required for acquiring abilities of the relevant kind. Since phenomenal knowledge is basic to abilities, and not vice versa, it remains an open question whether someone who acquires such abilities also learns something factual. The answer depends on whether the new phenomenal knowledge involved is factual. But this is the same question we wanted to settle when first considering the knowledge argument. The ability hypothesis, therefore, has offered us no dialectical progress with the knowledge argument, and is best forgotten.
David Lewis (1983, 1988) and Laurence Nemirow (1980, 1990) claim that knowing what an experience is like is knowing-how, not knowing-that. They identify this know-how with the abilities to remember, imagine, and recognize experiences, and Lewis labels their view ‘the Ability Hypothesis’. The Ability Hypothesis has intrinsic interest. But Lewis and Nemirow devised it specifically to block certain anti-physicalist arguments due to Thomas Nagel (1974, 1986) and Frank Jackson (1982, 1986). Does it?
According to the Ability Hypothesis, knowing what it is like to have experience E is just having the ability to imagine or recognize or remember having experience E. I examine various versions of the Ability Hypothesis and point out that they all face serious objections. Then I propose a new version that is not vulnerable to these objections: knowing what it is like to experience E is having the ability to discriminate imagining or having experience E from imagining or having any other experience. I argue that if we replace the ability to imagine or recognize with the ability to discriminate, the Ability Hypothesis can be salvaged.
The Perceptual Hypothesis is that we sometimes see, and thereby have non-inferential knowledge of, others' mental features. The Perceptual Hypothesis opposes Inferentialism, which is the view that our knowledge of others' mental features is always inferential. The claim that some mental features are embodied is the claim that some mental features are realised by states or processes that extend beyond the brain. The view I discuss here is that the Perceptual Hypothesis is plausible if, but only if, the mental features it claims we see are suitably embodied. Call this Embodied Perception Theory. I argue that Embodied Perception Theory is false. It doesn't follow that the Perceptual Hypothesis is implausible. The considerations which serve to undermine Embodied Perception Theory serve equally to undermine the motivations for assuming that others' mental lives are always imperceptible.
Enactive approaches foreground the role of interpersonal interaction in explanations of social understanding. This motivates, in combination with a recent interest in neuroscientific studies involving actual interactions, the question of how interactive processes relate to neural mechanisms involved in social understanding. We introduce the Interactive Brain Hypothesis (IBH) in order to help map the spectrum of possible relations between social interaction and neural processes. The hypothesis states that interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction. We examine the plausibility of this hypothesis against developmental and neurobiological evidence and contrast it with the widespread assumption that mindreading is crucial to all social cognition. We describe the elements of social interaction that bear most directly on this hypothesis and discuss the empirical possibilities open to social neuroscience. We propose that the link between coordination dynamics and social understanding can be best grasped by studying transitions between states of coordination. These transitions form part of the self-organization of interaction processes that characterize the dynamics of social engagement. The patterns and synergies of this self-organization help explain how individuals understand each other. Various possibilities for role-taking emerge during interaction, determining a spectrum of participation. This view contrasts sharply with the observational stance that has guided research in social neuroscience until recently. We also introduce the concept of readiness to interact to describe the practices and dispositions that are summoned in situations of social significance (even if not interactive). This latter idea links interactive factors to more classical observational scenarios.
This paper introduces a new family of cases where agents are jointly morally responsible for outcomes over which they have no individual control, a family that resists standard ways of understanding outcome responsibility. First, the agents in these cases do not individually facilitate the outcomes and would not seem individually responsible for them if the other agents were replaced by non-agential causes. This undermines attempts to understand joint responsibility as overlapping individual responsibility; the responsibility in question is essentially joint. Second, the agents involved in these cases are not aware of each other's existence and do not form a social group. This undermines attempts to understand joint responsibility in terms of actual or possible joint action or joint intentions, or in terms of other social ties. Instead, it is argued that intuitions about joint responsibility are best understood given the Explanation Hypothesis, according to which a group of agents are seen as jointly responsible for outcomes that are suitably explained by their motivational structures: something bad happened because they didn’t care enough; something good happened because their dedication was extraordinary. One important consequence of the proposed account is that responsibility for outcomes of collective action is a deeply normative matter.
The ‘Knobe effect’ is the name given to the empirical finding that judgments about whether an action is intentional or not seem to depend on the moral valence of this action. To account for this phenomenon, Scaife and Webber have recently advanced the ‘Consideration Hypothesis’, according to which people’s ascriptions of intentionality are driven by whether they think the agent took the outcome into consideration when making his decision. In this paper, I examine Scaife and Webber’s hypothesis and conclude that it is supported neither by the existing literature nor by their own experiments, whose results I did not replicate, and that the ‘Consideration Hypothesis’ is not the best available account of the ‘Knobe Effect’.
Many philosophers and psychologists now argue that emotions play a vital role in reasoning. This paper explores one particular way of elucidating how emotions help reason, which may be dubbed ‘the search hypothesis of emotion’. After outlining the search hypothesis of emotion and dispensing with a red herring that has marred previous statements of the hypothesis, I discuss two alternative readings of the search hypothesis. It is argued that the search hypothesis must be construed as an account of what emotions typically do, rather than as a definition of emotion. Even as an account of what emotions typically do, the search hypothesis can only be evaluated in the context of a specific theory of what emotions are. 1 Introduction; 2 The search hypothesis of emotion; 3 A red herring: the frame problem; 4 The search problem; 5 Two readings of the search hypothesis; 6 Two final remarks; 7 Conclusion.
This paper challenges arguments that systematic patterns of intelligent behavior license the claim that representations must play a role in the cognitive system analogous to that played by syntactical structures in a computer program. In place of traditional computational models, I argue that research inspired by Dynamical Systems theory can support an alternative view of representations. My suggestion is that we treat linguistic and representational structures as providing complex multi-dimensional targets for the development of individual brains. This approach acknowledges the indispensability of the intentional or representational idiom in psychological explanation without locating representations in the brains of intelligent agents.
Thanks to all the people who responded to my enquiry about the status of the Continuum Hypothesis. This is a really fascinating subject, on which I could waste far too much time. The following is a summary of some aspects of the feeling I got for the problems. This will be old hat to set theorists, and no doubt there are a couple of embarrassing misunderstandings, but it might be of some interest to non-professionals.
Several theories claim that dreaming is a random by-product of REM sleep physiology and that it does not serve any natural function. Phenomenal dream content, however, is not as disorganized as such views imply. The form and content of dreams is not random but organized and selective: during dreaming, the brain constructs a complex model of the world in which certain types of elements, when compared to waking life, are underrepresented whereas others are overrepresented. Furthermore, dream content is consistently and powerfully modulated by certain types of waking experiences. On the basis of this evidence, I put forward the hypothesis that the biological function of dreaming is to simulate threatening events, and to rehearse threat perception and threat avoidance. To evaluate this hypothesis, we need to consider the original evolutionary context of dreaming and the possible traces it has left in the dream content of the present human population. In the ancestral environment human life was short and full of threats. Any behavioral advantage in dealing with highly dangerous events would have increased the probability of reproductive success. A dream-production mechanism that tends to select threatening waking events and simulate them over and over again in various combinations would have been valuable for the development and maintenance of threat-avoidance skills. Empirical evidence from normative dream content, children's dreams, recurrent dreams, nightmares, post-traumatic dreams, and the dreams of hunter-gatherers indicates that our dream-production mechanisms are in fact specialized in the simulation of threatening events, and thus provides support for the threat simulation hypothesis of the function of dreaming. Key Words: dream content; dream function; evolution of consciousness; evolutionary psychology; fear; implicit learning; nightmares; rehearsal; REM; sleep; threat perception.
The centerpiece of the first volume of Michel Foucault’s History of Sexuality is the analysis of what Foucault terms the “repressive hypothesis,” the nearly universal assumption on the part of twentieth-century Westerners that we are the heirs to a Victorian legacy of sexual repression. The supreme irony of this belief, according to Foucault, is that the whole time that we have been announcing and denouncing our repressed, Victorian sexuality, discourses about sexuality have actually proliferated. Paradoxically, as Victorian as we allegedly are, we cannot stop talking about sex. Much of the analysis of the first volume of the History of Sexuality consists in an unmasking and debunking of the repressive hypothesis. This unmasking does not take the simple form of a counter-claim that we are not, in fact, repressed; rather, Foucault contends that understanding sexuality solely or even primarily in terms of repression is inaccurate and misleading. As he said in an interview published in 1983, “it is not a question of denying the existence of repression. It’s one of showing that repression is always a part of a much more complex political strategy regarding sexuality. Things are not merely repressed.”1 Foucault makes this extremely clear in the introduction to the History of Sexuality, Volume 1, when he writes.
The dynamical hypothesis is the claim that cognitive agents are dynamical systems. It stands opposed to the dominant computational hypothesis, the claim that cognitive agents are digital computers. This target article articulates the dynamical hypothesis and defends it as an open empirical alternative to the computational hypothesis. Carrying out these objectives requires extensive clarification of the conceptual terrain, with particular focus on the relation of dynamical systems to computers.
1. *Common Sense Conception of Beliefs and Other Propositional Attitudes; 2. What is the Language of Thought Hypothesis?; 3. Status of LOTH; 4. Scope of LOTH; 5. *Natural Language as Mentalese?; 6. *Nativism and LOTH; 7. Naturalism and LOTH.
A new position in the philosophy of mind has recently appeared: the extended mind hypothesis (EMH). Some of its proponents think the EMH, which says that a subject's mental states can extend into the local environment, shows that internalism is false. I argue that this is wrong. The EMH does not refute internalism; in fact, it necessarily does not do so. The popular assumption that the EMH spells trouble for internalists is premised on a bad characterization of the internalist thesis—albeit one that most internalists have adhered to. I show that internalism is entirely compatible with the EMH. This view should prompt us to reconsider the characterization of internalism, and in conclusion I make some brief remarks about how that project might proceed.
Locke denied that ideas of secondary qualities resemble their causes. It has been suggested that Locke denied this because he accepted a mechanical corpuscular hypothesis about the constitution of objects. This paper shows that this and other usual explanations of Locke's denial are mistaken. Further, it suggests an alternative relationship between the scientific account and Locke's philosophical views, and finally it provides Locke's real justification for his claim that ideas of secondary qualities do not resemble their causes.
The Language of Thought Hypothesis (LOTH) is an empirical thesis about thought and thinking. To explain these, it postulates a physically realized system of representations that have a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOTH, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in syntactic operations defined over representations. Most of the arguments for LOTH derive their strength from their ability to explain certain empirical phenomena like the productivity and systematicity of thought and thinking.
The purpose of this article is to explain why I believe that the Continuum Hypothesis (CH) is not a definite mathematical problem. My reason for that is that the concept of arbitrary set essential to its formulation is vague or underdetermined and there is no way to sharpen it without violating what it is supposed to be about. In addition, there is considerable circumstantial evidence to support the view that CH is not definite.
The purpose of this paper is to defend what I call the action-oriented coding theory (ACT) of spatially contentful visual experience. Integral to ACT is the view that conscious visual experience and visually guided action make use of a common subject-relative or 'egocentric' frame of reference. Proponents of the influential two visual systems hypothesis (TVSH), however, have maintained on empirical grounds that this view is false (Milner & Goodale, 1995/2006; Clark, 1999; 2001; Campbell, 2002; Jacob & Jeannerod, 2003; Goodale & Milner, 2004). One main source of evidence for TVSH comes from behavioral studies of the comparative effects of size-contrast illusions on visual awareness and visuomotor action. This paper shows that not only is the evidence from illusion studies inconclusive, there is a better, ACT-friendly interpretation of the evidence that avoids serious theoretical difficulties faced by TVSH.
In historical claims for nativism, mathematics is a paradigmatic example of innate knowledge. Claims by contemporary developmental psychologists of elementary mathematical skills in human infants are a legacy of this. However, the connection between these skills and more formal mathematical concepts and methods remains unclear. This paper assesses the current debates surrounding nativism and mathematical knowledge by teasing them apart into two distinct claims. First, in what way does the experimental evidence from infants, nonhuman animals and neuropsychology support the nativist hypothesis? Second, granting that infants have some elementary mathematical skills, does this mean that such skills play an important role in the development of mathematical knowledge?
Evolutionary psychologists tend to view the mind as a large collection of evolved, functionally specialized mechanisms, or modules. Cosmides and Tooby (1994) have presented four arguments in favor of this model of the mind: the engineering argument, the error argument, the poverty of the stimulus argument, and combinatorial explosion. Fodor (2000) has discussed each of these four arguments and rejected them all. In the present paper, we present and discuss the arguments for and against the massive modularity hypothesis. We conclude that Cosmides and Tooby's arguments have considerable force and are too easily dismissed by Fodor.
I begin with a characterization of neurolinguistic theories, trying to pinpoint some general properties that an account of brain/language relations should have. I then address specific criticisms made in the commentaries regarding the syntactic theory assumed in the target article, properties of the Trace Deletion Hypothesis (TDH) and the Tree-Pruning Hypothesis (TPH), other experimental results from aphasia, and findings from functional neuroimaging. Despite the criticism, the picture of the limited role of Broca's area remains unchanged.
This paper examines the justification for the hypothesis of extended cognition (HEC). HEC claims that human cognitive processes can, and often do, extend outside our heads to include objects in the environment. HEC has been justified by inference to the best explanation (IBE). Both advocates and critics of HEC claim that we should infer the truth value of HEC based on whether HEC makes a positive, or negative, explanatory contribution to cognitive science. I argue that IBE cannot play this epistemic role. A serious rival to HEC exists with a differing truth value, and this invalidates IBEs for both the truth and falsity of HEC. Explanatory value to cognitive science cannot be used as a guide to the truth value of HEC.
Sherri Roush and I have each argued independently that the most significant challenge to scientific realism arises from our inability to consider the full range of serious alternatives to a given hypothesis we seek to test, but we diverge significantly concerning the range of cases in which this problem becomes acute. Here I argue against Roush's further suggestion that the atomic hypothesis represents a case in which scientific ingenuity has enabled us to overcome the problem, showing how her general strategy is undermined by evidence I have already offered in support of what I have called the 'problem of unconceived alternatives'. I then go on to show why her strategy will not generally (if ever) allow us to formulate and test exhaustive spaces of hypotheses in cases of fundamental scientific theorizing.
I’m going to argue for a set of restricted skeptical results: roughly put, we don’t know that fire engines are red, we don’t know that we sometimes have pains in our lower backs, we don’t know that John Rawls was kind, and we don’t even know that we believe any of those truths. However, people unfamiliar with philosophy and cognitive science do know all those things. The skeptical argument is traditional in form: here’s a skeptical hypothesis; you can’t epistemically neutralize it; you have to be able to neutralize it to know P; so you don’t know P. But the skeptical hypotheses I plug into it are “real, live” scientific-philosophical hypotheses often thought to be actually true, unlike any of the outrageous traditional skeptical hypotheses (e.g., ‘You’re a brain in a vat’). So I call the resulting skepticism Live Skepticism. Notably, the Live Skeptic’s argument goes through even if we adopt the clever anti-skeptical fixes thought up in recent years such as reliabilism, relevant alternatives theory, contextualism, and the rejection of epistemic closure. Furthermore, the scope of Live Skepticism is bizarre: although we don’t know the simple facts noted above, many of us do know that there are black holes and other amazing facts.
The Narrative Practice Hypothesis (NPH) is a recently conceived, late entrant into the contest of trying to understand the basis of our mature folk psychological abilities, those involving our capacity to explain ourselves and comprehend others in terms of reasons. This paper aims to clarify its content, importance and scientific plausibility by: distinguishing its conceptual features from those of its rivals, articulating its philosophical significance, and commenting on its empirical prospects. I begin by clarifying the NPH's target explanandum and the challenge it presents to theory theory (TT), simulation theory (ST) and hybrid combinations of these theories. The NPH competes with them directly for the same explanatory space insofar as these theories purport to explain the core structural basis of our folk psychological (FP)-competence (those of the sort famously but not exclusively deployed in acts of third-personal mindreading).
In an effort to account for our a priori knowledge of synthetic necessary truths, Kant proposes to extend the successful method used in mathematics and the natural sciences to metaphysics. In this paper, a uniform account of that method is proposed and the particular contribution of the ‘Copernican hypothesis’ to our knowledge of necessary truths is explained. It is argued that, though the necessity of the truths is in a way owing to the object's relation to our cognition, the truths we come to know are fully objective, expressing necessary relations between properties. Kant's distinction between ‘phenomena’ and ‘noumena’ is shown to serve to properly restrict the scope of the necessity claims so that they do express necessary connections between properties.
At first sight, homosexuality has little to do with reproduction. Nevertheless, many neo-Darwinian theoreticians think that human homosexuality may have had a procreative value, since it enabled the close kin of homosexuals to have more viable offspring than individuals lacking the support of homosexual siblings. In this article, however, we will defend an alternative hypothesis - originally put forward by Freud in "A phylogenetic phantasy" - namely that homosexuality evolved as a means to strengthen social bonds. Consequently, from an evolutionary point of view, homosexuality and heterosexuality have entirely distinct origins: there is no continuum from heterosexuality to homosexuality. Indeed, the natural history we propose shows that the intensity of the homosexual inclination has little or no predictive value with regard to the intensity of heterosexual tendencies. In fact, this may be a sound Darwinian way to understand sexual ambivalence. But if sexual ambivalence is a biological datum, one has to conclude that psychodynamic mechanisms are often needed in order to explain exclusive heterosexuality or exclusive homosexuality.
This essay selectively reviews, from an historical and philosophical perspective, the dopamine (DA) hypothesis of schizophrenia (DHS; Table 1 lists the abbreviations used in this essay). Our goal is not to adjudicate the validity of the theory—although we arrive at a generally skeptical conclusion—but to focus on the process whereby the DHS has evolved over time and been evaluated. Since its inception, the DHS has been the most prominent etiologic theory in psychiatry and is still referred to widely in current textbooks (e.g., Buchanan and Carpenter, Jr. 2005, 1336; Cohen 2003, 225; Gazzaniga 2004, 1257; Kandel et al. 2000, 1200). Understanding its origins and evolution should help to clarify the nature of modern…
Language and Ontology: Linguistic Relativism (Sapir-Whorf Hypothesis) vs. Universal Grammar; Universal Ontology vs. Ontological Relativity; Semiotics and Ontology; Annotated Bibliography of John Deely, First part: 1965-1998; Annotated Bibliography of John Deely, Second part: 1999-2010; The Rediscovery of John Poinsot (John of St. Thomas).
In recent years evolutionary psychologists have developed and defended the Massive Modularity Hypothesis, which maintains that our cognitive architecture—including the part that subserves ‘central processing’ —is largely or perhaps even entirely composed of innate, domain-specific computational mechanisms or ‘modules’. In this paper I argue for two claims. First, I show that the two main arguments that evolutionary psychologists have offered for this general architectural thesis fail to provide us with any reason to prefer it to a competing picture of the mind which I call the Library Model of Cognition. Second, I argue that this alternative model is compatible with the central theoretical and methodological commitments of evolutionary psychology. Thus I argue that, at present, the endorsement of the Massive Modularity Hypothesis by evolutionary psychologists is both unwarranted and unmotivated.
This essay introduces the massive redeployment hypothesis, an account of the functional organization of the brain that centrally features the fact that brain areas are typically employed to support numerous functions. The central contribution of the essay is to outline a middle course between strict localization on the one hand, and holism on the other, in such a way as to account for the supporting data on both sides of the argument. The massive redeployment hypothesis is supported by case studies of redeployment, and compared and contrasted with other theories of the localization of function.
Recent work by Joshua Knobe has established that people are far more likely to describe bad but foreseen side effects as intentionally performed than good but foreseen side effects (this is sometimes called the 'Knobe effect' or the 'side-effect effect'). Edouard Machery has proposed a novel explanation for this asymmetry: it results from construing the bad side effect as a cost that must be incurred to receive a benefit. In this paper, I argue that Machery's 'trade-off hypothesis' is wrong. I do this by reproducing the asymmetry between judgments about good and bad side effects in cases that cannot plausibly be construed as trade-offs.
One major problem many hypotheses regarding the neural correlate of consciousness (NCC) face is what we might call “the why question”: why would this particular neural feature, rather than another, correlate with consciousness? The purpose of the present paper is to develop an NCC hypothesis that answers this question. The proposed hypothesis is inspired by the Cross-Order Integration (COI) theory of consciousness, according to which consciousness arises from the functional integration of a first-order representation of an external stimulus and a second-order representation of that first-order representation. The proposal comes in two steps. The first step concerns the “general shape” of the NCC and can be directly derived from COI theory. The second step is a concrete hypothesis that can be arrived at by combining the general shape with empirical considerations.
Over the last four decades arguments for and against the claim that creative hypothesis formation is based on Darwinian ‘blind’ variation have been put forward. This paper offers a new and systematic route through this long-lasting debate. It distinguishes between undirected, random, and unjustified variation, to prevent widespread confusions regarding the meaning of undirected variation. These misunderstandings concern Lamarckism, equiprobability, developmental constraints, and creative hypothesis formation. The paper then introduces and develops the standard critique that creative hypothesis formation is guided rather than blind, integrating developments from contemporary research on creativity. On that basis, I discuss three compatibility arguments that have been used to answer the critique. These arguments do not deny guided variation but insist that an important analogy exists nonetheless. These compatibility arguments all fail, even though they do so for different reasons: trivialisation, conceptual confusion, and lack of evidence respectively. Revisiting the debate in this manner not only allows us to see where exactly a ‘Darwinian’ account of creative hypothesis formation goes wrong, but also to see that the debate is not about factual issues, but about the interpretation of these factual issues in Darwinian terms.
According to John Haugeland, the capacity for “authentic intentionality” depends on a commitment to constitutive standards of objectivity. One of the consequences of Haugeland’s view is that a neurocomputational explanation cannot be adequate to understand “authentic intentionality”. This paper gives grounds to resist such a consequence. It provides the beginning of an account of authentic intentionality in terms of neurocomputational enabling conditions. It argues that the standards, which constitute the domain of objects that can be represented, reflect the statistical structure of the environments where brain sensory systems evolved and develop. The objection that I equivocate on what Haugeland means by “commitment to standards” is rebutted by introducing the notion of “florid, self-conscious representing”. If the hypothesis presented here is plausible, computational neuroscience would offer a promising framework for a better understanding of the conditions for meaningful representation.
Recently, the experimental philosopher Joshua Knobe has shown that the folk are more inclined to describe side effects as intentional actions when they bring about bad results. Edouard Machery has offered an intriguing new explanation of Knobe's work—the 'trade-off hypothesis'—which denies that moral considerations explain folk applications of the concept of intentional action. We critique Machery's hypothesis and offer empirical evidence against it. We also evaluate the current state of the debate concerning the concept of intentionality, and argue that, given the number of variables at play, any parsimonious account of the relevant data is implausible.
I attempt to get as clear as possible on the chain of reasoning by which irreversible macrodynamics is derivable from time-reversible microphysics, and in particular to clarify just what kinds of assumptions about the initial state of the universe, and about the nature of the microdynamics, are needed in these derivations. I conclude that while a “Past Hypothesis” about the early Universe does seem necessary to carry out such derivations, that Hypothesis is not correctly understood as a constraint on the early Universe’s entropy.
Jaakko Hintikka. 1. How to Study Set Theory. The continuum hypothesis (CH) is crucial in the core area of set theory, viz. in the theory of the hierarchies of infinite cardinal and infinite ordinal numbers. It is crucial in that it would, if true, help to relate the two hierarchies to each other. It says that the second infinite cardinal number, which is known to be the cardinality of the first uncountable ordinal, equals the cardinality 2^ℵ₀ of the continuum. (Here ℵ₀ is the smallest infinite cardinal.)
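The claim Hintikka describes can be stated compactly; this is the standard textbook formulation, not notation taken from the paper itself:

```latex
% Continuum Hypothesis: the cardinality of the continuum,
% 2^{\aleph_0}, is the first uncountable cardinal, \aleph_1.
\[
\mathsf{CH}:\quad 2^{\aleph_0} = \aleph_1
\]
```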
“There is a familiar trio of reactions by scientists to a purportedly radical hypothesis: (a) “You must be out of your mind!”, (b) “What else is new? Everybody knows _that_!”, and, later—if the hypothesis is still standing—(c) “Hmm. You _might_ be on to something!”” (Dennett, 1995, p. 283).
The cheater-detection (CD) hypothesis suggests that people who otherwise perform poorly on the Wason selection task perform well when the task is couched in cheater-detection contexts. We report three studies with new selection problems that are similar to the originals but that question the CD hypothesis. The first two studies document a pattern heretofore attributed to CD mechanisms, namely good performance with “regular” rules and inferior performance with “switched” rules, all in problems that lack a cheater-detection context. The final study finds an interaction: not only is good performance elicited on non-CD problems, but poor performance is found in the context of CD problems. Performance on the selection task cannot be predicted based on the presence or absence of cheater-detection contexts, which brings into question the need to invoke a specialised cheater-detection module.
Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical significance. The null-hypothesis significance-test procedure (NHSTP) is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and refutations (1968b). The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between (1) substantive and statistical hypotheses, (2) statistical alternative and conceptual alternative hypotheses, and (3) making statistical decisions and drawing theoretical conclusions. These distinctions make it easier to show that (1) H0 can be true, (2) the effect size is irrelevant to theory corroboration, and (3) “strong” hypotheses make no difference to NHSTP. Reservations about statistical power, meta-analysis, and the Bayesian approach are still warranted.
The point of psychotherapy has occasionally been associated with talk of ‘life’s meaning’. However, the literature on meaning in life written by contemporary philosophers has yet to be systematically applied to literature on the point of psychotherapy. My broad aim in this chapter is to indicate some plausible ways to merge these two tracks of material that have run in parallel up to now. More specifically, my hunch is that the connection between meaning as philosophers understand it and therapy as psychotherapists ought to practice it is much closer than is suggested by the field of existential psychotherapy, which expressly addresses the topic of life’s meaning and appeals to ideas from classic philosophers such as Søren Kierkegaard, Martin Heidegger, and the like. I instead proffer the claim that psychodynamic and humanistic therapy, clinical psychology, and counselling psychology as such, not a particular branch of them, are best understood as enterprises in search of meaning in life, in the way many present-day philosophers understand this phrase. In this chapter, I spell out what I mean by this bold hypothesis and provide some good reason to take it seriously.
In his classic paper, “Delusional thinking and perceptual disorder,” Brendan Maher (1974) argues that psychiatric delusions are hypotheses designed to explain anomalous experiences, and are “developed through the operation of normal cognitive processes.” Consider, for instance, the Capgras delusion. Patients suffering from this particular delusion believe that someone close to them—such as a spouse, a sibling, a parent, or a child—has been replaced by an impostor: by someone who bears a striking resemblance to the “original” and who (for reasons unknown) is intent on passing herself off as that individual. On Maher's view, the “Impostor Hypothesis” is the response of a rational agent to the anomalous experience it is invoked to explain. Recently, a number of philosophers have argued that Maher's analysis of delusion doesn't work when applied to the Capgras delusion. In this paper, I defend Maher's analysis against these arguments. However, my aim is not merely to defend Maher's analysis, but also to draw attention to some of the methodological problems that have led to its hasty dismissal.
Grammar is now widely regarded as a substantially biological phenomenon, yet the problem of language evolution remains a matter of controversy among linguists, cognitive scientists, and evolutionary theorists alike. In this paper, I present a new theoretical argument for one particular hypothesis—that a Language Acquisition Device of the sort first posited by Noam Chomsky might have evolved via the so-called Baldwin Effect. Close attention to the workings of that mechanism, I argue, helps to explain a previously mysterious feature of the Language Acquisition Device—the sheer variety of languages it allows the child to learn—thereby revealing a far stronger case than adherents of the hypothesis have previously supposed. A further unheralded consequence of the hypothesis is a conceptual shift in the Chomskyan understanding of language, wherein the essentially public nature of language is freshly emphasised. This has the effect of bringing the Chomskyan view into closer accord with Saussurean accounts of language, as well as with recent trends in evolutionary theory.
The subject of this essay is the dependence of evidential relations on background beliefs and assumptions. In Part I, two ways in which the relation between evidence and hypothesis is dependent on such assumptions are discussed and it is shown how in the context of appropriately differing background beliefs what is identifiable as the same state of affairs can be taken as evidence for conflicting hypotheses. The dependence of evidential relations on background beliefs is illustrated by discussions of the Michelson-Morley experiment and the discovery of oxygen. In Part II, Hempel's analysis of confirmation and the contrasting model of theory acceptance provided by philosophers such as Kuhn and Feyerabend are discussed. It is argued that both are inadequate (on different grounds) and the problems addressed by each are shown to be more satisfactorily approached by means of the analysis developed in Part I. In Part III, it is argued that if there are objective criteria for deciding between competing theories, these cannot be simply that one theory has greater evidential support than another. Finally, some further methodological questions arising from the analysis are mentioned.
The major point of contention among the philosophers and mathematicians who have written about the independence results for the continuum hypothesis (CH) and related questions in set theory has been the question of whether these results give reason to doubt that the independent statements have definite truth values. This paper concerns the views of G. Kreisel, who gives arguments based on second order logic that the CH does have a truth value. The view defended here is that although Kreisel's conclusion is correct, his arguments are unsatisfactory. Later sections of the paper advance a different argument that the independence results do not show lack of truth values.
According to one theory, the brain is a sophisticated hypothesis tester: perception is Bayesian unconscious inference where the brain actively uses predictions to test, and then refine, models about what the causes of its sensory input might be. The brain’s task is simply continually to minimise prediction error. This theory, which is getting increasingly popular, holds great explanatory promise for a number of central areas of research at the intersection of philosophy and cognitive neuroscience. I show how the theory can help us understand striking phenomena at three cognitive levels: vision, sensory integration, and belief. First, I illustrate central aspects of the theory by showing how it provides a nice explanation of why binocular rivalry occurs. Then I suggest how the theory may explain the role of the unified sense of self in rubber hand and full body illusions driven by visuotactile conflict. Finally, I show how it provides an approach to delusion formation that is consistent with one-deficit accounts of monothematic delusions.
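The core idea of prediction-error minimisation can be illustrated with a deliberately minimal toy, not drawn from the paper itself: an internal estimate is repeatedly nudged by the discrepancy between prediction and input, driving the prediction error toward zero. The function name and learning rate are my own illustrative choices.

```python
# Toy sketch of prediction-error minimisation (an illustration of the
# general idea, not the paper's model): an internal estimate is updated
# by a fraction of each prediction error until the error shrinks.

def refine_estimate(observations, estimate=0.0, learning_rate=0.1):
    """Nudge an internal estimate by a fraction of each prediction error."""
    for obs in observations:
        prediction_error = obs - estimate   # mismatch between model and input
        estimate += learning_rate * prediction_error
    return estimate

# With a constant input of 5.0, the estimate converges toward 5.0,
# i.e. prediction error is progressively minimised.
final = refine_estimate([5.0] * 100)
```

The same delta-rule shape appears in many predictive-coding expositions; real models are hierarchical and weight the update by precision, which this sketch omits.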
This book, officially a contribution to the subject area of Charles Peirce’s semiotics, deserves a wider readership, including philosophers. Its subject matter is what might be termed the great question of how signification is brought about (what Peirce called the ‘riddle of the Sphinx’, who in Emerson’s poem famously asked, ‘Who taught thee me to name?’), and also Peirce’s answer to the question (what Peirce himself called his ‘guess at the riddle’, and Freadman calls his ‘sign hypothesis’).
The Past Hypothesis is the claim that the Boltzmann entropy of the universe was extremely low when the universe began. Can we make sense of this claim when *classical* gravitation is included in the system? I first show that the standard rationale for not worrying about gravity is too quick. If the paper does nothing else, my hope is that it gets the problems induced by gravity the attention they deserve in the foundations of physics. I then try to make plausible a very weak claim: that there is a well-defined Boltzmann entropy that *can* increase in *some* interesting self-gravitating systems. More work is needed before we can say whether this claim answers the threat to the standard explanation of entropy increase.
We briefly describe ways in which neuroeconomics has made contributions to its contributing disciplines, especially neuroscience, and a specific way in which it could make future contributions to both. The contributions of a scientific research programme can be categorized in terms of (1) description and classification of phenomena, (2) the discovery of causal relationships among those phenomena, and (3) the development of tools to facilitate (1) and (2). We consider ways in which neuroeconomics has advanced neuroscience and economics along each line. Then, focusing on electrophysiological methods, we consider a puzzle within neuroeconomics whose solution we believe could facilitate contributions to both neuroscience and economics, in line with category (2). This puzzle concerns how the brain assigns reward values to otherwise incomparable stimuli. According to the common currency hypothesis, dopamine release is a component of a neural mechanism that solves comparability problems. We review two versions of the common currency hypothesis, one proposed by Read Montague and colleagues, the other by William Newsome and colleagues, and fit these hypotheses into considerations of rational choice.
We are gratified at the largely positive comments on our essay on the dopamine hypothesis of schizophrenia (DHS) by these two distinguished commentators from the fields of biological psychiatry (Dr. Tamminga) and the philosophy of psychiatry (Dr. Murphy). There is little that they have said with which we disagree. Rather, we want to expand briefly on their commentaries. We found Dr. Tamminga's reactions to be particularly fascinating because she has been an "insider" to the story of the DHS as it has unfolded. She provides substantial insight into the "extra-scientific" reasons for the persistence of the DHS despite its poor empirical record. She validates our impression that the DHS was in its first years of…
The survival and development of consciousness in biological evolution call for an explanation. An interactionistic mind-brain theory seems to have the greatest explanatory value in this context. An interpretation of an interactionistic hypothesis, recently proposed by Karl Popper, is discussed both theoretically and based on recent experimental data. In the interpretation, the distinction between the conscious mind and the brain is seen as a division into what is subjective and what is objective, and not as an ontological distinction between something immaterial and something material. The interactionistic hypothesis is based on similarities between minds and physical forces. The conscious mind is understood to interact with randomly spontaneous spatio-temporal patterns of action potentials through an electromagnetic field. Consequences and suggestions for future studies are discussed.
As a working hypothesis for philosophy of science, the unity of science thesis has been decisively challenged in all its standard formulations; it cannot be assumed that the sciences presuppose an orderly world, that they are united by the goal of systematically describing and explaining this order, or that they rely on distinctively scientific methodologies which, properly applied, produce domain-specific results that converge on a single coherent and comprehensive system of knowledge. I first delineate the scope of arguments against global unity theses. However implausible old-style global unity theses may now seem, I argue that unifying strategies of a more local and contingent nature do play an important role in scientific inquiry. This is particularly clear in archaeology where, to establish evidential claims of any kind, practitioners must exploit a range of inter-field and inter-theory connections. At the same time, the robustness of these evidential claims depends on significant disunity between the sciences from which archaeologists draw background assumptions and auxiliary hypotheses. This juxtaposition of unity with disunity poses a challenge to standard (polarized) positions in the debate about scientific unity.
With Fermat’s Last Theorem finally disposed of by Andrew Wiles in 1994, it’s only natural that popular attention should turn to arguably the most outstanding unsolved problem in mathematics: the Riemann Hypothesis. Unlike Fermat’s Last Theorem, however, the Riemann Hypothesis requires quite a bit of mathematical background to even understand what it says. And of course both require a great deal of background in order to understand their significance. The Riemann Hypothesis was first articulated by Bernhard Riemann in an address to the Berlin Academy in 1859. The address was called “On the Number of Prime Numbers Less Than a Given Quantity” and among the many interesting results and methods contained in that paper was Riemann’s famous hypothesis: all non-trivial zeros of the zeta function, ζ(s) = Σ_{n=1}^∞ n^(−s), have real part 1/2. Although the zeta function as stated, considered as a real-valued function, is defined only for s > 1, it can be suitably extended. It can, as a matter of fact, be extended to have as its domain all the complex numbers (numbers of the form x + yi, where x and y are real numbers and i = √−1) with the exception of 1 + 0i.
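The Dirichlet series definition quoted above can be made concrete with partial sums, a sketch of my own; note it is only valid for real s > 1, whereas the Riemann Hypothesis concerns the analytically continued function on the complex plane, which this does not compute.

```python
# Partial sums of the series zeta(s) = sum_{n>=1} n^(-s), convergent
# for real s > 1. A sanity check only: the Riemann Hypothesis is about
# the analytic continuation, not this truncated sum.
import math

def zeta_partial(s, terms):
    """Approximate zeta(s) by summing the first `terms` terms of the series."""
    return sum(n ** -s for n in range(1, terms + 1))

# Euler's classical evaluation: zeta(2) = pi^2 / 6.
approx = zeta_partial(2, 100_000)
exact = math.pi ** 2 / 6
```

For s = 2 the truncation error after N terms is roughly 1/N, so 100,000 terms already agree with π²/6 to about five decimal places.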
Is mental imagery pictorial? In Pylyshyn's view no empirical data provides convincing support to the “pictorial” hypothesis of mental imagery. Phenomenology, Pylyshyn says, is deeply deceiving and offers no explanation of why and how mental imagery occurs. We suggest that Pylyshyn mistakes phenomenology for what it never pretended to be. Phenomenological evidence, if properly considered, shows that mental imagery may indeed be pictorial, though not in the way that mimics visual perception. Moreover, Pylyshyn claims that the “pictorial hypothesis” is flawed because the interpretation of “picture-like” objects in mental imagery requires a homunculus. However, the same objection can be raised against Pylyshyn's own conclusion: if imagistic reasoning involves the same mechanisms and the same forms of representation as those that are involved in general reasoning, if they operate on symbol-based representations of the kind recommended by Pylyshyn (1984) and Fodor (1975), don't we need a phenomenological homunculus to tell an imagined bear from a real one?
John Searle's hypothesis of the Background seems to conflict with his initial representationalism according to which each Intentional state contains a particular content that determines its conditions of satisfaction. In Section I of this essay I expose Searle's initial theory of Intentionality and relate it to Edmund Husserl's earlier phenomenology. In Section II I make it clear that Searle's introduction of the notion of Network, though indispensable, does not, by itself, force us to modify that initial theory. However, a comparison of this notion to the notion of horizon from Husserl's later phenomenology and an interpretation of Husserl's conception of the determinable X as providing a solution to the problem of perceptual misidentification lead me to conclude that in his discussion of 'twin examples' Searle had better modify his initial theory. Finally, I critically examine Searle's claim that anyone who tries seriously to follow out the threads in the Network will eventually reach a bedrock of non-Intentional capacities. In Section III I show in detail, partly in a rather Husserlian vein, that Searle's four official arguments for the Background thesis, though containing some very valuable contributions to a theory of linguistic skills, are not convincing at all if they are to be understood as going beyond the scope of (Hus)Searle's 'content-cum-Network' picture of Intentionality. The upshot of these considerations is that the Background thesis should be read as a thesis concerning the causal neurophysiological preconditions of human Intentionality rather than concerning the logical properties of Intentional states in general. Recently Searle himself has come to the same result, but he does not say for which reasons. The present essay makes it clear why Searle just had to arrive at this important result.
Within philosophy of physics it is broadly accepted that presentism as an empirical hypothesis has been falsified by the development of special relativity. In this paper, I identify and reject an assumption common to both presentists and advocates of the block universe, and then offer an alternative version of presentism that does not begin from spatiotemporal structure; this alternative is an empirical hypothesis, and one that has yet to be falsified. I fear that labelling it “presentism” dooms the view, but I don’t know what else to call it.
Following Hume’s lead, Paul Draper argues that, given the biological role played by both pain and pleasure in goal-directed organic systems, the observed facts about pain and pleasure in the world are antecedently much more likely on the Hypothesis of Indifference than on theism. I examine one by one Draper’s arguments for this claim and show how they miss the mark.
Surveys in different countries (e.g. the UK, Belgium and The Netherlands) show a marked recent increase in the incidence of continuous deep sedation at the end of life (CDS). Several hypotheses can be formulated to explain the increasing performance of this practice. In this paper we focus on what we call the ‘natural death’ hypothesis, i.e. the hypothesis that acceptance of CDS has spread rapidly because death after CDS can be perceived as a ‘natural’ death by medical practitioners, patients' relatives and patients. We attempt to show that the label ‘natural’ cannot be unproblematically applied to the nature of this end-of-life practice. We argue that the labeling of death following CDS as ‘natural’ death is related to a complex set of mechanisms which facilitate the use of this practice. However, our criticism does not preclude the view that CDS may be clinically and ethically justified in many cases.
This paper explores how the Generalized Continuum Hypothesis (GCH) arose from Cantor's Continuum Hypothesis in the work of Peirce, Jourdain, Hausdorff, Tarski, and how GCH was used up to Gödel's relative consistency result.
The past hypothesis is that the entropy of the universe was very low in the distant past. It is put forward to explain the entropic arrow of time but it has been suggested (e.g. [Penrose, R. (1989a). The emperor’s new mind. London: Vintage Books; Penrose, R. (1989b). Annals of the New York Academy of Sciences, 571, 249–264; Price, H. (1995). In S. F. Savitt (Ed.), Time’s arrows today. Cambridge: Cambridge University Press; Price, H. (1996). Time’s arrow and Archimedes’ point. Oxford: Oxford University Press; Price, H. (2004). In C. Hitchcock (Ed.), Contemporary debates in philosophy of science. Oxford: Blackwell]) that it is itself in need of explanation. It has also been suggested that cosmic inflation could provide the explanation, but Price (2004) raises a serious objection to this suggestion, which has otherwise received very little attention in the philosophical literature. Price points out that the standard inflationary explanation involves a double standard: although the evolution of the universe described by the inflationary model seems natural from the standard temporal perspective it looks highly unnatural from the reversed temporal perspective. The main purpose of this paper is to propose a novel form of the inflationary explanation that avoids this objection. It is argued that the inflationary model would not involve a double standard (but would still explain the past hypothesis) if we construct the model with a global “boundary” condition instead of a conventional boundary condition: if we assume that the universe is as generic as possible overall, rather than as generic as possible at some given point (e.g. the Big Bang) as is assumed in the standard inflationary model. This novel form of the inflationary explanation is then compared with Price’s 1996 preferred explanation, a version of the so-called “Weyl hypothesis”.
In this paper I discuss the shapelessness hypothesis, which is often referred to and relied on by certain sorts of ethical and evaluative cognitivists, and which they use primarily in arguing against a certain, influential form of noncognitivism. I aim to (i) set out exactly what the hypothesis is; (ii) show that its original and traditional use is left wanting; and (iii) show that there is some rehabilitation on offer that might have a chance of convincing neutrals.
Unfortunately, reading Chow's work is likely to leave the reader more confused than enlightened. My preferred solutions to the “controversy” about null-hypothesis testing are: (1) recognize that we really want to test the hypothesis that an effect is “small,” not null, and (2) use Bayesian methods, which are much more in keeping with the way humans naturally think than are classical statistical methods.
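The commentary's two suggestions can be combined in one small sketch of my own: under an assumed conjugate normal-normal model (my priors and thresholds, not the commentator's), compute the posterior probability that the effect is "small", |μ| < δ, rather than testing μ = 0.

```python
# Bayesian alternative to null-hypothesis testing (a sketch under an
# assumed normal-normal conjugate model): estimate the posterior
# probability that the effect mu is small, |mu| < delta.
from statistics import NormalDist

def prob_effect_is_small(data, delta=0.2, prior_sd=1.0, noise_sd=1.0):
    """Posterior P(|mu| < delta) with prior mu ~ N(0, prior_sd^2)
    and observations x_i ~ N(mu, noise_sd^2)."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = len(data) / noise_sd ** 2
    post_prec = prior_prec + data_prec                  # conjugate update
    post_mean = (sum(data) / noise_sd ** 2) / post_prec
    post = NormalDist(post_mean, post_prec ** -0.5)
    return post.cdf(delta) - post.cdf(-delta)

# Data centred near zero: high probability the effect is small.
# Data centred on 2.0: essentially no probability the effect is small.
near_null = prob_effect_is_small([0.0] * 100)
big_effect = prob_effect_is_small([2.0] * 100)
```

The point of the design is that "the effect is small" is a substantive hypothesis that can accumulate probability, whereas the sharp null μ = 0 can only be rejected.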
In his recent book, Time and Chance, David Albert claims that by positing that there is a uniform probability distribution defined, on the standard measure, over the space of microscopic states that are compatible with both the current macrocondition of the world, and with what he calls the “past hypothesis”, we can explain the time asymmetry of all of the thermodynamic behavior in the world. The principal purpose of this paper is to dispute this claim. I argue that Albert's proposal fails in his stated goal—to show how to use the time‐reversible dynamics of Newtonian physics to “underwrite the actual content of our thermodynamic experience” (Albert 2000, 159). Albert's proposal can satisfactorily explain why the overall entropy of the universe as a whole is increasing, but it does not and cannot explain the increasing entropy of relatively small, relatively short‐lived systems in energetic isolation without making use of a principle that leads to reversibility objections.
Immanuel Kant’s three great Critiques stand among the bulkier monuments of Enlightenment thought. The first is best known; the last had until recently been rather less studied. But his final Critique contains, I contend, a remarkable development of Kant’s theory of how human beings use and create systems of knowledge. While Kant was not himself concerned with the neuronal substrates of cognition, I argue this development yields a novel empirical hypothesis susceptible of experimental investigation. Here I present the Kantian motivation and describe experimental work aimed at testing predictions arising from the new hypothesis.
Nowadays, it is a truism that hypotheses and theories play an essential role in scientific practice. This, however, was far from an obvious given in seventeenth-century British natural philosophy. Different natural philosophers had different views on the role and status of hypotheses and theories, ranging from fierce promotion to bold rejection, and to both they ascribed varying meanings and connotations. The guiding idea of this chapter is that, in seventeenth-century British natural philosophy, the terms ‘hypothesis’/‘hypothetical’ and ‘theory’/‘theoretical’ were embedded in a semantic network of interconnected epistemological and methodological notions – such as ‘knowledge’, ‘method’, ‘probability’, ‘certainty’, ‘induction’, ‘deduction’, ‘experimental philosophy’, ‘speculative philosophy’, and the like. As these semantic networks changed over time, the meaning and significance of ‘hypothesis’ and ‘theory’ likewise shifted. Without pretence of completeness, this chapter highlights chronologically some of the defining moments in the semantic transformation of these two terms within the context of seventeenth-century natural philosophy.
Universal Grammar (UG) can be interpreted as a constraint on the form of possible grammars (hypothesis space) or as a constraint on acquisition strategies (selection procedures). In this response to Herschensohn we reiterate the position outlined in Epstein et al. (1996a, r), that in the evaluation of L2 acquisition as a UG-constrained process the former (possible grammars/knowledge states) is critical, not the latter. Selection procedures, on the other hand, are important in that they may have a bearing on development in language acquisition. We raise the possibility that differences in first and second language acquisition pertaining to both attainment of the end-state and course of development may derive from differences in selection procedures. We further suggest that for these reasons age effects in the attainment of nativelike proficiency must necessarily be separated from UG effects.
It is an assumed view in Chinese philosophy that the grammatical differences between English or Indo-European languages and classical Chinese explain some of the differences between the Western and Chinese philosophical discourses. Although some philosophers have expressed doubts about the general link between classical Chinese philosophy and syntactic form of classical Chinese, I discuss a specific hypothesis, i.e., the mass-noun hypothesis, in this essay. The mass-noun hypothesis assumes that a linguistic distinction such as between the singular terms and the predicates is sufficient to justify or necessarily leads to a specific ontological distinction such as the distinction between the particular and the universal. I argue that one cannot read off semantic properties simply from syntactic ones and hence the syntactic differences do not automatically translate into the semantic differences between languages, that the syntactic features of Chinese nouns do not have explanatory significance in explaining why the particular-universal problem does not arise in the classical period of Chinese philosophy, and that the part-whole ontology allegedly informed by the mass-noun-like semantics does not provide a natural or intuitive picture of the language-world relation.
If the NHSTP procedure is essential for controlling for chance, why is there little, if any, discussion of the nature of chance by Chow and other advocates of the procedure? Also, many criticisms that Chow takes to be aimed against the NHSTP (null-hypothesis significance-test) procedure are actually directed against the kind of theory that is tested by the procedure.
In this commentary, I agree with Chow's treatment of null hypothesis significance testing as a noninferential procedure. However, I dispute his reconstruction of the logic of theory corroboration. I also challenge recent criticisms of NHSTP based on power analysis and meta-analysis.
In a recent paper, Dylan Evans proposed that emotions could help solve what has been known as ‘the frame problem’. In the process, he first questioned the utility of using the frame problem as a framework. After tackling this issue, he provided an alternative terminology to the frame problem—termed ‘the search hypothesis of emotion’—in order to re-examine how emotions aid rational agents. His new terminology, however, opens itself to other critiques. While accepting the basic tenets of his analysis, I question (i) whether a single search theory of emotion is adequate, and (ii) whether his theory would have been better termed ‘the search hypothesis of feeling’. Finally, I extend some of the ideas developed in Evans' paper.
This article presents a new proposal for understanding the establishment and maintenance of cooperation: the cooperation afforder with framing hypothesis, producing what can be called cooperation from afforder-framing. Three key moves are present. First, a special variety of the Stag Hunt game, the Cooperation Afforder game, will reliably produce mutualistic cooperation through an evolutionary process. Second, cognitive framing is a credible candidate mechanism to meet the special conditions and requirements of the Cooperation Afforder game. Third, once mutualistic cooperation is established in this way, it will plausibly lead to broader forms of cooperation, even to limited forms of altruism.
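The Cooperation Afforder game is described as a special variety of the Stag Hunt. The standard Stag Hunt can be sketched as follows (the payoff numbers are my own illustrative choices, not the article's); its defining feature is two pure equilibria, which is why an equilibrium-selection mechanism such as framing matters.

```python
# Standard Stag Hunt (illustrative payoffs of my choosing; the article's
# Cooperation Afforder game adds special conditions this sketch omits).
# Strategy 0 = Stag (cooperate), strategy 1 = Hare (play it safe).
PAYOFF = [
    [(4, 4), (0, 3)],  # I hunt stag: great if you join, nothing if you defect
    [(3, 0), (3, 3)],  # I hunt hare: a safe payoff regardless of your choice
]

def is_nash(row, col):
    """Pure-strategy Nash equilibrium: no player gains by deviating alone."""
    row_best = all(PAYOFF[row][col][0] >= PAYOFF[r][col][0] for r in (0, 1))
    col_best = all(PAYOFF[row][col][1] >= PAYOFF[row][c][1] for c in (0, 1))
    return row_best and col_best

# Both (Stag, Stag) and (Hare, Hare) are equilibria: mutual cooperation
# is payoff-superior but risky, so something must select it.
equilibria = [(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c)]
```

The coexistence of a payoff-dominant equilibrium (Stag, Stag) and a risk-dominant one (Hare, Hare) is exactly the coordination problem a cooperation-affording mechanism is meant to resolve.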
The social brain hypothesis implies that humans and other primates evolved “modules” for representing social knowledge. Alternatively, no such cognitive specializations are needed because social knowledge is already present in the world — we can simply monitor the dynamics of social interactions. Given the latter idea, what mechanism could account for coalition formation? We propose that statistical learning can provide a mechanism for fast and implicit learning of social signals. Using human participants, we compared learning of social signals with arbitrary signals. We found that learning of social signals was no better than learning of arbitrary signals. While coupling faces and voices led to parallel learning, the same was true for arbitrary shapes and sounds. However, coupling versus uncoupling social signals with arbitrary signals revealed that faces and voices are treated with perceptual priority. Overall, our data suggest that statistical learning is a viable domain-general mechanism for learning social group structure. Keywords: social brain; embodied cognition; distributed cognition; situated cognition; multisensory; audiovisual speech; crossmodal; multimodal.
Mealey argued that sociopathy is an evolutionarily stable strategy subject to frequency-dependent selection – high levels of sociopathy being advantageous to the individual if population-wide frequencies of it are low, and vice versa. I argue that at least one alternative hypothesis exists that explains her data equally well. Alternative hypotheses must be formulated and tested before any theory can be validated.
Gilbert Ryle accused Descartes of advancing what he called the “paramechanical hypothesis,” according to which the structure and operations of the mind can be understood on the model of the structure and operations of a physical system. The body is a complex machine – “a bit of clockwork” – that operates according to laws governing the mechanical interactions of material things. The mind, on the other hand, according to Descartes (according to Ryle), is an immaterial machine that operates according to formally analogous laws governing the paramechanical interactions of immaterial things – “a bit of not-clockwork.” In other words, mental processes are the same as physical processes, only you don’t have the matter. I don’t know whether Descartes actually thought this. But, surely, if he did, he was making some kind of logical or conceptual error. Mental processes can’t be the same as physical processes, minus the matter, since the matter matters. The properties of physical systems have physical explanations, which are explanations in terms of physical properties and physical laws. But it is absurd – a category mistake – to suppose that mechanical explanations could apply to immaterial things with no physical properties, subject to no physical laws.
Elliott Sober (1987, 1993) and Orzack and Sober (forthcoming) argue that adaptationism is a very general hypothesis that can be tested by testing various particular hypotheses that invoke natural selection to explain the presence of traits in populations of organisms. In this paper, I challenge Sober's claim that adaptationism is a hypothesis and I argue that it is best viewed as a heuristic (or research strategy). Biologists would still have good reasons for employing this research strategy even if it turns out that natural selection is not the most important cause of evolution.
Artificial life uses computer models to study the essential nature of the characteristic processes of complex adaptive systems: processes such as self-organization, adaptation, and evolution. Work in the field is guided by the working hypothesis that simple computer models can capture the essential nature of these processes. This hypothesis is illustrated by recent results with a simple population of computational agents whose sensorimotor functionality undergoes open-ended adaptive evolution. These results might illuminate three aspects of complex adaptive systems in general: punctuated equilibrium dynamics of diversity, a transition separating genetic order and disorder, and a law of adaptive evolutionary activity.
What new implications does the dynamical hypothesis have for cognitive science? The short answer is: none. The _Behavioral and Brain Sciences_ target article, “The dynamical hypothesis in cognitive science” by Tim van Gelder, is basically an attack on traditional symbolic AI and differs very little from prior connectionist criticisms of it. For the past ten years, the connectionist community has been well aware of the necessity of using (and understanding) dynamically evolving, recurrent network models of cognition.
Previous research has found no consistent relationship between measures of disconfirmatory evidence, alternative hypotheses, and people's success in rule-discovery tasks. The present paper explores falsification's inductive benefit under the ‘context of discovery’ in Wason's 2-4-6 task by developing a new type of alternative hypothesis, which we label the ‘new-perspective hypothesis’. Experiment 1 found that falsification is effective only when a new-perspective hypothesis is generated, rather than a same-perspective hypothesis. The total number of alternative hypotheses was also unrelated to rule-discovery success. Experiment 2 replicated Experiment 1 but included the addition of a different name-content task as well as two levels of task difficulty. The main findings were similar to those for Experiment 1, and the new-perspective hypothesis was observed to be most important for the difficult rule-discovery task. These results help to clarify the important ways new-perspective hypotheses and disconfirmatory evidence contribute to successful rule-discovery performance.
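The logic of Wason's 2-4-6 task can be sketched in a few lines (a hypothetical harness, not the authors' materials): the experimenter's hidden rule is famously broad, so triples that merely confirm a narrow hypothesis can never distinguish it from the true rule, while a probe from a new perspective can.

```python
# Minimal sketch of the Wason 2-4-6 rule-discovery task (hypothetical
# illustration, not the paper's materials). The experimenter's hidden
# rule is "any strictly ascending triple"; participants who start from
# the seed triple (2, 4, 6) typically form a narrower same-perspective
# hypothesis such as "numbers increasing by two".

def hidden_rule(triple):
    a, b, c = triple
    return a < b < c  # the experimenter's actual rule

def increasing_by_two(triple):
    # A typical overly narrow participant hypothesis.
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Confirmatory tests (triples fitting the narrow hypothesis) cannot
# distinguish it from the true rule:
for t in [(2, 4, 6), (10, 12, 14)]:
    assert hidden_rule(t) == increasing_by_two(t)

# A falsifying probe from a new perspective exposes the difference:
probe = (1, 2, 10)
print(hidden_rule(probe), increasing_by_two(probe))  # True False
```

This is why, as the abstract reports, falsification helps only when paired with a new-perspective hypothesis: the disconfirming triple must come from outside the space the same-perspective hypothesis suggests.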
In this note I test a specific thesis about the dependence of philosophy of science on science that Laudan presents in his Science and Hypothesis; namely, that the sciences were justificationally prior to the philosophy of science. I argue that Laudan's historical case studies show a justificational priority that goes the other way. I also argue that the justificational role that in Progress and Its Problems the history of science is alleged to play vis-à-vis competing conceptions of scientific rationality is not apparent in Laudan's argumentation in favor of his suggested analysis in terms of problem-solving effectiveness.
This paper inquires into the nexus between the Deleuzian critical-clinical hypothesis and its literary instantiation in Beckett, with a focus on How It Is (1964) and Worstward Ho (1983b). I propose to read the interruptions in style symptomatically, and stuttering language in Beckett as liminal expression, thus tracing the flows and breaks of desire which Deleuze theorises in the sense of a symptomatological unconscious. The schizoid style as liminal expression exemplified in Beckett's work will be read as marking transit stages in the process of becoming, which invites taking it as a proper language of the body without organs.
What van Gelder calls the dynamical hypothesis is only a special case of what we here dub the general dynamical hypothesis. His terminology makes it easy to overlook important alternative dynamical approaches in cognitive science. Connectionist models typically conform to the general dynamical hypothesis, but not to van Gelder's.
Revonsuo's evolutionary theory of dream function is extremely interesting. However, although threat avoidance theory is well grounded in experimental data, it does not take other significant dream research data into account. The theory can be integrated into a more general hypothesis which takes these data into consideration. [Revonsuo].
Chow's (1996) defense of the null-hypothesis significance-test procedure (NHSTP) is thoughtful and compelling in many respects. Nevertheless, techniques such as meta-analysis, power analysis, effect size estimation, and confidence intervals can be useful supplements to NHSTP in furthering the cumulative nature of behavioral research, as illustrated by the history of research on the spontaneous recovery of verbal learning.
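The supplements to NHSTP mentioned above can be illustrated concretely (my own sketch with made-up data, not code from Chow or the commentary): an effect size such as Cohen's d and a confidence interval for a mean difference carry information a bare p-value does not.

```python
# Hedged illustration of two supplements to NHSTP: Cohen's d and a
# 95% confidence interval for a mean difference. The data are made up.
import math
import statistics as stats

group_a = [12.1, 11.4, 13.0, 12.7, 11.9, 12.5]  # invented sample A
group_b = [10.2, 10.9, 11.1, 10.5, 10.8, 10.0]  # invented sample B

mean_diff = stats.mean(group_a) - stats.mean(group_b)

# Pooled standard deviation (equal group sizes assumed for simplicity;
# statistics.variance uses the n-1 sample denominator).
sp = math.sqrt((stats.variance(group_a) + stats.variance(group_b)) / 2)
cohens_d = mean_diff / sp

# Approximate 95% CI for the mean difference; 2.228 is the two-tailed
# t critical value for df = 10.
se = sp * math.sqrt(1 / len(group_a) + 1 / len(group_b))
ci = (mean_diff - 2.228 * se, mean_diff + 2.228 * se)

print(round(cohens_d, 2), [round(x, 2) for x in ci])
```

Unlike a significance decision, the interval and the standardized effect size accumulate across studies, which is what makes them natural inputs to the meta-analysis and power analysis the commentary recommends.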
Challenges for extending the mirror system hypothesis include mechanisms supporting planning, conversation, motivation, theory of mind, and prosody. Modeling remains relevant. Co-speech gestures show how manual gesture and speech intertwine, but more attention is needed to the auditory system and phonology. The holophrastic view of protolanguage is debated, along with semantics and the cultural basis of grammars. Anatomically separated regions may share an evolutionary history.
We attempt to defend the species-as-individuals hypothesis by examining the logical role played by the binomials (e.g., "Homo sapiens," "Pinus ponderosa") in biological discourse about species. Those who contend that the binomials can be properly understood as functioning in biological theory as singular terms opt for an objectual account of species and view species as individuals. Those who contend that the binomials can in principle be eliminated from biological theory in favor of predicate expressions opt for a predicative account of species and view species as kinds. We contend that biologists' talk about species is talk about species as individuals, and we conclude that the most plausible account of species is an objectual account.
The Fitzgerald-Lorentz contraction hypothesis, proposed as an explanation of the Michelson-Morley result, fails to account for the Kennedy-Thorndike result. Hence, Grünbaum argues, the hypothesis has been falsified. However, the contraction hypothesis as formulated by Lorentz is false for the very fundamental reason that it entails a contradiction, namely, the consequence that light waves must have a variable velocity along what by definition is taken to be a rest length. Furthermore, the attempt to resolve this contradiction by coupling the Fitzgerald-Lorentz contraction with the hypothesis that clock rates are a function of velocity, is open to a sound methodological objection. The Michelson-Morley result is fully satisfied, provided only that the lengths of the interferometer arms, in the longitudinal and transverse positions, are thought to be related to one another in a certain ratio, and this ratio may be interpreted as a contraction in both arms. Since this twofold contraction hypothesis suffices to explain both the Michelson-Morley and the Kennedy-Thorndike results, and since it entails no contradiction, there is no need to correct both the length of rods and the rate of clocks. Therefore, the combined clock-rod hypothesis, and with it the Fitzgerald-Lorentz contraction hypothesis, must be rejected.
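For reference, the contraction under discussion can be stated in standard modern notation (a gloss on the familiar textbook form, not the paper's own formalism):

```latex
% Fitzgerald–Lorentz contraction: a rod of rest length $L_0$ moving
% at speed $v$ along its length is shortened to
L = L_0 \sqrt{1 - \frac{v^2}{c^2}}
```

Applied only to the interferometer arm parallel to the motion, this factor cancels the path-length difference in the Michelson-Morley setup; the Kennedy-Thorndike experiment, with unequal arm lengths, is what forces the further choice between modified clock rates and the twofold contraction the abstract defends.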
Edouard Machery's paper, ‘The Folk Concept of Intentional Action: Philosophical and Psychological Issues,’ puts forth an intriguing new hypothesis concerning recent work in experimental philosophy on the concept of intentional action. As opposed to other hypotheses in the literature, Machery's ‘trade-off hypothesis’ claims not to rely on moral considerations in explaining folk uses of the concept. In this paper, we critique Machery's hypothesis and offer empirical evidence to reject it. Finally, we evaluate the current state of the debate concerning the concept of intentional action, and motivate skepticism toward the plausibility of any parsimonious account of the relevant data.
…mar of a language? What are the consequences of these roles for syntactic structure, and why does it matter? We sketch the Simpler Syntax Hypothesis, which holds that… …only the ‘tryer’ but also the ‘drinker’, even though the noun phrase Ozzie is not overtly an argument of the verb drink.
Although it could avoid some harmful effects of climate change, sulphate aerosol geoengineering (SAG), or injecting sulphate aerosols into the stratosphere in order to reflect incoming solar radiation, threatens substantial harm to humans and non-humans. I argue that SAG is prima facie ethically problematic from anthropocentric, animal liberationist, and biocentric perspectives. This might be taken to suggest that ethical evaluations of SAG can rely on Bryan Norton's convergence hypothesis, which predicts that anthropocentrists and non-anthropocentrists will agree to implement the same or similar environmental policies. However, there are potential scenarios in which anthropocentrists and non-anthropocentrists would seem to diverge on whether a particular SAG policy ought to be implemented. This suggests that the convergence hypothesis should not be relied on in ethical evaluation of SAG. Instead, ethicists should consider the merits and deficiencies of both non-anthropocentric perspectives and the ethical evaluations of SAG such perspectives afford.
Through a comparative case analysis regarding the Chinese language, this article discusses how the structure and functions of a natural language bear upon the ways in which some philosophical problems are posed and some ontological insights shaped. Disagreeing with Chad Hansen's mass-noun hypothesis, a collective-noun hypothesis is argued for: (1) the denotational semantics and relevant grammatical features of Chinese nouns are like those of collective nouns; (2) their implicit ontology is a mereological ontology of collection-of-individuals with both part-whole and member-class structure; and (3) encouraged and shaped by the folk semantics of Chinese nouns, classical Chinese theorists of language take this kind of mereological nominalism for granted.
The tradeoff hypothesis in the speech–gesture relationship claims that (a) when gesturing gets harder, speakers will rely relatively more on speech, and (b) when speaking gets harder, speakers will rely relatively more on gestures. We tested the second part of this hypothesis in an experimental collaborative referring paradigm where pairs of participants (directors and matchers) identified targets to each other from an array visible to both of them. We manipulated two factors known to affect the difficulty of speaking to assess their effects on the gesture rate per 100 words. The first factor, codability, is the ease with which targets can be described. The second factor, repetition, is whether the targets are old or new (having been already described once or twice). We also manipulated a third factor, mutual visibility, because it is known to affect the rate and type of gesture produced. None of the manipulations systematically affected the gesture rate. Our data are thus mostly inconsistent with the tradeoff hypothesis. However, the gesture rate was sensitive to concurrent features of referring expressions, suggesting that gesture parallels aspects of speech. We argue that the redundancy between speech and gesture is communicatively motivated.
My responses to the observations and criticisms of 26 commentaries focus on the coregulated and affective nature of initial mother/infant interactions, the relationship between motherese and emergent linguistic skills and its implication for hominin evolution, the plausibility of the “putting the baby down” hypothesis, and details about specific neurological substrates that may have formed the basis for the evolution of prelinguistic behaviors and, eventually, protolanguage.