This paper is truly a joint effort and it could not have been written without the contribution of both authors. Garson, though, deserves credit (or blame) for first seeing the need for two kinds of quantifier scope, and also for devising essentials of the positive theory.
Natural deduction systems were motivated by the desire to define the meaning of each connective by specifying how it is introduced and eliminated from inference. In one sense, this attempt fails, for it is well known that propositional logic rules (however formulated) underdetermine the classical truth tables. Natural deduction rules are too weak to enforce the intended readings of the connectives; they allow non-standard models. Two reactions to this phenomenon appear in the literature. One is to try to restore the standard readings, for example by adopting sequent rules with multiple conclusions. Another is to explore what readings the natural deduction rules do enforce. When the notion of a model of a rule is generalized, it is found that natural deduction rules express “intuitionistic” readings of their connectives. A third approach is presented here. The intuitionistic readings emerge when models of rules are defined globally, but the notion of a local model of a rule is also natural. Using this benchmark, natural deduction rules enforce exactly the classical readings of the connectives, while this is not true of axiomatic systems. This vindicates the historical motivation for natural deduction rules. One odd consequence of using the local model benchmark is that some systems of propositional logic are not complete for the semantics that their rules express. Parallels are drawn with incompleteness results in modal logic to help make sense of this.
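The underdetermination point can be glimpsed in a classic example due to Carnap: the valuation that assigns True to every formula satisfies any truth-preserving rule, yet it violates the classical table for negation. The sketch below is my own illustration, not the paper's formal framework, and the tuple encoding of formulas is hypothetical.

```python
def all_true(formula):
    """Carnap-style non-normal valuation: every formula,
    including every negation, is assigned True."""
    return True

def satisfies_rule(valuation, premises, conclusion):
    """A valuation satisfies a rule when it makes the conclusion
    true whenever it makes all the premises true."""
    return (not all(valuation(p) for p in premises)) or valuation(conclusion)

# Modus ponens (p, p -> q / q) under a hypothetical tuple encoding:
assert satisfies_rule(all_true, ['p', ('imp', 'p', 'q')], 'q')

# Yet the valuation is non-classical: p and not-p both come out true,
# so truth-preserving rules alone do not pin down the negation table.
assert all_true('p') and all_true(('not', 'p'))
```

Any rule whose only demand is truth-preservation is satisfied vacuously or trivially by this valuation, which is one concrete sense in which the rules "allow non-standard models."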
The binding problem is to explain how information processed by different sensory systems is brought together to unify perception. The problem has two sides. First, we want to explain phenomenal binding: the fact that we experience a single world rather than separate perceptual fields for each sensory modality. Second, we must solve a functional problem: to explain how a neural net like the brain links instances to types. I argue that phenomenal binding and functional binding require very different treatments. The puzzle of phenomenal binding rests on a confusion and so can be dissolved. So only functional binding deserves explanation. The general solution to that problem is that information to be bound is arrayed along different dimensions. So sensory coding into separate topographic maps facilitates functional binding and there is no need based on the unity of perception for special mechanisms that bring "back together" information in different maps.
The first use of the term “information” to describe the content of nervous impulse occurs in Edgar Adrian's The Basis of Sensation (1928). What concept of information does Adrian appeal to, and how can it be situated in relation to contemporary philosophical accounts of the notion of information in biology? The answer requires an explication of Adrian's use and an evaluation of its situation in relation to contemporary accounts of semantic information. I suggest that Adrian's concept of information can be used to derive a concept of arbitrariness or semioticity in representation. This in turn provides one way of resolving some of the challenges that confront recent attempts in the philosophy of biology to restrict the notion of information to those causal connections that can in some sense be referred to as arbitrary or semiotic.
Fodor and Pylyshyn (1988) argue that any successful model of cognition must use classical architecture; it must depend upon rule-based processing sensitive to constituent structure. This claim is central to their defense of classical AI against the recent enthusiasm for connectionism. Connectionist nets, they contend, may serve as theories of the implementation of cognition, but never as proper theories of psychology. Connectionist models are doomed to describing the brain at the wrong level, leaving the classical view to account for the mind. This paper considers whether recent results in connectionist research weigh against Fodor and Pylyshyn's thesis. The investigation will force us to develop criteria for determining exactly when a net is capable of systematic processing. Fodor and Pylyshyn clearly intend their thesis to affect the course of research in psychology. I will argue that when systematicity is defined in a way that makes the thesis relevant in this way, the thesis is challenged by recent progress in connectionism.
This paper explores the possibility that chaos theory might be helpful in explaining free will. I will argue that chaos has little to offer if we construe its role as resolving the apparent conflict between determinism and freedom. However, I contend that the fundamental problem of freedom is to find a way to preserve intuitions about rational action in a physical brain. New work on dynamic computation provides a framework for viewing free choice as a process that is sensitive and unpredictable, while at the same time organized and intelligent. I conclude that this vision of a chaotic brain may make a modest contribution to an intuitively acceptable physicalist account of free will.
Despite the voluminous literature on biological functions produced over the last 40 years, few philosophers have studied the concept of function as it is used in neuroscience. Recently, Craver (forthcoming; also see Craver 2001) defended the causal role theory against the selected effects theory as the most appropriate theory of function for neuroscience. The following argues that though neuroscientists do study causal role functions, the scope of that theory is not as universal as claimed. Despite the strong prima facie superiority of the causal role theory, the selected effects theory (when properly developed) can handle many cases from neuroscience with equal facility. It argues this by presenting a new theory of function that generalizes the notion of a ‘selection process’ to include processes such as neural selection, antibody selection, and some forms of learning—that is, to include structures that have been differentially retained as well as those that have been differentially reproduced. This view, called the generalized selected effects theory of function, will be defended from criticism and distinguished from similar views in the literature.
Designed for use by philosophy students, this book provides an accessible, yet technically sound treatment of modal logic and its philosophical applications. Every effort has been made to simplify the presentation by using diagrams in place of more complex mathematical apparatus. These and other innovations provide philosophers with easy access to a rich variety of topics in modal logic, including a full coverage of quantified modal logic, non-rigid designators, definite descriptions, and the de re/de dicto distinction. Discussion of philosophical issues concerning the development of modal logic is woven into the text. The book uses natural deduction systems and also includes a diagram technique that extends the method of truth trees to modal logic. This feature provides a foundation for a novel method for showing completeness, one that is easy to extend to systems that include quantifiers.
The purpose of this paper is to explore the merits of the idea that dynamical systems theory (also known as chaos theory) provides a model of the mind that can vindicate the language of thought (LOT). I investigate the nature of emergent structure in dynamical systems to assess its compatibility with causally efficacious syntactic structure in the brain. I will argue that anyone who is committed to the idea that the brain's functioning depends on emergent features of dynamical systems should have serious reservations about the LOT. First, dynamical systems theory casts doubt on one of the strongest motives for believing in the LOT: principle P, the doctrine that structure found in an effect must also be found in its cause. Second, chaotic emergence is a double-edged sword. Its tendency to cleave the psychological from the neurological undermines foundations for belief in the existence of causally efficacious representations. Overall, a dynamic conception of the brain sways us away from realist conclusions about the causal powers of representations with constituent structure.
Simulation has emerged as an increasingly popular account of folk psychological (FP) talents at mind-reading: predicting and explaining human mental states. Where its rival (the theory-theory) postulates that these abilities are explained by mastery of laws describing the connections between beliefs, desires, and action, simulation theory proposes that we mind-read by "putting ourselves in another's shoes." This paper concerns connectionist architecture and the debate between simulation theory (ST) and the theory-theory (TT). It is only natural to associate TT with classical architectures where rule governed operations apply to explicit propositional representations. On the other hand, ST would seem better tuned to procedurally oriented non-symbolic structures found in connectionist models. This paper explores the possible alignment between ST and connectionist architecture. Joe Cruz argues that connectionist models with distributed non-symbolic representations are particularly well suited to simulation theory. The purported linkage between connectionist architecture and simulation theory is criticized in this paper. The conclusion is that there are reasons for thinking that connectionist forms of representation are the enemy of both TT and ST. So the contribution of connectionism may be to suggest the need for an alternative to both views.
This paper explores a line of argument against the classical paradigm in cognitive science that is based upon properties of non-linear dynamical systems, especially in their chaotic and near-chaotic behavior. Systems of this kind are capable of generating information-rich macro behavior that could be useful to cognition. I argue that a brain operating at the edge of chaos could generate high-complexity cognition in this way. If this hypothesis is correct, then the symbolic processing methodology in cognitive science faces serious obstacles. A symbolic description of the mind will be extremely difficult, and even if it is achieved to some approximation, there will still be reasons for rejecting the hypothesis that the brain is in fact a symbolic processor.
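The capacity of chaotic dynamics to generate information-rich macro behavior can be glimpsed in the logistic map at r = 4, a standard chaotic regime. This sketch is my own illustration, not a model from the paper: two orbits starting within a billionth of each other diverge to macroscopic scale within a few dozen iterations, so coarse-grained behavior keeps revealing fine-grained detail of the initial state.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-9)  # tiny perturbation of the initial state

# Sensitive dependence: the perturbation is amplified exponentially
# (roughly doubling per step), so macro-scale trajectories continually
# unfold information about microscopic differences.
divergence = max(abs(x - y) for x, y in zip(a, b))
assert divergence > 0.01
```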
The first use of the term "information" to describe the content of nervous impulse occurs 20 years prior to Shannon's (1948) work, in Edgar Adrian's The Basis of Sensation (1928). Although, at least throughout the 1920s and early 30s, the term "information" does not appear in Adrian's scientific writings to describe the content of nervous impulse, the notion that the structure of nervous impulse constitutes a type of message subject to certain constraints plays an important role in all of his writings throughout the period. The appearance of the concept of information in Adrian's work raises at least two important questions: (i) what were the relevant factors that motivated Adrian's use of the concept of information? (ii) What concept of information does Adrian appeal to, and how can it be situated in relation to contemporary philosophical accounts of the notion of information in biology? The first question involves an account of the application of communications technology in neurobiology as well as the historical and scientific background of Adrian's major scientific achievement, which was the recording of the action potential of a single sensory neuron. The response to the second question involves an explication of Adrian's concept of information and an evaluation of how it may be situated in relation to more contemporary philosophical explications of a semantic concept of information. I suggest that Adrian's concept of information places limitations on the sorts of systems that are referred to as information carriers by causal and functional accounts of information.
The following describes one distinct sense of ‘mechanism’ which is prevalent in biology and biomedicine and which has important epistemic benefits. According to this sense, mechanisms are defined by the functions they facilitate. This construal has two important implications. Firstly, mechanisms that facilitate functions are capable of breaking. Secondly, on this construal, there are rigid constraints on the sorts of phenomena ‘for which’ there can be a mechanism. In this sense, there are no ‘mechanisms for’ pathology, and natural selection is not a ‘mechanism of’ evolution, because it does not serve a function.
A framework is presented in which the role of developmental rules in phenotypic evolution can be studied for some simple situations. Using two different implicit models of development, characterized by different developmental maps from genotypes to phenotypes, it is shown by simulation that developmental rules and drift can result in directional phenotypic evolution without selection. For both models the simulations show that the critical parameter that drives the final phenotypic distribution is the cardinality of the set of genotypes that map to each phenotype. Details of the developmental map do not matter. If phenotypes are randomly assigned to genotypes, the last result can also be proved analytically.
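The cardinality result can be illustrated with a toy sketch of my own devising (not the paper's actual models): genotypes drift neutrally, a many-to-one developmental map assigns them phenotypes, and phenotype frequencies end up tracking the size of each phenotype's genotypic preimage, with no selection anywhere in the dynamics.

```python
import random

random.seed(0)

# Ten genotypes on a ring; the developmental map sends 0..7 to
# phenotype 'A' and 8..9 to 'B', so 'A' has a preimage four times
# as large as 'B'.
def develop(genotype):
    return 'A' if genotype < 8 else 'B'

# Neutral drift: a mutational random walk on the ring of genotypes,
# blind to phenotype. Its long-run occupation law is uniform over
# genotypes.
g = 0
steps = 100_000
counts = {'A': 0, 'B': 0}
for _ in range(steps):
    g = (g + random.choice([-1, 1])) % 10  # selection-free mutation
    counts[develop(g)] += 1

# Phenotype frequencies track preimage cardinality (~0.8 vs ~0.2)
# even though no phenotype is favored.
freq_A = counts['A'] / steps
assert 0.7 < freq_A < 0.9
```

The details of the walk do not matter; any selection-free dynamics with a uniform stationary law over genotypes yields the same phenotypic bias, which is the analytic version of the result.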
Proponents of the language of thought (LOT) thesis are realists when it comes to syntactically structured representations, and must defend their view against instrumentalists, who would claim that syntactic structures may be useful in describing cognition, but have no more causal powers in governing cognition than do the equations of physics in guiding the planets. This paper explores what it will take to provide an argument for LOT that can defend its conclusion from instrumentalism. I illustrate a difficulty in this project by discussing arguments for LOT put forward by Horgan and Tienson. When their evidence is viewed in the light of results in connectionist research, it is hard to see how a realist conception of syntax can be formulated and defended.
We sketch a novel and improved version of Boorse’s biostatistical theory of functions. Roughly, our theory maintains that (i) functions are non-negligible contributions to survival or inclusive fitness (when a trait contributes to survival or inclusive fitness); (ii) situations appropriate for the performance of a function are typical situations in which a trait contributes to survival or inclusive fitness; (iii) appropriate rates of functioning are rates that make adequate contributions to survival or inclusive fitness (in situations appropriate for the performance of that function); and (iv) dysfunction is the inability to perform a function at an appropriate rate in appropriate situations. Based on our theory, we sketch solutions to three problems that have afflicted Boorse’s theory of function, namely, Kingma’s () problem of the situation-specificity of functions, the problem of multi-functional traits, and the problem of how to distinguish between appropriate and inappropriate rates of functioning.
1 Functions Are Situation-Specific
2 A General Account of Biostatistical Functions
2.1 Functions
2.2 Appropriate situations for the performance of a function
2.3 Appropriate rates of functioning
2.4 Dysfunction
3 Performing Functions at Appropriate Rates in Appropriate Situations
4 Conclusion
Another objection to the dynamical hypothesis is explored. To resolve it completely, one must focus more directly on an area not emphasized in van Gelder's discussion: the contributions of dynamical systems theory to understanding how cognition is neurally implemented.
The computational theory of cognition (CTC) holds that the mind is akin to computer software. This article aims to show that CTC is incorrect because it is not able to distinguish the ability to solve a maze from the ability to solve its mirror image. CTC cannot do so because it only individuates brain states up to isomorphism. It is shown that a finer individuation that would distinguish left-handed from right-handed abilities is not compatible with CTC. The view is explored that CTC correctly individuates in an autonomous domain of the mental, leaving discrimination between left and right to some non-cognitive component of psychology such as physiology. I object by showing that the individuation provided by CTC does not properly describe any domain. An embodied computational taxonomy, rather than software alone, is required for an adequate science of the mind.
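The maze point can be made concrete with a toy sketch (my own, with a hypothetical encoding, not the article's argument): represented purely as an adjacency structure, a maze and its left-right mirror image are isomorphic graphs, so any procedure individuated only up to isomorphism computes exactly the same thing on both.

```python
# A three-cell corridor with a left end, a center, and a right end.
maze = {'L': ['C'], 'C': ['L', 'R'], 'R': ['C']}

# Mirroring the maze swaps left and right; phi is the induced relabeling.
phi = {'L': 'R', 'C': 'C', 'R': 'L'}
mirror = {phi[k]: sorted(phi[v] for v in vs) for k, vs in maze.items()}

# phi is a graph isomorphism: the mirrored maze has exactly the same
# abstract structure, so a solver that sees only this structure cannot
# register the difference between turning left and turning right.
assert {k: sorted(vs) for k, vs in maze.items()} == mirror
```

Distinguishing the two requires something outside the abstract structure, such as how the states are realized in a body, which is the role the embodied taxonomy is meant to play.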
This was originally written and presented at the National Endowment for the Humanities Summer Seminar for College Teachers on Folk Psychology vs. Mental Simulation: How Minds Understand Minds, run by Robert Gordon at the University of Missouri - St. Louis, June-July 1999. It has been only lightly revised since, and should be considered a rough draft. Needless to say, the ideas herein owe a lot to what I learned at the seminar from Robert Gordon and the other participants, particularly Jim Garson. However, any errors are my responsibility alone.
Until a few years ago, Cognitive Science was firmly wedded to the notion that cognition must be explained in terms of the computational manipulation of internal representations or symbols. Although many people still believe this, the consensus is no longer solid. Whether it is truly threatened by connectionism is, perhaps, controversial, but there are yet more radical approaches that explicitly reject it. Advocates of "embodied" or "situated" approaches to cognition (e.g., Smith, 1991; Varela et al., 1991; Clancey, 1997) argue that thought cannot be understood as entirely internal. Furthermore, it is argued that autonomous robots can be designed to behave more intelligently if representationalist programming techniques are avoided (Brooks, 1991), and that the way our brains control our behavior is better understood in terms of chaos and dynamical systems theory rather than as any sort of computation (e.g., Freeman & Skarda, 1990; Van Gelder & Port, 1995; Van Gelder, 1995; Garson, 1996).
If a certain semantic relation (which we call local consequence) is allowed to guide expectations about which rules are derivable from other rules, these expectations will not always be fulfilled, as we illustrate. An alternative semantic criterion (based on a relation we call global consequence), suggested by work of J.W. Garson, turns out to provide a much better — indeed a perfectly accurate — guide to derivability.
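The contrast between the two relations can be sketched over the valuations of a single atom (my illustration, not the authors' formal apparatus): local consequence demands truth-preservation at each valuation, global consequence demands only that validity be preserved across all of them, and the two come apart.

```python
# All two-valued valuations of a single atom p.
valuations = [{'p': True}, {'p': False}]

def local_consequence(premise, conclusion):
    """Truth is preserved at every single valuation."""
    return all((not premise(v)) or conclusion(v) for v in valuations)

def global_consequence(premise, conclusion):
    """Validity is preserved: if the premise holds at every
    valuation, so must the conclusion."""
    if all(premise(v) for v in valuations):
        return all(conclusion(v) for v in valuations)
    return True

p     = lambda v: v['p']
not_p = lambda v: not v['p']

# p / not-p fails locally (take the valuation making p true) but
# holds globally, vacuously, since p is not valid to begin with.
assert not local_consequence(p, not_p)
assert global_consequence(p, not_p)
```

Since every local consequence is a global consequence but not conversely, expectations about derivability calibrated to the local relation overshoot what the rules actually license.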
Philosophers of science take it as a datum that Mayor John's having syphilis explains why he, rather than certain nonsyphilitics, had paresis. Using a new hypothetical example, the case of the two dams, it is argued that three independent considerations invalidate these philosophers' starting point.
MacDougall (2010) has argued that Rawls' liberal social theory suggests that parents who hold certain religious convictions can refuse blood transfusions on their children's behalf. This paper argues that this is wrong for at least five reasons. First, MacDougall neglects the possibility that true freedom of conscience entails the right to choose one's own religion rather than have it dictated by one's parents. Second, he conveniently ignores the fact that children in such situations are much more likely to die than to survive without blood. Third, he relies on an ambiguous understanding of what is "rational" and treats children as mere extensions of their parents. Fourth, he neglects the fact that those in the original position would seek to protect themselves from persecution and enslavement, and thus would not allow categories of children who are killed because of their parents' beliefs. Finally, Rawls makes it clear that we should choose for children as we would choose for ourselves in the original position, with no particular conception of the good (such as that held by Jehovah's Witnesses).