This paper traces the application of information theory to philosophical problems of mind and meaning from the earliest days of the mathematical theory of communication. The use of information theory to understand purposive behavior, learning, pattern recognition, and more marked the beginning of the naturalization of mind and meaning. From the inception of information theory, Wiener, Turing, and others began trying to show how to make a mind from informational and computational materials. Over the last 50 years, many philosophers saw different aspects of the naturalization of the mind, though few saw at once all of the pieces of the puzzle that we now recognize. Starting with Norbert Wiener himself, philosophers and information theorists used concepts from information theory to understand cognition. This paper provides a window on the historical sequence of contributions made to the overall project of naturalizing the mind by philosophers from Shannon, Wiener, and MacKay to Dennett, Sayre, Dretske, Fodor, and Perry, among others. At some time between 1928 and 1948, American engineers and mathematicians began to talk about `Theory of Information' and `Information Theory,' understanding by these terms approximately and vaguely a theory for which Hartley's `amount of information' is a basic concept. I have been unable to find out when and by whom these names were first used. Hartley himself does not use them, nor does he employ the term `Theory of Transmission of Information,' from which the two other shorter terms presumably were derived. It seems that Norbert Wiener and Claude Shannon were using them in the mid-forties.
The author presents a critique of the classical conception of the senses assumed by most naturalist authors who aim to explain mental content. This critique is based on neurobiological data about the senses which suggest that the senses do not describe objective features of the world but instead act 'narcissistically', that is, they represent information in accordance with the specific interests of the organism. The article also appears in: Bechtel et al., Philosophy and the Neurosciences.
The influence of historical-causal theories of reference developed in the late sixties and early seventies by Donnellan, Kripke, Putnam and Devitt has been so strong that any semantic theory that has the consequence of assigning disjunctive representational content to the mental states of twins (e.g. [H2O or XYZ]) has been thereby taken to refute itself. Similarly, despite the strength of pre-theoretical intuitions that exact physical replicas like Davidson's Swampman have representational mental states, people have routinely denied that they have any intentional/representational states. I want to focus on a particular brand of causal theory that is not historical, the so-called pure informational or nomic covariance theories, and examine how they propose to handle twin cases and replicas like Swampman. In particular, I will take up Fodor.
Information is the fuel of cognition. At its most basic level, information is a matter of structures interacting under laws. The notion of information thus reflects the (relational) fact that a structure is created by the impact of another structure. The impacted structure is an encoding, in some concrete form, of the interaction with the impacting structure. Information is, essentially, the structural trace in some system of an interaction with another system; it is also, as a consequence, the structural fuel which drives the impacted system's subsequent processes and behavior. Information takes various forms because the world has many levels of compositional and functional complexity, under different constraints. The key constraints that matter in the understanding of information are natural patterns of organization, or types, and systematic correlations among types, or laws. These level-sensitive constraints, in the form of types and laws, shape the very form in which information is tokened in some structure, that is, the very form in which it is encoded. As a result, the information-producing interactions bring about different sorts of structures, with various sorts of causal effects and functions, whence the many ways in which information is coded and utilized.
What is it that one thinks or believes when one thinks or believes something? A mental formula? A sentence in some natural language? Its truth conditions? Or perhaps an abstract proposition? The current story of content is fairly ecumenical. It says that a number of aspects, some mental, others semantic, go into our understanding of content. Yet the current story is incomplete. It leaves out a very important aspect of content, one which I call incremental information. It is information in a specific format, information as a limited or local increment, structured by a number of underlying parameters. It is in the form of such increments that information drives cognition and behavior. This is why, perhaps of all aspects of content, it is incremental information which matters most when we want to understand cognitive attitudes and performances. This in turn must have an impact on our philosophical notions of content, propositional attitudes, inference, justification and knowledge.
In this paper, I argue that informational semantics, the most well-known and worked-out naturalistic account of intentional content, conflicts with a fundamental psychological principle about the conditions of belief-formation. Since this principle is an important premise in the argument for informational semantics, the upshot is that the view is self-contradictory; indeed, it turns out to be guilty of a sophisticated version of the fallacy famously committed by Euthyphro in the eponymous Platonic dialogue. Criticisms of naturalistic accounts of content typically proceed piecemeal by narrowly constructed counterexamples, but I argue that the current result is more robust. It affects a broad family of accounts, and provokes a wider doubt about the possibility of successful execution of the naturalistic project.
To commit Euthyphro’s fallacy is to endorse a pair of incompatible explanations, one constitutive and the other causal. Asked to explain the nature of piety, Euthyphro hazards that being pious consists in being an object of the gods’ love. But asked what causes the gods to love what they do, he holds with the commonsensical thought that the gods love pious people because they are pious. As Socrates points out (and for reasons we shall shortly rehearse), Euthyphro cannot have it both ways. To hold that one’s god-belovedness is constitutive of one’s status as a pious person is to rule out its being one’s piety that prompts the gods’ affection. More generally, we commit the fallacy when we hold of two properties f and g both of the following: possession of f constitutes possession of g, and possession of g causes possession of f.
Philosophers have worried that research on animal mind-reading faces a “logical problem”: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence for those states (e.g. line-of-gaze). The most impressive attempt to confront this problem has been mounted recently by Robert Lurz (2009, 2011). However, Lurz's approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not appear to agree on criteria for representation. As such, future debate on this question should either abandon the representational idiom or confront differences in underlying semantics.
Do psychologists and computer/cognitive scientists mean the same thing by the term `information'? In this essay, I answer this question by comparing information as understood by Gibsonian, ecological psychologists with information as understood in Barwise and Perry's situation semantics. I argue that, with suitable massaging, these views of information can be brought into line. I end by discussing some issues in (the philosophy of) cognitive science and artificial intelligence.
We offer a novel theory of information that differs from traditional accounts in two respects: (i) it explains information in terms of counterfactuals rather than conditional probabilities, and (ii) it does not make essential reference to doxastic states of subjects, and consequently allows for the sort of objective, reductive explanations of various notions in epistemology and philosophy of mind that many have wanted from an account of information.
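To make the contrast vivid, the two approaches can be rendered schematically as follows; the formulation and the placeholder symbols (x, F, y, G) are illustrative rather than the authors' own notation, and the counterfactual clause is one natural reading of the proposal.

\[
\textbf{Probabilistic:}\quad x\text{'s being } F \text{ carries the information that } y \text{ is } G
\iff P(Gy \mid Fx) = 1 \ \text{and}\ P(Gy) < 1.
\]
\[
\textbf{Counterfactual:}\quad x\text{'s being } F \text{ carries the information that } y \text{ is } G
\iff (\neg Gy \mathrel{\Box\!\!\rightarrow} \neg Fx) \text{ is non-vacuously true,}
\]
that is, had y not been G, x would not have been F.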
Mental states differ from most other entities in the world in having semantic or intentional properties: they have meanings, they are about other things, they have satisfaction- or truth-conditions, they have representational content. Mental states are not the only entities that have intentional properties - so do linguistic expressions, some paintings, and so on; but many follow [Grice, 1957] in supposing that we could understand the intentional properties of these other entities as derived from the intentional properties of mental states (viz., the mental states of their producers). Of course, accepting this supposition leaves us with a puzzle about how the non-derivative bearers of intentional properties (mental states) could have these properties. In particular, intentional properties seem to some to be especially difficult to reconcile with a robust commitment to ontological naturalism - the view that the natural properties, events, and individuals are the only properties, events, and individuals that exist. Fodor puts this intuition nicely in this oft-quoted passage:
I suppose that sooner or later the physicists will complete the catalogue they've been compiling of the ultimate and irreducible properties of things. When they do, the likes of _spin_, _charm_, and _charge_ will perhaps appear upon their list. But _aboutness_ surely won't; intentionality simply doesn't go that deep.... If aboutness is real, it must be really something else ([Fodor, 1987], 97).
Some philosophers have reacted to this clash by giving up one of the two views generating the tension. For example, [Churchland, 1981] opts for intentional irrealism in order to save ontological naturalism, while ...
In Book II of the _Essay_, at the beginning of his discussion of language in Chapter II ("Of the Signification of Words"), John Locke writes that we humans have a variety of thoughts which might profit others, but that unfortunately these thoughts lie invisible and hidden from others. And so we use language to communicate these thoughts. As a result, "words, in their primary or immediate signification, stand for nothing but _the ideas in the mind of him that uses them_."
The concept of “information” is virtually ubiquitous in contemporary cognitive science. It is claimed to be “processed” (in cognitivist theories of perception and comprehension), “stored” (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (“semantic” information) based upon the Shannon-Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a “signal” bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of “signal”, raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever “information” is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes.
This paper is about two kinds of mental content and how they are related. We are going to call them representation and indication. We will begin with a rough characterization of each. The differences, and why they matter, will, hopefully, become clearer as the paper proceeds.
In this paper I look at Fred Dretske’s account of information and knowledge as developed in Knowledge and the Flow of Information. In particular, I translate Dretske’s probabilistic definition of information to a modal logical framework and subsequently use this to explicate the conception of information and its flow which is central to his account, including the notions of channel conditions and relevant alternatives. Some key products of this task are an analysis of the issue of information closure and an investigation into some of the logical properties of Dretske’s account of information flow.
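For orientation, Dretske's probabilistic definition from Knowledge and the Flow of Information, together with one schematic way a modal translation of it can be put, is sketched below; the modal clause is an illustrative paraphrase rather than the paper's own formalism.

\[
\textbf{Dretske:}\quad r \text{ carries the information that } s \text{ is } F
\iff P(s \text{ is } F \mid r, k) = 1 \ \text{and}\ P(s \text{ is } F \mid k) < 1,
\]
where $k$ is what the receiver already knows about the possibilities at the source;
\[
\textbf{Modal paraphrase:}\quad r \text{ carries the information that } \varphi
\iff \varphi \text{ holds in every world compatible with the channel conditions in which } r \text{ obtains.}
\]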
This collection of essays by eminent philosopher Fred Dretske brings together work on the theory of knowledge and philosophy of mind spanning thirty years. The two areas combine to lay the groundwork for a naturalistic philosophy of mind. The fifteen essays focus on perception, knowledge, and consciousness. Together, they show the interconnectedness of Dretske's work in epistemology and his more recent ideas on philosophy of mind, shedding light on the links which can be made between the two. The first section of the book argues that knowledge consists of beliefs with the right objective connection to facts; two essays discuss this conception of knowledge's implications for naturalism. The next section articulates a view of perception, attempting to distinguish conceptual states from phenomenal states. A naturalized philosophy of mind, and thus a naturalized epistemology, is articulated in the third section. This collection will be a valuable resource for a wide range of philosophers and their students, and will also be of interest to cognitive scientists, psychologists, and philosophers of biology.
This book presents an attempt to develop a theory of knowledge and a philosophy of mind using ideas derived from the mathematical theory of communication developed by Claude Shannon. Information is seen as an objective commodity defined by the dependency relations between distinct events. Knowledge is then analyzed as information-caused belief. Perception is the delivery of information in analog form (experience) for conceptual utilization by cognitive mechanisms. The final chapters attempt to develop a theory of meaning (or belief content) by viewing meaning as a certain kind of information-carrying role.
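As background for the appeal to Shannon, the standard communication-theoretic quantities the book starts from are the surprisal of a particular event and the average information (entropy) generated at a source; the notation below is the standard one rather than Dretske's own.

\[
I(s_i) = \log_2 \frac{1}{p(s_i)} = -\log_2 p(s_i), \qquad
H(S) = \sum_i p(s_i)\, I(s_i) = -\sum_i p(s_i)\log_2 p(s_i).
\]

Dretske's move is to go beyond these averaged quantities to the informational content a particular signal carries about a particular state of affairs, which is what the dependency relations between events are meant to capture.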
This paper defends a reference-based approach to concept individuation against the objection that such an approach is unable to make sense of concepts that fail to refer. The main line of thought pursued involves clarifying how the referentialist should construe the relationship between a concept's (referential) content and its role in mental processes. While the central goal of the paper is to defend a view aptly titled Concept Referentialism, broader morals are drawn regarding reference-based approaches in general. The paper closes by calling for a shift in the current debate between referentialists and their opponents.
Questions concerning the nature of representation and what representations are about have been a staple of Western philosophy since Aristotle. Recently, these same questions have begun to concern neuroscientists, who have developed new techniques and theories for understanding how the locus of neurobiological representation, the brain, operates. My dissertation draws on philosophy and neuroscience to develop a novel theory of representational content.
There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of Semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates some interesting areas of application of the revised definition.
The renowned philosopher Jerry Fodor, a leading figure in the study of the mind for more than twenty years, presents a strikingly original theory on the basic constituents of thought. He suggests that the heart of cognitive science is its theory of concepts, and that cognitive scientists have gone badly wrong in many areas because their assumptions about concepts have been mistaken. Fodor argues compellingly for an atomistic theory of concepts, deals out witty and pugnacious demolitions of rival theories, and suggests that future work on human cognition should build upon new foundations. This lively, conversational, and superbly accessible book is the first volume in the Oxford Cognitive Science Series, where the best original work in this field will be presented to a broad readership. Concepts will fascinate anyone interested in contemporary work on mind and language. Cognitive science will never be the same again.
In this paper I apply an old problem of Quine's (the inscrutability of reference in translation) to a new style of theory about mental content (causal/nomological/informational accounts of meaning) and conclude that no "naturalization" of content of the sort currently popular can solve Quine's "gavagai" enigma. I show how failure to solve the problem leads to absurd conclusions not about one's own mental life, but about the non-mental world. I discuss various ways of attempting to remedy the accounts so as to avoid the problem and explain why each attempt at solving the problem would take the information theorists further from their self-assigned task of "naturalizing" semantics.
In this paper I discuss Fred Dretske's account of knowledge critically, and try to bring out how his account of informational content leads to cases of extreme epistemic good luck in his treatment of knowledge. My main interest, however, is to establish that the cases of epistemic luck arise because Dretske's account of knowledge in a fundamental way fails to take into account the role our actual recognitional capacities and powers of discrimination play in perceptually based knowledge. This result is, I believe, new. The paper has three sections. In Section 1 I give a short exposition of Dretske's theory, and make some necessary qualifications about how it is to be understood. In Section 2 I discuss in greater detail how the theory actually works, and provide some examples I think are very troublesome for Dretske. In Section 3 I argue that these cases establish my main claim. I also show that there are cases of epistemic bad luck due to Dretske's account of how information causes belief.
The goal of philosophy of information is to understand what information is, how it operates, and how to put it to work. But unlike “information” in the technical sense of information theory, what we are interested in is meaningful information. To understand the nature and dynamics of information in this sense we have to understand meaning. What we offer here are simple computational models that show emergence of meaning and information transfer in randomized arrays of neural nets. These we take to be formal instantiations of a tradition of theories of meaning as use. What they offer, we propose, is a glimpse into the origin and dynamics of at least simple forms of meaning and information transfer as properties inherent in behavioral coordination across a community.
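As a rough illustration of how behavioral coordination alone can generate information transfer, here is a minimal Lewis-style signaling game with simple reinforcement learning; it is a stand-in sketch for the kind of result described, not the authors' neural-net models, and all names and parameters in it are ours.

```python
import random
from collections import defaultdict

# Minimal Lewis signaling game: a sender observes one of N states and emits one
# of N signals; a receiver maps the signal to an act. Both are rewarded only
# when the act matches the state (urn-style Roth-Erev reinforcement).

N = 3            # number of states, signals, and acts
ROUNDS = 20000
random.seed(0)

sender = defaultdict(lambda: [1.0] * N)    # state  -> signal propensities
receiver = defaultdict(lambda: [1.0] * N)  # signal -> act propensities

def draw(weights):
    return random.choices(range(N), weights=weights)[0]

recent_successes = 0
for t in range(ROUNDS):
    state = random.randrange(N)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                       # coordination succeeded
        sender[state][signal] += 1.0       # reinforce the dispositions just used
        receiver[signal][act] += 1.0
        if t >= ROUNDS - 1000:
            recent_successes += 1

print("success rate over final 1000 rounds:", recent_successes / 1000)
for s in range(N):
    best = max(range(N), key=lambda m: sender[s][m])
    print(f"state {s} -> most reinforced signal {best}")
```

In many runs the agents settle into a one-to-one mapping from states to signals to acts, at which point the signals carry information about the states purely in virtue of how the agents have come to use them, which is the sense of meaning as use the abstract gestures at.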
In his Explaining Behavior, Fred Dretske uses a reliabilist theory of representation to try to vindicate the use of intentional explanation for behaviour against latter-day eliminativism. Although Dretske's indicator semantics turns on the notion of function, he himself never explicitly defines what function means. Dretske's reticence in discussing function may ultimately be an error, for, as I argue, his implicit understanding of what a function amounts to does not fit with data from operant conditioning. Still, this need not be a deep flaw in Dretske, and I offer one way in which we may patch up the notion of function via the changes known to occur with learning in the brain. Ultimately, I conclude that the only facts needed to explain behaviour are (1) the conditions in the world that are paired with neuronal circuit activation (as picked out by goals in some circumstances); and (2) what motor output that condition triggers.