A classic source for understanding the connections between information theory and physics, this text was written by one of the giants of 20th-century physics and is appropriate for upper-level undergraduates and graduate students. Topics include the principles of coding, coding problems and solutions, the analysis of signals, a summary of thermodynamics, thermal agitation and Brownian motion, and thermal noise in an electric circuit. A discussion of the negentropy principle of information introduces the author's renowned examination of Maxwell's demon. Concluding chapters explore the associations between information theory, the uncertainty principle, and physical limits of observation, in addition to problems related to computing, organizing information, and inevitable errors. 1962 ed. 81 figures. 14 tables.
Communication is an important feature of the living world that mainstream biology fails to deal with adequately. Two main disciplines can be contemplated to fill this gap: semiotics and information theory. Semiotics is a philosophical discipline mainly concerned with meaning; its application to life has already given rise to biosemiotics. Information theory is a mathematical discipline coming from engineering which has literal communication as its purpose. Biosemiotics and information theory are thus concerned with distinct and complementary possible meanings of the word ‘communication’. Since literal communication needs to be secured before semantics can be communicated, information theory is a necessary prerequisite to biosemiotics. Moreover, heredity is a purely literal communication process of capital importance, fully relevant to literal communication, hence to information theory. A short introduction to discrete information theory is proposed, centred on the concept of redundancy and its use in making sequences resilient to errors. Information theory has been an extremely active and fruitful domain of research and the motor of the tremendous progress of communication engineering in recent decades. Its possible connections with semantics and linguistics are briefly considered. Its applications to biology are suggested, especially as regards error-correcting codes, which are mandatory for securing the conservation of genomes. Biology needs information theory, so biologists and communication engineers should collaborate closely.
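To make the redundancy idea concrete, here is a minimal sketch (mine, not a construction from the paper) of the simplest error-correcting device, a triple-repetition code: each symbol is transmitted three times and decoded by majority vote, so any single corrupted symbol per triple is recovered.

```python
# Minimal illustration of redundancy-based error correction:
# a (3,1) repetition code. Each symbol is sent three times and
# decoded by majority vote, so one error per triple is corrected.
from collections import Counter

def encode(seq: str) -> str:
    return "".join(ch * 3 for ch in seq)

def decode(received: str) -> str:
    triples = [received[i:i + 3] for i in range(0, len(received), 3)]
    return "".join(Counter(t).most_common(1)[0][0] for t in triples)

message = "ACGT"
codeword = encode(message)           # 'AAACCCGGGTTT'
corrupted = "AAACACGGGTTT"           # one substitution in the 2nd triple
assert decode(corrupted) == message  # the single error is corrected
```

Practical genomic error correction would need far more efficient codes, but the principle is the one the abstract names: added redundancy buys resilience to errors.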
This paper will examine the implications of an extended “field theory of information,” suggested by Wolfhart Pannenberg, specifically for the Christian understanding of creation. The paper argues that the Holy Spirit created the world as field, a concept from physics, and that creation is directed by the logos utilizing information. Taking into account more recent developments in information theory, the essay further suggests that present creation has a causal impact upon the information utilized in creation. In order to adequately address Pannenberg's hypothesis that the logos utilizes information at creation, the essay also includes an introductory examination of Pannenberg's Christology, which shifts from a strict “from below” Christology to a more open “third way” of doing Christology beyond “above” and “below.” The essay concludes with a brief section relating the implications of an extended “field theory of information” to creative inspiration, as well as parallels with human inspiration.
The distinction between explanation and prediction has received much attention in recent literature, but the equally important distinction between explanation and description (or between prediction and description) remains blurred. This latter distinction is particularly important in the social sciences, where probabilistic models (or theories) often play dual roles as explanatory and descriptive devices. The distinction between explanation (or prediction) and description is explicated in the present paper in terms of information theory. The explanatory (or predictive) power of a probabilistic model is identified with information taken from (or transmitted by) the environment (e.g., the independent, experimentally manipulated variables), while the descriptive power of a model reflects additional information taken from (or transmitted by) the data. Although information is usually transmitted by the data in the process of estimating parameters, it turns out that the number of free parameters is not a reliable index of transmitted information. Thus, the common practice of treating parameters as degrees of freedom in testing probabilistic models is questionable. Finally, this information-theoretic analysis of explanation, prediction, and description suggests ways of resolving some recent controversies surrounding the pragmatic aspects of explanation and the so-called structural identity thesis.
Information theory offers a measure of "mutual information" which provides an appropriate measure of tracking efficiency for the naturalistic epistemologist. The statistical entropy on which it is based is arguably the best way of characterizing the uncertainty associated with the behavior of a system, and it is ontologically neutral. Though not appropriate for the naturalization of meaning, mutual information can serve as a measure of epistemic success independent of semantic maps and payoff structures. While not containing payoffs as terms, mutual information places both upper and lower bounds on payoffs. This constitutes a non-trivial relationship to utility.
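For concreteness, here is a minimal Python sketch of the measure itself, computed from a joint state/indicator distribution; the toy distributions are hypothetical, not from the paper.

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint distribution
    given as {(x, y): p}. I(X;Y) = sum p(x,y) log2[p(x,y)/(p(x)p(y))]."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A perfectly tracking indicator: state and reading always agree.
perfect = {("up", "reads-up"): 0.5, ("down", "reads-down"): 0.5}
print(mutual_information(perfect))  # 1.0 bit

# A useless indicator: reading independent of state.
noise = {(s, r): 0.25 for s in ("up", "down") for r in ("reads-up", "reads-down")}
print(mutual_information(noise))    # 0.0 bits
```

Note that the measure never mentions payoffs or semantic content; it only scores how much the indicator's statistics track the state's, which is the point the abstract presses.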
My aim in this paper is a modest one. I do not have any particular thesis to advance about the nature of entanglement, nor can I claim novelty for any of the material I shall discuss. My aim is simply to raise some questions about entanglement that spring naturally from certain developments in quantum information theory and are, I believe, worthy of serious consideration by philosophers of science. The main topics I discuss are different manifestations of quantum nonlocality, entanglement-assisted communication, and entanglement thermodynamics.
This thesis is a contribution to the debate on the implications of quantum information theory for the foundations of quantum mechanics. In Part I, the logical and conceptual status of various notions of information is assessed. It is emphasized that the everyday notion of information is to be firmly distinguished from the technical notions arising in information theory; however, it is maintained that in both settings 'information' functions as an abstract noun, hence does not refer to a particular or substance (the worth of this point is illustrated in application to quantum teleportation). The claim that 'Information is Physical' is assessed and argued to face a destructive dilemma. Accordingly, the slogan may not be understood as an ontological claim, but at best as a methodological one. The reflections of Brukner and Zeilinger (2001) and Deutsch and Hayden (2000) on the nature of information in quantum mechanics are critically assessed, and some results are presented on the characterization of entanglement in the Deutsch-Hayden formalism. Some philosophical aspects of quantum computation are discussed and general morals drawn concerning the nature of quantum information theory. In Part II, following some preliminary remarks, two particular information-theoretic approaches to the foundations of quantum mechanics are assessed in detail. It is argued that Zeilinger's (1999) Foundational Principle is unsuccessful as a foundational principle for quantum mechanics. The information-theoretic characterization theorem of Clifton, Bub and Halvorson (2003) is assessed more favourably, but the generality of the approach is questioned and it is argued that the implications of the theorem for the traditional foundational problems in quantum mechanics remain obscure.
The Shannon information function (H) has been extensively used in ecology as a statistic of species diversity. Yet the use of the Shannon diversity index has also been criticized, mainly because of its ambiguous ecological interpretation and because of its relatively great sensitivity to the relative abundances of species in the community. In my opinion, the major shortcoming of the traditional perspective (on the possible relation of species diversity to information theory) is that species need an external receiver (the scientist or ecologist) to exist and transmit information. Because organisms are self-catalyzed replicating structures that can transmit genotypic information to offspring, it should be evident that any single species has two possible states or alternatives: to be or not to be. In other words, species have no need for an external receiver, since they are their own receivers. Therefore, the amount of biological information (at the species scale) in a community with only one species would be measured in species, and not in bits as in the traditional perspective. Moreover, species diversity appears to be a monotonically increasing function of 2^H (or S) when all species are equally probable (S being species richness), and not a function of H as in the traditional perspective. To avoid the noted shortcoming, we could use 2^H (instead of H) for calculating species diversity and species evenness (= 2^H/S). However, owing to the relatively great sensitivity of H to the relative abundances of species in the community, the value of species dominance (= 1 − 2^H/S) is unreasonably high when differences between dominant and subordinate species are considerable, thereby lowering the value of species evenness and diversity. This unsatisfactory behaviour is even more evident for the Simpson index and related algorithms. I propose the use of other statistics for a better analysis of community structure, their relationships being: species evenness + species dominance = 1; species diversity × species uniformity = 1; and species diversity = species richness × species evenness.
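Under the reading adopted above, that 2H in the source denotes 2^H (the effective number of species), the proposed statistics can be computed directly. The following sketch, with made-up abundance data, is mine rather than the author's.

```python
import math

def shannon_H(abundances):
    """Shannon entropy H (bits) from raw species abundances."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log2(p) for p in ps)

def diversity_stats(abundances):
    S = len(abundances)        # species richness
    H = shannon_H(abundances)
    D = 2 ** H                 # diversity as effective number of species, 2^H
    evenness = D / S           # 2^H / S
    dominance = 1 - evenness   # 1 - 2^H / S
    return D, evenness, dominance

# Four equally abundant species: H = 2 bits, 2^H = 4 = S,
# so evenness is 1 and dominance is 0.
print(diversity_stats([25, 25, 25, 25]))
# One strongly dominant species lowers 2^H: evenness drops, dominance rises.
print(diversity_stats([85, 5, 5, 5]))
```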
Categorical logic has shown that modern logic is essentially the logic of subsets (or “subobjects”). In “subset logic,” predicates are modeled as subsets of a universe and a predicate applies to an individual if the individual is in the subset. Partitions are dual to subsets, so there is a dual logic of partitions where a “distinction” [an ordered pair of distinct elements (u, u′) from the universe U] is dual to an “element”. A predicate modeled by a partition π on U would apply to a distinction if the pair of elements was distinguished by the partition π, i.e., if u and u′ were in different blocks of π. Subset logic leads to finite probability theory by taking the (Laplacian) probability as the normalized size of each subset-event of a finite universe. The analogous step in the logic of partitions is to assign to a partition the number of distinctions made by the partition, normalized by the total number |U|² of ordered pairs from the finite universe. That yields a notion of “logical entropy” for partitions and a “logical information theory.” The logical theory directly counts the (normalized) number of distinctions in a partition, while Shannon’s theory gives the average number of binary partitions needed to make those same distinctions. Thus the logical theory is seen as providing a conceptual underpinning for Shannon’s theory based on the logical notion of “distinctions.”
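A minimal sketch (mine, not the author's code) of the logical entropy just defined: count the ordered pairs distinguished by a partition and normalize by |U|²; equivalently, 1 − Σ(|B|/|U|)² over the blocks B.

```python
from itertools import product

def logical_entropy(partition):
    """Logical entropy of a partition given as a list of blocks:
    the number of distinctions (ordered pairs of elements lying in
    different blocks) normalized by |U|^2."""
    U = [u for block in partition for u in block]
    n = len(U)
    block_of = {u: i for i, block in enumerate(partition) for u in block}
    distinctions = sum(1 for u, v in product(U, U) if block_of[u] != block_of[v])
    return distinctions / n ** 2

pi = [{"a", "b"}, {"c"}, {"d"}]
print(logical_entropy(pi))                     # 0.625
print(1 - sum((len(B) / 4) ** 2 for B in pi))  # same value, block-size formula
```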
Recent suggestions to supply quantum mechanics (QM) with realistic foundations by reformulating it in light of quantum information theory (QIT) are examined and found wanting, by pointing to a basic conceptual problem that QIT itself ignores, namely the measurement problem. Since one cannot ignore the measurement problem and at the same time pretend to be a realist, the suggestions to reformulate QM in light of QIT are, as they stand, nothing but instrumentalism in disguise.
Review of Giovanni Sommaruga (ed.): Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information, by Sebastian Sequoiah-Grayson (Department of Theoretical Philosophy, Faculty of Philosophy, University of Groningen, The Netherlands). Minds and Machines 22(1): 35–40. DOI 10.1007/s11023-011-9250-2.
Quantum information theory has given rise to a renewed interest in, and a new perspective on, the old issue of understanding the ways in which quantum mechanics differs from classical mechanics. The task of distinguishing between quantum and classical theory is facilitated by neutral frameworks that embrace both classical and quantum theory. In this paper, I discuss two approaches to this endeavour, the algebraic approach and the convex set approach, with an eye to the strengths of each and the relations between the two. I end with a discussion of one particular model, the toy theory devised by Rob Spekkens, which, with minor modifications, fits neatly within the convex sets framework, and which displays in an elegant manner some of the similarities and differences between classical and quantum theories. The conclusion suggested by this investigation is that Schrödinger was right to find the essential difference between classical and quantum theory in their handling of composite systems, though Schrödinger's contention that it is entanglement that is the distinctive feature of quantum mechanics needs to be modified.
Review of Giovanni Sommaruga (ed.): Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information, by Giuseppe Primiero (Centre for Logic and Philosophy of Science, University of Ghent, Belgium). Minds and Machines 21(1): 119–122. DOI 10.1007/s11023-011-9228-0.
The conclusions of probabilistic algorithms such as the Solovay–Strassen and Miller–Rabin primality tests are only probably correct. On the other hand, algorithmic information theory provides a precise mathematical definition of the notion of a random or patternless sequence. In this paper we shall describe conditions under which, if the sequence of coin tosses in the Solovay–Strassen and Miller–Rabin algorithms is replaced by a sequence of heads and tails that is of maximal algorithmic information content, i.e., has maximal algorithmic randomness, then one obtains an error-free test for primality. These results are only of theoretical interest, since it is a manifestation of the Gödel incompleteness phenomenon that it is impossible to "certify" a sequence to be random by means of a proof, even though most sequences have this property. Thus by using certified random sequences one can in principle, but not in practice, convert probabilistic tests for primality into deterministic ones.
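To make the setup concrete, here is a standard Miller–Rabin test written so that its "coin tosses" are supplied as an explicit bit string. This sketch is mine, not Chaitin's construction: with ordinary pseudo-random bits, as here, the composite verdict is certain but the prime verdict remains only probable; certified maximally random bits are what would make the test error-free.

```python
# Miller-Rabin with the "coin tosses" supplied as an explicit bit string,
# illustrating the replacement of random bits by a fixed sequence.
def miller_rabin(n, witness_bits, rounds=4):
    if n < 4:
        return n in (2, 3)
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    bits = iter(witness_bits)
    k = n.bit_length()
    for _ in range(rounds):
        # draw k bits to form a candidate witness a in [2, n-2]
        a = int("".join(next(bits) for _ in range(k)), 2) % (n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness: n is certainly composite
    return True  # no witness found: n is (only) probably prime

import random
coin_tosses = "".join(random.choice("01") for _ in range(200))
print(miller_rabin(561, coin_tosses))  # 561 is a Carmichael number: composite
print(miller_rabin(569, coin_tosses))  # 569 is prime
```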
Algorithmic information theory, or the theory of Kolmogorov complexity, has become an extraordinarily popular theory, and this is no doubt due, in some part, to the fame of Chaitin's incompleteness results arising from this field. Actually, there are two rather different results by Chaitin: the earlier one concerns the finite limit of the provability of complexity (see Chaitin, 1974a, 1974b, 1975a); the later is related to random reals and the halting probability (see Chaitin, 1986, 1987a, 1987b, 1988, 1989).
This paper argues that Information Theoretic Redundancy (ITR) is fundamentally a composite concept that has been continually misinterpreted since the very inception of Information Theory. We view ITR as compounded of true redundancy and partial redundancy. This demarcation of true redundancy illustrates a limiting-case phenomenon: the underlying metric (number of alternatives) differs only by degree, but the properties of this concept differ in kind from those of partial redundancy. Several other studies are instanced which also imply the composite nature of ITR. We thus provide broadly based but particular support for earlier generalized suggestions that it is the underlying calculus of Information Theory, rather than the ill-named concepts themselves, that provides something of a unitary language for the description of phenomena.
We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy, Kolmogorov complexity, Shannon mutual information and Kolmogorov ("algorithmic") mutual information. We explain how universal coding may be viewed as a middle ground between the two theories. We consider Shannon's rate distortion theory, which quantifies useful (in a certain sense) information. We use the communication of information as our guiding motif, and we explain how it relates to sequential question-answer sessions.
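The contrast can be illustrated with a toy experiment (mine, not from the paper): a perfectly patterned bit string and a pseudo-random one have the same empirical Shannon entropy per symbol, yet differ enormously in compressed length, a crude computable stand-in for the uncomputable Kolmogorov complexity.

```python
import math, zlib, random
from collections import Counter

def shannon_bits_per_symbol(s: str) -> float:
    """Empirical Shannon entropy of the symbol distribution, in bits/symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def compressed_size(s: str) -> int:
    """Compressed length in bytes: a rough proxy for description length."""
    return len(zlib.compress(s.encode()))

patterned = "01" * 5000  # statistically balanced, but highly regular
print(shannon_bits_per_symbol(patterned))  # ~1.0 bit/symbol
print(compressed_size(patterned))          # tiny: the pattern compresses away

random.seed(0)
rand = "".join(random.choice("01") for _ in range(10000))
print(shannon_bits_per_symbol(rand))       # also ~1.0 bit/symbol
print(compressed_size(rand))               # much larger: no short description
```

Shannon's measure sees only the symbol statistics, which are identical here; Kolmogorov complexity is about the length of the shortest description of the individual string, which is where the two strings come apart.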
Hi everybody! It's a great pleasure for me to be back here at the new, improved Santa Fe Institute in this spectacular location. I guess this is my fourth visit and it's always very stimulating, so I'm always very happy to visit you guys. I'd like to tell you what I've been up to lately. First of all, let me say what algorithmic information theory is good for, before telling you about the new version of it I've got.
The notion of information has figured prominently in much modern evolutionary theorizing. But while theorists usually concede the importance of distinguishing between our ordinary use of this notion and its special acceptation in information theory, some biological theorizing requires "information" to serve a double duty. Lorenz's ethological theorizing is a case in point, and this paper challenges its conceptual underpinnings. Special attention is paid to Lorenz's contention that adaptation to an environment is akin to representation, and it is urged that many purportedly (phylogenetically) adapted behaviors might well instead be better interpreted as non-information-laden (ontogenetically) adaptive behaviors.
Function refers to a broad family of concepts of varying abstractness and range of application, from a many-one mathematical relation of great generality to, for example, highly specialized roles of designed elements in complex machines, such as degaussing in a television set, or contributory processes to control mechanisms in complex metabolic pathways, such as the inhibitory function of the appropriate part of the lac-operon on the production of lactase through its action on the genome in the absence of lactose. We would like a language broad enough, neutral enough, but yet powerful enough to cover all such cases, and at the same time to give a framework for explanation both of the family resemblances and of the differences. General logic and mathematics are too abstract, but more importantly too broad, whereas other discourses of function, such as the biological and teleological contexts, are too narrow. Information is especially suited since it is mathematically grounded, but also has a well-known physical interpretation through the Schrödinger/Brillouin Negentropy Principle of Information, and an engineering or design interpretation through Shannon's communication theory. My main focus will be on the functions of autonomous anticipatory systems, but I will try to demonstrate the connections between this notion of function and the others, especially to dynamical systems with a physical interpretation on the one side and intentional systems on the other. The former are based in concepts like force, energy and work, while the latter involve notions like representation, control and purpose, traditionally, at least in Modern times, on opposite sides of the Cartesian divide. In principle, information can be reduced to energy, but it has the advantage of being more flexible and easier to apply to higher-level phenomena.
A new logic of partitions has been developed that is dual to ordinary logic when the latter is interpreted as the logic of subsets of a fixed universe rather than the logic of propositions. For a finite universe, the logic of subsets gave rise to finite probability theory by assigning to each subset its relative size as a probability. The analogous construction for the dual logic of partitions gives rise to a notion of logical entropy that is precisely related to Claude Shannon's entropy. In this manner, the new logic of partitions provides a logico-conceptual foundation for information-theoretic entropy or information content.
One way to explain how consciousness could arise from non-conscious components is by positing that consciousness is a universal primitive. For example, the double-aspect theory of information holds that information has a phenomenal aspect. How then do you get from phenomenal information to human consciousness? This paper proposes that an entity is conscious to the extent it amplifies information, first by trapping and integrating it through closure, and second by maintaining dynamics at the edge of chaos through simultaneous processes of divergence and convergence. The origin of life through autocatalytic closure, and the origin of an interconnected worldview through conceptual closure, induced phase transitions in the degree to which information, and thus consciousness, is locally amplified. Divergence and convergence of cognitive information may involve phenomena observed in light, e.g. focusing, interference, and resonance. By making information flow inward-biased, closure shields us from external consciousness; thus the paucity of consciousness may be an illusion.
I argue that so-called 'background knowledge' in confirmation theory has little, if anything, to do with 'knowledge' in the sense of mainstream epistemology. I argue that it is better construed as 'background information', which need not be believed in, justified, or true.
Fred Dretske's "Knowledge and the Flow of Information" is an extended attempt to develop a philosophically useful theory of information. Dretske adapts central ideas from Shannon and Weaver's mathematical theory of communication and applies them to some traditional problems in epistemology. In doing so, he succeeds in building for philosophers a much-needed bridge to important work in cognitive science. The pay-off for epistemologists is that Dretske promises a way out of a long-standing impasse -- the Gettier problem. He offers an alternative model of knowledge as information-based belief, which purports to avoid the problems justificatory accounts face. This essay looks closely at Dretske's theory. I argue that while the information-theoretic framework is attractive, it does not provide an adequate account of knowledge. And there seems to be no way of tightening the theory without introducing some version of a theory of justification -- the very notion Dretske's theory was designed to avoid.
The history of the relationship between Christian theology and the natural sciences has been conditioned by the initial decision of the masters of the "first scientific revolution" to disregard any necessary explanatory premiss to account for the constituting organization and the framing of naturally occurring entities. Not paying any attention to hierarchical control, they ended up disseminating a vision and understanding in which it was no longer possible for a theology of nature to send questions in the direction of the experimental sciences, as was done in the past between theology and many philosophically based thought-systems. Presenting the history of some hinge-periods in the development of the Western-world sciences, this book first sets out to consider the conceptual revolution which has, in the 20th century, related consciousness, physical laws and levels of organization, in order to show that a new chance then existed for theology. This discourse was invited to revise its language to open it up to the quest for meaning which we find on the periphery of the project of the experimental sciences. The century-old reflection on the foundations of probability had prepared the ground for the introduction of the concept of information, at first linked to an effort aimed at maximizing the efficiency of electromagnetic communications. Taking the full measure of the questions that information theory poses to the biological sciences, this work attempts to identify the areas of convergence setting the stage for general systems theory, while it also tries to identify the insufficiencies of this recent vision and to highlight the questions left unanswered. Re-reading some of the traditional proofs of God's existence from the order of the world, relying on some pioneering insights of Ludwig von Bertalanffy and Norbert Wiener, the author brings those proofs and insights into contact with the fascinating initial project of cybernetics and the elements of a "mythical" nature which, from its inception, it could never entirely eliminate. This book ends with a confrontation between the most conceptually extended regulation factors in the history of Western thought: it articulates the poetic utopia concerned with an immediate grasp of the world in its "deictic" character with the concurrent rational utopia, driving technology, aimed at domination over matter and energy.
It is argued that some elusive "entropic" characteristics of chemical bonds, e.g., bond multiplicities (orders), which connect the bonded atoms in molecules, can be probed using quantities and techniques of Information Theory (IT). This complementary perspective increases our insight into and understanding of the molecular electronic structure. The specific IT tools for detecting effects of chemical bonds and predicting their entropic multiplicities in molecules are summarized. Alternative information densities, including measures of the local entropy deficiency or its displacement relative to the system's atomic promolecule, and the nonadditive Fisher information in the atomic orbital resolution (called contragradience), are used to diagnose the bonding patterns in illustrative diatomic and polyatomic molecules. The elements of the orbital communication theory of the chemical bond are briefly summarized and illustrated for the simplest case of the two-orbital model. The information-cascade perspective also suggests a novel, indirect mechanism of the orbital interactions in molecular systems, through “bridges” (orbital intermediates), in addition to the familiar direct chemical bonds realized through “space”, as a result of the orbital constructive interference in the subspace of the occupied molecular orbitals. Some implications of these two sources of chemical bonds in propellanes, π-electron systems and polymers are examined. The current-density concept associated with the wave-function phase is introduced and the relevant phase-continuity equation is discussed. For the first time, the quantum generalizations of the classical measures of the information content, functionals of the probability distribution alone, are introduced to distinguish systems with the same electron density but differing in their current (phase) composition. The corresponding information/entropy sources are identified in the associated continuity equations.
Giulio Tononi (2008) has offered his integrated information theory of consciousness (IITC) as a “provisional manifesto.” I critically examine how the approach fares. I point out some (relatively) internal concerns with the theory and then more broadly philosophical ones; finally, I assess the prospects for IITC as a fundamental theory of consciousness. I argue that IITC's scientific promise does carry over to a significant extent to broader philosophical theorizing about qualia and consciousness, though not as directly as Tononi suggests, since the account is much more focused on the qualitative character of experience than on consciousness itself. I propose understanding it as an “integrated information theory of qualia” (IITQ), rather than of consciousness.
This book presents an attempt to develop a theory of knowledge and a philosophy of mind using ideas derived from the mathematical theory of communication developed by Claude Shannon. Information is seen as an objective commodity defined by the dependency relations between distinct events. Knowledge is then analyzed as information-caused belief. Perception is the delivery of information in analog form (experience) for conceptual utilization by cognitive mechanisms. The final chapters attempt to develop a theory of meaning (or belief content) by viewing meaning as a certain kind of information-carrying role.
Combining testimonial reports from independent and partially reliable information sources is an important problem of uncertain reasoning. Within the framework of Dempster-Shafer theory, we propose a general model of partially reliable sources which includes several previously known results as special cases. The paper reproduces these results, gives a number of new insights, and thereby contributes to a better understanding of this important application of reasoning with uncertain and incomplete information.
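For orientation, the standard combination mechanism in this framework is Dempster's rule; the sketch below, with hypothetical witness masses, illustrates that rule only, not the paper's more general model of partially reliable sources.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets: mass assigned to conflicting (disjoint)
    pairs is discarded and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two partially reliable witnesses reporting on suspects {A, B, C}.
m1 = {frozenset("A"): 0.6, frozenset("ABC"): 0.4}        # points to A
m2 = {frozenset({"A", "B"}): 0.5, frozenset("ABC"): 0.5}  # points to A or B
print(dempster_combine(m1, m2))
# {frozenset({'A'}): 0.6, frozenset({'A','B'}): 0.2, frozenset({'A','B','C'}): 0.2}
```

A witness's reliability is reflected in how much mass goes to the whole frame ("ABC") rather than to a specific report; combining reports concentrates mass on their common ground.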
The usual way to try to ground knowing, according to contemporary theory of knowledge, is: we know something if (1) it's true, (2) we believe it, and (3) we believe it for the "right" reasons. Floridi proposes a better way. His grounding is based partly on probability theory and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data exchange between a data-seeker who needs to know which button to press on a food dispenser and a data-knower who already knows the correct number. The success criterion, hence the grounding, is whether the seeker's probability of lunch is indeed increasing (hence uncertainty is decreasing) as a result of the interaction. Floridi also suggests that his philosophy of information casts some light on the problem of consciousness. I'm not so sure.
According to Fred Dretske's externalist theory of knowledge, a subject knows that p if and only if she believes that p and this belief is caused or causally sustained by the information that p. Another famous feature of Dretske's epistemology is his denial that knowledge is closed under known logical entailment. I argue that, given Dretske's construal of information, he is in fact committed to the view that both information and knowledge are closed under known entailment. This has far-reaching consequences. For if it is true that, as Dretske also believes, accepting closure leads to skepticism, he must either embrace skepticism or abandon his information theory of knowledge. The latter alternative would seem to be preferable. But taking this route would deprive one of the most powerfully developed externalist epistemologies of its foundation.
It has been argued that moral problems in relation to Information Technology (IT) require new theories of ethics. In recent years, an interesting new theory to address such concerns has been proposed, namely the theory of Information Ethics (IE). Despite the promise of IE, the theory has not enjoyed public discussion. The aim of this paper is to initiate such discussion by critically evaluating the theory of IE.
Since the middle of this century, the dominant prescriptive approach to decision theory has been a deductive viewpoint concerned with axioms of rational preference and their consequences. After summarizing important problems with the preference primitive, this paper argues for a constructive approach in which information is the foundation for decision-making. This approach poses comparability of uncertain acts as a question rather than an assumption. It is argued that, in general, neither preference nor subjective probability can be assumed given, and that these need to be generated by using the relevant information available to the decision-agent in a given situation. A specific constructive model is discussed and illustrated with a real example from this viewpoint.
This paper traces the application of information theory to philosophical problems of mind and meaning from the earliest days of the creation of the mathematical theory of communication. The use of information theory to understand purposive behavior, learning, pattern recognition, and more marked the beginning of the naturalization of mind and meaning. From the inception of information theory, Wiener, Turing, and others began trying to show how to make a mind from informational and computational materials. Over the last 50 years, many philosophers saw different aspects of the naturalization of the mind, though few saw at once all of the pieces of the puzzle that we now know. Starting with Norbert Wiener himself, philosophers and information theorists used concepts from information theory to understand cognition. This paper provides a window on the historical sequence of contributions made to the overall project of naturalizing the mind by philosophers from Shannon, Wiener, and MacKay, to Dennett, Sayre, Dretske, Fodor, and Perry, among others. At some time between 1928 and 1948, American engineers and mathematicians began to talk about 'Theory of Information' and 'Information Theory,' understanding by these terms approximately and vaguely a theory for which Hartley's 'amount of information' is a basic concept. I have been unable to find out when and by whom these names were first used. Hartley himself does not use them nor does he employ the term 'Theory of Transmission of Information,' from which the two other shorter terms presumably were derived. It seems that Norbert Wiener and Claude Shannon were using them in the mid-forties.
Intelligent design advocate William Dembski has introduced a measure of information called "complex specified information", or CSI. He claims that CSI is a reliable marker of design by intelligent agents. He puts forth a "Law of Conservation of Information" which states that chance and natural laws are incapable of generating CSI. In particular, CSI cannot be generated by evolutionary computation. Dembski asserts that CSI is present in intelligent causes and in the flagellum of Escherichia coli, and concludes that neither has a natural explanation. In this paper, we examine Dembski's claims, point out significant errors in his reasoning, and conclude that there is no reason to accept his assertions.
We offer a novel theory of information that differs from traditional accounts in two respects: (i) it explains information in terms of counterfactuals rather than conditional probabilities, and (ii) it does not make essential reference to doxastic states of subjects, and consequently allows for the sort of objective, reductive explanations of various notions in epistemology and philosophy of mind that many have wanted from an account of information.
Semantic information is usually supposed to satisfy the veridicality thesis: p qualifies as semantic information only if p is true. However, what it means for semantic information to be true is often left implicit, with correspondentist interpretations representing the most popular, default option. The article develops an alternative approach, namely a correctness theory of truth (CTT) for semantic information. This is meant as a contribution not only to the philosophy of information but also to the philosophical debate on the nature of truth. After the introduction, in Sect. 2, semantic information is shown to be translatable into propositional semantic information (i). In Sect. 3, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarisation is normalised in Sect. 4, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In Sects. 5 and 6, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates Q by verifying and validating it (in the computer science's sense of verification and validation); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science's sense of proxy) and (5) proximal access to m commutes with the distal access to s (in the category theory's sense of commutation); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science's technical sense of the term) m enables one to read/write (access) s. Sect. 7 provides some further clarifications about CTT in the light of semantic paradoxes. Sect. 8 draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users. In the course of the article all technical expressions from computer science are explained.
An introduction to the structure, contents and specifications (especially the Systematifier) of the Information Coding Classification, developed in the seventies and used in many ways by the author and a few others following its publication in 1982. Its theoretical basis is explained, consisting in (1) the Integrative Level Theory, which follows an evolutionary approach to ontical areas and integrates, on each level, the aspects contained in the sequence of the levels; (2) the distinction between categories of form and categories of being; (3) the application of a feature of Systems Theory (namely the element position plan); and (4) the inclusion of a concept theory distinguishing four kinds of relationships, originated by the kinds of characteristics (which are the elements of concepts to be derived from the statements on the properties of referents of concepts). Its special Subject Groups on each of its nine levels are outlined, and the combinatory facilities at certain positions of the Systematifier are shown. Further elaboration and use have been suggested, be it only as a switching language between the six existing universal classification systems at present in use internationally.
The article addresses the problem of how semantic information can be upgraded to knowledge. The introductory section explains the technical terminology and the relevant background. Section 2 argues that, for semantic information to be upgraded to knowledge, it is necessary and sufficient for it to be embedded in a network of questions and answers that correctly accounts for it. Section 3 shows that an information flow network of type A fulfils such a requirement, by warranting that the erotetic deficit, characterising the target semantic information t by default, is correctly satisfied by the information flow of correct answers provided by an informational source s. Section 4 illustrates some of the major advantages of such a Network Theory of Account (NTA) and clears the ground of a few potential difficulties. Section 5 clarifies why NTA and an informational analysis of knowledge, according to which knowledge is accounted semantic information, are not subject to Gettier-type counterexamples. A concluding section briefly summarises the results obtained.
A remarkable theorem by Clifton, Bub and Halvorson (2003) (CBH) characterizes quantum theory in terms of information-theoretic principles. According to Bub (2004, 2005) the philosophical significance of the theorem is that quantum theory should be regarded as a “principle” theory about (quantum) information rather than a “constructive” theory about the dynamics of quantum systems. Here we criticize Bub's principle approach, arguing that if the mathematical formalism of quantum mechanics remains intact then there is no escape route from solving the measurement problem by constructive theories. We further propose a (Wigner-type) thought experiment that, we argue, demonstrates that quantum mechanics on the information-theoretic approach is incomplete.
We show that three fundamental information-theoretic constraints -- the impossibility of superluminal information transfer between two physical systems by performing measurements on one of them, the impossibility of broadcasting the information contained in an unknown physical state, and the impossibility of unconditionally secure bit commitment -- suffice to entail that the observables and state space of a physical theory are quantum-mechanical. We demonstrate the converse derivation in part, and consider the implications of alternative answers to a remaining open question about nonlocality and bit commitment.
There are two basic approaches to the problem of induction: the empirical one, which deems that the possibility of induction depends on how the world was made (and how it works), and the logical one, which considers the formation (and function) of language. The first is closer to being useful for induction, while the second is more rigorous and clearer. The purpose of this paper is to create an empirical approach to induction that contains the same formal exactitude as the logical approach. This requires: (a) that the empirical conditions for the induction are enunciated, and (b) that the most important results already obtained from inductive logic are again demonstrated to be valid. Here we will be dealing only with induction by elimination, namely the analysis of the experimental confutation of a theory. The result will be a rule of refutation that takes into consideration all of the empirical aspects of the experiment and has each of the asymptotic properties which inductive logic has shown to be characteristic of induction.
The conclusions derived by Keynes in his Treatise on Probability (1921) concerning induction, analogical reasoning, expectations formation and decision making mirror and foreshadow the main conclusions of cognitive science and psychology. The problem of weight is studied within an economic context by examining the role it played in Keynes' applied philosophy work, The General Theory (1936). Keynes' approach is then reformulated as an optimal control approach to dealing with changes in information evaluation over time. Based on this analysis, the problem of inductive justification, from a societal perspective, is not "What can we rationally believe will occur in the economic future, given our past experiences?" but "Can we make the future so as to attain specific economic goals with practical certainty?" An answer requires that restrictions be placed on the methodological individualist approach and the acceptance of a restricted holistic approach.
Cohen and Meskin (2006) recently offered a counterfactual theory of information to replace the standard probabilistic theory of information. They claim that the counterfactual theory fares better than the standard account on three grounds: first, it provides a better framework for explaining information flow properties; second, it requires a less expensive ontology; and third, because it does not refer to doxastic states of the information-receiving organism, it provides an objective basis. In this paper, I show that none of these is really an advantage. Moreover, the counterfactual theory fails to satisfy one of the basic properties of information flow, namely the Conjunction principle. Thus, I conclude, there is no reason to give up the standard probabilistic theory for the counterfactual theory of information.
This paper outlines a quantitative theory of strongly semantic information (TSSI) based on truth-values rather than probability distributions. The main hypothesis supported in the paper is that the classic quantitative theory of weakly semantic information (TWSI), based on probability distributions, assumes that truth-values supervene on factual semantic information, yet this principle is too weak and generates a well-known semantic paradox, whereas TSSI, according to which factual semantic information encapsulates truth, can avoid the paradox and is more in line with the standard conception of what generally counts as semantic information. After a brief introduction, section two outlines the semantic paradox implied by TWSI, analysing it in terms of an initial conflict between two requisites of a quantitative theory of semantic information. In section three, three criteria of semantic information equivalence are used to provide a taxonomy of quantitative approaches to semantic information and introduce TSSI. In section four, some further desiderata that should be fulfilled by a quantitative TSSI are explained. From section five to section seven, TSSI is developed on the basis of a calculus of truth-values and semantic discrepancy with respect to a given situation. In section eight, it is shown how TSSI succeeds in solving the paradox. Section nine summarises the main results of the paper and indicates some future developments.
We contend that if efficiency and reliability are important factors in neural information processing then distributed, not localist, representations are “evolution's best bet.” We note that distributed codes are the most efficient method for representing information, and that this efficiency minimizes metabolic costs, providing adaptive advantage to an organism.
Punctuation has so far attracted attention within the linguistics community mostly from a syntactic perspective. In this paper, we give a preliminary account of the information-based aspects of punctuation, drawing our points from assorted, naturally occurring sentences. We present our formal models of these sentences and the semantic contributions of punctuation marks. Our formalism is a simplified analogue of an extension, due to Nicholas Asher, of Discourse Representation Theory.
Measures and theories of information abound, but there are few formalised methods for treating the contextuality that can manifest in different information systems. Quantum theory provides one possible formalism for treating information in context. This paper introduces a quantum inspired model of the human mental lexicon. This model is currently being experimentally investigated and we present a preliminary set of pilot data suggesting that concept combinations can indeed behave non-separably.
Simple hypotheses are intrinsically attractive, and, for this reason, need to be formulated with utmost precision if they are to be testable. Unfortunately, it is hard to see how Phillips & Singer's hypothesis might be unambiguously refuted. Despite this, the authors have provided much evidence consistent with the hypothesis, and have proposed a natural and powerful extension for information theoretic approaches to learning.
Weak Quantum Theory (WQT) and the Model of Pragmatic Information (MPI) are two psychophysical concepts developed on the basis of quantum physics. The present study contributes to their empirical examination. The issue of the study is whether WQT and MPI can not only explain ‘psi’ phenomena theoretically but also prove consistent with the empirical phenomenology of extrasensory perception (ESP). From the main statements of both models, 33 deductions for psychic readings are derived. Psychic readings are defined as settings in which psychics support or counsel clients by using information not mediated through the five senses. A qualitative approach is chosen to explore how the psychics experience extrasensory perceptions. Eight psychics are interviewed using a semi-structured method. The reports are examined regarding deductive and inductive aspects, using a multi-level structured content analysis. The vast majority of the deductions are clearly confirmed by the reports. Even though the study has to be seen as an explorative attempt with many aspects still to be specified, WQT and MPI prove to be coherent and helpful concepts for explaining ESP in psychic readings.
This paper provides an interpretation of the Routley-Meyer semantics for a weak negation-free relevant logic using Israel and Perry's theory of information. In particular, Routley and Meyer's ternary accessibility relation is given an interpretation in information-theoretic terms.
We do not yet have a sound ontology for intrinsic value. Albert Borgmann's work on information technology and Daniel Dennett's thoughts on evolutionary theory can provide the basis for an account of intrinsic value in terms of what it is, how it comes into existence, where it is found, and whether it can be quantified or compared. Borgmann's information and realization relations are cornerstones for understanding value. According to Borgmann, things are valuable when they are meaningful, and things become meaningful as information and realizations. It is in these relations that intrinsic and extrinsic values find their common roots. Dennett's musing on the relationship between DNA instructions, DNA readers, and phenotypes invites a commingling of information technology and evolutionary theory. His notion of design space provides a basis for the claim the biotic community has on intrinsic and extrinsic values.
Software application ontologies have the potential to become the keystone in state-of-the-art information management techniques. It is expected that these ontologies will support the sort of reasoning power required to navigate large and complex terminologies correctly and efficiently. Yet there is one problem in particular that continues to stand in our way. As these terminological structures increase in size and complexity, and the drive to integrate them inevitably swells, it is clear that the level of consistency required for such navigation will become correspondingly difficult to maintain. While descriptive semantic representations are certainly a necessary component of any adequate ontology-based system, so long as ontology engineers rely solely on semantic information, without a sound ontological theory informing their modeling decisions, this goal will surely remain out of reach. In this paper we describe how Language and Computing nv (L&C), along with the Institute for Formal Ontology and Medical Information Sciences (IFOMIS), are working towards developing and implementing just such a theory, combining the open software architecture of L&C's LinkSuite™ with the philosophical rigor of IFOMIS's Basic Formal Ontology. In this way we aim to move beyond the more or less simple controlled vocabularies that have dominated the industry to date.
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both, although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism on the other. We defend the relevance to cognitive science of both computation, in a generic sense that we fully articulate for the first time, and information processing, in three important senses of the term. Our account advances some foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.
Since the cognitive revolution, it's become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Much confusion can be avoided by doing so. Keywords: computation, information processing, computationalism, computational theory of mind, cognitivism.
Turing machine: an idealized computing device attached to a tape, each square of which is capable of holding a symbol. We write a program (a finite binary string) on the tape and start the machine. If the machine halts with string o written at a designated place on the tape, then o is the machine's output.
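A minimal simulator (my sketch, not part of the original entry) makes the definition concrete: a program is a transition table from (state, symbol) to (state, symbol to write, head move), and the run ends when the machine reaches the halt state.

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (new state, symbol to write, head move L/R).
def run(tape, transitions, state="start", halt="halt", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape; blank everywhere else
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy program: scan right over the 1s, write a 1 on the first blank, halt.
program = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run("111", program))  # '1111' -- appends one symbol to a unary string
```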
Our work supports synchronization for binding within Phillips & Singer's “contextual field” (CF) as well as the type of its lateral interaction they propose. Both firmly agree with our “association field” (AF) and its modulatory influences (Eckhorn et al. 1990). However, the CF connections seem to produce anticorrelation among assemblies representing unrelated structures, whereas experimental evidence indicates decoupling. Finally, it is unclear how the cortex can have access to the logistic function used in the “coherent infomax” approach.
Supplementary to matter and energy, information is the third essence for modeling the natural world. An emerging discipline known as cognitive informatics (CI) has recently been developed that forms a profound interdisciplinary study of cognitive and information sciences, and tackles the common root problems shared by informatics, computing, software engineering, artificial intelligence, cognitive science, neuropsychology, philosophy, linguistics, and life science. CI focuses on internal information processing mechanisms and the natural intelligence of the brain. This paper describes the historical development of informatics, from classical information theory and contemporary informatics to CI. The domain of CI and its interdisciplinary nature are explored. Foundations of CI, particularly the brain versus the mind, the acquired life functions versus the inherited ones, and the generic relationships between information, matter, and energy, are investigated. The potential engineering applications of CI and perspectives on future research are discussed. It is expected that the investigation into CI will result in fundamental findings towards the development of next-generation IT and software technologies, and new architectures of computing systems.
Richard Dawkins has popularized an argument which, according to him, proves that there is almost certainly no God. It rests on the assumption that complex and statistically improbable things are more difficult to explain than those that are not, and that any explanatory mechanism that is called on to do the explaining must show how this complexity can be built up from simpler means, as it would be useless otherwise. In this paper, I first question what justifies the consideration of the designer's own complexity. I suggest a different understanding of both order and simplicity, one that becomes inevitable when one considers the psychological counterpart of information. I then assess what seems to be the inference engine of the proposal, the metaphor of biological organisms as either self-programmed machines or algorithms. I show how self-generated organized complexity would not sit well with our knowledge of both abduction and the theorems of information theory applied to genetics. I then turn to the positive side of Dawkins' challenge, and I review some philosophers and their proposals for how the complexity of the world could be controlled from outside if one wanted to uphold a traditional understanding of God's simplicity.
Theory choice can be approached in at least four ways. One of these calls for the application of decision theory, and this article endorses this approach. But applying standard forms of decision theory imposes an overly demanding standard of numeric information, supposedly satisfied by point-valued utility and probability functions. To ameliorate this difficulty, a version of decision theory that requires merely comparative utilities and plausibilities is proposed. After a brief summary of this alternative, the article illustrates how comparative decision theory affords a rational reconstruction of decisions made by exemplary scientists in two cases of theory choice: Buffon's law and the luminiferous ether. It also offers a rational reconstruction of two cases of theory diagnosis: Mendeleev's anomalies and the Pioneer anomaly.