The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measuring truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measuring truthlikeness/information is presented and some supplementary points are made.
In The Philosophy of Information, Luciano Floridi presents a theory of “strongly semantic information”, based on the idea that “information encapsulates truth” (the so-called “veridicality thesis”). Starting with Popper, philosophers of science have developed different explications of the notion of verisimilitude or truthlikeness, construed as a combination of truth and information. Thus, the theory of strongly semantic information and the theory of verisimilitude are intimately tied. Yet, with few exceptions, this link has virtually passed unnoticed. In this paper, we briefly survey both theories and offer a critical comparison of strongly semantic information and related notions, like truth, verisimilitude, and partial truth.
Up to now, theories of semantic information have implicitly relied on logical monism, or the view that there is one true logic. The latter position has been explicitly challenged by logical pluralists. Adopting an unbiased attitude in the philosophy of information, we take a suggestion from Beall and Restall to heart and exploit logical pluralism to recognise another kind of pluralism. The latter is called informational pluralism, a thesis whose implications for a theory of semantic information we explore.
This paper addresses one of the fundamental problems of the philosophy of information: How does semantic information emerge within the underlying dynamics of the world?—the dynamical semantic information problem. It suggests that the canonical approach to semantic information, which defines data before meaning and meaning before use, is inadequate for pre-cognitive information media. Instead, we should follow a pragmatic approach to information, where one defines the notion of an information system as a special kind of purposeful system emerging within the underlying dynamics of the world and defines semantic information as the currency of the system. In this way, systems operating with semantic information can be viewed as patterns in the dynamics—semantic information is a dynamical system phenomenon of highly organized systems. In the simplest information systems, the syntax, semantics, and pragmatics of the information medium are co-defined. The paper proposes a new, more general theory of information semantics that focuses on the interface role of the information states in the information system—the interface theory of meaning. Finally, with the new framework, it addresses the debate between weakly semantic and strongly semantic accounts of information, siding with the strongly semantic view because the pragmatic account developed here is a better generalization of it.
The article addresses the problem of how semantic information can be upgraded to knowledge. The introductory section explains the technical terminology and the relevant background. Section 2 argues that, for semantic information to be upgraded to knowledge, it is necessary and sufficient that it be embedded in a network of questions and answers that correctly accounts for it. Section 3 shows that an information flow network of type A fulfils such a requirement, by warranting that the erotetic deficit, characterising the target semantic information t by default, is correctly satisfied by the information flow of correct answers provided by an informational source s. Section 4 illustrates some of the major advantages of such a Network Theory of Account (NTA) and clears the ground of a few potential difficulties. Section 5 clarifies why NTA and an informational analysis of knowledge, according to which knowledge is accounted semantic information, are not subject to Gettier-type counterexamples. A concluding section briefly summarises the results obtained.
This paper outlines a quantitative theory of strongly semantic information (TSSI) based on truth-values rather than probability distributions. The main hypothesis supported in the paper is that the classic quantitative theory of weakly semantic information (TWSI), based on probability distributions, assumes that truth-values supervene on factual semantic information, yet this principle is too weak and generates a well-known semantic paradox, whereas TSSI, according to which factual semantic information encapsulates truth, can avoid the paradox and is more in line with the standard conception of what generally counts as semantic information. After a brief introduction, section two outlines the semantic paradox implied by TWSI, analysing it in terms of an initial conflict between two requisites of a quantitative theory of semantic information. In section three, three criteria of semantic information equivalence are used to provide a taxonomy of quantitative approaches to semantic information and introduce TSSI. In section four, some further desiderata that should be fulfilled by a quantitative TSSI are explained. From section five to section seven, TSSI is developed on the basis of a calculus of truth-values and semantic discrepancy with respect to a given situation. In section eight, it is shown how TSSI succeeds in solving the paradox. Section nine summarises the main results of the paper and indicates some future developments.
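For readers curious about the arithmetic behind the "calculus of truth-values and semantic discrepancy" mentioned in this abstract: in the standard presentation of TSSI, a statement's degree of discrepancy from a given situation lies in the interval [-1, 1] (0 for full accuracy, the extremes for contradictions and tautologies), and its degree of informativeness is one minus the square of that discrepancy. The sketch below is a minimal illustration of that single formula (the function name and range check are illustrative choices, not Floridi's notation), not of the full calculus, which also specifies how discrepancy itself is assigned.

```python
def informativeness(theta: float) -> float:
    """Degree of informativeness of a statement whose semantic
    discrepancy from the target situation is theta in [-1, 1]."""
    if not -1.0 <= theta <= 1.0:
        raise ValueError("discrepancy must lie in [-1, 1]")
    # Informativeness falls off quadratically as discrepancy grows.
    return 1.0 - theta ** 2

print(informativeness(0.0))   # fully accurate statement: maximal, 1.0
print(informativeness(1.0))   # maximally vacuous (e.g. a tautology): 0.0
print(informativeness(-1.0))  # maximally inaccurate (a contradiction): 0.0
print(informativeness(0.5))   # partially discrepant statement: 0.75
```

On this picture, the Bar-Hillel-Carnap paradox dissolves because contradictions sit at an extreme of the discrepancy scale and so carry zero informativeness rather than a maximal amount.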
One major fault line in foundational theories of cognition is between the so-called “representational” and “non-representational” theories. Is it possible to formulate an intermediate approach for a foundational theory of cognition by defining a conception of representation that may bridge the fault line? Such an account of representation, as well as an account of correspondence semantics, is offered here. The account extends previously developed agent-based pragmatic theories of semantic information, where meaning of an information state is defined by its interface role, to a theory that accommodates a notion of representation and correspondence semantics. It is argued that the account can be used to develop an intermediate approach to cognition, by showing that the major sources of tension between “representational” and “non-representational” theories may be eased.
In various publications over the past years, Floridi has developed a theory of semantic information as well-formed, meaningful, and truthful data. This theory is more or less orthogonal to the standard entropy-based notions of information known from physics, information theory, and computer science, which all define the amount of information in a certain system as a scalar value without any direct semantic implication. In this context the question arises what the exact relation between these various conceptions of information is, and whether there is a real need to enrich these mathematically more or less rigid definitions with a less formal notion of semantic information. I investigate various philosophical aspects of the more formal definitions of information in the light of Floridi’s theory. The position I defend is that the formal treatment of the notion of information as a general theory of entropy is one of the fundamental achievements of modern science, and in itself a rich source for new philosophical reflection. This makes information theory a competitor of classical epistemology rather than a servant. In this light Floridi’s philosophy of information is more a reprise of classical epistemology that only pays lip service to information theory but fails to address the important central questions of the philosophy of information. Specifically, I will defend the view that the notions associated with truth, knowledge, and meaning can all adequately be reconstructed in the context of modern information theory, and that consequently there is no need to introduce a concept of semantic information.
There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is, not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates some interesting areas of application of the revised definition.
Semantic information is usually supposed to satisfy the veridicality thesis: p qualifies as semantic information only if p is true. However, what it means for semantic information to be true is often left implicit, with correspondentist interpretations representing the most popular, default option. The article develops an alternative approach, namely a correctness theory of truth (CTT) for semantic information. This is meant as a contribution not only to the philosophy of information but also to the philosophical debate on the nature of truth. After the introduction, in Sect. 2, semantic information is shown to be translatable into propositional semantic information (i). In Sect. 3, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarization is normalised in Sect. 4, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In Sects. 5 and 6, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates Q by verifying and validating it (in the computer science’s sense of verification and validation); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of proxy) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of commutation); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. Sect. 7 provides some further clarifications about CTT, in the light of semantic paradoxes. Section 8 draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users.
In the course of the article all technical expressions from computer science are explained.
Giovanni Sommaruga (ed.): Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information. Journal article by Sebastian Sequoiah-Grayson (Department of Theoretical Philosophy, Faculty of Philosophy, University of Groningen, Groningen, The Netherlands). Minds and Machines, Volume 22, Number 1, pp. 35-40. DOI 10.1007/s11023-011-9250-2. Online ISSN 1572-8641; Print ISSN 0924-6495.
Giovanni Sommaruga (ed.): Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information. Journal article by Giuseppe Primiero (Centre for Logic and Philosophy of Science, University of Ghent, Blandijnberg 2, 9000 Ghent, Belgium). Minds and Machines, Volume 21, Number 1, pp. 119-122. DOI 10.1007/s11023-011-9228-0. Online ISSN 1572-8641; Print ISSN 0924-6495.
The study of information based on the approach of Shannon was detached from problems of meaning. Also, it did not allow analysis of the structural characteristics of information, nor describe the way structures carry information. An outline of a different theory of information, including its semantics, was earlier proposed by the author. This theory used closure spaces to model information. In the present paper, structures (called syllogistics) underlying syllogistic reasoning, as well as ethnoscientific classifications, are identified, together with the conditions for the lattice of closed subsets describing information to allow its existence. The structures can be used for logical analysis outside of language, at the more general level of information, which in turn can be applied to the description of semantic relations in the context of information.
Computers today are not only calculation tools; they are directly (inter)acting in the physical world, which itself may be conceived of as the universal computer (Zuse, Fredkin, Wolfram, Chaitin, Lloyd). In expanding its domains from abstract logical symbol manipulation to physical, embedded and networked devices, computing goes beyond the Church-Turing limit (Copeland, Siegelman, Burgin, Schachter). Computational processes are distributed, reactive, interactive, agent-based and concurrent. The main criterion of success of computation is not its termination, but the adequacy of its response, its speed, generality and flexibility, adaptability, and tolerance to noise, error, faults, and damage. Interactive computing is a generalization of Turing computing, and it calls for new conceptualizations (Goldin, Wegner). In the info-computationalist framework, with computation seen as information processing, natural computation appears as the most suitable paradigm of computation, and information semantics requires logical pluralism.
This article mounts a defence of Floridi’s theory of strongly semantic information against recent independent objections from Fetzer and Dodig-Crnkovic. It is argued that Fetzer and Dodig-Crnkovic’s objections result from an adherence to a redundant practice of analysis. This leads them to fail to accept an informational pluralism, as stipulated by what will be referred to as Shannon’s Principle, and the non-reductionist stance. It is demonstrated that Fetzer and Dodig-Crnkovic fail to acknowledge that Floridi’s theory of strongly semantic information captures one of our deepest and most compelling intuitions regarding informativeness as a basic notion. This modal intuition will be referred to as the contingency requirement on informativeness. It will be demonstrated that its clarification validates the theory of strongly semantic information as a novel, and non ad hoc, solution to the Bar-Hillel-Carnap semantic paradox.
The paper proposes an explanation of some argument scope phenomena in German in terms of the interaction of syntactic and semantic information. On the assumption that the lexical semantics of a verb induces a hierarchical ordering on its arguments, it is proposed that this hierarchy, together with the mapping of the hierarchy to syntactic structure, defines a basic scope configuration. The mapping is controlled both by syntactic and by semantic information. Another hypothesis proposes that changes in the syntactic structure caused by topicalization and scrambling extend the mapping by assigning a specific role to the traces of the moved DPs. The traces can either have the semantic type of DPs or the type of individuals. This typing ambiguity yields two options: either the DP is semantically reconstructed into its original argument position, or the domain of the mapping of verbal arguments is extended. The options correspond to the narrow and the wide scopes of an argument, scope being expressed at the level of Discourse Representation Structures. This treatment of the German facts is more restrictive than the ones based on Cooper storage, Logical Form, or Flexible Type Assignment.
Information is the fuel of cognition. At its most basic level, information is a matter of structures interacting under laws. The notion of information thus reflects the (relational) fact that a structure is created by the impact of another structure. The impacted structure is an encoding, in some concrete form, of the interaction with the impacting structure. Information is, essentially, the structural trace in some system of an interaction with another system; it is also, as a consequence, the structural fuel which drives the impacted system's subsequent processes and behavior. Information takes various forms because the world has many levels of compositional and functional complexity, under different constraints. The key constraints that matter in the understanding of information are natural patterns of organization, or types, and systematic correlations among types, or laws. These level-sensitive constraints, in the form of types and laws, shape the very form in which information is tokened in some structure, that is, the very form in which it is encoded. As a result, the information-producing interactions bring about different sorts of structures, with various sorts of causal effects and functions, whence so many ways in which information is coded and utilized.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.
As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The “Semantic Web” was touted by its developers as equally revolutionary but has not yet achieved anything like the Web’s exponential uptake. This 17,000-word survey article explores why this might be so, from a perspective that bridges both philosophy and IT. (Also translated into Croatian and republished in Vjesnik bibliotekara Hrvatske 53, 1(2010), 155-206; see external link #2.)
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.
In this article the linguistic processes of consciousness are discussed at the informational and semantic levels. The key question concerns the distinction between information, meaning and sense at the physical, logico-semantic and historical levels of brain and consciousness. The principal point is that the human linguistic process of sense production draws on the variety and indistinctness of cultural presuppositions. Modern theories in the philosophy of mind, relying on the theories of the Soviet psychological school, propose some new solutions to the pragmatic questions of semantic noncomputability. In this review we try to justify the dualistic correlation between the cultural base and the communicative semantic process.
Abstract: Luciano Floridi has impressively applied the concept of information to problems in semantics and epistemology, among other areas. In this essay, I briefly review two areas where I think one may usefully raise questions about some of Floridi's conclusions. One area is in the project to naturalize semantics and Floridi's use of the derived versus nonderived notion of semantic content. The other area is in the logic of information and knowledge and whether knowledge based on information necessarily supports closure, in every instance. I suggest that it does not and, thereby, raise a challenge to Floridi's logic of being informed.
This paper explores John Maynard Smith’s conceptual work on animal signals. Maynard Smith defined animal signals as traits that (1) change another organism’s behaviour while benefiting the sender, that (2) are evolved for this function, and that (3) have their effects through the evolved response of the receiver. Like many ethologists, Maynard Smith assumed that animal signals convey semantic information. Yet his definition of animal signals remains silent on the nature of semantic information and on the conditions determining its content. I therefore compare three ways to specify the semantic content of animal signals. The first suggestion models semantic content on Maynard Smith’s theory of genetic information. On the second proposal, semantic content is equated with a condition identified by conventional content ascriptions. The third suggestion is to explain semantic content in terms of consumer-based teleosemantics. I show how these accounts equate semantic content with distinct kinds of conditions and how they differ with respect to the kinds of traits that qualify as carrying semantic information.
Information modeling (also known as conceptual modeling or semantic data modeling) may be characterized as the formulation of a model in which information aspects of objective and subjective reality are presented (the application), independent of datasets and processes by which they may be realized (the system). A methodology for information modeling should incorporate a number of concepts which have appeared in the literature, but should also be formulated in terms of constructs which are understandable to and expressible by the system user as well as the system developer. This is particularly desirable in connection with certain intimate relationships, such as being the same as or being a part of.
Information management systems improve the retention of information in large collections. As such they act as memory prostheses, implying an ideal basis in human memory models. Since humans process information by association, and situate it in the context of space and time, systems should maximize their effectiveness by mimicking these functions. Since human attentional capacity is limited, systems should scaffold cognitive efforts in a comprehensible manner. We propose the Principles of Mnemonic Associative Knowledge (P-MAK), which describes a framework for semantically identifying, organizing, and retrieving information, and for encoding episodic events by time and stimuli. Inspired by prominent human memory models, we propose associative networks as a preferred representation. Networks are ideal for their parsimony, flexibility, and ease of inspection. Networks also possess topological properties—such as clusters, hubs, and the small world—that aid analysis and navigation in an information space. Our cognitive perspective addresses fundamental problems faced by information management systems, in particular the retrieval of related items and the representation of context. We present evidence from neuroscience and memory research in support of this approach, and discuss the implications of systems design within the constraints of P-MAK’s principles, using text documents as an illustrative semantic domain.
Access to legal information and, in particular, to legal literature is examined for the creation of a search and retrieval system for Italian legal literature. The design and implementation of services such as integrated access to a wide range of resources are described, with a particular focus on the importance of exploiting metadata assigned to disparate legal material. The integration of structured repositories and Web documents is the main purpose of the system: it is constructed on the basis of a federation system with service provider functions, aiming at creating a centralized index of legal resources. The index is based on a uniform metadata view created for structured data by means of the OAI approach and for Web documents by a machine learning approach, which, in this paper, has been assessed as regards document classification. Semantic searching is a major requirement for legal literature users, and a solution based on the exploitation of Dublin Core metadata, as well as the use of legal ontologies and related terms prepared for accessing indexed articles, has been implemented.
In this paper, I reassess Floridi’s solution to the Bar-Hillel–Carnap paradox (the information yield of inconsistent propositions is maximal) by questioning the orthodox view that contradictions cannot be true. The main part of the paper is devoted to showing that the veridicality thesis (semantic information has to be true) is compatible with dialetheism (there are true contradictions) and that, unless we accept the additional non-falsity thesis (information cannot be false), there is no reason to presuppose that there is no such thing as contradictory information.
In this article I defend Floridi’s Theory of Strongly Semantic Information (TSSI) as correct in encompassing the Veracity Thesis, which guides the definition of semantic information as “p is information if and only if p is constituted by meaningful, truthful, well-formed data”. I argue that the theory is not arbitrary because it deals with important philosophical conundrums, mainly by avoiding the Bar-Hillel and Carnap paradox (1953) generated by the classical theory of semantic information. First, one of the classic theory’s main results is discussed: the production of sentences that are too informative to be true. Then the motivations for elaborating a “logic of being informed” are summarized, and it is shown how the KTB-IL system is built and modelled while keeping the veracity axiom – K or A4 – among its axioms. Finally, TSSI is examined and defended by showing that it alethically restricts the extension of the classic concept of information, avoiding problems with tautologies and contradictions. TSSI offers an original solution by capturing our modal intuitions concerning informativeness as a basic notion.
Abstract: This article provides replies to, and comments on, the contributions to the special issue on the philosophy of information. It seeks to highlight convergences and points of potential agreement, while offering clarifications and further details. It also answers some criticisms and replies to some objections articulated in the special issue.
Abstract: According to the Veridicality Thesis, information requires truth. On this view, smoke carries information about there being a fire only if there is a fire, the proposition that the earth has two moons carries information about the earth having two moons only if the earth has two moons, and so on. We reject this Veridicality Thesis. We argue that the main notions of information used in cognitive science and computer science allow A to have information about the obtaining of p even when p is false.
Floridi’s chapter on relevant information bridges the analysis of “being informed” with the analysis of knowledge as “relevant information that is accounted for” by analysing subjective or epistemic relevance in terms of the questions that an agent might ask in certain circumstances. In this paper, I scrutinise this analysis, identify a number of problems with it, and finally propose an improvement. By way of epilogue, I offer some more general remarks on the relation between (bounded) rationality, the need to ask the right questions, and the ability to ask the right questions.
Deductive inference is usually regarded as being "tautological" or "analytical": the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem both from the semantic and the proof-theoretical point of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of "depth" or "informativeness" of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure "intelim logic", which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting of a feasible (quadratic) decision procedure. We argue that this logic is "analytic" in a particularly strict sense, in that it rules out any use of "virtual information", which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.
We present a compositional semantics for first-order logic with imperfect information that is equivalent to Sevenster and Sandu’s equilibrium semantics (under which the truth value of a sentence in a finite model is equal to the minimax value of its semantic game). Our semantics is a generalization of an earlier semantics developed by the first author that was based on behavioral strategies, rather than mixed strategies.