The Hebbian view of word representation is challenged by findings of task-dependent (level-of-processing-dependent) event-related potential patterns that do not support the notion of a fixed set of neurons representing a given word. Cross-language phonological reliability encoding evokes more asymmetrical left-hemisphere activity than word comprehension does. This suggests a dynamical view of the brain as a self-organizing, connectivity-adjusting system.
We explore the possibility and some potential payoffs of using the theory of accessible categories in the study of categories of logics. We illustrate this by two case studies focusing on the category of finitary structural logics and its subcategory of algebraizable logics.
The usual way to try to ground knowing according to contemporary theory of knowledge is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on a food-dispenser and a data-knower who already knows the correct number. The success criterion, hence the grounding, is whether the seeker’s probability of lunch is indeed increasing (hence uncertainty is decreasing) as a result of the interaction. Floridi also suggests that his philosophy of information casts some light on the problem of consciousness. I’m not so sure.
Abstract: The core aim of this special issue is to present the philosophy of information as a way of doing philosophy, to focus on the contributions of Luciano Floridi to that area, and most important, to stimulate the debate on the most distinctive and controversial views he has defended in that context. This introduction contains a description of the philosophy of information, a discussion of two common misconceptions about the scope and the ambition of the philosophy of information, and a brief overview of the essays in the issue.
Luciano Floridi’s Philosophy and Computing: An Introduction is a survey of some important ideas that ground the newly emerging area of philosophy known, thanks to Floridi, as the philosophy of information. It was written as a textbook for philosophy students interested in the digital age, but is probably more useful for postgraduates who want to investigate intersections between philosophy and computer science, information theory and ICT (information and communications technology). The book is divided into five independent chapters followed by a worthy, though impressionistic, afterthought under the title of the conclusion. Chapter One, “Divide et Computa: Philosophy and the Digital Environment,” begins by outlining four topics to consider when examining the significance of the digital revolution: 1) computation, 2) automatic control, 3) modeling and virtual reality, and 4) information management. This preliminary outline is followed by a brief historical consideration of the transition from analogue to digital information processing and the importance of “digitization” for developing mechanical means to manage information. According to Floridi, this digitization has occurred in three main areas. Regarding the scope of digitized content, we have moved from numerical data to sounds and images. At the same time, our interfaces to the computer have become less digital and more humane. Graphical user interfaces and WYSIWYG software have quickly replaced punch cards. In the area of connectivity, we have moved from the mainframe to the Internet, hence, to the possibility of a global information network. Together these transformations are accelerating the evolution of the infosphere and consequently its dramatic effect on the shape of society. These changes are of world-historical significance, and thus worthy of philosophical investigation, as the last part of the chapter shows.
I describe the emergence of Floridi’s philosophy of information (PI) and information ethics (IE) against the larger backdrop of Information and Computer Ethics (ICE). Among their many strengths, PI and IE offer promising metaphysical and ethical frameworks for a global ICE that holds together globally shared norms with the irreducible differences that define local cultural and ethical traditions. I then review the major defenses and critiques of PI and IE offered by contributors to this special issue, and highlight Floridi’s responses especially to two central problems – the charge of relativism and the meaning of ‘entropy’ in IE. These responses, conjoined with several elaborations of PI and IE offered here by diverse contributors, including important connections with the naturalistic philosophies of Spinoza and other major Western and Eastern figures, thus issue in an expanded and more refined version of PI and IE – one still facing important questions as well as possibilities for further development.
Artificial life (also known as “ALife”) is a broad, interdisciplinary endeavor that studies life and life-like processes through simulation and synthesis. The goals of this activity include modelling and even creating life and life-like systems, as well as developing practical applications using intuitions and methods taken from living systems. Artificial life both illuminates traditional philosophical questions and raises new philosophical questions. Since both artificial life and philosophy investigate the essential nature of certain fundamental aspects of reality like life and adaptation, artificial life offers philosophy a new perspective on these phenomena. This chapter provides an introduction to current research in artificial life and explains its philosophical implications.
The Construction of Personal Identities Online. Luciano Floridi (Department of Philosophy, University of Hertfordshire, de Havilland Campus, Hatfield, Hertfordshire AL10 9AB, UK). Introduction, Minds and Machines, pp. 1–3. DOI 10.1007/s11023-011-9254-y. Online ISSN 1572-8641; print ISSN 0924-6495.
This accessible book explores the development, history and future of Information and Communication Technology using examples from philosophy. Luciano Floridi offers both an introduction to these technologies and a philosophical analysis of the problems they pose. The book examines a wide range of areas of technology, including the digital revolution, the Web and Internet, Artificial Intelligence and CD-ROMs. We see how the relationship between philosophy and computing provokes many crucial philosophical questions. Ultimately, Philosophy and Computing outlines what the future philosophy of information will need to undertake.
Degenerate Epistemology. Luciano Floridi (Department of Philosophy, University of Hertfordshire, de Havilland Campus, Hatfield, Hertfordshire AL10 9AB, UK). Editor’s letter, Philosophy & Technology, pp. 1–3. DOI 10.1007/s13347-012-0067-6. Online ISSN 2210-5441; print ISSN 2210-5433.
Spotting the Sun: A translation and analysis of three early seventeenth-century works on sunspots. Luciano Boschiero (Campion College, PO Box 3052, Toongabbie East, NSW 2146, Australia). Essay review, Metascience, pp. 1–6. DOI 10.1007/s11016-011-9598-1. Online ISSN 1467-9981; print ISSN 0815-0796.
Shaping knowledge: Thomas Harriot and the mechanics of motion. Luciano Boschiero (Campion College, 8-14 Austin Woodbury Place, Old Toongabbie, 2146, Australia). Book review, Metascience, pp. 1–3. DOI 10.1007/s11016-012-9665-2. Online ISSN 1467-9981; print ISSN 0815-0796.
The physicist's conception of space-time underwent two major upheavals thanks to the general theory of relativity and quantum mechanics. Both theories play a fundamental role in describing the same natural world, although at different scales. However, the inconsistency between them emerged clearly as a limitation of twentieth-century physics, so a more complete description of nature must encompass both general relativity and quantum mechanics. The problem is a theorists' problem par excellence. Experiments provide little guide, and the inconsistency mentioned above is an important problem which clearly illustrates the intermingling of philosophical, mathematical, and physical thought. In fact, in order to unify general relativity with quantum field theory, it seems necessary to invent a new mathematical framework which will generalise Riemannian geometry and therefore our present conception of space and space-time. Contemporary developments in theoretical physics suggest that another revolution may be in progress, through which a new kind of geometry may enter physics, and space-time itself may be reinterpreted as an approximate, derived concept. The main purpose of this article is to show the great significance of space-time geometry in predetermining the laws which are supposed to govern the behaviour of matter, and further to support the thesis that matter itself can be built from geometry, in the sense that particles of matter as well as the other forces of nature emerge in the same way that gravity emerges from geometry. “Scientific research is not a process of steady accumulation of absolute truths, which has culminated in present theories, but rather a much more dynamic kind of process in which there are no final theoretical concepts valid in unlimited domains” (David Bohm).
The essential difficulty about Computer Ethics' (CE) philosophical status is a methodological problem: standard ethical theories cannot easily be adapted to deal with CE-problems, which appear to strain their conceptual resources, and CE requires a conceptual foundation as an ethical theory. Information Ethics (IE), the philosophical foundational counterpart of CE, can be seen as a particular case of environmental ethics or ethics of the infosphere. What is good for an information entity and the infosphere in general? This is the ethical question asked by IE. The answer is provided by a minimalist theory of deserts: IE argues that there is something more elementary and fundamental than life and pain, namely being, understood as information, and entropy, and that any information entity is to be recognised as the centre of a minimal moral claim, which deserves recognition and should help to regulate the implementation of any information process involving it. IE can provide a valuable perspective from which to approach, with insight and adequate discernment, not only moral problems in CE, but also the whole range of conceptual and moral phenomena that form the ethical discourse.
What is the ultimate nature of reality? This paper defends an answer in terms of informational realism (IR). It does so in three stages. First, it is shown that, within the debate about structural realism (SR), epistemic (ESR) and ontic (OSR) structural realism are reconcilable by using the methodology of the levels of abstraction. It follows that OSR is defensible from a structuralist-friendly position. Second, it is argued that OSR is also plausible, because not all related objects are logically prior to all relational structures. The relation of difference is at least as fundamental as (because constitutive of) any relata. Third, it is suggested that an ontology of structural objects for OSR can reasonably be developed in terms of informational objects, and that Object Oriented Programming provides a flexible and powerful methodology with which to clarify and make precise the concept of “informational object”. The outcome is informational realism, the view that the world is the totality of informational objects dynamically interacting with each other.
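The abstract above appeals to Object Oriented Programming as a way of making the notion of an “informational object” precise. As an illustration only (the class name, attributes and methods below are hypothetical, not Floridi’s own formalism), an informational object can be sketched as an entity individuated purely by its cluster of attributes, with the relation of difference doing the work of identity:

```python
# Illustrative sketch: an "informational object" modelled as a cluster of
# attributes (its structure), identified by the relation of difference.

class InformationalObject:
    """An entity individuated purely by its cluster of attributes."""

    def __init__(self, **attributes):
        self.attributes = dict(attributes)

    def differs_from(self, other):
        """The relation of difference: the set of attribute names on which
        the two structures disagree."""
        keys = set(self.attributes) | set(other.attributes)
        return {k for k in keys
                if self.attributes.get(k) != other.attributes.get(k)}

    def __eq__(self, other):
        # Two informational objects are identical iff no difference holds.
        return not self.differs_from(other)

a = InformationalObject(charge=-1, spin=0.5)
b = InformationalObject(charge=-1, spin=-0.5)
assert a.differs_from(b) == {"spin"}
assert a == InformationalObject(charge=-1, spin=0.5)
```

On this sketch, two objects are the same informational object exactly when the difference relation between their structures is empty, mirroring the claim that difference is at least as fundamental as any relata.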
Abstract: This article provides replies to, and comments on, the contributions to the special issue on the philosophy of information. It seeks to highlight convergences and points of potential agreement, while offering clarifications and further details. It also answers some criticisms and replies to some objections articulated in the special issue.
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.
There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of Semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is, not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates some interesting areas of application of the revised definition.
Abstract: In the past, major scientific and technological revolutions, like the Copernican Revolution and the Industrial Revolution, have had profound effects, not only upon society in general, but also upon Philosophy. Today's Information Revolution is no exception. Already it has had significant impacts upon our understanding of human nature, the nature of society, even the nature of the universe. Given these developments, this essay considers some of the philosophical contributions of two "philosophers of the Information Age"—Norbert Wiener and Luciano Floridi—with regard to the nature of the universe, human nature, the nature of society, and the nature of "artificial agents" such as robots, softbots, and cyborgs.
The paper presents, firstly, a brief review of the long history of information ethics, beginning with the Greek concept of parrhesia, or freedom of speech, as analyzed by Michel Foucault. The recent concept of information ethics is related particularly to problems which arose in the last century with the development of computer technology and the internet. A broader concept of information ethics as dealing with the digital reconstruction of all possible phenomena leads to questions relating to digital ontology. Following Heidegger’s conception of the relation between ontology and metaphysics, the author argues that ontology has to do with Being itself and not just with the Being of beings, which is the matter of metaphysics. The primary aim of an ontological foundation of information ethics is to question the metaphysical ambitions of digital ontology understood as today’s pervading understanding of Being. The author analyzes some challenges of digital technology, particularly with regard to the moral status of digital agents, and argues that information ethics does not deal only with ethical questions relating to the infosphere. This view is contrasted with arguments presented by Luciano Floridi on the foundation of information ethics as well as on the moral status of digital agents. It is argued that a reductionist view of the human body as digital data overlooks the limits of digital ontology and gives up one basis for ethical orientation. Finally, issues related to the digital divide as well as to intercultural aspects of information ethics are explored, and long- and short-term agendas for appropriate responses are presented.
In Kant’s logical texts the reference of the form of the judgment to an “unknown = x” is well known, but its understanding remains far from consensual. Due to the universality of all concepts, the subject as much as the predicate, in the form S is P, is regarded as predicate of the x, which, in turn, is regarded as the subject of the judgment. In the CPR, particularly in the text on the “logical use of the understanding”, this Kantian interpretation of the subject-predicate relation leads to the question about the relations that must hold between intuition and concept in the judgment. In contrast to intuition, if no concept, due to its universal character, refers immediately to an object, how should we understand the relations of subject and predicate to one another, as well as their relations to intuition, which corresponds to the very special individuality of that object in general = x? In the Kant-Literatur, the relations between intuition and concept in the judgment have been considered in diverse theoretical backgrounds, mainly in Fregean logic and in the logic of Port-Royal. Although so markedly different, these two solutions to the problem above seem to share a common thesis, in so far as they claim, though in different ways, a predicative character to those relations. If the analytic tradition recognizes in the relation between x and the concept S the marks of a propositional function Sx, in turn, the interpretation elaborated from the background of Port-Royal recognizes in this relation the minor premise x is S implicit in the judgment every S is P. This being the case, if it were possible to prove, on the contrary, that the relations between intuition and concept in the judgment could only be of a non-predicative character, then a third solution would be open to us, a solution that could enable us to track down the sense of the conceptions of judgment and logical form in the CPR.
In applying this argumentative strategy, it is of the utmost importance to insist on the specificity of Kant’s notion of extension, in order to prove its irreducibility to the Port-Royal notion of extension as well as to the modern one.
This paper has three goals. The first is to introduce the “knowledge game”, a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question “how do you know you are not a zombie?”.
This is the revised version of an invited keynote lecture delivered at the 1st Australian Computing and Philosophy Conference (CAP@AU; the Australian National University in Canberra, 31 October–2 November, 2003). The paper is divided into two parts. The first part defends an informational approach to structural realism. It does so in three steps. First, it is shown that, within the debate about structural realism (SR), epistemic (ESR) and ontic (OSR) structural realism are reconcilable. It follows that a version of OSR is defensible from a structuralist-friendly position. Second, it is argued that a version of OSR is also plausible, because not all relata (structured entities) are logically prior to relations (structures). Third, it is shown that a version of OSR is also applicable to both sub-observable (unobservable and instrumentally-only observable) and observable entities, by developing its ontology of structural objects in terms of informational objects. The outcome is informational structural realism, a version of OSR supporting the ontological commitment to a view of the world as the totality of informational objects dynamically interacting with each other. The paper has been discussed by several colleagues and, in the second half, ten objections that have been raised against the proposal are answered in order to clarify it further.
What is the most general common set of attributes that characterises something as intrinsically valuable, and hence as subject to some moral respect, and without which something would rightly be considered intrinsically worthless or even positively unworthy and therefore rightly to be disrespected in itself? This paper develops and supports the thesis that the minimal condition of possibility of an entity's least intrinsic value is to be identified with its ontological status as an information object. All entities, even when interpreted as only clusters of information, still have a minimal moral worth qua information objects and so may deserve to be respected. The paper is organised into four main sections. Section 1 models moral action as an information system using the object-oriented programming methodology (OOP). Section 2 addresses the question of what role the several components constituting the moral system can have in an ethical analysis. If they can play only an instrumental role, then Computer Ethics (CE) is probably bound to remain at most a practical, field-dependent, applied or professional ethics. However, Computer Ethics can give rise to a macroethical approach, namely Information Ethics (IE), if one can show that ethical concern should be extended to include not only human, animal or biological entities, but also information objects. The following two sections show how this minimalist level of analysis can be achieved. Section 3 provides an axiological analysis of information objects. It criticises the Kantian approach to the concept of intrinsic value and shows that it can be improved by using the methodology introduced in the first section. The solution of the Kantian problem prompts the reformulation of the key question concerning the moral worth of an entity: what is the intrinsic value of x qua an object constituted by its inherited attributes? In answering this question, it is argued that entities can share different observable properties depending on the level of abstraction adopted, and that it is still possible to speak of moral value even at the highest level of ontological abstraction represented by the informational analysis. Section 4 develops a minimalist axiology based on the concept of information object. It further supports IE's position by addressing five objections that may undermine its acceptability.
Semantic information is usually supposed to satisfy the veridicality thesis: p qualifies as semantic information only if p is true. However, what it means for semantic information to be true is often left implicit, with correspondentist interpretations representing the most popular, default option. The article develops an alternative approach, namely a correctness theory of truth (CTT) for semantic information. This is meant as a contribution not only to the philosophy of information but also to the philosophical debate on the nature of truth. After the introduction, in Section 2, semantic information is shown to be translatable into propositional semantic information (i). In Section 3, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarisation is normalised in Section 4, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In Sections 5 and 6, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates Q by verifying and validating it (in the computer science’s sense of verification and validation); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of proxy) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of commutation); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. Section 7 provides some further clarifications about CTT in the light of semantic paradoxes. Section 8 draws a general conclusion about the nature of CTT as a theory for system designers, not just system users. In the course of the article, all technical expressions from computer science are explained.
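The reduction described in the abstract lends itself to a toy illustration. The following sketch (all names are hypothetical; this is not CTT’s formal machinery) polarises a piece of propositional information into a Boolean query/answer pair and checks the answer’s correctness against a model that proxies the target system:

```python
# Hypothetical sketch of the reduction: the truth of a piece of semantic
# information i is rephrased as the correctness of answer A to Boolean
# question Q, checked against a model m that proxies the target system s.

system = {"water_boiling": True}   # the distal system s
model = dict(system)               # m, a proxy of s: reading m agrees with reading s

def polarise(i):
    """Turn propositional information i (topic, claimed value) into [Q + A]."""
    topic, claimed = i
    return topic, claimed

def is_correct(Q, A, m):
    """A correctly saturates Q iff reading the proxy m agrees with A."""
    return m.get(Q) == A

Q, A = polarise(("water_boiling", True))
assert is_correct(Q, A, model)                    # i qualifies as true information
assert not is_correct("water_boiling", False, model)
```

The commuting-access condition appears here only in the weak form that reads of the proxy `model` and of `system` always agree; a full treatment would also handle writes and levels of abstraction.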
In this paper, I present an informational approach to the nature of personal identity. In “Plato and the problem of the chariot”, I use Plato’s famous metaphor of the chariot to introduce a specific problem regarding the nature of the self as an informational multiagent system: what keeps the self together as a whole and coherent unity? In “Egology and its two branches” and “Egology as synchronic individualisation”, I outline two branches of the theory of the self: one concerning the individualisation of the self as an entity, the other concerning the identification of such an entity. I argue that both presuppose an informational approach, defend the view that the individualisation of the self is logically prior to its identification, and suggest that such individualisation can be provided in informational terms. Hence, in “A reconciling hypothesis: the three membranes model”, I offer an informational individualisation of the self, based on a tripartite model, which can help to solve the problem of the chariot. Once this model of the self is outlined, in “ICTs as technologies of the self” I use it to show how ICTs may be interpreted as technologies of the self. In “The logic of realisation”, I introduce the concept of “realisation” (Aristotle’s anagnorisis) and support the rather Spinozian view according to which, from the perspective of informational structural realism, selves are the final stage in the development of informational structures. The final “Conclusion: from the egology to the ecology of the self” briefly concludes the article with a reference to the purposeful shaping of the self, in a shift from egology to ecology.
This paper analyses the relations between philosophy of information (PI), library and information science (LIS) and social epistemology (SE). In the first section, it is argued that there is a natural relation between philosophy and LIS but that SE cannot provide a satisfactory foundation for LIS. SE should rather be seen as sharing with LIS a common ground, represented by the study of information, to be investigated by a new discipline, PI. In the second section, the nature of PI is outlined as the philosophical area that studies the conceptual nature of information, its dynamics and problems. In the third section, LIS is defined as a form of applied PI. The hypothesis supported is that PI should replace SE as the philosophical discipline that can best provide the conceptual foundation for LIS. In the conclusion, it is suggested that the 'identity' crisis undergone by LIS has been the natural outcome of a justified but precocious search for a philosophical counterpart that has emerged only recently: namely, PI. The development of LIS should not rely on some borrowed, pre-packaged theory. As applied PI, LIS can fruitfully contribute to the growth of basic theoretical research in PI itself and thus provide its own foundation.
The paper develops some of the conclusions, reached in Floridi (2007), concerning the future developments of Information and Communication Technologies (ICTs) and their impact on our lives. The two main theses supported in that article were that, as the information society develops, the threshold between online and offline is becoming increasingly blurred, and that once there is no longer any significant difference, we shall gradually re-conceptualise ourselves not as cyborgs but rather as inforgs, i.e. socially connected, informational organisms. In this paper, I look at the development of the so-called Semantic Web and Web 2.0 from this perspective and try to forecast their future. Regarding the Semantic Web, I argue that it is a clear and well-defined project which, despite some authoritative views to the contrary, is not a promising reality and will probably fail in the same way AI has failed in the past. Regarding Web 2.0, I argue that, although it is a rather ill-defined project, which lacks a clear explanation of its nature and scope, it does have the potential to become a success (and indeed it already is, as part of the new phenomenon of Cloud Computing) because it leverages the only semantic engines available so far in nature: us. I conclude by suggesting what other changes might be expected in the future of our digital environment.
“I love information upon all subjects that come in my way, and especially upon those that are most important.” Thus boldly declares Euphranor, one of the defenders of Christian faith in Berkeley's Alciphron (Dialogue 1, Section 5, Paragraph 6/10; see Berkeley). Evidently, information has been an object of philosophical desire for some time, well before the computer revolution, the Internet or the dot.com pandemonium (see for example Dunn and Adams). Yet what does Euphranor love, exactly? What is information? The question has received many answers in different fields. Unsurprisingly, several surveys do not even converge on a single, unified definition of information (see for example Braman, Losee, Machlup and Mansfield, Debons and Cameron, Larson and Debons).
In this paper, a critique will be developed and an alternative proposed to Luciano Floridi’s approach to Information Ethics (IE). IE is a macroethical theory that is meant both to serve as a foundation for computer ethics and to guide our overall moral attitude towards the world. The central claims of IE are that everything that exists can be described as an information object, and that all information objects, qua information objects, have intrinsic value and are therefore deserving of moral respect. In my critique of IE, I will argue that Floridi has presented no convincing arguments that everything that exists has some minimal amount of intrinsic value. I will argue, however, that his theory could be salvaged in large part if it were modified from a value-based into a respect-based theory, according to which many (but not all) inanimate things in the world deserve moral respect, not because of intrinsic value, but because of their (potential) extrinsic, instrumental or emotional value for persons.
The paper argues that digital ontology (the ultimate nature of reality is digital, and the universe is a computational system equivalent to a Turing Machine) should be carefully distinguished from informational ontology (the ultimate nature of reality is structural), in order to abandon the former and retain only the latter as a promising line of research. Digital vs. analogue is a Boolean dichotomy typical of our computational paradigm, but digital and analogue are only “modes of presentation” of Being (to paraphrase Kant), that is, ways in which reality is experienced or conceptualised by an epistemic agent at a given level of abstraction. A preferable alternative is provided by an informational approach to structural realism, according to which knowledge of the world is knowledge of its structures. The most reasonable ontological commitment turns out to be in favour of an interpretation of reality as the totality of structures dynamically interacting with each other. The paper is the first part (the pars destruens) of a two-part piece of research. The pars construens, entitled “A Defence of Informational Structural Realism”, is developed in a separate article, also published in this journal.
This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution to the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning—Action-based Semantics (AbS)—and on a new kind of artificial agents, called two-machine artificial agents (AM²). Thanks to their architecture, AM²s implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including the development of semantically grounded communication and the elaboration of representations, while still respecting the Z condition.
Abstract: Luciano Floridi has impressively applied the concept of information to problems in semantics and epistemology, among other areas. In this essay, I briefly review two areas where I think one may usefully raise questions about some of Floridi's conclusions. One area is the project to naturalize semantics and Floridi's use of the derived versus nonderived notion of semantic content. The other area is the logic of information and knowledge, and whether knowledge based on information necessarily supports closure in every instance. I suggest that it does not and thereby raise a challenge to Floridi's logic of being informed.
In this article, I summarise the ontological theory of informational privacy (an approach based on information ethics) and then discuss four types of interesting challenges confronting any theory of informational privacy: (1) parochial ontologies and non-Western approaches to informational privacy; (2) individualism and the anthropology of informational privacy; (3) the scope and limits of informational privacy; and (4) public, passive and active informational privacy. I argue that the ontological theory of informational privacy can cope with such challenges fairly successfully. In the conclusion, I discuss some of the work that lies ahead.
Luciano Floridi (2003) offers a theory of information as a strongly semantic notion, according to which information encapsulates truth, thereby making truth a necessary condition for a sentence to qualify as information. While Floridi provides an impressive development of this position, the aspects of his approach of greatest philosophical significance are its foundations rather than its formalization. He rejects the conception of information as meaningful data, because it entails at least three theses – that information can be false; that tautologies are information; and that “It is true that ...” is non-redundant – which appear to be defensible. This inquiry offers various logical, epistemic, and ordinary-language grounds to demonstrate that an account of his kind is too narrow to be true and that its adoption would hopelessly obscure crucial differences between information, misinformation, and disinformation.
The use of “levels of abstraction” in philosophical analysis (levelism) has recently come under attack. In this paper, I argue that a refined version of epistemological levelism should be retained as a fundamental method, called the method of levels of abstraction. After a brief introduction, in section “Some Definitions and Preliminary Examples” the nature and applicability of the epistemological method of levels of abstraction are clarified. In section “A Classic Application of the Method of Abstraction”, the philosophical fruitfulness of the new method is shown by using Kant’s classic discussion of the “antinomies of pure reason” as an example. In section “The Philosophy of the Method of Abstraction”, the method is further specified and supported by distinguishing it from three other forms of “levelism”: (i) levels of organisation; (ii) levels of explanation and (iii) conceptual schemes. In that context, the problems of relativism and antirealism are also briefly addressed. The conclusion discusses some of the work that lies ahead, two potential limitations of the method and some results that have already been obtained by applying the method to some long-standing philosophical problems.
The article investigates the sceptical challenge from an information-theoretic perspective. Its main goal is to articulate and defend the view that either informational scepticism is radical, but then it is epistemologically innocuous because redundant; or it is moderate, but then epistemologically beneficial because useful. In order to pursue this cooptation strategy, the article is divided into seven sections. Section 1 sets up the problem. Section 2 introduces Borel numbers as a convenient way to refer uniformly to (the data that individuate) different possible worlds. Section 3 adopts the Hamming distance between Borel numbers as a metric to calculate the distance between possible worlds. In Sections 4 and 5, radical and moderate informational scepticism are analysed using Borel numbers and Hamming distances, and shown to be either harmless (extreme form) or actually fruitful (moderate form). Section 6 further clarifies the approach by replying to some potential objections. In the conclusion, the Peircean nature of the overall approach is briefly discussed.
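The metric mentioned in Sections 2 and 3 of that article can be illustrated with a minimal sketch. Here, finite truncations of Borel numbers are represented as equal-length digit strings (the function name and the five-digit toy worlds are illustrative assumptions, not the article's own notation):

```python
def hamming(w1: str, w2: str) -> int:
    """Hamming distance: the number of positions at which two
    equal-length digit strings (truncated Borel numbers) differ."""
    if len(w1) != len(w2):
        raise ValueError("worlds must be compared over the same digits")
    return sum(a != b for a, b in zip(w1, w2))

# Two "possible worlds", each individuated by five binary answers:
actual = "10110"
nearby = "10100"   # differs from the actual world on one answer
remote = "01001"   # differs from the actual world on every answer

print(hamming(actual, nearby))   # 1
print(hamming(actual, remote))   # 5
```

On this picture, a sceptical hypothesis describing a world at maximal Hamming distance from the actual one differs on every datum, while moderate scepticism concerns worlds at small distances.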
The paper outlines a new interpretation of informational privacy and of its moral value. The main theses defended are: (a) informational privacy is a function of the ontological friction in the infosphere, that is, of the forces that oppose the information flow within the space of information; (b) digital ICTs (information and communication technologies) affect the ontological friction by changing the nature of the infosphere (re-ontologization); (c) digital ICTs can therefore both decrease and protect informational privacy but, most importantly, they can also alter its nature and hence our understanding and appreciation of it; (d) a change in our ontological perspective, brought about by digital ICTs, suggests considering each person as being constituted by his or her information and hence regarding a breach of one’s informational privacy as a form of aggression towards one’s personal identity.
Abstract: According to Luciano Floridi (2008), informational structural realism provides a framework to reconcile the two main versions of realism about structure: the epistemic formulation (according to which all we can know is structure) and the ontic version (according to which structure is all there is). The reconciliation is achieved by introducing suitable levels of abstraction and by articulating a conception of structural objects in information-theoretic terms. In this essay, I argue that the proposed reconciliation works at the expense of realism. I then propose an alternative framework, in terms of partial structures, that offers a way of combining information and structure in a realist setting while still preserving the distinctive features of the two formulations of structural realism. Suitably interpreted, the proposed framework also makes room for an empiricist form of informational structuralism (structural empiricism). Pluralism then emerges.
Abstract: This article offers an account and defence of constructionism, both as a metaphilosophical approach and as a philosophical methodology, with references to the so-called maker's knowledge tradition. Its main thesis is that Plato's “user's knowledge” tradition should be complemented, if not replaced, by a constructionist approach to philosophical problems in general and to knowledge in particular. Epistemic agents know something when they are able to build (reproduce, simulate, model, construct, etc.) that something and plug the obtained information into the correct network of relations that account for it. Their epistemic expertise increases with the scope and depth of the questions that they are able to ask and answer. Thus, constructionism deprioritises mimetic, passive, and declarative knowledge that something is the case, in favour of poietic, interactive, and practical knowledge of something being the case. Metaphilosophically, constructionism suggests adding conceptual engineering to conceptual analysis as a fundamental method.
The article addresses the problem of how semantic information can be upgraded to knowledge. The introductory section explains the technical terminology and the relevant background. Section 2 argues that, for semantic information to be upgraded to knowledge, it is necessary and sufficient to be embedded in a network of questions and answers that correctly accounts for it. Section 3 shows that an information flow network of type A fulfils such a requirement, by warranting that the erotetic deficit, characterising the target semantic information t by default, is correctly satisfied by the information flow of correct answers provided by an informational source s. Section 4 illustrates some of the major advantages of such a Network Theory of Account (NTA) and clears the ground of a few potential difficulties. Section 5 clarifies why NTA and an informational analysis of knowledge, according to which knowledge is accounted semantic information, is not subject to Gettier-type counterexamples. A concluding section briefly summarises the results obtained.
Deductive inference is usually regarded as being “tautological” or “analytical”: the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem both from the semantic and the proof-theoretical point of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of “depth” or “informativeness” of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure “intelim logic”, which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting of a feasible (quadratic) decision procedure. We argue that this logic is “analytic” in a particularly strict sense, in that it rules out any use of “virtual information”, which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on mind-less morality we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the Method of Abstraction for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an interface or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the transition rules by which state is changed) at a given LoA. Morality may be thought of as a threshold defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it.
That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary cost of this facility is the extension of the class of agents and moral agents to embrace AAs.
Abstract: In this article I argue that the best way to understand the information turn is in terms of a fourth revolution in the long process of reassessing humanity's fundamental nature and role in the universe. We are not immobile, at the centre of the universe (Copernicus); we are not unnaturally distinct and different from the rest of the animal world (Darwin); and we are far from being entirely transparent to ourselves (Freud). We are now slowly accepting the idea that we might be informational organisms among many agents (Turing), inforgs not so dramatically different from clever, engineered artefacts, but sharing with them a global environment that is ultimately made of information, the infosphere.
The theories of information ethics articulated by Luciano Floridi and his collaborators have clear implications for law. Information law, including the law of privacy and of intellectual property, is especially likely to benefit from a coherent and comprehensive theory of information ethics. This article illustrates how information ethics might apply to legal doctrine, by examining legal questions related to the ownership and control of the personal data representations, including photographs, game avatars, and consumer profiles, that have become ubiquitous with the proliferation of information and communication technologies. Recent controversy over the control of player performance statistics in “fantasy” sports leagues provides a limiting case for the analysis. Such data representations will in many instances constitute the kind of personal data that information ethics asserts constitutes an information entity. Legal doctrine in some instances proves sympathetic to such an assertion, but remains largely inchoate as to which data might constitute a given information entity in a given instance. Neither is information ethics, in its current state of development, entirely helpful in answering this critical question. While information ethics holds some promise to bring coherence to this area of the law, further work articulating a richer theory of information ethics will be necessary before it can do so.
Agents require a constant flow, and a high level of processing, of relevant semantic information, in order to interact successfully among themselves and with the environment in which they are embedded. Standard theories of information, however, are silent on the nature of epistemic relevance. In this paper, a subjectivist interpretation of epistemic relevance is developed and defended. It is based on a counterfactual and metatheoretical analysis of the degree of relevance of some semantic information i to an informee/agent a, as a function of the accuracy of i understood as an answer to a query q, given the probability that q might be asked by a. This interpretation of epistemic relevance vindicates a strongly semantic theory of information, according to which semantic information encapsulates truth. It accounts satisfactorily for several important applications and interpretations of the concept of relevant information in a variety of philosophical areas. And it interfaces successfully with current philosophical interpretations of causal and logical relevance.
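The dependence described in that abstract (relevance as a function of accuracy and query probability) can be sketched informally. Rendering it as a simple product is an illustrative assumption of this sketch, not Floridi's formal, counterfactual analysis:

```python
def relevance(accuracy: float, p_query: float) -> float:
    """Illustrative degree of relevance of information i for agent a:
    the accuracy of i as an answer to query q, weighted by the
    probability that a would ask q. Both inputs lie in [0, 1]."""
    if not (0.0 <= accuracy <= 1.0 and 0.0 <= p_query <= 1.0):
        raise ValueError("accuracy and probability must lie in [0, 1]")
    return accuracy * p_query

# A perfectly accurate answer to a question the agent is very
# likely to ask is highly relevant ...
print(relevance(accuracy=1.0, p_query=0.9))
# ... while the same answer to a question the agent would almost
# never ask scores low, however accurate it is.
print(relevance(accuracy=1.0, p_query=0.01))
```

The sketch captures only the subjectivist core: relevance varies with the agent's erotetic profile, so the same true information can be highly relevant to one agent and nearly irrelevant to another.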
This article reviews eight proposed strategies for solving the Symbol Grounding Problem (SGP), which was given its classic formulation in Harnad (1990). After a concise introduction, we provide an analysis of the requirement that must be satisfied by any hypothesis seeking to solve the SGP, the zero semantical commitment condition. We then use it to assess the eight strategies, which are organised into three main approaches: representationalism, semi-representationalism and non-representationalism. The conclusion is that all the strategies are semantically committed and hence that none of them provides a valid solution to the SGP, which remains an open problem.
In The Philosophy of Information, Luciano Floridi presents a theory of “strongly semantic information”, based on the idea that “information encapsulates truth” (the so-called “veridicality thesis”). Starting with Popper, philosophers of science have developed different explications of the notion of verisimilitude or truthlikeness, construed as a combination of truth and information. Thus, the theory of strongly semantic information and the theory of verisimilitude are intimately tied. Yet, with few exceptions, this link has virtually passed unnoticed. In this paper, we briefly survey both theories and offer a critical comparison of strongly semantic information and related notions, like truth, verisimilitude, and partial truth.
In the paper it is argued that bridging the digital divide may cause a new ethical and social dilemma. Using Hardin's Tragedy of the Commons, we show that an improper opening and enlargement of the digital environment (Infosphere) is likely to produce a Tragedy of the Digital Commons (TDC). In the course of the analysis, we explain why Adar and Huberman's previous use of Hardin's Tragedy to interpret certain recent phenomena in the Infosphere (especially peer-to-peer communication) may not be entirely satisfactory. We then seek to provide an improved version of the TDC that avoids the possible shortcomings of their model. Next, we analyse some problems encountered by the application of classical ethics in the resolution of the TDC. In the conclusion, we outline the kind of work that will be required to develop an ethical approach that may bridge the digital divide but avoid the TDC.
Information plays a major role in any moral action. ICT (Information and Communication Technologies) have revolutionized the life of information, from its production and management to its consumption, thus deeply affecting our moral lives. Amid the many issues they have raised, a very serious one, discussed in this paper, is labelled the tragedy of the Good Will. This is represented by the increasing pressure that ICT and their deluge of information are putting on any agent who would like to act morally, when informed about actual or potential evils, but who also lacks the resources to do much about them. In the paper, it is argued that the tragedy may be at least mitigated, if not solved, by seeking to re-establish some equilibrium, through ICT themselves, between what agents know about the world and what they can do to improve it.