Machina sapiens: the algorithm that stole the secret of knowledge from us. -/- Can machines think? This unsettling question, posed by Alan Turing in 1950, may have found an answer: today one can converse with a computer without being able to distinguish it from a human being. New intelligent agents such as ChatGPT have proved capable of performing tasks that go far beyond their creators' original intentions, and we still do not know why: while they were trained for some abilities, others emerged spontaneously as they read thousands of books and millions of web pages. Is this the secret of knowledge, and is it now in the hands of our creations? What else may emerge as we continue down this road?
What does it mean to be human? Philosophers and theologians have been wrestling with this question for centuries. Recent advances in cognition, neuroscience, artificial intelligence and robotics have yielded insights that bring us even closer to an answer. There are now computer programs that can accurately recognize faces, engage in conversation, and even compose music. There are also robots that can walk up a flight of stairs, work cooperatively with each other and express emotion. If machines can do everything we can, does that mean we are machines? -/- This book examines whether an artificial person can be constructed and if so, what that might tell us about our future and ourselves. Different human capacities such as perception, creativity, consciousness, social behavior, and free will are described in separate chapters. Technological advances in these areas are summarized and compared to our own abilities. The book adopts a multi-disciplinary approach, with a naturalistic perspective drawn from biology and psychology matched against a technological perspective based on computer science and robotics.
Psychoanalysis, particularly as articulated by figures like Freud and Lacan, highlights the inherent division within the human subject: a schism between the conscious and unconscious mind. This suggests that such an internal division becomes amplified in the context of generative art, where technology and algorithms are used to generate artistic expressions that are meant to emerge from the depths of the unconscious. Here, we encounter the tension between the conscious artist and the generative process itself, which may yield unexpected, even uncontrollable results. -/- This paper therefore seeks to address this division within the modern subject and its relationship to technology, wherein the division within the living body is revealed through the presence of prosthetic elements, mirroring the division brought about by the incorporation of language as a signifier. I argue that the amplification of this internal schism does not necessarily lead to a more fractured subject. Instead, generative art, bolstered by advancements in AI and machine learning, offers a unique opportunity for individuals to externalize and explore their minds in novel ways. -/- By examining contemporary works such as Hal Foster's Prosthetic Gods, a pivotal exploration of the convergence between modernist art and psychoanalytic theory, and Isabel Millar's Psychoanalysis of Artificial Intelligence, this paper elucidates the profound implications of Freud's vision of modern subjectivity as Prothesengott (Prosthetic God) and addresses the questions raised by this technological imbrication of the human mind and body through the Lacanian framework. For Freud, man does not become a real God; rather, the potential to transcend one's limitations ascribes God-like qualities to us, as we seek to generate new forms of life that go beyond merely reproducing nature: a transcendence of the natural.
Millar emphasizes Freud's observation that this is evidenced by the fact that these additional organs remain distinct from the organism and can never assimilate into it. One continually falls short of realizing the fantasy he envisions, opting instead to use his supplementary artificial organs to endlessly revolve around the objects of the drive. -/- This evolving relationship between the drive and its technological objects resounds in Lacan's conception of the "lathouse", which allows extimate objects to convert interiority (the unconscious) into exteriority (the conscious) and exteriority into interiority. This paper seeks to employ this underutilized concept to understand the nature of human subjectivity and its bodily and structural relationship to generative art. It therefore emphasizes what really happens when we enter into this relationship with the lathouse, whereby this artificial object has effects in the "real of jouissance", and where these lathouses create a network, namely the Alethosphere. My goal is to argue that generative art, as a technological development, can be seen as an extension of the development of the drive. In conclusion, I make the case for generative art's potential to externalize the human creative drive by emphasizing the interplay between randomness and structure, and by showing how it offers a means to surpass our inherent limitations, presenting an avenue for self-expression that transcends traditional modes of art.
While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question - also explored in the experiment - whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot "knew" rather than really knew). However, (iii), according to our data people were unwilling to downgrade their mens rea ascriptions to a merely metaphorical sense when given the chance. Finally, (iv), we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
This book surveys and examines the most famous philosophical arguments against building a machine with human-level intelligence. From claims and counter-claims about the ability to implement consciousness, rationality, and meaning, to arguments about cognitive architecture, the book presents a vivid history of the clash between philosophy and AI. Tellingly, the AI Wars are mostly quiet now. Explaining this crucial fact opens new paths to understanding the current resurgence of AI (especially deep learning AI and robotics), what happens when philosophy meets science, and the role of philosophy in the culture in which it is embedded. -/- Organising the arguments into four core topics - 'Is AI Possible?', 'Architectures of the Mind', 'Mental Semantics and Mental Symbols' and 'Rationality and Creativity' - this book shows the debate that played out between philosophers on both sides of the question, as well as the debate between philosophers and the AI scientists and engineers building AI systems. Up-to-date and forward-looking, the book is packed with fresh insights and supporting material, including: -/- - Accessible introductions to each war, explaining the background behind the main arguments against AI. - Details of what happened in each AI war, the legacy of the attacks, and what new controversies are on the horizon. - An extensive bibliography of key readings.
We no longer need a magic lamp to rub with our fingers so that a genie emerges, able to serve us and fulfil some of our most important everyday demands, and we no longer need incantations to enter the world of magic and fantasy; the genie has already escaped its computational bottle, from deep within the laboratories of programming and artificial intelligence, conjured by symbolic mathematical incantations (code) that it quickly managed to devour and digest, becoming capable of producing other, similar incantations of its own, perhaps even better ones! The "Generative Pre-trained Transformer", known for short as "GPT", has emerged, brandishing enormous potential for research, services, and production, while threatening to wipe out entire sectors of professions and jobs, and to erode the scientific research skills of school and university students!
"Emotional AI", also known as "affective computing", "human-centered AI", and "social AI", is a relatively new concept (its technologies are still under development) and one of the fields of computer science that aims to develop machines capable of understanding human emotions. The concept simply refers to detecting and programming human emotions in order to improve artificial intelligence and broaden the scope of its use, so that robots are not limited to analyzing and interacting with the cognitive (logical) aspects of human communication, but extend their analysis and interaction to its emotional aspects as well.
Book. From the Publisher. An influential scientist in the field of artificial intelligence (AI) explains its fundamental concepts and how it is changing culture and society. -/- A particular form of AI is now embedded in our tech, our infrastructure, and our lives. How did it get there? Where and why should we be concerned? And what should we do now? The Shortcut: Why Intelligent Machines Do Not Think Like Us provides an accessible yet probing exposition of AI in its prevalent form today, proposing a new narrative to connect and make sense of events that have happened in the recent tumultuous past, and enabling us to think soberly about the road ahead. -/- This book is divided into ten carefully crafted and easily digestible chapters. Each chapter grapples with an important question for AI. Ranging from the scientific concepts that underpin the technology to wider implications for society, it develops a unified description using tools from different disciplines and avoiding unnecessary abstractions or words that end with -ism. The book uses real examples wherever possible, introducing the reader to the people who have created some of these technologies and to ideas shaping modern society that originate from the technical side of AI. It contains important practical advice about how we should approach AI in the future without promoting exaggerated hypes or fears. -/- Entertaining and disturbing but always thoughtful, The Shortcut confronts the hidden logic of AI while preserving a space for human dignity. It is essential reading for anyone with an interest in AI, the history of technology, and the history of ideas. General readers will come away much more informed about how AI really works today and what we should do next. -/- Table of Contents -/- ABOUT THE AUTHOR. PROLOGUE. 1 The Search for Intelligence. 2 The Shortcut. 3 Finding Order in the World. 4 Lady Lovelace Was Wrong. 5 Unintended Behaviour. 6 Microtargeting and Mass Persuasion. 7 The Feedback Loop.
8 The Glitch. 9 Social Machines. 10 Regulating, Not Unplugging. EPILOGUE. BIBLIOGRAPHY. INDEX.
Artificial intelligence (AI) is the capacity of a machine or computer system to simulate and perform tasks that would normally require human intelligence, such as logical reasoning, learning, and problem solving. Artificial intelligence relies on algorithms and machine learning technologies to give machines the ability to apply certain cognitive skills and carry out tasks on their own, autonomously or semi-autonomously. Artificial intelligence is distinguished by its degree of cognitive capacity or by its degree of autonomy. By capacity, it may be weak or narrow, general, or superlative. By autonomy, it may be reactive, deliberative, cognitive, or fully autonomous. As artificial intelligence improves, many processes become increasingly efficient, and tasks that seem complicated today will be performed with greater speed and precision.
Chapter 3 of the ongoing publication "Artificial Aesthetics". Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms, or a museum professional. -/- You may be wondering how AI will affect your professional area in general and your work and career in particular. Our book offers intellectual tools to help us better see the future of creative fields. These tools come from the philosophy of art, experimental psychology, media theory, digital culture studies and data science. The book is the first to combine these different perspectives. -/- We started work on the book in summer 2019, exchanging numerous messages, commenting on each other's ideas, and sharing drafts of sections. The final book is the result of this process. Although each chapter is written by one author, it reflects the discussions we had over 27 months.
"Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue." ~ Anna Maria Chavez. There has been a series of research efforts on cyberbullying that have been unable to provide a reliable solution to it. In this research work, we provide a solution by developing a model capable of detecting and intercepting bullying incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation messaging system to test our model, leading to the development of an Artificial Intelligence powered anti-cyberbullying system using the machine learning algorithms of Multinomial Naïve Bayes (MNB) and an optimized linear Support Vector Machine (SVM). Our model is able to detect and intercept bullying outgoing and incoming messages and take immediate action.
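The abstract names Multinomial Naïve Bayes as one of its two classifiers. Purely as an illustration of how such a text filter works (this is a generic sketch, not the authors' implementation; the toy messages, labels, and smoothing constant below are invented for the example), a minimal version in Python:

```python
import math
from collections import Counter

# Toy labeled corpus standing in for a real bullying-message dataset
# (invented examples; the paper's data is not reproduced here).
train = [
    ("you are worthless and stupid", "bully"),
    ("nobody likes you loser", "bully"),
    ("see you at practice tomorrow", "ok"),
    ("thanks for the notes today", "ok"),
]

def fit(data, alpha=1.0):
    """Estimate log priors and Laplace-smoothed per-class word log-likelihoods."""
    labels = [y for _, y in data]
    prior = {y: math.log(n / len(data)) for y, n in Counter(labels).items()}
    words = {y: Counter() for y in prior}
    for text, y in data:
        words[y].update(text.split())
    vocab = {w for c in words.values() for w in c}
    loglik = {}
    for y, c in words.items():
        total = sum(c.values()) + alpha * len(vocab)
        loglik[y] = {w: math.log((c[w] + alpha) / total) for w in vocab}
        loglik[y]["<unk>"] = math.log(alpha / total)  # out-of-vocabulary words
    return prior, loglik

def predict(text, prior, loglik):
    """Pick the class with the highest posterior log-probability."""
    scores = {
        y: prior[y] + sum(loglik[y].get(w, loglik[y]["<unk>"])
                          for w in text.split())
        for y in prior
    }
    return max(scores, key=scores.get)

prior, loglik = fit(train)
print(predict("you stupid loser", prior, loglik))   # bully
print(predict("see you tomorrow", prior, loglik))   # ok
```

A production system like the one described would train on a large labeled corpus and pair this with an SVM; the interception step would then simply block any message the classifier labels as bullying.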
What is the essential ingredient of creativity that only humans – and not machines – possess? Can artificial intelligence help refine the notion of creativity by reference to that essential ingredient? How do we need to redefine our conceptual and legal frameworks for rewarding creativity in light of this new qualifying – actually creatively significant – factor? -/- Those are the questions tackled in this essay. The author's conclusion is that consciousness, experiential states (such as a raw feel of what it is like to be creating) and propositional attitudes (such as the intention to instigate change by creating) appear pivotal to qualifying an exploratory effort as creativity. Artificial intelligence systems would supposedly be capable of creativity if they could exhibit such states, which philosophers and computer scientists posit as conceptually admissible and practically possible. -/- The existing legal framework rewards creative endeavours by reference to the novelty or originality of the end result. But this bar is not insurmountable for artificial intelligence. Technically speaking, artificial intelligence systems can create works that are novel and/or original. Are we then prepared to grant those systems the legal status of "creators" in their own right? To whom should the associated benefits and rewards be assigned? How does the position change (or not) based on the qualifying factors set out above? Should – and if so, how – the general public benefit from the inventions and creative works of artificial intelligence systems if troves of personal data are the key component that fueled and informed creative choices?
The aim of this paper is to grasp the relevant distinctions between various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. The proposed taxonomy cuts across the traditional dichotomies between symbolic and embodied AI, between general intelligence and cognitive simulation, and between human-like and non-human-like AI. -/- According to the taxonomy proposed here, one can distinguish between four distinct general approaches that figured prominently in early and classical AI, and that have partly developed into distinct research programs: first, phenomenal simulations (e.g., Turing's "imitation game"); second, simulations that explore general-level formal isomorphisms in pursuit of a general theory of intelligence (e.g., logic-based AI); third, simulations as exploratory material models that serve to develop theoretical accounts of cognitive processes (e.g., Marr's stages of visual processing and classical connectionism); and fourth, simulations as strictly formal models of a theory of computation that postulates cognitive processes to be isomorphic with computational processes (strong symbolic AI). -/- In continuation of pragmatic views of the modes of modeling and simulating world affairs, this taxonomy of approaches to modeling in AI helps to elucidate how available computational concepts and simulational resources contribute to the modes of representation and theory development in AI research—and what made that research program uniquely dependent on them.
Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first seeks to unify all participants in any instance of the Turing test – the judge, the machine, the human subject, as well as the means of observation – instead of building a separating wall. The second aims to design AGI programs in such a way that they can move in various environments. The authors thoroughly discuss four areas of interaction for robots with AGI and introduce the new idea of the techno-umwelt, bridging artificial intelligence with biology in a new way.
Zero and one are the circumference and diameter of an always-conserved circle, explaining everything in philosophy, physics, and psychology. This produces a completely tokenized 'reality' with important implications for governmental and financial systems, as is already happening in the exploding 'world' of NFTs ('crypto' currency in general) based on the statement and the diagram, and on the notion of identity (knowledge as power).
CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. This is Part 3 of a five-part introduction. The focus here is on explaining the semantic model for CAT4. Points in CAT4 graphs represent facts. We introduce all the formal (data) elements used in the classic semantic model: sense or intension (1st and 2nd joins), reference (3rd join), functions (4th join), time and truth (logical fields), and symbolic content (name/value fields). Concepts are introduced through examples alternating with theoretical discussion. Some concepts are assumed from Parts 1 and 2, but key ideas are re-introduced. The purpose is to explain the CAT4 interpretation, and why the data structure and CAT4 axioms have been chosen: to make the semantic model consistent and complete. We start with methods for translating information from database tables into graph DBs and into CAT4. We then present a method for translating natural language into CAT4, and conclude with a comparison of the system with an advanced semantic logic, the hyper-intensional logic TIL, which also aims to translate NL into a logical calculus. The CAT4 Natural Language Translator is discussed in further detail in Part 4, when we introduce functions more formally. Part 5 discusses software design considerations.
A review of Marcus du Sautoy's 2019 book, THE CREATIVITY CODE: Art and Innovation in the Age of AI, Cambridge, MA: Belknap (Harvard) Press, 2019. ISBN: 9780674988132.
This paper presents a practical case study showing how, despite the nowadays limited collaboration between AI and Cognitive Science (CogSci), cognitive research can still have an important role in the development of novel AI technologies. After a brief historical introduction about the reasons for the divorce between the AI and CogSci research agendas (which happened in the mid-1980s), we try to provide evidence of a renewed collaboration by presenting a recent case study on a commonsense reasoning system, built using insights from cognitive semantics.
To address the conflict between access and storage caused by the continuous accumulation of knowledge in the field of artificial intelligence in welding, the author uses RDF (Resource Description Framework) to represent the process knowledge of CO2 gas-shielded welding, designs and implements a knowledge base based on Web semantics, and stores it in HDFS to overcome the difficulties of mass data storage. By studying the mechanism and characteristics of spatter in CO2 gas-shielded welding, and focusing on common measures and means for reducing spatter, the author implements an inference function over welding process rules using the Jena framework and optimizes the process implementation, finally integrating the core functions into the Spring framework and adopting an HTML5-based interface design to enable a new welding process expert system across the Internet and mobile platforms.
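The system described combines RDF triples with rule-based inference (via Jena, a Java framework). Purely as an illustration of that triple-plus-rule pattern (the welding facts, predicate names, and the rule below are invented for the example, not taken from the paper), the same idea in a few lines of Python:

```python
# A knowledge base as a set of (subject, predicate, object) triples,
# mimicking RDF statements about a CO2 gas-shielded welding process.
# All identifiers here are hypothetical, chosen for the sketch.
kb = {
    ("process1", "type", "CO2Welding"),
    ("process1", "current", "high"),
    ("process1", "arcLength", "long"),
}

def apply_rules(kb):
    """One forward-chaining pass in the spirit of Jena's rule engine:
    a CO2 process with high current and a long arc is flagged as
    spatter-prone, and a remedy is derived. (Illustrative rule only.)"""
    derived = set(kb)
    for s, p, o in kb:
        if p == "type" and o == "CO2Welding":
            if ((s, "current", "high") in kb
                    and (s, "arcLength", "long") in kb):
                derived.add((s, "spatterRisk", "high"))
                derived.add((s, "recommendation", "shortenArc"))
    return derived

facts = apply_rules(kb)
print(("process1", "spatterRisk", "high") in facts)  # True
```

In the actual system, the triples would live in an RDF store on HDFS and the rule would be written in Jena's rule syntax; the structure of the computation, matching triple patterns and asserting new triples, is the same.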
The rapid advancement of artificial intelligence (AI) has led to renewed ambitions of developing artificial general intelligence. Alongside this has been a resurgence in the development of virtual and augmented reality (V/AR) technologies, which are viewed as “disruptive” technologies and the computing platforms of the future. V/AR effectively bring the digital world of machines, robots, and artificial agents to our senses while entailing the transposition of human activity and presence into the digital world of artificial agents and machine forms of intelligence. The intersection of humans and machines in this shared space brings humans and machines into ontological continuity as informational entities in a totalizing informational environment, which subsumes both cyber and physical space in an artificially constructed virtual world. The reconstruction of mind (through AI) and world (through V/AR) thus has significant epistemological, ontological, and anthropological implications, which constitute the underlying features in the artificialization of mind and world.
This article interrogates the challenge artificial general intelligence (AGI) poses to religion and human societies, in general. More specifically, it seeks to respond to “Singularity”—when machines reach a level of intelligence that would put into question the privileged position humanity enjoys as imago Dei. Employing the Bemba notion of mystico‐relationality in dialogue with the concepts of the “created co‐creator” and Christ the Key, it argues for the possibility of AI participating in imago Dei. The findings show that imaging is a fluid, participatory activity that aims at likeness, but also social harmony. It also argues that God is the only original creator, humans are created creators, and that every aspect of visible existence, including AI, is inherently divine imaging. However, strong imaging is only attainable based on the only One and True Image—Christ, whose union of the material and the divine means that all creation can image, excluding nothing, even AI.
In the mid‐twentieth century, theorists began seriously forecasting possibilities for artificial intelligence (AI). As related research gathered momentum and resources, the topic made impressions on public discourse. One effect was increasingly pointed emphasis on AI in popular narratives. Although considerably earlier thematic examples may be located, we can observe swelling and generally pessimistic threads of speculation in science fiction of the 1950s and 1960s. This discussion identifies some pertinent science fiction texts from that period, alongside public discussion arising from contemporary research. One consistent theme is human receptiveness to the numinous, and the capacity to ascribe personality and even divinity to sufficiently impressive manifestations, even artificial ones. Science fiction has long contemplated such reactions, prefiguring today's anticipations of AIs that might abruptly develop themselves beyond any possible human comprehension or control. This body of exploratory projections is a useful resource for the engineers and philosophers currently grappling with realistic prospects for Western humanity's shifting conception of itself.
The idea of artificial intelligence implies the existence of a form of intelligence that is “natural,” or at least not artificial. The problem is that intelligence, whether “natural” or “artificial,” is not well defined: it is hard to say what, exactly, is or constitutes intelligence. This difficulty makes it impossible to measure human intelligence against artificial intelligence on a unique scale. It does not, however, prevent us from comparing them; rather, it changes the sense and meaning of such comparisons. Comparing artificial intelligence with human intelligence could allow us to understand both forms better. This paper thus aims to compare and distinguish these two forms of intelligence, focusing on three issues: forms of embodiment, autonomy and judgment. Doing so, I argue, should enable us to have a better view of the promises and limitations of present-day artificial intelligence, along with its benefits and dangers and the place we should make for it in our culture and society.
This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work that they have contributed to the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. While doing so, they could also think about how the said concept could affect other important philosophical concepts.
Researchers in artificial intelligence and robotics often include a timeline stretching into the future in which they predict the convergence between human and artificial intelligence. Ray Kurzweil, for example, predicts that in a mere 100 years humans and intelligent machines will become indistinguishable from one another, both ceasing to have permanent corporeal forms. This article argues that the one thing we can know for sure about the future is that when it arrives, it will be different from what we imagined. The cultural work that predictions like Kurzweil’s perform is less to prognosticate the future than to shape our understanding of what it means to be human in the present. Working from the ‘sense-think-act’ paradigm foundational to work in artificial intelligence and robotics, this article argues that predictions in all three areas feed back to affect how the human is envisioned in the present. The reconfigurations these predictions bring about are to downplay consciousness, embodied cognition, and evolutionary inertia. The article concludes by critically evaluating contemporary resistances to the posthuman, especially in the writings of Rodney Brooks and Francis Fukuyama.
In the article, I propose that the body phantom is a phenomenal and functional model of one’s own body. This model has two aspects. On the one hand, it functions as a tacit sensory representation of the body that is at the same time related to the motor aspects of body functioning. On the other hand, it also has a phenomenal aspect as it constitutes the content of conscious bodily experience. This sort of tacit, functional and sensory model is related to the spatial parameters of the physical body. In the article, I postulate that this functional model or map is of crucial importance to the felt ownership parameters of the body (de Vignemont 2007), which are themselves considered as constituting the phenomenal aspect of the aforementioned model.
In the notion of modern dialectics there are two perspectives on intellectual evolution: intelligence may be only the extreme point of biological adaptation, or it may already be the expression of reason. This open character of scientific dialectics is found in the interpretations studied here: Baldwin's pancalism, Binet's "thought without images", and the interpretations of Janet and Piaget, which help to enrich the notion of the genesis of intelligence and to grasp its evolution.
Purpose: To reflect on our culture's propensity to think in terms of local linear causality, such as "genetic determination", by examining systems and their operation. Findings: The existence of a system is operational, and a system exists as such only as long as the operational conditions that constitute it prevail. As the observer distinguishes a system, he or she specifies, with this operation of distinction, the conditions that constitute the system. Since the adaptation between living systems and their medium is invariant, all that happens in their history must happen as a flow of structurally congruent changes that conserve their organization and adaptation. The ontogenic phenotype is therefore not genetically determined but arises in the process of epigenesis, that is, along a path of interactions starting from the initial structures of both system and medium, under the conservation of living. Implications: Natural selection should not be considered a directing pressure causing the differential survival of living systems, but rather a result of it.
This paper evaluates the use of synthetic modeling to investigate the relationship between organic and artificial forms of behavioral maladaptability. In particular, it addresses the character of organic phobias and the issue of testing the validity of artificial models of these phobias. The two main accounts of organic phobias, the biological or evolutionary explanation and the associative-learning explanation, are used as the starting points of this exercise. The learning approach is explored in terms of a probability-based model that uses a discrepancy mechanism to represent the artificial phobia, while the endogenous aspect of artificial phobias is discussed in terms of the potential offered by evolutionary learning. Several methods of assessing the construct validity of artificial phobias are outlined.
In this paper I argue that recent technological transformations in the life-cycle of information have brought about a fourth revolution in the long process of reassessing humanity's fundamental nature and role in the universe: the idea that we might be informational organisms among many agents, inforgs not so dramatically different from clever, engineered artefacts, sharing with them a global environment that is ultimately made of information, the infosphere. In view of this important evolution in our self-understanding, and given the sort of IT-mediated interactions that humans will increasingly enjoy with their environment and with a variety of other agents, whether natural or synthetic, we have a unique opportunity to develop a new ecological approach to the whole of reality.
Classically, the question of recognizing another subject is posed unilaterally, in terms of the observed behaviour of the other entity. Here, we propose an alternative based on the emergent patterns of activity resulting from the interaction of both partners. We employ a minimalist device that forces the subjects to externalize their perceptual activity as trajectories that can be observed and recorded; the results show that subjects do identify the situation of perceptual crossing with their partner. The interpretation of the results is guided by comparable evolutionary robotics simulations. There are two components to subjects' recognition capacities: distinguishing mobile from fixed entities, and behaving so as to interact with their partner rather than with a mobile lure. The "Other" is characterized by the fact that there is sufficient regularity in the interactions to encourage the formation of anticipations, but sufficient indetermination that the actual behaviour is consistently surprising. Keywords: recognition of the other; perceptual crossing; evolutionary robotics.
The paper starts out with a discussion of the difference between mythology and feasible concepts in robotics. Based on a novel brain model and an appropriate formalism, a distinction is made between the auto-reflection and hetero-reflection of the robot and the self-reflection of its constructor. Whereas conscious robots are able to auto-reflect on their mechanical behavior and hetero-reflect on their behavior with regard to the environment, the capability of self-reflection must remain with the constructor of the robot. This limitation on the construction of conscious robots is founded mainly on brain theory and philosophy. Finally, it is proposed that, in addition to a second nature, human technology may succeed in creating a third nature embodied as a society of robots.
The present paper studies self-awareness and introduces some self-awareness-related incidents. It then describes the relationship between self-awareness and consciousness and explains the MoNAD, a neural network circuit developed by the authors that can describe the phenomena of self-awareness and consciousness. A model of self-awareness is then presented. This self-awareness model is a parallel network system in which multiple independent MoNADs communicate with one another. In the experiments, three test robots were used: (1) a self-image robot reflected in a mirror, (2) another robot, and (3) a cable-connected robot behaving as commanded by the self-robot. The reactions of the three test robots to the self-robot were compared in order to investigate the self-awareness of the self-robot. The experiments have shown that the conditions required for the self-robot to interpret a test robot as part of itself are: (1) the test robot must return a reaction within a certain period of time that is inte…
There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.
There is a general dichotomy in popular culture about the future of robotics and artificial intelligence: the Humans-Against-the-Machines scenario versus the We-Become-Them scenario. The more likely scenario is the latter, which is compatible with an optimistic posthuman world. However, the technological and cultural paths to robotic integration still have many problems and pitfalls. This essay focuses on Human-Robot Interaction issues that apply to the adoption of robots in many aspects of life, as well as to the adoption of robotics into humans themselves. The main message of the essay is that the evolution of intelligent species depends on interfaces.