Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to materialize. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.
The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence from mathematics, physics, computer science, philosophy, linguistics, and biology, setting up their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "artificial intelligence" (AI)? And why, after more than 50 years, are our most common interactions with AI, for example with our bank’s computers, still so unsatisfactory? Landgrebe and Smith show how a widespread fear about AI’s potential to bring about radical changes in the nature of human beings and in the human social order is founded on an error. There is still, as they demonstrate in a final chapter, a great deal that AI can achieve which will benefit humanity. But these benefits will be achieved without the aid of systems that are more powerful than humans, which are as impossible as AI systems that are intrinsically "evil" or able to "will" a takeover of human society.
The term ‘intelligence’ as used in this paper refers to items of knowledge collected for the sake of assessing and maintaining national security. The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US. The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors and their behaviors can be developed and updated. Herein we describe an approach to utilizing outcomes-based learning (OBL) to support these efforts that is based on an ontology of the cognitive processes performed by intelligence analysts. Of particular importance to the Cognitive Process Ontology is the class Representation that is Warranted. Such a representation is descriptive in nature and deserving of trust in its veridicality, because a Representation that is Warranted is always produced by a process that was vetted (or successfully designed) to reliably produce veridical representations. As such, Representations that are Warranted are what in other contexts we might refer to as ‘items of knowledge’.
Barry Smith recently discussed the diagraphs of book eight of Jacob Lorhard’s Ogdoas scholastica under the heading “birth of ontology” (Smith, 2022; this issue). Here, I highlight the commonalities between the original usage of diagraphs in the tradition of Ramus for didactic purposes and the usage of their present-day successors, modern ontologies, for computational purposes. The modern ideas of ontology and of the universal computer were born just two generations apart in the breakthrough century of instrumental reason.
The view of nature we adopt in the natural attitude is determined by common sense, without which we could not survive. Classical physics is modelled on this common-sense view of nature, and uses mathematics to formalise our natural understanding of the causes and effects we observe in time and space when we select subsystems of nature for modelling. But in modern physics, we do not go beyond the realm of common sense by augmenting our knowledge of what is going on in nature. Rather, we have measurements that we do not understand, so we know nothing about the ontology of what we measure. We help ourselves by using entities from mathematics, which we fully understand ontologically. But we have no ontology of the reality of modern physics; we have only what we can assert mathematically. In this paper, we describe the ontology of classical and modern physics against this background and show how it relates to the ontology of common sense and of mathematics.
Some defenders of so-called `artificial intelligence' believe that machines can understand language. In particular, Søgaard has argued in his "Understanding models understanding language" (2022) for a thesis of this sort. His idea is that (1) where there is semantics there is also understanding and (2) machines are not only capable of what he calls `inferential semantics', but can even (with the help of inputs from sensors) `learn' referential semantics. We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols which arise when language is stored on hard drives or in books in libraries.
Here we present what we believe is a novel account of what languages are, along with an axiomatically rich representation of languages and language-related data that is based on this account. We propose an account of languages as aggregates of dispositions distributed across aggregates of persons, and in doing so we address linguistic competences and the processes that realize them. This paves the way for representing additional types of language-related entities. Like demographic data of other sorts, data about languages may be of use to researchers in a number of areas, including biomedical research. Data on the languages used in clinical encounters are typically included in medical records, and capture an important factor in patient-provider interactions. Like many types of patient and demographic data, data on a person’s preferred and primary languages are organized in different ways by different systems. This can be a barrier to data integration. We believe that a robust framework for representing language in general and preferred and primary language in particular – which has been lacking in ontologies thus far – can promote more successful integration of language-related data from disparate data sources.
Since the noun phrase `artificial intelligence' (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of `intelligence' proposed by Hutter and others and still accepted by the AI community are too weak even to capture what is involved when we ascribe intelligence to an insect. We then summarise the highly useful definition of basic (arthropod) intelligence proposed by Rodney Brooks, and we identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition. Finally, we show that, from the perspective of the disciplines needed to create such an agent, namely mathematics and physics, these properties are realisable neither by implicit nor explicit mathematical design, nor by setting up an environment in which an AI could evolve spontaneously.
Sam Harris is a contemporary illustration of the difficulties standing in the way of coherent interdisciplinary thinking in an age where science and the humanities have drifted so far apart. We are here concerned with Harris’s views on AI, and specifically with his view according to which, with the advance of AI, there will evolve a machine superintelligence with powers that far exceed those of the human mind. This he sees as something that is not merely possible, but rather a matter of inevitability. If, however, we look carefully at what intelligence is, and at how computers really work on the basis of mathematical models, then we can see that it is forever impossible to emulate inside a computer even the intelligence of crows or rabbits, let alone that of human beings.
Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity — for years, AI experts and non-experts alike have fretted over (and, in a small group’s case, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI — specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. We’re literally building our soon-to-be-sentient successors. Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.
Research and engineering in the quantum domain involve long chains of activity: theory development, hypothesis formation, experimentation, device prototyping, device testing, and many more. At each stage multiple paths become possible, and of the paths pursued, the majority will lead nowhere. Our quantum metascience approach provides a strategy which enables all stakeholders to gain an overview of those developments along these tracks that are relevant to their specific concerns. It provides a controlled vocabulary, built out of terms that are designed to be maximally comprehensible to all groups of stakeholders and across all the sub-fields of the quantum domain.
The pattern of argument advanced in this essay has the following steps: 1. The human mind is not separable from the body; the two form a continuum. 2. Our consciousness, and all mental phenomena built upon it, are the emanation of a material process caused by a complex system. 3. Complex systems can be neither modelled mathematically nor understood causally. 4. Computers are Turing machines. They can compute only mathematical models. There will never be hyper-Turing machines, and even if there were, they too could compute only mathematical models. 5. It is not possible to replace the body, as the substrate of the mind, with a computer. Digital immortality is accordingly an impossibility.
Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called ‘explainable AI’ (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely: ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.
The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability should be accepted also as a necessary condition of AGI, and we provide a description of the nature of human dialogue in particular and of human language in general against this background. We then argue that it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. This is (1) because there are no traditional explicitly designed mathematical models that could be used as a starting point for creating such programs; and (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue. If this is so, then we can conclude that a Turing machine also cannot possess AGI, because it fails to fulfil a necessary condition thereof. At the same time, however, we acknowledge the potential of Turing machines to master dialogue behaviour in highly restricted contexts, where what is called “narrow” AI can still be of considerable utility.
In his “Bridging mainstream and formal ontology”, Augusto (2021) gives an excellent analysis of Dietrich von Freiberg’s idea of using causality as a partitioning principle for upper ontologies. For this, Dietrich’s notion of extrinsic principles is crucial. The question whether causation can and indeed should be used as a partitioning principle for ontologies is discussed using mathematics and physics as examples.
Health Level 7 (HL7) is an international standards development organisation in the domain of healthcare information technology. Initially the mission of HL7 was to enable data exchange via the creation of syntactic standards which supported point-to-point messaging. Currently HL7 sees its mission as one of creating standards for semantic interoperability in healthcare IT on the basis of its flagship “version 3” (v3). Unfortunately, v3 has been plagued by quality and consistency issues, and it has not been able to keep pace with recent developments either in semantics and ontology or in computer science and engineering. HL7’s response has been to develop its “Services-Aware Interoperability Framework” (SAIF), which is intended to provide a foundation for work on all aspects of standardization in HL7 henceforth. We here summarise the major design principles that must be satisfied by a semantic interoperability framework – principles relating both to static semantics and to computational behaviour. We then assess the SAIF in light of these principles. We conclude that the SAIF is not in a position to support the needed reform of the HL7 v3 family of standards.