Contents
45 found
  1. AI-Based Solutions for Environmental Monitoring in Urban Spaces.Hilda Andrea - manuscript
    The rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the current (...)
  2. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  3. What is AI safety? What do we want it to be?Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
  4. Can Word Models be World Models? Language as a Window onto the Conditional Structure of the World.Matthieu Queloz - manuscript
    LLMs are, in the first instance, models of the statistical distribution of tokens in the vast linguistic corpus they have been trained on. But their often surprising emergent capabilities raise the question of how much understanding of the extralinguistic world LLMs can glean from this statistical distribution of words alone. Here, I explore and evaluate the idea that the probability distribution of words in the public corpus offers a window onto the conditional structure of the world. To become a good (...)
  5. Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought.Matthieu Queloz - manuscript
    Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it used to carry for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. (...)
  6. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to facilitate progress (...)
  7. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  8. A Hybrid Approach for Intrusion Detection in IoT Using Machine Learning and Signature-Based Methods.Janet Yan - manuscript
    Internet of Things (IoT) devices have transformed various industries, enabling advanced functionalities across domains such as healthcare, smart cities, and industrial automation. However, the increasing number of connected devices has raised significant concerns regarding their security. IoT networks are highly vulnerable to a wide range of cyber threats, making Intrusion Detection Systems (IDS) critical for identifying and mitigating malicious activities. This paper proposes a hybrid approach for intrusion detection in IoT networks by combining Machine Learning (ML) techniques with Signature-Based Methods. (...)
  9. Representation in large language models.Cameron Yetman - manuscript
    The extraordinary success of recent Large Language Models (LLMs) on a diverse array of tasks has led to an explosion of scientific and philosophical theorizing aimed at explaining how they do what they do. Unfortunately, disagreement over fundamental theoretical issues has led to stalemate, with entrenched camps of LLM optimists and pessimists often committed to very different views of how these systems work. Overcoming stalemate requires agreement on fundamental questions, and the goal of this paper is to address one such (...)
  10. ‘Interpretability’ and ‘Alignment’ are Fool’s Errands: A Proof that Controlling Misaligned Large Language Models is the Best Anyone Can Hope For.Marcus Arvan - forthcoming - AI and Society.
    This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible. Sections 2 and 3 show that because of how complex LLMs are, researchers must interpret their learned functions largely in (...)
  11. Will Large Language Models Overwrite Us?Walter Barta - forthcoming - Double Helix.
  12. The Curious Case of Uncurious Creation.Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
    4 citations
  13. Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback.Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - forthcoming - Proceedings of the Forty-First International Conference on Machine Learning.
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" (...)
  14. Conversations with Chatbots.P. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul (eds.), Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
    1 citation
  15. I Contain Multitudes: A Typology of Digital Doppelgängers.William D'Alessandro, Trenton W. Ford & Michael Yankoski - forthcoming - American Journal of Bioethics.
    In "Digital Doppelgängers and Lifespan Extension: What Matters?", Iglesias et al. argue that “some of the aims or ostensible goods of person-span expansion could plausibly be fulfilled in part by creating a digital doppelgänger”. Since person-extension aims are deeply heterogeneous, however, no single type of doppelgänger system is likely to suffice to meet all such needs. We propose a partial typology of doppelgängers—the family heirloom, the research archive, the public legacy, the project surrogate—and suggest appropriate training methods, design features and (...)
  16. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  17. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - Asian Journal of Philosophy.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
    6 citations
  18. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence.Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
  19. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    1 citation
  20. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
  21. Reflection, confabulation, and reasoning.Jennifer Nagel - forthcoming - In Luis Oliveira & Joshua DiPaolo (eds.), Kornblith and His Critics. Wiley-Blackwell.
    Humans have distinctive powers of reflection: no other animal seems to have anything like our capacity for self-examination. Many philosophers hold that this capacity has a uniquely important guiding role in our cognition; others, notably Hilary Kornblith, draw attention to its weaknesses. Kornblith chiefly aims to dispel the sense that there is anything ‘magical’ about second-order mental states, situating them in the same causal net as ordinary first-order mental states. But elsewhere he goes further, suggesting that there is something deeply (...)
  22. Language and thought: The view from LLMs.Daniel Rothschild - forthcoming - In David Sosa & Ernie Lepore (eds.), Oxford Studies in Philosophy of Language Volume 3.
  23. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic.Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
  24. (1 other version)Artificial Intelligence (AI) and Global Justice.Siavosh Sahebi & Paul Formosa - 2025 - Minds and Machines 35 (4):1-29.
    This paper provides a philosophically informed and robust account of the global justice implications of Artificial Intelligence (AI). We first discuss some of the key theories of global justice, before justifying our focus on the Capabilities Approach as a useful framework for understanding the context-specific impacts of AI on low- to middle-income countries. We then highlight some of the harms and burdens facing low- to middle-income countries within the context of both AI use and the AI supply chain, by analyzing the (...)
  25. Attributions toward Artificial Agents in a modified Moral Turing Test.Eyal Aharoni, Sharlene Fernandes, Daniel Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias & Victor Crespo - 2024 - Scientific Reports 14 (8458):1-11.
    Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.'s (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. (...)
    1 citation
  26. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use.Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of (...)
  27. Affective Artificial Agents as sui generis Affective Artifacts.Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
    3 citations
  28. Large language models and linguistic intentionality.Jumbly Grindrod - 2024 - Synthese 204 (2):1-24.
    Do large language models like Chat-GPT or Claude meaningfully use the words they produce? Or are they merely clever prediction machines, simulating language use by producing statistically plausible text? There have already been some initial attempts to answer this question by showing that these models meet the criteria for entering meaningful states according to metasemantic theories of mental content. In this paper, I will argue for a different approach—that we should instead consider whether language models meet the criteria given by (...)
    2 citations
  29. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes?Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  30. (1 other version)Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs.Shun Iizuka - 2024 - Techné: Research in Philosophy and Technology 28 (2):219-235.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  31. Smart Route Optimization for Emergency Vehicles: Enhancing Ambulance Efficiency through Advanced Algorithms.R. Indoria - 2024 - Technosaga 1 (1):1-6.
    Emergency response times play a critical role in saving lives, especially in urban settings where traffic congestion and unpredictable events can delay ambulance arrivals. This paper explores a novel framework for smart route optimization for emergency vehicles, leveraging artificial intelligence (AI), Internet of Things (IoT) technologies, and dynamic traffic analytics. We propose a real-time adaptive routing system that integrates machine learning (ML) for predictive modeling and IoT-enabled communication with traffic infrastructure. The system is evaluated using simulated urban environments, achieving a (...)
  32. Is Alignment Unsafe?Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to control. (...)
  33. Imagination, Creativity, and Artificial Intelligence.Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau (eds.), Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such as consciousness, (...)
    1 citation
  34. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
    2 citations
  35. Chinese Chat Room: AI hallucinations, epistemology and cognition.Kristina Šekrst - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):365-381.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, it is demonstrated that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and it is proposed that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  36. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable.Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  37. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments (...)
  38. Chatting with Bots: AI, Speech-Acts, and the Edge of Assertion.Iwan Williams & Tim Bayne - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We (...)
  39. The Ethics of LLMs at Universities: A Case for Restriction and Regulation.István Zárdai - 2024 - Toxiv e-Print System.
    ‘Disruptive technologies’ is a euphemism for new technologies released without adequate regulation, causing significant unemployment and costly, inefficient additional labour. So it stands with LLMs. They output lookalikes of authored writing. Most output remixes existing materials, effectively stealing, since, lacking understanding and intention, no original meaning is added. LLMs enable low-cost, high-reward dishonesty. Students attempt to submit these products as their own texts. Some in education propose to use LLMs to allow students to generate text and then revise it. This (...)
  40. Plurale Autorschaft von Mensch und Künstlicher Intelligenz? [Plural Authorship of Human and Artificial Intelligence?]David Lauer - 2023 - Literatur in Wissenschaft und Unterricht 2023 (2):245-266.
    This paper (in German) discusses the question of what is going on when large language models (LLMs) produce meaningful text in reaction to human prompts. Can LLMs be understood as authors or producers of speech acts? I argue that this question has to be answered in the negative, for two reasons. First, due to their lack of semantic understanding, LLMs do not understand what they are saying and hence literally do not know what they are (linguistically) doing. Since the agent’s (...)
  41. Linguistic Competence and New Empiricism in Philosophy and Science.Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  42. Interdisciplinary Communication by Plausible Analogies: the Case of Buddhism and Artificial Intelligence.Michael Cooper - 2022 - Dissertation, University of South Florida
    Communicating interdisciplinary information is difficult, even when two fields are ostensibly discussing the same topic. In this work, I’ll discuss the capacity for analogical reasoning to provide a framework for developing novel judgments utilizing similarities in separate domains. I argue that analogies are best modeled after Paul Bartha’s By Parallel Reasoning, and that they can be used to create a Toulmin-style warrant that expresses a generalization. I argue that these comparisons provide insights into interdisciplinary research. In order to demonstrate this (...)
  43. (1 other version)Morphic Topology of Numeric Energy: A Fractal Morphism of Topological Counting Shows Real Differentiation of Numeric Energy.Parker Emmerson - unknown
    Published with utmost gratitude to Jehovah the living One Allaha and for all His loving angels. Abstract: INTEGRATION BY CONGRUENCY METHODS. The Mathematical Juncture, M, indicates a perpendicular elliptical integral and acts as a linguistic congruence permuter for logical dingbat statements. This mathematical junctor is used to permute dingbat expressions into topologically congruent solve methods as described herein. Fractal morphisms, derived from Energy Numbers, which are of a higher-dimensional vector space and can be mapped to real or (...)
  44. LLMs are Not Just Next Token Predictors.Alex Grzankowski, Stephen M. Downes & Patrick Forber - manuscript
    LLMs are statistical models of language, learned through stochastic gradient descent with a next token prediction objective, prompting a popular view among AI modelers: LLMs are just next token predictors. While LLMs are engineered using next token prediction, and trained based on their success at this task, our view is that a reduction to just next token predictor sells LLMs short. Moreover, there are important explanations of LLM behavior and capabilities that are lost when we engage in this kind of (...)
  45. Ethics at the Frontier of Human-AI Relationships.Henry Shevlin - manuscript
    The idea that humans might one day form persistent and dynamic relationships with artificial agents in professional, social, and even romantic contexts is a longstanding one. However, developments in machine learning and especially natural language processing over the last five years have led to this possibility becoming actualised at a previously unseen scale. Apps like Replika, Xiaoice, and CharacterAI boast many millions of active long-term users, and give rise to emotionally complex experiences. In this paper, I provide an overview of these developments, beginning (...)