Results for 'Large Language Model'

994 found
  1. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, (...)
    1 citation
  2. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  3. Could a large language model be conscious?David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] -/- There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for (...)
    14 citations
  4. Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there (...)
    1 citation
  5. Large Language Models Demonstrate the Potential of Statistical Learning in Language.Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide (...)
    4 citations
  6. Do Large Language Models Know What Humans Know?Sean Trott, Cameron Jones, Tyler Chang, James Michaelov & Benjamin Bergen - 2023 - Cognitive Science 47 (7):e13309.
    Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre‐registered analyses, we present a linguistic version of (...)
    3 citations
  7. Large Language Models: A Historical and Sociocultural Perspective.Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that time. The (...)
  8. Large language models in medical ethics: useful but not expert.Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined the performance of GPT-4, a commercially available LLM, in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears (...)
    1 citation
  9. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration (...)
  10. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact LLMs, which are remarkably capable of simulating roles and personas, (...)
  11. AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    19 citations
  12. Large Language Models and Inclusivity in Bioethics Scholarship.Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10):105-107.
    In the target article, Porsdam Mann and colleagues (2023) broadly survey the ethical opportunities and risks of using general and personalized large language models (LLMs) to generate academic pros...
    1 citation
  13. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI.Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  14. Personhood and AI: Why large language models don’t understand us.Jacob Browning - forthcoming - AI and Society:1-8.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I (...)
    1 citation
  15. Evaluating large language models’ ability to generate interpretive arguments.Zaid Marji & John Licato - forthcoming - Argument and Computation.
    In natural language understanding, a crucial goal is correctly interpreting open-textured phrases. In practice, disagreements over the meanings of open-textured phrases are often resolved through the generation and evaluation of interpretive arguments, arguments designed to support or attack a specific interpretation of an expression within a document. In this paper, we discuss some of our work towards the goal of automatically generating and evaluating interpretive arguments. We have curated a set of rules from the code of ethics of various (...)
  16. Creating a large language model of a philosopher.Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - 2023 - Mind and Language 39 (2):237-259.
    Can large language models produce expert‐quality philosophical texts? To investigate this, we fine‐tuned GPT‐3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry‐picking. Experts on Dennett's work succeeded at distinguishing the Dennett‐generated and machine‐generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly (...)
    8 citations
  17. Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case.Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions (...)
  18. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - Arxiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and (...)
  19. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based (...)
  20. The Epistemological Danger of Large Language Models.Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
    The potential of ChatGPT looms large for the practice of medicine, as both boon and bane. The use of Large Language Models (LLMs) in platforms such as ChatGPT raises critical ethical questions of w...
    1 citation
  21. Do Large Language Models Understand? 천현득 - 2023 - CHUL HAK SA SANG - Journal of Philosophical Ideas 90 (90):75-105.
    This paper examines whether generative language models such as ChatGPT possess understanding. After briefly introducing how the Transformer architecture that forms the backbone of ChatGPT works, I distinguish properly linguistic understanding from cognitive understanding, and further show that cognitive understanding can be divided into epistemic and semantic understanding. On the basis of these distinctions, I argue that large language models can have linguistic understanding but do not have good cognitive understanding. In particular, I criticize the argument of Coelho Mollo and Millière (2023), who claim on teleosemantic grounds that large language models can have semantic understanding.
  22. Creating a Large Language Model of a Philosopher.Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - manuscript
    Can large language models be trained to produce philosophical texts that are difficult to distinguish from texts produced by human philosophers? To address this question, we fine-tuned OpenAI's GPT-3 with the works of philosopher Daniel C. Dennett as additional training data. To explore the Dennett model, we asked the real Dennett ten philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry-picking. We recruited 425 participants (...)
     
  23. Scrutinizing the foundations: could large language models be solipsistic?Andreea Esanu - 2024 - Synthese 203 (5):1-20.
    In artificial intelligence literature, “delusions” are characterized as the generation of unfaithful output from reliable source content. There is an extensive literature on computer-generated delusions, ranging from visual hallucinations, like the production of nonsensical images in Computer Vision, to nonsensical text generated by (natural) language models, but this literature is predominantly taxonomic. In a recent research paper, however, a group of scientists from DeepMind successfully presented a formal treatment of an entire class of delusions in generative AI models (i.e., (...)
  24. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's (...)
  25. Introspective Capabilities in Large Language Models.Robert Long - 2023 - Journal of Consciousness Studies 30 (9):143-153.
    This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' (...)
  26. Vox Populi, Vox ChatGPT: Large Language Models, Education and Democracy.Niina Zuber & Jan Gogoll - 2024 - Philosophies 9 (1):13.
    In the era of generative AI and specifically large language models (LLMs), exemplified by ChatGPT, the intersection of artificial intelligence and human reasoning has become a focal point of global attention. Unlike conventional search engines, LLMs go beyond mere information retrieval, entering into the realm of discourse culture. Their outputs mimic well-considered, independent opinions or statements of facts, presenting a pretense of wisdom. This paper explores the potential transformative impact of LLMs on democratic societies. It delves into the (...)
  27. You are what you’re for: Essentialist categorization in large language models.Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or (...)
    2 citations
  28. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users (...)
  29. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely.Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko & Alessandro Lenci - 2023 - Cognitive Science 47 (11):e13386.
    Word co‐occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs’ semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent−patient interactions (...)
  30. Prompting Metalinguistic Awareness in Large Language Models: ChatGPT and Bias Effects on the Grammar of Italian and Italian Varieties.Angelapia Massaro & Giuseppe Samo - 2023 - Verbum 14.
    We explore ChatGPT’s handling of left-peripheral phenomena in Italian and Italian varieties through prompt engineering to investigate 1) forms of syntactic bias in the model, 2) the model’s metalinguistic awareness in relation to reorderings of canonical clauses (e.g., Topics) and certain grammatical categories (object clitics). A further question concerns the content of the model’s sources of training data: how are minor languages included in the model’s training? The results of our investigation show that 1) the (...) seems to be biased against reorderings, labelling them as archaic even though it is not the case; 2) the model seems to have difficulties with coindexed elements such as clitics and their anaphoric status, labeling them as ‘not referring to any element in the phrase’, and 3) major languages still seem to be dominant, overshadowing the positive effects of including minor languages in the model’s training.
  31. Does thought require sensory grounding? From pure thinkers to large language models.David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding does (...)
    2 citations
  32. Why Personalized Large Language Models Fail to Do What Ethics is All About.Sebastian Laacke & Charlotte Gauckler - 2023 - American Journal of Bioethics 23 (10):60-63.
    Porsdam Mann and colleagues provide an overview of opportunities and risks associated with the use of personalized large language models (LLMs) for text production in (bio)ethics (Porsdam Mann et al...
    1 citation
  33. Babbling stochastic parrots? On reference and reference change in large language models.Steffen Koch - manuscript
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops (...)
  34. AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models.Luciano Floridi - 2023 - Philosophy and Technology 36 (1):1-7.
  35. How Can Large Language Models Support the Acquisition of Ethical Competencies in Healthcare?Jilles Smids & Maartje Schermer - 2023 - American Journal of Bioethics 23 (10):68-70.
    Rahimzadeh et al. (2023) provide an interesting and timely discussion of the role of large language models (LLMs) in ethics education. While mentioning broader educational goals, the paper’s main f...
  36. Assessing the Strengths and Weaknesses of Large Language Models.Shalom Lappin - 2023 - Journal of Logic, Language and Information 33 (1):9-20.
    The transformers that drive chatbots and other AI systems constitute large language models (LLMs). These are currently the focus of a lively discussion in both the scientific literature and the popular media. This discussion ranges from hyperbolic claims that attribute general intelligence and sentience to LLMs, to the skeptical view that these devices are no more than “stochastic parrots”. I present an overview of some of the weak arguments that have been presented against LLMs, and I consider several (...)
    6 citations
  37. On pitfalls (and advantages) of sophisticated Large Language Models.Anna Strasser - forthcoming - In Joan Casas-Roma, Santi Caballe & Jordi Conesa (eds.), Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends. Elsevier.
    Natural language processing based on large language models (LLMs) is a booming field of AI research. After neural networks have proven to outperform humans in games and practical domains based on pattern recognition, we might stand now at a road junction where artificial entities might eventually enter the realm of human communication. However, this comes with serious risks. Due to the inherent limitations regarding the reliability of neural networks, overreliance on LLMs can have disruptive consequences. Since it (...)
  38. The Impact of AUTOGEN and Similar Fine-Tuned Large Language Models on the Integrity of Scholarly Writing.David B. Resnik & Mohammad Hosseini - 2023 - American Journal of Bioethics 23 (10):50-52.
    Artificial intelligence (AI) large language models (LLMs), such as OpenAI’s ChatGPT, have a remarkable ability to process and generate human language but have also raised complex and novel ethica...
    1 citation
  39. Conditional and Modal Reasoning in Large Language Models.Wesley H. Holliday & Matthew Mandelkern - manuscript
    The reasoning abilities of large language models (LLMs) are the topic of a growing body of research in artificial intelligence and cognitive science. In this paper, we probe the extent to which a dozen LLMs are able to distinguish logically correct inferences from logically fallacious ones. We focus on inference patterns involving conditionals (e.g., 'If Ann has a queen, then Bob has a jack') and epistemic modals (e.g., 'Ann might have an ace', 'Bob must have a king'). These (...)
  40. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4.Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses (...)
    1 citation
  41. Ontologies in the era of large language models – a perspective.Fabian Neuhaus - 2023 - Applied Ontology 18 (4):399-407.
    The potential of large language models (LLM) has captured the imagination of the public and researchers alike. In contrast to previous generations of machine learning models, LLMs are general-purpose tools, which can communicate with humans. In particular, they are able to define terms and answer factual questions based on some internally represented knowledge. Thus, LLMs support functionalities that are closely related to ontologies. In this perspective article, I will discuss the consequences of the advent of LLMs for the (...)
  42. Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models.Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of (...)
    5 citations
  43. A Shift Towards Oration: Teaching Philosophy in the Age of Large Language Models.Ryan Lemasters & Clint Hurshman - 2024 - AI and Ethics.
    This paper proposes a reevaluation of assessment methods in philosophy higher education, advocating for a shift away from traditional written assessments towards oral evaluation. Drawing attention to the rising ethical concerns surrounding large language models (LLMs), we argue that a renewed focus on oral skills within philosophical pedagogy is both imperative and underexplored. This paper offers a case for redirecting attention to the neglected realm of oral evaluation, asserting that it holds significant promise for fostering students with some (...)
  44. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
    1 citation
  45. Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine.Jamie Chen, Angelo Cadiente, Lora J. Kasselman & Bryan Pilkington - 2024 - Journal of Medical Ethics 50 (2):97-101.
    Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education yet has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethical questions were identified from the widely utilised question banks UWorld and AMBOSS. Accuracy, bioethical categories, difficulty levels, specialty data, error analysis and character count (...)
    3 citations
  46. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete (...)
  47. A paradigm shift?—On the ethics of medical large language models.Thomas Grote & Philipp Berens - 2024 - Bioethics 38 (5):383-390.
    After a wave of breakthroughs in image‐based medical diagnostics and risk prediction models, machine learning (ML) has turned into a normal science. However, prominent researchers are claiming that another paradigm shift in medical ML is imminent—due to most recent staggering successes of large language models—from single‐purpose applications toward generalist models, driven by natural language. This article investigates the implications of this paradigm shift for the ethical debate. Focusing on issues like trust, transparency, threats of patient autonomy, responsibility (...)
  48. A Dynamical, Radically Embodied, and Ecological Theory of Rhythm Development.Parker Tichko, Ji Chul Kim & Edward W. Large - 2022 - Frontiers in Psychology 13.
    Musical rhythm abilities—the perception of and coordinated action to the rhythmic structure of music—undergo remarkable change over human development. In the current paper, we introduce a theoretical framework for modeling the development of musical rhythm. The framework, based on Neural Resonance Theory, explains rhythm development in terms of resonance and attunement, which are formalized using a general theory that includes non-linear resonance and Hebbian plasticity. First, we review the developmental literature on musical rhythm, highlighting several developmental processes related to rhythm (...)
  49. Friend or foe? Exploring the implications of large language models on the science system.Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on (...)
    1 citation
  50. Modeling Structure‐Building in the Brain With CCG Parsing and Large Language Models.Miloš Stanojević, Jonathan R. Brennan, Donald Dunagan, Mark Steedman & John T. Hale - 2023 - Cognitive Science 47 (7):e13312.
    To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad‐coverage tools from natural‐language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context‐free grammars (CFGs), yet such formalisms are not sufficiently expressive for human languages. Combinatory categorial grammars (CCGs) are sufficiently expressive directly compositional models of grammar with flexible constituency that affords incremental interpretation. In this work, we evaluate whether a more expressive CCG (...)
    1 citation
1–50 of 994