Results for 'Large-scale language models'

999 found
  1. A Large-Scale Analysis of Variance in Written Language. Brendan T. Johns & Randall K. Jamieson - 2018 - Cognitive Science 42 (4):1360-1374.
    The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language. The models treat knowledge as an interaction of processing mechanisms and the structure of language experience. But language experience is often treated agnostically. We report a distributional semantic analysis that (...)
    4 citations
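
A hedged sketch of the count-based distributional approach the abstract above describes: each word is represented by its co-occurrence counts within a fixed context window, and similarity between words is measured by cosine. The toy corpus and window size are illustrative assumptions, not Johns and Jamieson's actual model or data.

```python
# Minimal count-based distributional semantics sketch (illustrative only).
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the dog chased the cat".split(),
    "the cat chased the mouse".split(),
    "the professor wrote the long book".split(),
]
WINDOW = 2  # assumed context-window size

vectors = defaultdict(Counter)
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - WINDOW), min(len(sentence), i + WINDOW + 1)):
            if j != i:
                vectors[word][sentence[j]] += 1  # count each context word

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print(cosine(vectors["dog"], vectors["cat"]))   # similar contexts -> higher
print(cosine(vectors["dog"], vectors["book"]))  # dissimilar contexts -> lower
```

Applied to very large corpora rather than toy sentences, the same counting scheme is what lets such models measure the statistical structure of language experience.
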
  2. The Challenges of Large-Scale, Web‐Based Language Datasets: Word Length and Predictability Revisited. Stephan C. Meylan & Thomas L. Griffiths - 2021 - Cognitive Science 45 (6):e12983.
    Language research has come to rely heavily on large-scale, web‐based datasets. These datasets can present significant methodological challenges, requiring researchers to make a number of decisions about how they are collected, represented, and analyzed. These decisions often concern long‐standing challenges in corpus‐based language research, including determining what counts as a word, deciding which words should be analyzed, and matching sets of words across languages. We illustrate these challenges by revisiting “Word lengths are optimized for efficient communication” (...)
    5 citations
  3. Bringing legal knowledge to the public by constructing a legal question bank using large-scale pre-trained language model. Mingruo Yuan, Ben Kao, Tien-Hsuan Wu, Michael M. K. Cheung, Henry W. H. Chan, Anne S. Y. Cheung, Felix W. H. Chan & Yongxi Chen - forthcoming - Artificial Intelligence and Law:1-37.
    Access to legal information is fundamental to access to justice. Yet accessibility refers not only to making legal documents available to the public, but also rendering legal information comprehensible to them. A vexing problem in bringing legal information to the public is how to turn formal legal documents such as legislation and judgments, which are often highly technical, into easily navigable and comprehensible knowledge for those without legal education. In this study, we formulate a three-step approach for bringing legal knowledge (...)
  4. A Rasch Model and Rating System for Continuous Responses Collected in Large-Scale Learning Systems. Benjamin Deonovic, Maria Bolsinova, Timo Bechger & Gunter Maris - 2020 - Frontiers in Psychology 11:500039.
    An extension to a rating system for tracking the evolution of parameters over time using continuous variables is introduced. The proposed rating system assumes a distribution for the continuous responses, which is agnostic to the origin of the continuous scores and thus can be used for applications as varied as continuous scores obtained from language testing to scores derived from accuracy and response time from elementary arithmetic learning systems. Large-scale, high-stakes, online, anywhere anytime learning and testing inherently (...)
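
The abstract above describes a rating system that tracks ability and difficulty parameters over time from continuous scores. Below is a minimal sketch of that general idea, using an Elo-style update with a logistic expected score; the functional form, learning rate, and data are illustrative assumptions, not the authors' actual model.

```python
# Elo-style tracking of ability/difficulty from continuous scores in [0, 1].
import math

def expected_score(ability, difficulty):
    """Rasch-style expectation: logistic in (ability - difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def update(ability, difficulty, observed, k=0.1):
    """Nudge both parameters toward the observed continuous score."""
    surprise = observed - expected_score(ability, difficulty)
    return ability + k * surprise, difficulty - k * surprise

theta, delta = 0.0, 0.0  # learner ability, item difficulty
for score in [0.9, 0.7, 0.95]:  # e.g., continuous language-test scores
    theta, delta = update(theta, delta, score)
print(theta, delta)
```
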
  5. The great Transformer: Examining the role of large language models in the political economy of AI. Wiebke Denkena & Dieuwertje Luitse - 2021 - Big Data and Society 8 (2).
    In recent years, AI research has become more and more computationally demanding. In natural language processing, this tendency is reflected in the emergence of large language models like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy (...)
    1 citation
  6. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this (...)
    6 citations
  7. Masked prediction and interdependence network of the law using data from large-scale Japanese court judgments. Ryoma Kondo, Takahiro Yoshida & Ryohei Hisano - 2023 - Artificial Intelligence and Law 31 (4):739-771.
    Court judgments contain valuable information on how statutory laws and past court precedents are interpreted and how the interdependence structure among them evolves in the courtroom. Data-mining the evolving structure of such customs and norms that reflect myriad social values from a large-scale court judgment corpus is an essential task from both the academic and industrial perspectives. In this paper, using data from approximately 110,000 court judgments from Japan spanning the period 1998–2018 from the district to the supreme (...)
  8. Large infinitary languages: model theory. M. A. Dickmann - 1975 - New York: American Elsevier Pub. Co.
  9. Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training (...)
    1 citation
  10. Unsupervised context sensitive language acquisition from a large corpus. Shimon Edelman - unknown
    We describe a pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of linguistic structures from a plain natural-language corpus. This paper addresses the issues of learning structured knowledge from a large-scale natural language data set, and of generalization to unseen text. The implemented algorithm represents sentences as paths on a graph whose vertices are words. Significant patterns, determined by recursive context-sensitive statistical inference, form new vertices. Linguistic constructions are represented by trees composed (...)
    4 citations
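
A sketch of the representation step the abstract above describes: sentences become paths on a directed graph whose vertices are words, so that transitions shared across sentences surface as candidate patterns. The toy sentences are assumptions, and the recursive context-sensitive statistical inference of the actual algorithm is omitted.

```python
# Sentences as paths on a word graph; shared transitions hint at patterns.
import networkx as nx

sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

g = nx.DiGraph()
for s in sentences:
    words = s.split()
    for a, b in zip(words, words[1:]):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)

# Transitions traversed by more than one sentence-path are pattern candidates.
shared = [(a, b) for a, b, d in g.edges(data=True) if d["weight"] > 1]
print(shared)  # [('sat', 'on'), ('on', 'the')] -> candidate pattern "sat on the"
```
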
  11. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Mark Steyvers & Joshua B. Tenenbaum - 2005 - Cognitive Science 29 (1):41-78.
    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small‐world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale‐free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many (...)
    54 citations
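
A sketch of how the statistics reported above (average path length, clustering, degree distribution) can be computed with networkx. The Watts–Strogatz stand-in graph is an assumption for illustration; the paper analyzes real word-association, WordNet, and Roget's Thesaurus networks.

```python
# Small-world statistics on a stand-in graph (illustrative only).
from collections import Counter

import networkx as nx

g = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1, seed=0)

print("average shortest path:", nx.average_shortest_path_length(g))
print("average clustering:   ", nx.average_clustering(g))

# Degree distribution; a scale-free network would show a power-law tail.
degree_counts = Counter(d for _, d in g.degree())
for degree in sorted(degree_counts):
    print(degree, degree_counts[degree])
```
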
  12. Generative AI entails a credit–blame asymmetry. Sebastian Porsdam Mann, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, Julian Koplin, Monika Plozza, Daniel Rodger, Peter V. Treit, Gregory Renard, John McMillan & Julian Savulescu - 2023 - Nature Machine Intelligence 5 (5):472-475.
    Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
    2 citations
  13. Large-scale brain networks and psychopathology: a unifying triple network model. Vinod Menon - 2011 - Trends in Cognitive Sciences 15 (10):483-506.
  14. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Mark Steyvers & Joshua B. Tenenbaum - 2005 - Cognitive Science 29 (1):41-78.
    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small‐world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale‐free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many (...)
    80 citations
  15. Could a large language model be conscious? David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current (...): for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.
    14 citations
  16. Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, (...)
    1 citation
  17. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Mark Steyvers & Joshua B. Tenenbaum - 2005 - Cognitive Science 29 (1):41-78.
    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small‐world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale‐free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many (...)
    27 citations
  18. Large Language Models Demonstrate the Potential of Statistical Learning in Language. Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally (...)
    4 citations
  19. Large Language Models: A Historical and Sociocultural Perspective. Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that (...)
  20. Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al. examined the performance of GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making (...)
    1 citation
  21. A Dynamical, Radically Embodied, and Ecological Theory of Rhythm Development. Parker Tichko, Ji Chul Kim & Edward W. Large - 2022 - Frontiers in Psychology 13.
    Musical rhythm abilities—the perception of and coordinated action to the rhythmic structure of music—undergo remarkable change over human development. In the current paper, we introduce a theoretical framework for modeling the development of musical rhythm. The framework, based on Neural Resonance Theory, explains rhythm development in terms of resonance and attunement, which are formalized using a general theory that includes non-linear resonance and Hebbian plasticity. First, we review the developmental literature on musical rhythm, highlighting several developmental processes related to rhythm (...)
  22. Large-scale brain systems in ADHD: Beyond the prefrontal–striatal model. F. Xavier Castellanos & Erika Proal - 2012 - Trends in Cognitive Sciences 16 (1):17-26.
  23. Machine Advisors: Integrating Large Language Models into Democratic Assemblies. Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful (...)
  24. AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    19 citations
  25. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  26. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI. Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  27. Large Language Models and Inclusivity in Bioethics Scholarship. Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10):105-107.
    In the target article, Porsdam Mann and colleagues (2023) broadly survey the ethical opportunities and risks of using general and personalized large language models (LLMs) to generate academic pros...
    1 citation
  28. Creating a large language model of a philosopher. Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - 2023 - Mind and Language 39 (2):237-259.
    Can large language models produce expert‐quality philosophical texts? To investigate this, we fine‐tuned GPT‐3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry‐picking. Experts on Dennett's work succeeded at distinguishing the Dennett‐generated and machine‐generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly to (...)
    8 citations
  29. Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with (...)
  30. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers? Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making (...)
  31. Training Effectiveness Measurement for Large Scale Programs - Demystified: A 4-Tier Practical Model for Technical Training Managers. Raman K. Attri - 2018 - Singapore: Speed To Proficiency Research: S2Pro©.
    This book addresses the challenges that typical technical training managers and other technical managers face in justifying the return on investment of their programs, particularly for large-scale, investment-intensive programs. This book describes a very intuitive and practical model for the measurement of the effectiveness of technical training programs. The book is based on a 4-tier Return on Effectiveness (ROE) model developed through years of research, observation, and experience. The ROE model uses four simple indices: training reaction index, improvement index, (...)
  32. The semantic structure of emotion words across languages is consistent with componential appraisal models of emotion. Klaus R. Scherer & Johnny R. J. Fontaine - 2019 - Cognition and Emotion 33 (4):673-682.
    Appraisal theories of emotion, and particularly the Component Process Model, claim that the different components of the emotion process are essentially driven by the results of cognitive appraisals and that the feeling component constitutes a central integration and representation of these processes. Given the complexity of the proposed architecture, comprehensive experimental tests of these predictions are difficult to perform and to date are lacking. Encouraged by the “lexical sedimentation” hypothesis, here we propose an indirect examination of the compatibility of the (...)
    2 citations
  33. Do Large Language Models Know What Humans Know? Sean Trott, Cameron Jones, Tyler Chang, James Michaelov & Benjamin Bergen - 2023 - Cognitive Science 47 (7):e13309.
    Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre‐registered analyses, we present a linguistic version (...)
    3 citations
  34. Governing large-scale farmland acquisitions in Québec: the conventional family farm model questioned. Frantz Gheller - 2018 - Agriculture and Human Values 35 (3):623-636.
    This article argues that the definition of land grabs in public debate is a politically contested process with profound normative consequences for policy recommendations regarding the future of the family farm model. To substantiate this argument, I first explore how different definitions of land grabbing bring into focus different kinds of actors and briefly survey the history of land grabbing in Canada. I then introduce the public debate about land grabbing in Québec and discuss its evolution from its beginning in (...)
  35. Large-scale simulations and parameter study for a simple recrystallization model. Matt Elsey, Selim Esedoğlu & Peter Smereka - 2011 - Philosophical Magazine 91 (11):1607-1642.
  36. M. A. Dickmann. Large infinitary languages. Model theory. Studies in logic and the foundations of mathematics, vol. 83. North-Holland Publishing Company, Amsterdam and Oxford, and American Elsevier Publishing Company, Inc., New York, 1975, xv + 464 pp. [REVIEW] Michael Makkai - 1978 - Journal of Symbolic Logic 43 (1):144-145.
  37. Creating a Large Language Model of a Philosopher. Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - manuscript
    Can large language models be trained to produce philosophical texts that are difficult to distinguish from texts produced by human philosophers? To address this question, we fine-tuned OpenAI's GPT-3 with the works of philosopher Daniel C. Dennett as additional training data. To explore the Dennett model, we asked the real Dennett ten philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry-picking. We recruited 425 participants to (...)
  38. Introducing Meta‐analysis in the Evaluation of Computational Models of Infant Language Development. María Andrea Cruz Blandón, Alejandrina Cristia & Okko Räsänen - 2023 - Cognitive Science 47 (7):e13307.
    Computational models of child language development can help us understand the cognitive underpinnings of the language learning process, which occurs along several linguistic levels at once (e.g., prosodic and phonological). However, in light of the replication crisis, modelers face the challenge of selecting representative and consolidated infant data. Thus, it is desirable to have evaluation methodologies that could account for robust empirical reference data, across multiple infant capabilities. Moreover, there is a need for practices that can compare (...)
  39. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models. Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, (...)
  40. Introspective Capabilities in Large Language Models. Robert Long - 2023 - Journal of Consciousness Studies 30 (9):143-153.
    This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' (...)
  41. You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, (...)
    2 citations
  42. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko & Alessandro Lenci - 2023 - Cognitive Science 47 (11):e13386.
    Word co‐occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs’ semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent−patient (...)
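
A hedged sketch of the kind of likelihood comparison described above: scoring a plausible versus an implausible agent–patient description under a pretrained causal language model. GPT-2 via the HuggingFace transformers library is a stand-in; the paper's models, stimuli, and scoring metrics differ.

```python
# Compare total log-likelihoods of two event descriptions under GPT-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_likelihood(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns mean next-token cross-entropy.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)  # total log-probability of the string

print(log_likelihood("The teacher bought the laptop."))  # plausible event
print(log_likelihood("The laptop bought the teacher."))  # implausible event
```

Per the finding summarized above, a well-trained model should typically assign the plausible description the higher score.
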
  43. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots are large language models (LLMs), which are generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, (...)
  44. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues. Michele Farisco, Jeanette H. Kotaleski & Kathinka Evers - 2018 - Frontiers in Psychology 9.
    Modelling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the current fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders (...)
    2 citations
  45. Does thought require sensory grounding? From pure thinkers to large language models. David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding (...)
    2 citations
  46. Why Personalized Large Language Models Fail to Do What Ethics is All About. Sebastian Laacke & Charlotte Gauckler - 2023 - American Journal of Bioethics 23 (10):60-63.
    Porsdam Mann and colleagues provide an overview of opportunities and risks associated with the use of personalized large language models (LLMs) for text production in (bio)ethics (Porsdam Mann et al...
    1 citation
  47. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024, HCXAI24).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact LLMs, which are remarkably capable of simulating roles and (...)
  48. Large-Scale Modeling of Wordform Learning and Representation. Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.
    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with (...)
    9 citations
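
A minimal sketch in the spirit of the sequence encoder described above: an encoder RNN compresses a letter string into a fixed-width vector, and a decoder RNN reconstructs the string from that code alone. The architectural details below (GRU, 64 hidden units, toy alphabet, single-word training) are illustrative assumptions, not Sibley et al.'s model.

```python
# Toy sequence autoencoder for wordforms (illustrative sketch).
import torch
import torch.nn as nn

ALPHABET = "abcdefghijklmnopqrstuvwxyz#"  # '#' marks end of word
V, HIDDEN = len(ALPHABET), 64

class SequenceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, V)

    def forward(self, x):
        _, code = self.encoder(self.embed(x))    # fixed-width wordform code
        steps = torch.zeros(x.shape[0], x.shape[1], HIDDEN)
        y, _ = self.decoder(steps, code)         # decode from the code alone
        return self.out(y)

def encode_word(w):
    return torch.tensor([[ALPHABET.index(c) for c in w + "#"]])

model = SequenceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = encode_word("cat")
for _ in range(200):  # overfit a single wordform as a smoke test
    logits = model(x)
    loss = loss_fn(logits.view(-1, V), x.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(logits.argmax(-1), x)  # reconstruction should match the input indices
```
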
  49. Trans-cultural Adaptation and Validation of the “Teacher Job Satisfaction Scale” in Arabic Language Among Sports and Physical Education Teachers (“Teacher of Physical Education Job Satisfaction Inventory”—TPEJSI): Insights for Sports, Educational, and Occupational Psychology. Nasr Chalghaf, Noomen Guelmami, Tania Simona Re, Juan José Maldonado Briegas, Sergio Garbarino, Fairouz Azaiez & Nicola L. Bragazzi - 2019 - Frontiers in Psychology 10.
    Background: Job satisfaction is largely associated with organizational aspects, including improved working environments, worker’s well-being and more effective performance. There are many definitions regarding job satisfaction in the existing scholarly literature: it can be expressed as a positive emotional state, a positive impact of job-related experiences on individuals, and employees’ perceptions regarding their jobs. Aims: No reliable scales in Arabic language to assess job satisfaction in the sports and physical education field exist. This study aimed to trans-culturally adapt and validate (...)
  50. Large-Scale Modeling of Wordform Learning and Representation. Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.
    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with (...)
    8 citations
1 — 50 / 999