Contents (967 found, showing 1–50)
Material to categorize
  1. Attention and Power.Carolyn Dicey Jennings - manuscript
    As discussions concerning attention progress from cognition to norms—from the individual to the social—we are left with the question: what is “social” attention? It is typically discussed in scientific papers as attention by an individual in a social setting. This book expands on earlier work to explore something more fundamentally social: attention by a social group, which I will call “collective attention.” (Contact for draft of Chapter 2: The Power of Attention.).
  2. Machina sapiens.Nello Cristianini - 2024 - Bologna: Il Mulino.
    Machina sapiens: the algorithm that stole the secret of knowledge from us. Can machines think? This unsettling question, posed by Alan Turing in 1950, may have found an answer: today it is possible to converse with a computer without being able to distinguish it from a human being. New intelligent agents such as ChatGPT have proved capable of carrying out tasks that go far beyond the original intentions of their creators, and we still do not know why: if they were trained for certain abilities, others (...)
  3. Artificial Psychology.Jay Friedenberg - 2008 - Psychology Press.
    What does it mean to be human? Philosophers and theologians have been wrestling with this question for centuries. Recent advances in cognition, neuroscience, artificial intelligence and robotics have yielded insights that bring us even closer to an answer. There are now computer programs that can accurately recognize faces, engage in conversation, and even compose music. There are also robots that can walk up a flight of stairs, work cooperatively with each other and express emotion. If machines can do everything we (...)
  4. Operationalising Representation in Natural Language Processing.Jacqueline Harding - forthcoming - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
    1 citation
  5. Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour.Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)
    6 citations
  6. The Great Philosophical Objections to AI: The History and Legacy of the AI Wars.Eric Dietrich, Chris Fields, John P. Sullins, Bram van Heuveln & Robin Zebrowski - 2021 - London: Bloomsbury Academic.
    This book surveys and examines the most famous philosophical arguments against building a machine with human-level intelligence. From claims and counter-claims about the ability to implement consciousness, rationality, and meaning, to arguments about cognitive architecture, the book presents a vivid history of the clash between philosophy and AI. Tellingly, the AI Wars are mostly quiet now. Explaining this crucial fact opens new paths to understanding the current resurgence of AI (especially deep learning AI and robotics), what happens when philosophy meets (...)
  7. O "Frame Problem": a sensibilidade ao contexto como um desafio para teorias representacionais da mente.Carlos Barth - 2019 - Dissertation, Federal University of Minas Gerais
    Context sensitivity is one of the distinctive marks of human intelligence. Understanding the flexible way in which humans think and act in a potentially infinite number of circumstances, even though they’re only finite and limited beings, is a central challenge for the philosophy of mind and cognitive science, particularly in the case of those using representational theories. In this work, the frame problem, that is, the challenge of explaining how human cognition efficiently acknowledges what is relevant from what is not (...)
  8. From deep learning to rational machines: what the history of philosophy can teach us about the future of artificial intelligence.Cameron J. Buckner - 2023 - New York, NY: Oxford University Press.
    This book provides a framework for thinking about foundational philosophical questions surrounding machine learning as an approach to artificial intelligence. Specifically, it links recent breakthroughs in deep learning to classical empiricist philosophy of mind. In recent assessments of deep learning's current capabilities and future potential, prominent scientists have cited historical figures from the perennial philosophical debate between nativism and empiricism, which primarily concerns the origins of abstract knowledge. These empiricists were generally faculty psychologists; that is, they argued that the active (...)
  9. The deep neural network approach to the reference class problem.Oliver Buchholz - 2023 - Synthese 201 (3):1-24.
    Methods of machine learning (ML) are gradually complementing and sometimes even replacing methods of classical statistics in science. This raises the question whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). It arises whenever statistical evidence is applied to an individual object, since the individual belongs to several reference classes and evidence might vary across them. Thus, the problem consists (...)
  10. AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models.Luciano Floridi - 2023 - Philosophy and Technology 36 (1):1-7.
    21 citations
  11. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience.Birgitta Dresp-Langley - 2023 - Information 14 (2):1-82.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level, long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited in (...)
    1 citation
  12. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience.Birgitta Dresp-Langley - 2023 - Information 14 (2):e82 1-17.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory [19] decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited (...)
  13. Could a large language model be conscious?David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] -/- There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their (...)
    12 citations
  14. A Falsificationist Account of Artificial Neural Networks.Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
    Machine learning operates at the intersection of statistics and computer science. This raises the question as to its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received minor attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules from past (...)
  15. Occam's Razor For Big Data?Birgitta Dresp-Langley - 2019 - Applied Sciences 3065 (9):1-28.
    Detecting quality in large unstructured datasets requires capacities far beyond the limits of human perception and communicability and, as a result, there is an emerging trend towards increasingly complex analytic solutions in data science to cope with this problem. This new trend towards analytic complexity represents a severe challenge for the principle of parsimony (Occam’s razor) in science. This review article combines insight from various domains such as physics, computational science, data engineering, and cognitive science to review the specific properties (...)
  16. Walking Through the Turing Wall.Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  17. Environmental Variability and the Emergence of Meaning: Simulational Studies across Imitation, Genetic Algorithms, and Neural Nets.Patrick Grim - 2006 - In Angelo Loula & Ricardo Gudwin (eds.), Artificial Cognition Systems. Idea Group. pp. 284-326.
    A crucial question for artificial cognition systems is what meaning is and how it arises. In pursuit of that question, this paper extends earlier work in which we show the emergence of simple signaling in biologically inspired models using arrays of locally interactive agents. Communities of "communicators" develop in an environment of wandering food sources and predators using any of a variety of mechanisms: imitation of successful neighbors, localized genetic algorithms and partial neural net training on successful neighbors. Here we (...)
  18. Ontology, neural networks, and the social sciences.David Strohmaier - 2020 - Synthese 199 (1-2):4775-4794.
    The ontology of social objects and facts remains a field of continued controversy. This situation complicates the life of social scientists who seek to make predictive models of social phenomena. For the purposes of modelling a social phenomenon, we would like to avoid having to make any controversial ontological commitments. The overwhelming majority of models in the social sciences, including statistical models, are built upon ontological assumptions that can be questioned. Recently, however, artificial neural networks have made their way into (...)
  19. Connectomes as constitutively epistemic objects: critical perspectives on modeling in current neuroanatomy.Philipp Haueis & Jan Slaby - 2017 - In Progress in Brain Research Vol 233: The Making and Use of Animal Models in Neuroscience and Psychiatry. Amsterdam: pp. 149–177.
    in a nervous system of a given species. This chapter provides a critical perspective on the role of connectomes in neuroscientific practice and asks how the connectomic approach fits into a larger context in which network thinking permeates technology, infrastructure, social life, and the economy. In the first part of this chapter, we argue that, seen from the perspective of ongoing research, the notion of connectomes as “complete descriptions” is misguided. Our argument combines Rachel Ankeny’s analysis of neuroanatomical wiring diagrams (...)
    2 citations
  20. Review of Sharkey (1992): Connectionist Natural Language Processing: Readings from ‘Connection Science’. [REVIEW]Ephraim Nissan - 1997 - Pragmatics and Cognition 5 (2):383-384.
  21. Deep learning and cognitive science.Pietro Perconti & Alessio Plebe - 2020 - Cognition 203:104365.
    In recent years, the family of algorithms collected under the term ``deep learning'' has revolutionized artificial intelligence, enabling machines to reach human-like performances in many complex cognitive tasks. Although deep learning models are grounded in the connectionist paradigm, their recent advances were basically developed with engineering goals in mind. Despite their applied focus, deep learning models eventually seem fruitful for cognitive purposes. This can be thought of as a kind of biological exaptation, where a physiological structure becomes applicable for a (...)
    6 citations
  22. A Puzzle concerning Compositionality in Machines.Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.
    This paper attempts to describe and address a specific puzzle related to compositionality in artificial networks such as Deep Neural Networks and machine learning in general. The puzzle identified here touches on a larger debate in Artificial Intelligence related to epistemic opacity but specifically focuses on computational applications of human level linguistic abilities or properties and a special difficulty with relation to these. Thus, the resulting issue is both general and unique. A partial solution is suggested.
    7 citations
  23. Literal Perceptual Inference.Alex Kiefer - 2017 - In Metzinger Thomas & Wiese Wanja (eds.), Philosophy and Predictive Processing. MIND Group.
    In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. -/- In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the (...)
    20 citations
  24. Representation in the Prediction Error Minimization Framework.Alex Kiefer & Jakob Hohwy - 2019 - In Sarah Robins, John Francis Symons & Paco Calvo (eds.), The Routledge Companion to Philosophy of Psychology. New York, NY: Routledge. pp. 384-409.
    This chapter focuses on what’s novel in the perspective that the prediction error minimization (PEM) framework affords on the cognitive-scientific project of explaining intelligence by appeal to internal representations. It shows how truth-conditional and resemblance-based approaches to representation in generative models may be integrated. The PEM framework in cognitive science is an approach to cognition and perception centered on a simple idea: organisms represent the world by constantly predicting their own internal states. PEM theories often stress the hierarchical structure of (...)
    10 citations
  25. Deep learning: A philosophical introduction.Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
    42 citations
  26. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    15 citations
  27. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas.Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (in cases of both successful and unsuccessful output). This is due to the fact that the classical problem (...)
  28. Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks.Cameron Buckner - 2018 - Synthese (12):1-34.
    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to (...)
    42 citations
  29. Systematicity, Conceptual Truth, and Evolution.Brian P. McLaughlin - 1993 - Royal Institute of Philosophy Supplement 34:217-234.
    8 citations
  30. Wittgenstein and Connectionism: a Significant Complementarity?Stephen Mills - 1993 - Royal Institute of Philosophy Supplement 34:137-157.
    Between the later views of Wittgenstein and those of connectionism on the subject of the mastery of language there is an impressively large number of similarities. The task of establishing this claim is carried out in the second section of this paper.
    6 citations
  31. Peter Novak, Mental Symbols: A Defence of the Classical Theory of Mind. [REVIEW]Istvan S. N. Berkeley - 2001 - Minds and Machines 11 (1):148-150.
  32. The combinatorial-connectionist debate and the pragmatics of adjectives.Ran Lahav - 1993 - Pragmatics and Cognition 1 (1):71-88.
    Within the controversy between the combinatorial and the connectionist approaches to cognition it has been argued that our semantic and syntactic capacities provide evidence for the combinatorial approach. In this paper I offer a counter-weight to this argument by pointing out that the same type of considerations, when applied to the pragmatics of adjectives, provide evidence for connectionism.
    3 citations
  33. What makes connectionism different?James H. Fetzer - 1994 - Pragmatics and Cognition 2 (2):327-348.
  34. Connectionism, Concepts, and Folk Psychology. [REVIEW]Daniel N. Robinson - 1998 - Review of Metaphysics 51 (4):919-919.
  35. What Systematicity Isn’t.Robert Cummins, Jim Blackmon, David Byrd, Alexa Lee & Martin Roth - 2005 - Journal of Philosophical Research 30:405-408.
    In “On Begging the Systematicity Question,” Wayne Davis criticizes the suggestion of Cummins et al. that the alleged systematicity of thought is not as obvious as is sometimes supposed, and hence not reliable evidence for the language of thought hypothesis. We offer a brief reply.
    1 citation
  36. Smolensky’s Interpretation of Connectionism.Stephen Mills - 1990 - Irish Philosophical Journal 7 (1-2):104-118.
    2 citations
  37. Jerry A. Fodor and Zenon W. Pylyshyn: Minds Without Meanings: An Essay on the Content of Concepts.Sean Welsh - 2016 - Minds and Machines 26 (4):467-471.
  38. Clinical Diagnosis of Creutzfeldt-Jakob Disease Using a Multi-Layer Perceptron Neural Network Classifier.K. Sutherland, R. De Silva & R. G. Will - 1997 - Journal of Intelligent Systems 7 (1-2):1-18.
  39. Connectionism, Confusion and Cognitive Science.M. R. W. Dawson & K. S. Shamanski - 1994 - Journal of Intelligent Systems 4 (3-4):215-262.
    2 citations
  40. Associationism: Not the Cliff Over Which to Push Connectionism.R. J. Jorna & W. F. G. Haselager - 1994 - Journal of Intelligent Systems 4 (3-4):279-308.
    1 citation
  41. Computability of Logical Neural Networks.T. B. Ludermir - 1992 - Journal of Intelligent Systems 2 (1-4):261-290.
  42. Dreams and Connectionism: A Critique.D. Kuiken - 1994 - Journal of Intelligent Systems 4 (3-4):263-278.
  43. Cytological Diagnosis Based on Fuzzy Neural Networks.D. Kontoravdis, A. Likas & P. Karakitsos - 1998 - Journal of Intelligent Systems 8 (1-2):55-80.
  44. 2.2 Grundlagen neuronaler Netze.Klaus Mainzer - 1994 - In Computer - Neue Flügel des Geistes?: Die Evolution Computergestützter Technik, Wissenschaft, Kultur Und Philosophie. De Gruyter. pp. 247-275.
  45. Common and distinct neural networks for theory of mind reasoning and inhibitory control.Christoph Rothmayr - unknown
  46. On the Systematicity of Language and Thought.Kent Johnson - 2004 - Journal of Philosophy 101 (3):111-139.
  47. Connectionist Minds.Andy Clark - 1990 - Proceedings of the Aristotelian Society 90 (1):83-102.
    3 citations
  48. Neural networks learn highly selective representations in order to overcome the superposition catastrophe.Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2014 - Psychological Review 121 (2):248-261.
    4 citations
  49. Critical branching neural networks.Christopher T. Kello - 2013 - Psychological Review 120 (1):230-254.
    3 citations
  50. Postscript: Parallel distributed processing in localist models without thresholds.David C. Plaut & James L. McClelland - 2010 - Psychological Review 117 (1):289-290.