Material to categorize
  1. Reliability in Machine Learning.Thomas Grote, Konstantin Genin & Emily Sullivan - forthcoming - Philosophy Compass.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning---as far as they are concerned with reliability.
  2. Understanding Biology in the Age of Artificial Intelligence.Adham El Shazly, Elsa Lawerence, Srijit Seal, Chaitanya Joshi, Matthew Greening, Pietro Lio, Shantung Singh, Andreas Bender & Pietro Sormanni - manuscript
    Modern life sciences research is increasingly relying on artificial intelligence (AI) approaches to model biological systems, primarily centered around the use of machine learning (ML) models. Although ML is undeniably useful for identifying patterns in large, complex data sets, its widespread application in biological sciences represents a significant deviation from traditional methods of scientific inquiry. As such, the interplay between these models and scientific understanding in biology is a topic with important implications for the future of scientific research, yet it (...)
  3. Machina sapiens.Nello Cristianini - 2024 - Bologna: Il Mulino.
    Machina sapiens: the algorithm that stole from us the secret of knowledge. Can machines think? This unsettling question, posed by Alan Turing in 1950, may have found an answer: today one can converse with a computer without being able to tell it apart from a human being. New intelligent agents such as ChatGPT have proved capable of performing tasks that go far beyond the original intentions of their creators, and we still do not know why: while they were trained for certain abilities, others (...)
  4. From ethics to epistemology and back again: informativeness and epistemic injustice in explanatory medical machine learning.Giorgia Pozzi & Juan M. Durán - forthcoming - AI and Society:1-12.
    In this paper, we discuss epistemic and ethical concerns brought about by machine learning (ML) systems implemented in medicine. We begin by fleshing out the logic underlying a common approach in the specialized literature (which we call the _informativeness account_). We maintain that the informativeness account limits its analysis to the impact of epistemological issues on ethical concerns without assessing the bearings that ethical features have on the epistemological evaluation of ML systems. We argue that according to this methodological approach, (...)
  5. Responding to the Watson-Sterkenburg debate on clustering algorithms and natural kinds.Warmhold Jan Thomas Mollema - manuscript
    In Philosophy and Technology 36, David Watson discusses the epistemological and metaphysical implications of unsupervised machine learning (ML) algorithms. Watson is sympathetic to the epistemological comparison of unsupervised clustering, abstraction and generative algorithms to human cognition and sceptical about ML’s mechanisms having ontological implications. His epistemological commitments are that we learn to identify “natural kinds through clustering algorithms”, “essential properties via abstraction algorithms”, and “unrealized possibilities via generative models” “or something very much like them.” The same issue contains a commentary (...)
  6. Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  7. Performance Comparison and Implementation of Bayesian Variants for Network Intrusion Detection.Tosin Ige & Christopher Kiekintveld - 2023 - Proceedings of the IEEE 1:5.
    Bayesian classifiers perform well when each of the features is completely independent of the other which is not always valid in real world applications. The aim of this study is to implement and compare the performances of each variant of the Bayesian classifier (Multinomial, Bernoulli, and Gaussian) on anomaly detection in network intrusion, and to investigate whether there is any association between each variant’s assumption and their performance. Our investigation showed that each variant of the Bayesian algorithm blindly follows its (...)
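    The variant-specific assumptions the abstract refers to can be made concrete with a toy sketch (hypothetical token counts, not the paper's intrusion dataset): a multinomial model scores repeated token counts, while a Bernoulli model only sees presence or absence of each token.

```python
import math
from collections import defaultdict

# Toy sketch (hypothetical token counts, NOT the paper's intrusion dataset):
# classify a connection as "normal" vs "attack", comparing the likelihood
# assumptions of two naive Bayes variants.

train = [
    ({"syn": 1, "ack": 3}, "normal"),
    ({"syn": 1, "ack": 2, "data": 5}, "normal"),
    ({"syn": 6, "ack": 0}, "attack"),   # SYN-flood-like pattern
    ({"syn": 5, "ack": 1}, "attack"),
]
vocab = {"syn", "ack", "data"}

def multinomial_loglik(x, docs):
    # Multinomial NB: token counts follow a multinomial distribution.
    counts, total = defaultdict(int), 0
    for doc in docs:
        for t, c in doc.items():
            counts[t] += c
            total += c
    return sum(c * math.log((counts[t] + 1) / (total + len(vocab)))
               for t, c in x.items())

def bernoulli_loglik(x, docs):
    # Bernoulli NB: only the presence/absence of each token matters.
    n, ll = len(docs), 0.0
    for t in vocab:
        p = (sum(1 for d in docs if d.get(t, 0) > 0) + 1) / (n + 2)
        ll += math.log(p if x.get(t, 0) > 0 else 1 - p)
    return ll

def classify(x, loglik):
    by_cls = {}
    for doc, y in train:
        by_cls.setdefault(y, []).append(doc)
    return max(by_cls, key=lambda y: math.log(len(by_cls[y]) / len(train))
               + loglik(x, by_cls[y]))

probe = {"syn": 7}                      # many SYNs, no ACKs
print(classify(probe, multinomial_loglik), classify(probe, bernoulli_loglik))
```

    Both variants flag the SYN-heavy probe here, but because each variant "blindly follows its assumption", their verdicts can diverge on inputs where count magnitudes and mere presence tell different stories.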
  8. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
  9. Operationalising Representation in Natural Language Processing.Jacqueline Harding - forthcoming - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
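    The probing-classifier methodology mentioned in the abstract can be sketched minimally: train a linear probe on frozen "activations" and read off whether a property is linearly decodable from its accuracy. Everything below (the synthetic activations, the property planted in dimension 0) is illustrative, not from the paper.

```python
import math
import random

# Minimal probing-classifier sketch: a logistic-regression probe trained on
# synthetic "activations". The probed property is planted in dimension 0;
# the remaining dimensions are noise. All data here is hypothetical.

random.seed(0)

def sigmoid(z):
    z = max(-30.0, min(30.0, z))        # clamp for numerical stability
    return 1.0 / (1.0 + math.exp(-z))

def make_activation(label):
    # dimension 0 carries the property; the other 7 dims are pure noise
    return [label + random.gauss(0, 0.3)] + [random.gauss(0, 1) for _ in range(7)]

data = [(make_activation(y), y) for y in [0, 1] * 50]

# train the linear probe by stochastic gradient descent
w, b, lr = [0.0] * 8, 0.0, 0.3
for _ in range(100):
    for x, y in data:
        g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

acc = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
          for x, y in data) / len(data)
print(f"probe accuracy: {acc:.2f}")     # high accuracy => property is decodable
```

    High probe accuracy is evidence that the property is encoded, which is exactly why the paper asks what further criteria such decodability must meet before it licenses a representational claim.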
  10. Encoder-Decoder Based Long Short-Term Memory (LSTM) Model for Video Captioning.Adewale Sikiru, Tosin Ige & Bolanle Matti Hafiz - forthcoming - Proceedings of the IEEE:1-6.
    This work demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over (...)
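    A minimal sketch of the 2-gram BLEU evaluation the abstract mentions, assuming a single reference caption (real evaluations typically use a library implementation and multiple references per video):

```python
import math
from collections import Counter

# Cumulative 2-gram BLEU: geometric mean of modified 1-gram and 2-gram
# precision, times a brevity penalty. Single-reference sketch only.

def bleu2(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):                    # modified 1-gram and 2-gram precision
        c_ng = Counter(zip(*[cand[i:] for i in range(n)]))
        r_ng = Counter(zip(*[ref[i:] for i in range(n)]))
        overlap = sum(min(c, r_ng[g]) for g, c in c_ng.items())
        precisions.append(overlap / max(sum(c_ng.values()), 1))
    if 0 in precisions:
        return 0.0
    # brevity penalty discourages captions shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

print(round(bleu2("a man is playing guitar", "a man is playing a guitar"), 3))
```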
  11. Making decisions with evidential probability and objective Bayesian calibration inductive logics.Mantas Radzvilas, William Peden & Francesco De Pretis - forthcoming - International Journal of Approximate Reasoning:1-37.
    Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short-run? Under what circumstances do they do better or worse? What is their (...)
  12. Can AI Abstract the Architecture of Mathematics?Posina Rayudu - manuscript
    The irrational exuberance associated with contemporary artificial intelligence (AI) reminds me of Charles Dickens: "it was the age of foolishness, it was the epoch of belief" (cf. Nature Editorial, 2016; to get a feel for the vanity fair that is AI, see Mitchell and Krakauer, 2023; Stilgoe, 2023). It is particularly distressing—feels like yet another rerun of Seinfeld, which is all about nothing (pun intended); we have seen it in the 60s and again in the 90s. AI might have had (...)
  13. Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems.Andrea Ferrario, Alessandro Facchini & Alberto Termine - manuscript
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts (...)
  14. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’ is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
  15. On Philomatics and Psychomatics for Combining Philosophy and Psychology with Mathematics.Benyamin Ghojogh & Morteza Babaie - manuscript
    We propose the concepts of philomatics and psychomatics as hybrid combinations of philosophy and psychology with mathematics. We explain four motivations for this combination which are fulfilling the desire of analytical philosophy, proposing science of philosophy, justifying mathematical algorithms by philosophy, and abstraction in both philosophy and mathematics. We enumerate various examples for philomatics and psychomatics, some of which are explained in more depth. The first example is the analysis of relation between the context principle, semantic holism, and the usage (...)
  16. From deep learning to rational machines: what the history of philosophy can teach us about the future of artificial intelligence.Cameron J. Buckner - 2023 - New York, NY: Oxford University Press.
    This book provides a framework for thinking about foundational philosophical questions surrounding machine learning as an approach to artificial intelligence. Specifically, it links recent breakthroughs in deep learning to classical empiricist philosophy of mind. In recent assessments of deep learning's current capabilities and future potential, prominent scientists have cited historical figures from the perennial philosophical debate between nativism and empiricism, which primarily concerns the origins of abstract knowledge. These empiricists were generally faculty psychologists; that is, they argued that the active (...)
  17. Justifying our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach.Andrea Ferrario - manuscript
    We address an open problem in the epistemology of artificial intelligence (AI), namely, the justification of the epistemic attitudes we have towards the trustworthiness of AI systems. We start from a key consideration: the trustworthiness of an AI is a time-relative property of the system, with two distinct facets. One is the actual trustworthiness of the AI, and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely, (...)
  18. Predicting and Preferring.Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  19. Quantum Intrinsic Curiosity Algorithms.Shanna Dobson & Julian Scaff - manuscript
    We propose a quantum curiosity algorithm as a means to implement quantum thinking into AI, and we illustrate 5 new quantum curiosity types. We then introduce 6 new hybrid quantum curiosity types combining animal and plant curiosity elements with biomimicry beyond human sensing. We then introduce 4 specialized quantum curiosity types, which incorporate quantum thinking into coding frameworks to radically transform problem-solving and discovery in science, medicine, and systems analysis. We conclude with a forecasting of the future of quantum thinking (...)
  20. Machine Learning, Misinformation, and Citizen Science.Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.
  21. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  22. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains.Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at a high risk of opioid abuse while a patient expressly reports oppositely. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi1 argues that a prescriber who (...)
  23. Machine learning and the quest for objectivity in climate model parameterization.Julie Jebeile, Vincent Lam, Mason Majszak & Tim Räz - 2023 - Climatic Change 176 (101).
    Parameterization and parameter tuning are central aspects of climate modeling, and there is widespread consensus that these procedures involve certain subjective elements. Even if the use of these subjective elements is not necessarily epistemically problematic, there is an intuitive appeal for replacing them with more objective (automated) methods, such as machine learning. Relying on several case studies, we argue that, while machine learning techniques may help to improve climate model parameterization in several ways, they still require expert judgment that involves (...)
  24. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  25. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  26. The deep neural network approach to the reference class problem.Oliver Buchholz - 2023 - Synthese 201 (3):1-24.
    Methods of machine learning (ML) are gradually complementing and sometimes even replacing methods of classical statistics in science. This raises the question whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). It arises whenever statistical evidence is applied to an individual object, since the individual belongs to several reference classes and evidence might vary across them. Thus, the problem consists (...)
  27. La scorciatoia.Nello Cristianini - 2023 - Bologna: Il Mulino.
    La scorciatoia: how machines became intelligent without thinking in a human way. Our creatures are different from us, and sometimes stronger; to live alongside them we must learn to know them. They screen CVs, grant mortgages, and choose the news we read: intelligent machines have entered our lives, but they are not what we expected. They do many of the things we wanted, and a few more besides, yet we cannot understand them or reason with them, because their behaviour is (...)
  28. (Un)Fairness in AI: An Intersectional Feminist Analysis.Youjin Kong - 2022 - Blog of the American Philosophical Association, Women in Philosophy Series.
    Racial, Gender, and Intersectional Biases in AI / Dominant View of Intersectional Fairness in the AI Literature / Three Fundamental Problems with the Dominant View: 1. Overemphasis on Intersections of Attributes; 2. Dilemma between Infinite Regress and Fairness Gerrymandering; 3. Narrow Understanding of Fairness as Parity / Rethinking AI Fairness: from Weak to Strong Fairness.
  29. Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis.Youjin Kong - 2022 - Facct: Proceedings of the Acm Conference on Fairness, Accountability, and Transparency:485-494.
    A growing number of studies on fairness in artificial intelligence (AI) use the notion of intersectionality to measure AI fairness. Most of these studies take intersectional fairness to be a matter of statistical parity among intersectional subgroups: an AI algorithm is “intersectionally fair” if the probability of the outcome is roughly the same across all subgroups defined by different combinations of the protected attributes. This paper identifies and examines three fundamental problems with this dominant interpretation of intersectional fairness in AI. (...)
  30. Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - Aistats.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
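    The weighted-mixture property the abstract states (the mixture's expected total reward in an environment is the weighted average of the component agents' expected total rewards) can be checked with a small Monte Carlo sketch; the two agents and their rewards below are made up purely for illustration:

```python
import random

# Monte Carlo check of the weighted-mixture property: the mixture agent
# commits to one component agent, drawn by weight, at the start of each
# episode. Agents and reward values are hypothetical.

random.seed(1)
rewards = {"agent_a": 3.0, "agent_b": 7.0}    # expected total reward per agent
weights = {"agent_a": 0.25, "agent_b": 0.75}  # mixture weights (sum to 1)

def mixture_episode():
    agent = random.choices(list(weights), weights=list(weights.values()))[0]
    return rewards[agent]

n = 100_000
estimate = sum(mixture_episode() for _ in range(n)) / n
analytic = sum(weights[a] * rewards[a] for a in rewards)  # 0.25*3 + 0.75*7 = 6.0
print(round(analytic, 2), round(estimate, 2))
```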
  31. Raising Ethical Machines: Bottom-Up Methods to Implementing Machine Ethics.Marten H. L. Kaas - 2021 - In Steven John Thompson (ed.), Machine Law, Ethics, and Morality in the Age of Artificial Intelligence. IGI Global. pp. 47-68.
    The ethical decision-making and behaviour of artificially intelligent systems is increasingly important given the prevalence of these systems and the impact they can have on human well-being. Many current approaches to implementing machine ethics utilize top-down approaches, that is, ensuring the ethical decision-making and behaviour of an agent via its adherence to explicitly defined ethical rules or principles. Despite the attractiveness of this approach, this chapter explores how all top-down approaches to implementing machine ethics are fundamentally limited and how bottom-up (...)
  32. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  33. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)
  34. Does Artificial Intelligence Use Private Language?Ryan Miller - forthcoming - In Proceedings of the International Ludwig Wittgenstein Symposium 2021. Vienna: Lit Verlag.
    Wittgenstein’s Private Language Argument holds that language requires rule-following, rule following requires the possibility of error, error is precluded in pure introspection, and inner mental life is known only by pure introspection, thus language cannot exist entirely within inner mental life. Fodor defends his Language of Thought program against the Private Language Argument with a dilemma: either privacy is so narrow that internal mental life can be known outside of introspection, or so broad that computer language serves as a counter-example. (...)
  35. Machine Learning, Functions and Goals.Patrick Butlin - 2022 - Croatian Journal of Philosophy 22 (66):351-370.
    Machine learning researchers distinguish between reinforcement learning and supervised learning and refer to reinforcement learning systems as “agents”. This paper vindicates the claim that systems trained by reinforcement learning are agents while those trained by supervised learning are not. Systems of both kinds satisfy Dretske’s criteria for agency, because they both learn to produce outputs selectively in response to inputs. However, reinforcement learning is sensitive to the instrumental value of outputs, giving rise to systems which exploit the effects of outputs (...)
  36. Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  37. Model-induced escape.Barry Smith - 2022 - Facing the Future, Facing the Screen: 10th Budapest Visual Learning Conference.
    We can illustrate the phenomenon of model-induced escape by examining spam filters. Spam filter A is, we can assume, very effective at blocking spam. Indeed, it is so effective that it motivates the authors of spam to invent new types of spam that will beat the filters of spam filter A. An example of this phenomenon in the realm of philosophy is illustrated in the work of Nyíri on Wittgenstein's political beliefs. Nyíri writes a paper demonstrating (...)
  38. Proceedings of the First Turkish Conference on AI and Artificial Neural Networks.Kemal Oflazer, Varol Akman, H. Altay Guvenir & Ugur Halici - 1992 - Ankara, Turkey: Bilkent Meteksan Publishing.
    This is the proceedings of the "1st Turkish Conference on AI and ANNs," K. Oflazer, V. Akman, H. A. Guvenir, and U. Halici (editors). The conference was held at Bilkent University, Bilkent, Ankara on 25-26 June 1992. Language of contributions: English and Turkish.
  39. THE ROBOTS ARE COMING: What’s Happening in Philosophy (WHiP)-The Philosophers, August 2022.Jeff Hawley - 2022 - Philosophynews.Com.
    Should we fear a future in which the already tricky world of academic publishing is increasingly crowded out by super-intelligent artificial general intelligence (AGI) systems writing papers on phenomenology and ethics? What are the chances that AGI advances to a stage where a human philosophy instructor is similarly removed from the equation? If Jobst Landgrebe and Barry Smith are correct, we have nothing to fear.
  40. Clinical Ethics – To Compute, or Not to Compute?Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (12):W1-W4.
    Can machine intelligence do clinical ethics? And if so, would applying it to actual medical cases be desirable? In a recent target article (Meier et al. 2022), we described the piloting of our advisory algorithm METHAD. Here, we reply to commentaries published in response to our project. The commentaries fall into two broad categories: concrete criticism that concerns the development of METHAD; and the more general question as to whether one should employ decision-support systems of this kind—the debate we set (...)
  41. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics.Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
    3 citations
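The abstract above describes the general mechanism of a fuzzy cognitive map: concept activations are repeatedly updated from weighted causal influences until the map settles. The following is a minimal illustrative sketch of that generic FCM update rule, not the authors' METHAD system; the function names, the sigmoid squashing choice, and the toy weights are assumptions for illustration.

```python
import math

def fcm_step(activations, weights, steepness=1.0):
    """One synchronous update of a fuzzy cognitive map.

    activations: current concept activations in [0, 1].
    weights: weights[j][i] is the causal influence of concept j on concept i.
    Each new activation is a sigmoid of the weighted input the concept receives.
    """
    n = len(activations)
    new = []
    for i in range(n):
        s = sum(weights[j][i] * activations[j] for j in range(n))
        new.append(1.0 / (1.0 + math.exp(-steepness * s)))
    return new

def run_fcm(initial, weights, steps=20):
    """Iterate the map until (approximate) convergence or the step limit."""
    state = list(initial)
    for _ in range(steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(state, nxt)) < 1e-6:
            return nxt
        state = nxt
    return state
```

In an approach like the one described, the weight matrix would be tuned (for instance by a genetic algorithm) so that designated output concepts reproduce the ethicists' recommendations on training cases; here the weights are fixed toy values.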
  42. Calculating the mind-change complexity of learning algebraic structures.Luca San Mauro, Nikolay Bazhenov & Vittorio Cipriani - 2022 - In Ulrich Berger, Johanna N. Y. Franklin, Florin Manea & Arno Pauly (eds.), Revolutions and Revelations in Computability. pp. 1-12.
    This paper studies algorithmic learning theory applied to algebraic structures. In previous papers, we have defined our framework, where a learner, given a family of structures, receives larger and larger pieces of an arbitrary copy of a structure in the family and, at each stage, is required to output a conjecture about the isomorphism type of such a structure. The learning is successful if there is a learner that eventually stabilizes to a correct conjecture. Here, we analyze the number of (...)
  43. On the Turing complexity of learning finite families of algebraic structures.Luca San Mauro & Nikolay Bazhenov - 2021 - Journal of Logic and Computation 31 (7):1891-1900.
    In previous work, we have combined computable structure theory and algorithmic learning theory to study which families of algebraic structures are learnable in the limit (up to isomorphism). In this paper, we measure the computational power that is needed to learn finite families of structures. In particular, we prove that, if a family of structures is both finite and learnable, then any oracle which computes the Halting set is able to achieve such learning. On the other hand, we construct (...)
  44. Learning families of algebraic structures from informant.Luca San Mauro, Nikolay Bazhenov & Ekaterina Fokina - 2020 - Information and Computation 275:104590.
    We combine computable structure theory and algorithmic learning theory to study learning of families of algebraic structures. Our main result is a model-theoretic characterization of the learning type InfEx_≅, consisting of the structures whose isomorphism types can be learned in the limit. We show that a family of structures is InfEx_≅-learnable if and only if the structures can be distinguished in terms of their Σ^inf_2-theories. We apply this characterization to familiar cases and we show the following: there is an infinite (...)
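The learning paradigm that entries 42-44 study can be illustrated with a finite toy model: a learner receives ever larger pieces of information about one object, conjectures after each piece which member of a known family it is observing, and succeeds if its conjectures eventually stabilize on a correct answer. The sketch below is only an illustration of identification in the limit under these assumptions; it is not the authors' InfEx_≅ framework, and the function and hypothesis names are hypothetical.

```python
def limit_learner(stream, hypotheses):
    """Toy identification in the limit.

    stream: iterable of observed facts about a single unknown object.
    hypotheses: dict mapping a hypothesis name to the set of facts it predicts.
    After each new fact, conjecture the first hypothesis still consistent
    with everything seen so far; learning succeeds if the sequence of
    conjectures stabilizes on a correct hypothesis.
    """
    seen = set()
    conjectures = []
    for fact in stream:
        seen.add(fact)
        consistent = [h for h, facts in hypotheses.items() if seen <= facts]
        conjectures.append(consistent[0] if consistent else None)
    return conjectures
```

For example, with hypotheses {"A": {1, 2, 3}, "B": {1, 2, 3, 4}} and the observation stream [1, 2, 4], the learner conjectures "A" twice and then switches permanently to "B" once the fact 4 rules "A" out; counting such mind changes is exactly the complexity measure studied in entry 42.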
  45. On characterizations of learnability with computable learners.Tom F. Sterkenburg - 2022 - Proceedings of Machine Learning Research 178:3365-3379.
    We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning. We give a characterization of a closely related notion of strong CPAC learning, and provide a negative answer to the COLT open problem posed by Agarwal et al. (2021) whether all decidably representable VC classes are improperly CPAC learnable. Second, we consider undecidability of (computable) PAC learnability. We give a simple general (...)
  46. A Falsificationist Account of Artificial Neural Networks.Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
    Machine learning operates at the intersection of statistics and computer science. This raises the question as to its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received minor attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules from past (...)
  47. Big Data and Artificial Intelligence Based on Personalized Learning – Conformity with Whitehead’s Organismic Theory.Rossitza Kaltenborn & Mintcho Hadjiski - 2022 - In F. Riffert & V. Petrov (eds.), Education and Learning in a World of Accelerated Knowledge Growth: Current Trends in Process Thought.
    The study shows the existence of a broad conformity between Whitehead’s organismic cosmology and the contemporary theory of complex systems at a relevant level of abstraction. One of the most promising directions of educational transformation in the age of big data and artificial intelligence – personalized learning – is conceived as a system of systems and reveals its close congruence with a number of basic Whiteheadian concepts. A new functional structure of personalized learning systems is proposed, including all the core (...)
  48. Embedding the assessment of emotion in the learning process with AI-driven technologies.Rossitza Kaltenborn - 2019 - In Vesselin Petrov & Katie Andersen (eds.), Traditional Learning Theories, Process Philosophy and AI.
    This paper examines the possibility of an objective evaluation of emotions occurring within the learning process and methods for embedding such an evaluation in advanced learning systems. The main conceptual understandings of emotion in learning and teaching are systematized, with an emphasis on the process philosophy approach. Different models of emotion are considered and the possible generalization of Whitehead’s approach to the role of emotion in education is examined. Special attention is given to significant developments in artificial intelligence in identifying (...)
  49. Stimuli-Based Control of Negative Emotions in a Digital Learning Environment.Rossitza Kaltenborn, Mincho Hadjiski & Stefan Koynov - 2022 - In V. Sgurev, V. Jotsov & J. Kacprzyk (eds.), Advances in Intelligent Systems Research and Innovation. Cambridge, United Kingdom.
    The proposed system for coping with negative emotions arising during the learning process is considered an embedded part of the complex intelligent learning system realized in a digital environment. By applying data-driven procedures to current and retrospective data, the main didactic stimuli provoking emotion generation are identified. These are examined as dominant negative emotions in the context of learning. Due to the presence of strong internal and output interconnections between teaching and emotional states, an intelligent decoupling multidimensional control scheme (...)
  50. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance.Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
    1 citation
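The penalty mechanism described in entry 50 can be sketched as reward shaping: at training time the environment also runs the NPC's policy on a counterfactual observation in which the pseudo-visible player has been masked out, and penalizes the NPC whenever its actual action diverges from the counterfactual one. The sketch below is a minimal illustration under those assumptions; the function names and the flat penalty are hypothetical, not taken from the paper.

```python
def shaped_reward(policy, obs, base_reward, mask_player, penalty=1.0):
    """Training-time reward shaping for pseudo-visibility (illustrative).

    policy: function from an observation to an action.
    obs: the NPC's true observation, which includes the pseudo-visible player.
    mask_player: returns obs with the player rendered invisible.
    The NPC is penalized whenever it acts differently from how it would
    act if the player were invisible.
    """
    actual = policy(obs)
    counterfactual = policy(mask_player(obs))
    if actual != counterfactual:
        return base_reward - penalty
    return base_reward
```

With a toy policy that chases any visible player and patrols otherwise, chasing a pseudo-visible player is penalized, while behaving identically to the masked case is not; over training this incentivizes the selective ignoring the abstract describes.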