Related

Contents (130 found; showing 1 — 50)
  1. Extending Environments To Measure Self-Reflection In Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - manuscript
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
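The mechanism behind item 1 is easy to miss in prose: an extended environment takes the agent itself as an input and can compute rewards from the agent's hypothetical behaviour. A minimal sketch of that idea, assuming agents are plain functions from interaction history to action (an illustration, not the authors' benchmark code):

```python
from typing import Callable, List, Tuple

Action = int
Observation = int
History = List[Tuple[Observation, Action]]
Agent = Callable[[History], Action]

class SelfReflectionEnv:
    """Toy *extended* environment: its reward depends on simulating the
    agent on a hypothetical history, not just on the action taken."""

    def __init__(self) -> None:
        self.history: History = []

    def step(self, agent: Agent, action: Action) -> Tuple[Observation, float]:
        # What would the agent have done on the empty history?
        hypothetical_action = agent([])
        # Reward the agent for matching its own hypothetical behaviour.
        reward = 1.0 if action == hypothetical_action else 0.0
        observation = len(self.history) % 2  # arbitrary observation signal
        self.history.append((observation, action))
        return observation, reward
```

An agent with no model of its own behaviour has no way to track what it "would have done", which is the sense in which good average performance across such environments demands self-reflection.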
  2. A statistical learning approach to a problem of induction. Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At the weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in (...)
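For context on the VC-theorem answer that item 2 examines: the fundamental theorem of statistical learning says a hypothesis class \(\mathcal{H}\) is (agnostically) PAC learnable if and only if its VC dimension \(d\) is finite, in which case the sample complexity is, up to constants,

\[
m_{\mathcal{H}}(\epsilon, \delta) \;=\; \Theta\!\left(\frac{d + \ln(1/\delta)}{\epsilon^{2}}\right),
\]

so finitely many samples suffice for excess error \(\epsilon\) with confidence \(1-\delta\). The entry's objection, as the abstract indicates, is that whether we are in this favourable regime cannot in general be computed.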
  3. Can reinforcement learning learn itself? A reply to 'Reward is enough'. Samuel Allen Alexander - forthcoming - CIFMA 2021.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence that Silver et al. did not consider in their paper, namely, the aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  4. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - forthcoming - AISTATS 2023.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
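The mixture operation in item 4 has a small footprint when written out. A sketch under the assumption that the weighted mixture is realised by sampling one component agent per environment run (the stated expectation identity then follows by linearity of expectation; this is an illustration, not the paper's formal construction):

```python
import random
from typing import Callable, Sequence

def sample_mixture_agent(agents: Sequence[Callable],
                         weights: Sequence[float]) -> Callable:
    """Draw one component agent with probability equal to its weight; the
    drawn agent then acts for the entire environment run."""
    return random.choices(list(agents), weights=list(weights), k=1)[0]

def mixture_expected_reward(expected_rewards: Sequence[float],
                            weights: Sequence[float]) -> float:
    """In any fixed environment, the mixture's expected total reward is the
    weighted average of the components' expected total rewards."""
    return sum(w * r for w, r in zip(weights, expected_rewards))
```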
  5. A Falsificationist Account of Artificial Neural Networks. Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
    Machine learning operates at the intersection of statistics and computer science. This raises the question as to its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received minor attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules from past (...)
  6. The epistemic virtues of harnessing rigorous machine learning systems in ethically-sensitive domains. Thomas F. Burns - forthcoming - Journal of Medical Ethics.
  7. Autognorics Approach to the Problem of Defining Life and Artificial Intelligence. Joey Lawsin - forthcoming
    Many thinkers, past and present, have tried to solve the underlying mystery of Life. Yet, no one has ever categorically expressed its exact concrete essence, scope, or meaning until a new school of thought known as Originemology was conceptualized in 1988 by Joey Lawsin. Life and consciousness cannot be explained properly since their theoretical and philosophical bases are wrong. When the bases are incorrect, the outcomes are incorrect. The words associated with life such as alive, aware, conscious, intelligent, and (...)
  8. Does Artificial Intelligence Use Private Language? Ryan Miller - forthcoming - In Proceedings of the International Ludwig Wittgenstein Symposium 2021.
    Wittgenstein’s Private Language Argument holds that language requires rule-following, rule following requires the possibility of error, error is precluded in pure introspection, and inner mental life is known only by pure introspection, thus language cannot exist entirely within inner mental life. Fodor defends his Language of Thought program against the Private Language Argument with a dilemma: either privacy is so narrow that internal mental life can be known outside of introspection, or so broad that computer language serves as a counter-example. (...)
  9. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
  10. A note on the learning-theoretic characterizations of randomness and convergence. Tomasz Steifer - forthcoming - Review of Symbolic Logic:1-15.
    Recently, a connection has been established between two branches of computability theory, namely between algorithmic randomness and algorithmic learning theory. Learning-theoretical characterizations of several notions of randomness were discovered. We study such characterizations based on the asymptotic density of positive answers. In particular, this note provides a new learning-theoretic definition of weak 2-randomness, solving the problem posed by (Zaffora Blando, Rev. Symb. Log. 2019). The note also highlights the close connection between these characterizations and the problem of convergence on random (...)
  11. On Explaining the Success of Induction. Tom F. Sterkenburg - forthcoming - British Journal for the Philosophy of Science.
    Douven (in press) observes that Schurz's meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this paper, I argue that Douven's account does not address the explanatory question that Schurz's argument leaves open, and that the assumption of the environment's induction-friendliness that is inherent to Douven's simulations is not justified by Schurz's argument.
  12. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - forthcoming - Ethik in der Medizin:1-27.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  13. Inter-temporal rationality without temporal representation. Simon A. B. Brown - 2023 - Mind and Language 38 (2):495-514.
    Recent influential accounts of temporal representation—the use of mental representations with explicit temporal contents, such as before and after relations and durations—sharply distinguish representation from mere sensitivity. A common, important picture of inter-temporal rationality is that it consists in maximizing total expected discounted utility across time. By analyzing reinforcement learning algorithms, this article shows that, given such notions of temporal representation and inter-temporal rationality, it would be possible for an agent to achieve inter-temporal rationality without temporal representation. It then explores (...)
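Item 13's point is concrete when set against a standard RL update. In the textbook tabular Q-learning step below (used only to illustrate the article's target, not taken from it), the discount factor gamma trades sooner against later reward, yet nothing in the state or the update explicitly represents durations or before/after relations:

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.95):
    """One TD(0)-style update. Inter-temporal discounting enters only
    through gamma; time itself is never represented by the agent."""
    best_next = max(Q[next_state].values(), default=0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Q maps states to dicts of action-values:
Q = defaultdict(lambda: defaultdict(float))
q_learning_update(Q, state="s0", action="a0", reward=1.0, next_state="s1")
```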
  14. The deep neural network approach to the reference class problem. Oliver Buchholz - 2023 - Synthese 201 (3):1-24.
    Methods of machine learning (ML) are gradually complementing and sometimes even replacing methods of classical statistics in science. This raises the question whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). It arises whenever statistical evidence is applied to an individual object, since the individual belongs to several reference classes and evidence might vary across them. Thus, the problem consists (...)
  15. La scorciatoia. Nello Cristianini - 2023 - Bologna: Il Mulino.
    La scorciatoia (The Shortcut): How machines became intelligent without thinking in a human way. Our creatures are different from us, and sometimes stronger; to live with them, we must learn to know them. They screen résumés, grant mortgages, and choose the news we read: intelligent machines have entered our lives, but they are not what we expected. They do many of the things we wanted, and even a few more, but we cannot understand them or reason with them, because their behaviour is (...)
  16. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Swansea: Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  17. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance. Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
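A sketch of the training signal item 17 describes, with hypothetical helper names (`without_pseudovisible`, the dict-shaped state, and `base_reward` are assumptions for illustration): the trainer runs the NPC's policy on the true state and on a counterfactual state in which the pseudo-visible player is invisible, and penalises any difference in behaviour.

```python
def without_pseudovisible(state: dict) -> dict:
    """Counterfactual state with pseudo-visible players hidden (stub)."""
    visible = [p for p in state["visible_players"] if not p.get("pseudo")]
    return {**state, "visible_players": visible}

def shaped_npc_reward(policy, state: dict, base_reward: float,
                      reaction_penalty: float = 1.0) -> float:
    """Penalise the NPC whenever seeing a pseudo-visible player changes
    its action relative to the counterfactual 'blind' state."""
    action_seen = policy(state)
    action_blind = policy(without_pseudovisible(state))
    penalty = reaction_penalty if action_seen != action_blind else 0.0
    return base_reward - penalty
```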
  18. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  19. Machine Learning, Functions and Goals. Patrick Butlin - 2022 - Croatian Journal of Philosophy 22 (66):351-370.
    Machine learning researchers distinguish between reinforcement learning and supervised learning and refer to reinforcement learning systems as “agents”. This paper vindicates the claim that systems trained by reinforcement learning are agents while those trained by supervised learning are not. Systems of both kinds satisfy Dretske’s criteria for agency, because they both learn to produce outputs selectively in response to inputs. However, reinforcement learning is sensitive to the instrumental value of outputs, giving rise to systems which exploit the effects of outputs (...)
  20. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine. Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The lack of explainability of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application treated as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation used in the mathematical and causal modelling of a phenomenon (...)
  21. Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  22. THE ROBOTS ARE COMING: What’s Happening in Philosophy (WHiP)-The Philosophers, August 2022. Jeff Hawley - 2022 - Philosophynews.com.
    Should we fear a future in which the already tricky world of academic publishing is increasingly crowded out by super-intelligent artificial general intelligence (AGI) systems writing papers on phenomenology and ethics? What are the chances that AGI advances to a stage where a human philosophy instructor is similarly removed from the equation? If Jobst Landgrebe and Barry Smith are correct, we have nothing to fear.
  23. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics. Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
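Item 23's machinery in its textbook form (a generic fuzzy-cognitive-map step; METHAD's actual concepts and optimised weights are not given in this listing): concept activations evolve under a weight matrix and a squashing function until they settle, and the settled values are read off as the recommendation.

```python
import numpy as np

def fcm_step(activations: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One FCM update: each concept becomes the sigmoid of the weighted sum
    of its causal inputs, with W[i, j] the influence of concept j on i."""
    return 1.0 / (1.0 + np.exp(-(W @ activations)))

def fcm_run(activations: np.ndarray, W: np.ndarray, iters: int = 100) -> np.ndarray:
    for _ in range(iters):
        activations = fcm_step(activations, W)
    return activations  # near-fixed-point values serve as the recommendation

# In the paper's setting, W would be tuned by a genetic algorithm so that the
# settled activations match the ethicists' judgements on training cases.
```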
  24. The Deskilling of Teaching and the Case for Intelligent Tutoring Systems. James Hughes - 2022 - Journal of Ethics and Emerging Technologies 31 (2):1-16.
    This essay describes trends in the organization of work that have laid the groundwork for the adoption of interactive AI-driven instruction tools, and the technological innovations that will make intelligent tutoring systems truly competitive with human teachers. Since the origin of occupational specialization, the collection and transmission of knowledge have been tied to individual careers and job roles, specifically doctors, teachers, clergy, and lawyers, the paradigmatic knowledge professionals. But these roles have also been tied to texts and organizations that can (...)
  25. AI Powered Anti-Cyber bullying system using Machine Learning Algorithm of Multinomial Naïve Bayes and Optimized Linear Support Vector Machine. Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 5.
    “Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue.” ~ Anna Maria Chavez. There has been a series of research efforts on cyber bullying, but they have been unable to provide a reliable solution to it. In this research work, we provide such a solution by developing a model capable of detecting and intercepting bullying in incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation (...)
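The two model families item 25 names are both standard text classifiers. A generic scikit-learn sketch of that setup (the toy data and feature choices are assumptions; the paper's 92% figure refers to its own dataset and tuning, not to this snippet):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["you did great today", "nobody likes you, loser"]  # toy stand-ins
labels = [0, 1]                                             # 1 = bullying

# Multinomial Naive Bayes and a linear SVM over the same bag-of-words features.
nb = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
svm = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)

print(svm.predict(["you are such a loser"]))  # intercept the message if predicted 1
```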
  26. Implementation of Data Mining on a Secure Cloud Computing over a Web API using Supervised Machine Learning Algorithm. Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 4.
    Ever since the era of the internet ushered in cloud computing, there has been increasing demand for the unlimited data available through the cloud for data analysis, pattern recognition, and technological advancement. With this also come problems of scalability, efficiency, and security threats. This research paper focuses on how data can be dynamically mined in real time for pattern detection in a secure cloud computing environment using a combination of the decision tree algorithm and Random Forest over a restful (...)
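For the model side of item 26 (the decision-tree/Random-Forest pairing; the secure-cloud and Web-API layers described in the abstract would wrap calls like `clf.predict` and are not sketched here), a minimal scikit-learn version with synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data streamed from the cloud store.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A Random Forest is itself an ensemble of decision trees, matching the
# paper's combination of the two techniques.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy for pattern detection
```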
  27. Stimuli-Based Control of Negative Emotions in a Digital Learning Environment. Rossitza Kaltenborn, Mincho Hadjiski & Stefan Koynov - 2022 - In V. Sgurev, V. Jotsov & J. Kacprzyk (eds.), Advances in Intelligent Systems Research and Innovation. Cambridge, United Kingdom.
    The proposed system for coping with negative emotions arising during the learning process is conceived as an embedded part of a complex intelligent learning system realized in a digital environment. By applying data-driven procedures to current and retrospective data, the main didactics-based stimuli provoking emotion generation are identified. These are examined as dominant negative emotions in the context of learning. Due to the presence of strong internal and output interconnections between teaching and emotional states, an intelligent decoupling multidimensional control scheme (...)
  28. Big Data and Artificial Intelligence Based on Personalized Learning – Conformity with Whitehead’s Organismic Theory. Rossitza Kaltenborn & Mintcho Hadjiski - 2022 - In F. Riffert & V. Petrov (eds.), Education and Learning in a World of Accelerated Knowledge Growth: Current Trends in Process Thought. Cambridge, United Kingdom.
    The study shows the existence of a broad conformity between Whitehead’s organismic cosmology and the contemporary theory of complex systems at a relevant level of abstraction. One of the most promising directions of educational transformation in the age of big data and artificial intelligence – personalized learning – is conceived as a system of systems and reveals its close congruence with a number of basic Whiteheadian concepts. A new functional structure of personalized learning systems is proposed, including all the core (...)
  29. (Un)Fairness in AI: An Intersectional Feminist Analysis. Youjin Kong - 2022 - Blog of the American Philosophical Association, Women in Philosophy Series.
    Contents: Racial, Gender, and Intersectional Biases in AI; Dominant View of Intersectional Fairness in the AI Literature; Three Fundamental Problems with the Dominant View (1. Overemphasis on Intersections of Attributes; 2. Dilemma between Infinite Regress and Fairness Gerrymandering; 3. Narrow Understanding of Fairness as Parity); Rethinking AI Fairness: from Weak to Strong Fairness.
  30. Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis. Youjin Kong - 2022 - FAccT: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency:485-494.
    A growing number of studies on fairness in artificial intelligence (AI) use the notion of intersectionality to measure AI fairness. Most of these studies take intersectional fairness to be a matter of statistical parity among intersectional subgroups: an AI algorithm is “intersectionally fair” if the probability of the outcome is roughly the same across all subgroups defined by different combinations of the protected attributes. This paper identifies and examines three fundamental problems with this dominant interpretation of intersectional fairness in AI. (...)
  31. Philosophical foundations of intelligence collection and analysis: a defense of ontological realism. William Mandrick & Barry Smith - 2022 - Intelligence and National Security 38.
    There is a common misconception across the Intelligence Community (IC) to the effect that information trapped within multiple heterogeneous data silos can be semantically integrated by the sorts of meaning-blind statistical methods employed in much of artificial intelligence (AI) and natural language processing (NLP). This leads to the misconception that incoming data can be analysed coherently by relying exclusively on the use of statistical algorithms and thus without any shared framework for classifying what the data are about. Unfortunately, such approaches (...)
  32. Calculating the mind-change complexity of learning algebraic structures. Luca San Mauro, Nikolay Bazhenov & Vittorio Cipriani - 2022 - In Ulrich Berger, Johanna N. Y. Franklin, Florin Manea & Arno Pauly (eds.), Revolutions and Revelations in Computability. Cham, Switzerland: pp. 1-12.
    This paper studies algorithmic learning theory applied to algebraic structures. In previous papers, we have defined our framework, where a learner, given a family of structures, receives larger and larger pieces of an arbitrary copy of a structure in the family and, at each stage, is required to output a conjecture about the isomorphism type of such a structure. The learning is successful if there is a learner that eventually stabilizes to a correct conjecture. Here, we analyze the number of (...)
  33. Algorithmic Microaggressions. Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)
  34. Clinical Ethics – To Compute, or Not to Compute? Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (12):W1-W4.
    Can machine intelligence do clinical ethics? And if so, would applying it to actual medical cases be desirable? In a recent target article (Meier et al. 2022), we described the piloting of our advisory algorithm METHAD. Here, we reply to commentaries published in response to our project. The commentaries fall into two broad categories: concrete criticism that concerns the development of METHAD; and the more general question as to whether one should employ decision-support systems of this kind—the debate we set (...)
  35. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
  36. Model-induced escape. Barry Smith - 2022 - Facing the Future, Facing the Screen: 10th Budapest Visual Learning Conference.
    We can illustrate the phenomenon of model-induced escape by examining the phenomenon of spam filters. Spam filter A is, we can assume, very effective at blocking spam. Indeed it is so effective that it motivates the authors of spam to invent new types of spam that will beat the filters of spam filter A. -/- An example of this phenomenon in the realm of philosophy is illustrated in the work of Nyíri on Wittgenstein's political beliefs. Nyíri writes a paper demonstrating (...)
  37. On characterizations of learnability with computable learners. Tom F. Sterkenburg - 2022 - Proceedings of Machine Learning Research 178:3365-3379.
    We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning. We give a characterization of a closely related notion of strong CPAC learning, and provide a negative answer to the COLT open problem posed by Agarwal et al. (2021) whether all decidably representable VC classes are improperly CPAC learnable. Second, we consider undecidability of (computable) PAC learnability. We give a simple general (...)
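As background for item 37: a class \(\mathcal{H}\) is agnostically PAC learnable if there is a learner \(A\) and a sample bound \(m(\epsilon, \delta)\) such that for every distribution \(D\) and every \(m \ge m(\epsilon, \delta)\),

\[
\Pr_{S \sim D^{m}}\Bigl[\, L_{D}\bigl(A(S)\bigr) \;\le\; \min_{h \in \mathcal{H}} L_{D}(h) + \epsilon \,\Bigr] \;\ge\; 1 - \delta,
\]

where \(L_{D}\) is the true loss. Computable PAC (CPAC) learning additionally requires that \(A\) be a computable function (and, in the proper case, that it output hypotheses from \(\mathcal{H}\)), which is what makes the characterization question studied in this entry non-trivial.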
  38. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  39. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  40. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  41. Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  42. Correlation Isn’t Good Enough: Causal Explanation and Big Data. [REVIEW] Frank Cabrera - 2021 - Metascience 30 (2):335-338.
    A review of Gary Smith and Jay Cordes: The Phantom Pattern Problem: The Mirage of Big Data. New York: Oxford University Press, 2020.
  43. Making AI Intelligible: Philosophical Foundations. Herman Cappelen & Josh Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
  44. Towards Knowledge-driven Distillation and Explanation of Black-box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
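The distillation pattern behind the Trepan-style half of item 44 can be shown generically (the bare pattern only; Trepan Reloaded additionally injects domain knowledge into the tree induction): fit an interpretable tree to mimic the black box's predictions rather than the raw labels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)

black_box = GradientBoostingClassifier().fit(X, y)   # opaque model
surrogate = DecisionTreeClassifier(max_depth=3)      # interpretable model
surrogate.fit(X, black_box.predict(X))               # mimic the model, not the labels

# Fidelity: how often the tree reproduces the black box's behaviour.
print((surrogate.predict(X) == black_box.predict(X)).mean())
```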
  45. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation outcomes? (...)
  46. Microethics for healthcare data science: attention to capabilities in sociotechnical systems. Mark Graves & Emanuele Ratti - 2021 - The Future of Science and Ethics 6:64-73.
    It has been argued that ethical frameworks for data science often fail to foster ethical behavior, and they can be difficult to implement due to their vague and ambiguous nature. In order to overcome these limitations of current ethical frameworks, we propose to integrate the analysis of the connections between technical choices and sociocultural factors into the data science process, and show how these connections have consequences for what data subjects can do, accomplish, and be. Using healthcare as an example, (...)
  47. Raising Ethical Machines: Bottom-Up Methods to Implementing Machine Ethics. Marten H. L. Kaas - 2021 - In Steven John Thompson (ed.), Machine Law, Ethics, and Morality in the Age of Artificial Intelligence. IGI Global. pp. 47-68.
    The ethical decision-making and behaviour of artificially intelligent systems is increasingly important given the prevalence of these systems and the impact they can have on human well-being. Many current approaches to implementing machine ethics utilize top-down approaches, that is, ensuring the ethical decision-making and behaviour of an agent via its adherence to explicitly defined ethical rules or principles. Despite the attractiveness of this approach, this chapter explores how all top-down approaches to implementing machine ethics are fundamentally limited and how bottom-up (...)
  48. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction. Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of CHD events (...)
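Item 48's comparative analysis has a standard shape. A sketch with assumed models and metric (the paper's dataset, features, and chosen techniques may differ):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for routine medical records with a binary CHD outcome.
X, y = make_classification(n_samples=500, n_features=13, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```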
  49. Tecno-especies: la humanidad que se hace a sí misma y los desechables. Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are redefining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  50. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)