95 found

  1. Extending Environments To Measure Self-Reflection In Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - manuscript
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus, an agent's self-reflection ability can be numerically estimated by running the agent through a battery (...)
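    A minimal sketch of the extended-environment idea described above, under assumptions of our own: a toy policy interface mapping an observation sequence to an action, and a reward based on the agent's behaviour on a counterfactual history (all names below are hypothetical; the paper's actual battery of environments is not reproduced).

      def extended_env_step(policy, history, observation):
          """One step of a toy extended environment: the reward depends on
          what the agent *would* do on a counterfactual history, not only
          on what it actually does."""
          actual = policy(history + [observation])
          # The environment simulates the agent on a reversed past.
          hypothetical = policy(list(reversed(history)) + [observation])
          # Reward agreement between actual and hypothetical behaviour.
          return 1.0 if actual == hypothetical else 0.0

      # A history-insensitive policy is unaffected by the simulation; a
      # history-sensitive one is scored on behaviour it never displays.
      constant = lambda obs: 0
      recency = lambda obs: obs[-2] % 2 if len(obs) > 1 else 0
      print(extended_env_step(constant, [2, 3], 4))  # 1.0
      print(extended_env_step(recency, [2, 3], 4))   # 0.0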
  2. A Statistical Learning Approach to a Problem of Induction. Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At the weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in (...)
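    For context, the kind of VC generalization bound that the reviewed answer to the problem rests on, in standard textbook form (the exact constants vary by presentation and are not taken from the paper): with probability at least 1 - \delta over an i.i.d. sample of size n, every hypothesis h in a class of finite VC dimension d satisfies

      % Standard VC generalization bound (textbook constants).
      \[
        R(h) \;\le\; \widehat{R}_n(h)
          \;+\; \sqrt{\frac{8}{n}\Bigl(d \ln \frac{2en}{d} + \ln \frac{4}{\delta}\Bigr)},
      \]
      % where R(h) is the true risk and \widehat{R}_n(h) the empirical risk.
      % What matters for the induction debate is only the d/n dependence.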
  3. Can Reinforcement Learning Learn Itself? A Reply to 'Reward is Enough'. Samuel Allen Alexander - forthcoming - CIFMA 2021.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al. did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  4. Autognorics Approach to the Problem of Defining Life and Artificial Intelligence. Joey Lawsin - forthcoming
    Many thinkers, past and present, have tried to solve the underlying mystery of Life. Yet, no one has ever categorically expressed its exact concrete essence, scope, or meaning until a new school of thought known as Originemology was conceptualized in 1988 by Joey Lawsin. Life and consciousness cannot be explained properly because their theoretical and philosophical bases are wrong. When the bases are incorrect, the outcomes are incorrect. Like the words associated with life such as alive, aware, conscious, intelligent, (...)
  5. Believing in Black Boxes: Must Machine Learning in Healthcare Be Explainable to Be Evidence-Based? Liam McCoy, Connor Brenna, Stacy Chen, Karina Vold & Sunit Das - forthcoming - Journal of Clinical Epidemiology.
    Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and opposition to explainability in MLHC. Results: We find that concerns regarding explainability are (...)
  6. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
  7. A Note on the Learning-Theoretic Characterizations of Randomness and Convergence. Tomasz Steifer - forthcoming - Review of Symbolic Logic:1-15.
    Recently, a connection has been established between two branches of computability theory, namely between algorithmic randomness and algorithmic learning theory. Learning-theoretical characterizations of several notions of randomness were discovered. We study such characterizations based on the asymptotic density of positive answers. In particular, this note provides a new learning-theoretic definition of weak 2-randomness, solving the problem posed by Zaffora Blando (Rev. Symb. Log. 2019). The note also highlights the close connection between these characterizations and the problem of convergence on random (...)
  8. On Explaining the Success of Induction. Tom F. Sterkenburg - forthcoming - British Journal for the Philosophy of Science.
    Douven (in press) observes that Schurz's meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this paper, I argue that Douven's account does not address the explanatory question that Schurz's argument leaves open, and that the assumption of the environment's induction-friendliness that is inherent to Douven's simulations is not justified by Schurz's argument.
  9. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - forthcoming - Philosophy of Science.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  10. Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2022 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  11. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine [Interpretability and Explainability of Phenomena Predicted by Machine Learning]. Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The lack of explainability of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation established in the mathematical and causal modelling of a phenomenon (...)
  12. Philosophical Foundations of Intelligence Collection and Analysis: A Defense of Ontological Realism. William Mandrick & Barry Smith - 2022 - Intelligence and National Security 38.
    There is a common misconception across the Intelligence Community (IC) to the effect that information trapped within multiple heterogeneous data silos can be semantically integrated by the sorts of meaning-blind statistical methods employed in much of artificial intelligence (AI) and natural language processing (NLP). This leads to the misconception that incoming data can be analysed coherently by relying exclusively on the use of statistical algorithms and thus without any shared framework for classifying what the data are about. Unfortunately, such approaches (...)
  13. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
  14. Understanding From Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  15. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  16. Correlation Isn't Good Enough: Causal Explanation and Big Data. [REVIEW] Frank Cabrera - 2021 - Metascience 30 (2):335-338.
    A review of Gary Smith and Jay Cordes: The Phantom Pattern Problem: The Mirage of Big Data. New York: Oxford University Press, 2020.
  17. Making AI Intelligible: Philosophical Foundations. Herman Cappelen & Joshua Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
  18. Towards Knowledge-Driven Distillation and Explanation of Black-Box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
  19. Fair Machine Learning Under Partial Compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how do partial compliance and the consequent strategic behavior of decision subjects affect the allocation outcomes? (...)
  20. Microethics for Healthcare Data Science: Attention to Capabilities in Sociotechnical Systems. Mark Graves & Emanuele Ratti - 2021 - The Future of Science and Ethics 6:64-73.
    It has been argued that ethical frameworks for data science often fail to foster ethical behavior, and they can be difficult to implement due to their vague and ambiguous nature. In order to overcome these limitations of current ethical frameworks, we propose to integrate the analysis of the connections between technical choices and sociocultural factors into the data science process, and show how these connections have consequences for what data subjects can do, accomplish, and be. Using healthcare as an example, (...)
  21. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction. Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of CHD events (...)
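    A hedged sketch of the comparative workflow the abstract describes, on synthetic stand-in data (the paper's dataset, features, model choices, and scores are not reproduced; everything below is illustrative):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Synthetic stand-in for a CHD risk dataset (rows = patients).
      X, y = make_classification(n_samples=500, n_features=12, random_state=0)

      models = {
          "logistic_regression": LogisticRegression(max_iter=1000),
          "random_forest": RandomForestClassifier(random_state=0),
          "svm_rbf": SVC(),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5)  # 5-fold CV accuracy
          print(f"{name}: mean accuracy {scores.mean():.3f}")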
  22. Tecno-especies: la humanidad que se hace a sí misma y los desechables [Techno-species: The Humanity That Makes Itself and the Disposable Ones]. Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  23. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  24. Healthcare and Anomaly Detection: Using Machine Learning to Predict Anomalies in Heart Rate Data. Edin Šabić, David Keeley, Bailey Henderson & Sara Nannemann - 2021 - AI and Society 36 (1):149-158.
    The application of machine learning algorithms to healthcare data can enhance patient care while also reducing healthcare worker cognitive load. These algorithms can be used to detect anomalous physiological readings, potentially leading to expedited emergency response or new knowledge about the development of a health condition. However, while there has been much research conducted in assessing the performance of anomaly detection algorithms on well-known public datasets, there is less conceptual comparison across unsupervised and supervised performance on physiological data. Moreover, while (...)
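    On the unsupervised side of the comparison above, a minimal sketch of anomaly detection on a synthetic heart-rate series (IsolationForest is one common choice; the data, contamination rate, and injected anomaly are assumptions for illustration, not the paper's):

      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      heart_rate = rng.normal(70, 5, size=500)     # resting baseline, bpm
      heart_rate[100:105] = rng.normal(150, 5, 5)  # injected anomalous burst

      # Unsupervised detector: no labels are used during fitting.
      detector = IsolationForest(contamination=0.01, random_state=0)
      labels = detector.fit_predict(heart_rate.reshape(-1, 1))  # -1 = anomaly

      print("anomalous indices:", np.where(labels == -1)[0])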
  25. Predicting Me: The Route to Digital Immortality? Paul Smart - 2021 - In Robert W. Clowes, Klaus Gärtner & Inês Hipólito (eds.), The Mind-Technology Problem: Investigating Minds, Selves and 21st Century Artefacts. Cham, Switzerland: Springer. pp. 185–207.
    An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system that relies on generative models to predict the structure of sensory information. Such a view resonates with a body of work in machine learning that has explored the problem-solving capabilities of hierarchically-organized, multi-layer (i.e., deep) neural networks, many of which acquire and deploy generative models of their training data. The present chapter explores the extent to which the ostensible convergence on a common neurocomputational architecture (...)
  26. The No-Free-Lunch Theorems of Supervised Learning. Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
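    For reference, the shape of the no-free-lunch statement at issue, paraphrased in Wolpert's style (textbook form, not the paper's notation): averaged uniformly over all target functions f, any two learning algorithms have identical expected off-training-set (OTS) error,

      % For any two algorithms A_1, A_2 and any training-set size m:
      \[
        \sum_{f} P(\text{OTS error} \mid f, m, A_1)
        \;=\;
        \sum_{f} P(\text{OTS error} \mid f, m, A_2).
      \]
      % Justifying a particular algorithm therefore requires an inductive
      % bias, which is the conception of learning the paper interrogates.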
  27. Ethical Implications of Alzheimer’s Disease Prediction in Asymptomatic Individuals Through Artificial Intelligence. Frank Ursin, Cristian Timmermann & Florian Steger - 2021 - Diagnostics 11 (3):440.
    Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to the early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward (...)
  28. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2021 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  29. The Archimedean Trap: Why Traditional Reinforcement Learning Will Probably Not Yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
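    The classical Archimedean property that the paper generalizes, stated here in its standard form for the reals (the paper's non-numeric generalization is not reproduced):

      \[
        \forall x, y \in \mathbb{R}_{>0} \;\; \exists n \in \mathbb{N} : \; n x > y.
      \]
      % Reward structures violating a suitable generalization of this
      % property cannot be faithfully encoded as real-valued rewards,
      % which is the gap the abstract's argument exploits.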
  30. Genealogy of Algorithms: Datafication as Transvaluation. Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.
    This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond enumeration in (...)
  31. Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  32. On Uniform Definability of Types Over Finite Sets for NIP Formulas. Shlomo Eshel & Itay Kaplan - 2020 - Journal of Mathematical Logic 21 (3).
    Combining two results from machine learning theory we prove that a formula is NIP if and only if it satisfies uniform definability of types over finite sets. This settles a conjecture of La...
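    A background fact linking the two areas, stated for orientation (this is the standard NIP/VC correspondence, not the paper's new result): a formula is NIP exactly when the family of sets it defines has finite VC dimension,

      \[
        \varphi \text{ is NIP}
        \iff
        \mathrm{VC}\bigl(\{\, \varphi(M, b) : b \in M^{|y|} \,\}\bigr) < \infty,
      \]
      % i.e. no arbitrarily large finite set is shattered by instances of \varphi.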
  33. Perceptron Connectives in Knowledge Representation. Pietro Galliani, Guendalina Righetti, Daniele Porello, Oliver Kutz & Nicolas Toquard - 2020 - In Knowledge Engineering and Knowledge Management - 22nd International Conference, EKAW 2020, Bolzano, Italy, September 16-20, 2020, Proceedings. Lecture Notes in Computer Science 12387. pp. 183-193.
    We discuss the role of perceptron (or threshold) connectives in the context of Description Logic, and in particular their possible use as a bridge between statistical learning of models from data and logical reasoning over knowledge bases. We prove that such connectives can be added to the language of most forms of Description Logic without increasing the complexity of the corresponding inference problem. We show, with a practical example over the Gene Ontology, how even simple instances of perceptron connectives are (...)
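    An illustrative reading of a perceptron (threshold) connective, with notation assumed for exposition rather than taken from the paper: an individual falls under the compound concept when a weighted count of the component concepts it satisfies reaches a threshold,

      \[
        x \in \nabla^{t}_{w_1,\dots,w_k}(C_1,\dots,C_k)
        \quad\text{iff}\quad
        \sum_{i=1}^{k} w_i \cdot \mathbf{1}[\, x \in C_i \,] \;\ge\; t.
      \]
      % With all w_i = 1 and t = 2 this expresses "at least two of
      % C_1, ..., C_k", which plain Boolean connectives capture only
      % via a much larger disjunction.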
  34. Humanistic Interpretation and Machine Learning. Juho Paakkonen & Petri Ylikoski - 2020 - Synthese 199 (1-2):1-37.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the (...)
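    A minimal sketch of the topic-modeling workflow the abstract examines, using scikit-learn's LatentDirichletAllocation on a toy corpus (the corpus, vectorizer settings, and topic count are illustrative assumptions, not from the paper):

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      corpus = [
          "the court ruled on the new data protection law",
          "the election campaign focused on economic policy",
          "judges interpret the law in light of precedent",
          "voters respond to campaign messaging on the economy",
      ]

      vectorizer = CountVectorizer(stop_words="english")
      X = vectorizer.fit_transform(corpus)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

      # Inspecting top words per topic is the step where human
      # interpretation (re-)enters the unsupervised pipeline.
      terms = vectorizer.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[-4:][::-1]]
          print(f"topic {k}: {top}")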
  35. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
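    A worked illustration of the P/NP asymmetry the abstract rehearses (a generic example, not taken from the chapter): checking a proposed satisfying assignment for a CNF formula takes time linear in the formula's size, while finding one is not known to be polynomial.

      def verify_sat(clauses, assignment):
          """Verify an NP certificate: clauses are lists of DIMACS-style
          integer literals; assignment maps variable -> bool.
          Runs in O(total number of literals)."""
          return all(
              any(assignment[abs(lit)] == (lit > 0) for lit in clause)
              for clause in clauses
          )

      # (x1 or not x2) and (x2 or x3)
      clauses = [[1, -2], [2, 3]]
      print(verify_sat(clauses, {1: True, 2: False, 3: True}))  # True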
  36. Machine learning, inductive reasoning, and reliability of generalisations. Petr Spelda - 2020 - AI and Society 35 (1):29-37.
    The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues regarding inductive reasoning and reliability of generalisations. Towards this aim, the paper proceeds as follows. First, it expounds Price’s dual image of representation in terms of the notions of e-representations and i-representations that constitute subject naturalism. For Price, this is not a strictly anti-representationalist position but rather a dualist one (e- and i-representations). Second, the paper links this debate (...)
  37. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs. Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al., 2019 in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive inferences. The (...)
  38. What Can Artificial Intelligence Do for Scientific Realism? Petr Spelda & Vit Stritecky - 2020 - Axiomathes 31 (1):85-104.
    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather a reinforcement, as it rejects the retrospective interpretations of scientific progress, which brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for (...)
  39. Will Hominoids or Androids Destroy the Earth? A Review of How to Create a Mind by Ray Kurzweil (2012) (revised 2019). Michael Richard Starks - 2020 - In Welcome to Hell on Earth: Babies, Climate Change, Bitcoin, Cartels, China, Democracy, Diversity, Dysgenics, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, the Web, Chaos, Hunger, Disease, Violence, Artificial Intelligence, War. Las Vegas, NV USA: Reality Press. pp. 146-158.
    A few years ago I could usually tell from the title of a book, or at least from its chapter titles, what kinds of philosophical errors would be made and how frequently. In the case of nominally scientific works, these might be largely restricted to certain chapters that wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical gibberish as to what these facts mean. The clear distinctions that Wittgenstein described some 80 years ago between scientific matters and their descriptions by various language games are rarely considered, so one is alternately wowed by the science and dismayed by the incoherence of the analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs to have a rational logical structure and an understanding of the two systems of thought (dual process theory). If one is to philosophize about this, one needs to understand the distinction between scientific questions of fact and philosophical questions about how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism, but Kurzweil, like most students of behavior, is largely ignorant of these matters. He is enchanted by models, theories, and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are merely ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (AI's most famous critic) likes to say, clear conditions of satisfaction (COS)). I have tried to make a start on this in my recent writings. Those who want a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle', 2nd ed. (2019). Those interested in more of my writings may see 'Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet: Articles and Reviews 2006-2019', 3rd ed. (2019), and 'Suicidal Utopian Delusions in the 21st Century', 4th ed. (2019).
  40. The Meta-Inductive Justification of Induction. Tom F. Sterkenburg - 2020 - Episteme 17 (4):519-541.
    I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice. My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge. The first qualification concerns the empirical success of induction. Even though, I argue, Schurz's argument does not need (...)
  41. Deep Learning: A Philosophical Introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10).
  42. On the Possibility of Emotional Robots. Godwin Darmanin - 2019 - Revista de Filosofia Aurora 31 (54).
    In this article, I examine whether the possibility exists that in the foreseeable future, robot technology will permit the development of emotional robots. As the title suggests, the content is of a technological as well as of a philosophical nature. As a matter of fact, my aim in writing this paper was that of bridging two distinctive fields in a world where humanity has become accustomed to technological innovations while overlooking any consequential complications arising from such inventions. To this end, (...)
  43. Psychopower and Ordinary Madness: Reticulated Dividuals in Cognitive Capitalism. Ekin Erkan - 2019 - Cosmos and History 15 (1):214-241.
    Despite the seemingly neutral vantage of using nature for widely-distributed computational purposes, neither post-biological nor post-humanist teleology simply concludes with the real "end of nature" as entailed in the loss of the specific ontological status embedded in the identifier "natural." As evinced by the ecological crises of the Anthropocene—of which the 2019 Brazil Amazon rainforest fires are only the most recent—our epoch has transfixed the “natural order" and imposed entropic artificial integration, producing living species that become “anoetic,” made to serve (...)
  44. Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
  45. Changement de Méthode causé par la Numérisation [Method Change Caused by Digitalization]. Jörn Lengsfeld - 2019
    Digitalization goes hand in hand with a fundamental change of methods that has the potential to change people's thinking, decisions, and actions. On the basis of this thesis, a structure is proposed for the analysis of the method change induced by digitalization. The article gives a brief outline of the driving forces, the forms, and the effects of this method change.
  46. Method Change Caused by Digitalization. Jörn Lengsfeld - 2019
    Digitalization goes hand in hand with a fundamental change in methods that has the potential to change people’s thinking, decisions and actions. Departing from this thesis, a structure is proposed for the analysis of the method change induced by digitalization. The article provides a brief outline of the driving forces, the forms and the effects of this method change.
  47. Methodenwandel durch Digitalisierung [Method Change Through Digitalization]. Jörn Lengsfeld - 2019
    Digitalization goes hand in hand with a fundamental change in methods that has the potential to lastingly alter how people think, decide, and act. Departing from this thesis, a structuring framework is proposed for a closer examination of the method change induced by digitalization. The article offers a brief outline of the driving forces, the forms, and the effects of this method change.
  48. Semantic Information G Theory and Logical Bayesian Inference for Machine Learning. Chenguang Lu - 2019 - Information 10 (8):261.
    An important problem in machine learning is that, when the number of labels n > 2, it is very difficult to construct and optimize a group of learning functions, and we want the optimized learning functions to remain useful when the prior distribution P(x) (where x is an instance) changes. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists (...)
  49. The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity. Adrian Mróz - 2019 - Kultura i Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem (...)