Results for 'inductive learning'

988 found
  1. Inductive learning of structural descriptions. Thomas G. Dietterich & Ryszard S. Michalski - 1981 - Artificial Intelligence 16 (3):257-294.
  2. Inductive learning by machines. Stuart Russell - 1991 - Philosophical Studies 64 (October):37-64.
  3. Constraints and Preferences in Inductive Learning: An Experimental Study of Human and Machine Performance. Douglas L. Medin, William D. Wattenmaker & Ryszard S. Michalski - 1987 - Cognitive Science 11 (3):299-339.
    The paper examines constraints and preferences employed by people in learning decision rules from preclassified examples. Results from four experiments with human subjects were analyzed and compared with artificial intelligence (AI) inductive learning programs. The results showed that people's rule inductions tended to emphasize category validity (probability of some property, given a category) more than cue validity (probability that an entity is a member of a category given that it has some property) to a greater extent than (...)
    19 citations
  4. Inductive Learning in Small and Large Worlds. Simon M. Huttegger - 2017 - Philosophy and Phenomenological Research 95 (1):90-116.
  5. Conceptual inductive learning. Miroslav Kubat - 1991 - Artificial Intelligence 52 (2):169-182.
  6. Inductive learning from incomplete and imprecise examples. Janusz Kacprzyk & Cezary Iwański - 1991 - In B. Bouchon-Meunier, R. R. Yager & L. A. Zadeh (eds.), Uncertainty in Knowledge Bases. Springer. pp. 423-430.
  7. Inductive learning of search control rules for planning. Christopher Leckie & Ingrid Zukerman - 1998 - Artificial Intelligence 101 (1-2):63-98.
    1 citation
  8. Nonmonotonic abductive inductive learning. Oliver Ray - 2009 - Journal of Applied Logic 7 (3):329-340.
    2 citations
  9. A theory of conditioning: Inductive learning within rule-based default hierarchies. Keith J. Holyoak, Kyunghee Koh & Richard E. Nisbett - 1989 - Psychological Review 96 (2):315-340.
  10. A cognitive theory without inductive learning. Lev Goldfarb - 1992 - Behavioral and Brain Sciences 15 (3):446-447.
  11. Theory-based Bayesian models of inductive learning and reasoning. Joshua B. Tenenbaum, Thomas L. Griffiths & Charles Kemp - 2006 - Trends in Cognitive Sciences 10 (7):309-318.
  12. MDLChunker: A MDL-Based Cognitive Model of Inductive Learning. Vivien Robinet, Benoît Lemaire & Mirta B. Gordon - 2011 - Cognitive Science 35 (7):1352-1389.
    This paper presents a computational model of the way humans inductively identify and aggregate concepts from the low-level stimuli they are exposed to. Based on the idea that humans tend to select the simplest structures, it implements a dynamic hierarchical chunking mechanism in which the decision whether to create a new chunk is based on an information-theoretic criterion, the Minimum Description Length (MDL) principle. We present theoretical justifications for this approach together with results of an experiment in which participants, exposed (...)
    10 citations
  13. A theory and methodology of inductive learning. Ryszard S. Michalski - 1983 - Artificial Intelligence 20 (2):111-161.
  14. Emergence of Information Transfer by Inductive Learning. Simon M. Huttegger & Brian Skyrms - 2008 - Studia Logica 89 (2):237-256.
    We study a simple game theoretic model of information transfer which we consider to be a baseline model for capturing strategic aspects of epistemological questions. In particular, we focus on the question whether simple learning rules lead to an efficient transfer of information. We find that reinforcement learning, which is based exclusively on payoff experiences, is inadequate to generate efficient networks of information transfer. Fictitious play, the game theoretic counterpart to Carnapian inductive logic and a more sophisticated (...)
    2 citations
  15. Semiotic Trees and Classifications for Inductive Learning Systems. Ana Marostica - 1998 - Semiotics:114-127.
  16. Transcending inductive category formation in learning. Roger C. Schank, Gregg C. Collins & Lawrence E. Hunter - 1986 - Behavioral and Brain Sciences 9 (4):639-651.
    The inductive category formation framework, an influential set of theories of learning in psychology and artificial intelligence, is deeply flawed. In this framework a set of necessary and sufficient features is taken to define a category. Such definitions are not functionally justified, are not used by people, and are not inducible by a learning system. Inductive theories depend on having access to all and only relevant features, which is not only impossible but begs a key question (...)
    58 citations
  17. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an (...)
    6 citations
  18. Induction: Processes of Inference, Learning, and Discovery. John H. Holland, Keith J. Holyoak, Richard E. Nisbett & Paul R. Thagard - 1991 - British Journal for the Philosophy of Science 42 (2):269-272.
    217 citations
  19. Inductive logic, verisimilitude, and machine learning. Ilkka Niiniluoto - 2005 - In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, methodology and philosophy of science. London: College Publications. pp. 295-314.
    This paper starts by summarizing work that philosophers have done in the fields of inductive logic since the 1950s and truth approximation since the 1970s. It then proceeds to interpret and critically evaluate the studies on machine learning within artificial intelligence since the 1980s. Parallels are drawn between identifiability results within formal learning theory and convergence results within Hintikka's inductive logic. Another comparison is made between the PAC-learning of concepts and the notion of probable approximate truth.
    3 citations
  20. Induction: Processes of Inference, Learning, and Discovery. John H. Holland, Keith J. Holyoak, Richard E. Nisbett & Paul R. Thagard - 1988 - Behaviorism 16 (2):181-184.
    140 citations
  21. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. (...)
  22. Does learning to count involve a semantic induction? Kathryn Davidson, Kortney Eng & David Barner - 2012 - Cognition 123 (1):162-173.
    43 citations
  23. Machine learning and the foundations of inductive inference. Francesco Bergadano - 1993 - Minds and Machines 3 (1):31-51.
    The problem of valid induction could be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant for the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has prompted another, more fundamental question: there is not just one given rule to be tested, there are a large number of possible rules, and many of these are somehow (...)
  24. Learning and Coordination: Inductive Deliberation, Equilibrium, and Convention. Peter Vanderschraaf - 2001 - Routledge.
    In this book, Vanderschraaf develops a new theory of equilibrium selection in game theory. The theory defends general correlated equilibrium concepts and suggests a new analysis of convention.
    10 citations
  25. Learning abstract visual concepts via probabilistic program induction in a Language of Thought. Matthew C. Overlan, Robert A. Jacobs & Steven T. Piantadosi - 2017 - Cognition 168 (C):320-334.
    7 citations
  26. Machines Learn Better with Better Data Ontology: Lessons from Philosophy of Induction and Machine Learning Practice. Dan Li - 2023 - Minds and Machines 33 (3):429-450.
    As scientists start to adopt machine learning (ML) as a research tool, the security of ML and the knowledge it generates become a concern. In this paper, I explain how supervised ML can be improved with better data ontology, that is, the way we make categories and turn information into data. More specifically, we should design data ontology in such a way that it is consistent with the knowledge that we have about the target phenomenon so that such ontology can help us (...)
  27. Machine learning, inductive reasoning, and reliability of generalisations. Petr Spelda - 2020 - AI and Society 35 (1):29-37.
    The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues regarding inductive reasoning and reliability of generalisations. Towards this aim, the paper proceeds as follows. First, it expounds Price’s dual image of representation in terms of the notions of e-representations and i-representations that constitute subject naturalism. For Price, this is not a strictly anti-representationalist position but rather a dualist one (e- and i-representations). Second, the paper (...)
  28. Bayesian learning and the psychology of rule induction. Ansgar D. Endress - 2013 - Cognition 127 (2):159-176.
  29. Statistical learning theory as a framework for the philosophy of induction. Gilbert Harman & Sanjeev Kulkarni - manuscript
    Statistical Learning Theory (e.g., Hastie et al., 2001; Vapnik, 1998, 2000, 2006) is the basic theory behind contemporary machine learning and data-mining. We suggest that the theory provides an excellent framework for philosophical thinking about inductive inference.
    1 citation
  30. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. David Haussler - 1988 - Artificial Intelligence 36 (2):177-221.
  31. Unsupervised learning and grammar induction. Alex Clark & Shalom Lappin - unknown
    In this chapter we consider unsupervised learning from two perspectives. First, we briefly look at its advantages and disadvantages as an engineering technique applied to large corpora in natural language processing. While supervised learning generally achieves greater accuracy with less data, unsupervised learning offers significant savings in the intensive labour required for annotating text. Second, we discuss the possible relevance of unsupervised learning to debates on the cognitive basis of human language acquisition. In this context we (...)
  32. Implicit learning in rule induction and problem solving. Aldo Zanga, Jean-François Richard & Charles Tijus - 2004 - Thinking and Reasoning 10 (1):55-83.
    Using the Chinese Ring Puzzle (Kotovsky & Simon, 1990; P. J. Reber & Kotovsky, 1997), we studied the effect on rule discovery of having to plan actions or not in order to reach a goal state. This was done by asking participants to predict legal moves as in implicit learning tasks (Experiment 1) and by asking participants to make legal moves as in problem-solving tasks (Experiment 2). Our hypothesis was that having a specific goal state to reach has a (...)
    1 citation
  33. Reliable Reasoning: Induction and Statistical Learning Theory. Gilbert Harman & Sanjeev Kulkarni - 2007 - Bradford.
    In _Reliable Reasoning_, Gilbert Harman and Sanjeev Kulkarni -- a philosopher and an engineer -- argue that philosophy and cognitive science can benefit from statistical learning theory, the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors -- a central topic in SLT. After discussing philosophical attempts (...)
    36 citations
  34. A statistical learning approach to a problem of induction. Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At the weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether (...)
  35. A Computational Learning Semantics for Inductive Empirical Knowledge. Kevin T. Kelly - 2014 - In Alexandru Baltag & Sonja Smets (eds.), Johan van Benthem on Logic and Information Dynamics. Springer International Publishing. pp. 289-337.
    This chapter presents a new semantics for inductive empirical knowledge. The epistemic agent is represented concretely as a learner who processes new inputs through time and who forms new beliefs from those inputs by means of a concrete, computable learning program. The agent’s belief state is represented hyper-intensionally as a set of time-indexed sentences. Knowledge is interpreted as avoidance of error in the limit and as having converged to true belief from the present time onward. Familiar topics are (...)
    2 citations
  36. How to Learn the Natural Numbers: Inductive Inference and the Acquisition of Number Concepts. Eric Margolis & Stephen Laurence - 2008 - Cognition 106 (2):924-939.
    Theories of number concepts often suppose that the natural numbers are acquired as children learn to count and as they draw an induction based on their interpretation of the first few count words. In a bold critique of this general approach, Rips, Asmuth, and Bloomfield [Rips, L., Asmuth, J. & Bloomfield, A. Giving the boot to the bootstrap: How not to learn the natural numbers. Cognition, 101, B51–B60.] argue that such an inductive inference is consistent with a representational system that (...)
    18 citations
  37. Learning Word Meaning From Dictionary Definitions: Sensorimotor Induction Precedes Verbal Instruction. Stevan Harnad - unknown
    Almost all words are the names of categories. We can learn most of our words (and hence our categories) from dictionary definitions, but not all of them. Some have to be learned from direct experience. To understand a word from its definition we need to already understand the words used in the definition. This is the "Symbol Grounding Problem" [1]. How many words (and which ones) do we need to ground directly in sensorimotor experience in order to be able to (...)
  38. Inductive Logic Programming: Issues, results and the challenge of Learning Language in Logic. Stephen Muggleton - 1999 - Artificial Intelligence 114 (1-2):283-296.
  39. Induction, algorithmic learning theory, and philosophy. Michele Friend, Norma B. Goethe & Valentina Harizanov (eds.) - 2007 - Springer.
  40. Logical Induction, Machine Learning, and Human Creativity. Jean-Gabriel Ganascia - 2011 - In Thomas Bartscherer (ed.), Switching Codes. Chicago University Press. pp. 140.
  41. Learning without negative examples via variable-valued logic characterizations: the uniclass inductive program AQ7UNI. Robert Stepp - 1979 - Urbana: Dept. of Computer Science, University of Illinois at Urbana-Champaign.
  42. Learning from games: Inductive bias and Bayesian inference. Michael H. Coen & Yue Gao - 2009 - In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. pp. 2729-2734.
    1 citation
  43. Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these (...)
    3 citations
  44. Induction and explanation: Complementary models of learning. Pat Langley - 1986 - Behavioral and Brain Sciences 9 (4):661-662.
  45. Induction: Process of inference, learning and discovery. Jeff Shrager - 1989 - Artificial Intelligence 41 (2):249-252.
  46. Novelty and Inductive Generalization in Human Reinforcement Learning. Samuel J. Gershman & Yael Niv - 2015 - Topics in Cognitive Science 7 (3):391-415.
    In reinforcement learning, a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian (...)
    1 citation
  47. Do children learn the integers by induction? Lance J. Rips, Jennifer Asmuth & Amber Bloomfield - 2008 - Cognition 106 (2):940-951.
  48. Slow mapping: Color word learning as a gradual inductive process. Katie Wagner, Karen Dobkins & David Barner - 2013 - Cognition 127 (3):307-317.
  49. What you learn is more than what you see: what can sequencing effects tell us about inductive category learning? Paulo F. Carvalho & Robert L. Goldstone - 2015 - Frontiers in Psychology 6.
  50. The first riddle of induction: Sextus Empiricus and the formal learning theorists. Justin Vlasits - 2020 - In Justin Vlasits & Katja Maria Vogt (eds.), Epistemology after Sextus Empiricus. Oxford University Press.
    1 citation
Showing 1-50 of 988