
Nonmonotonic Inferences and Neural Networks


Abstract

There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as non-monotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of non-monotonic logic as a descriptive and analytic tool for the emergent properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory, a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
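To make result (a) concrete, the following is a minimal, purely illustrative sketch (not taken from the paper): a small Hopfield-style network with symmetric weights and bipolar units settles into an energy-minimal stable state, and that stable state can be read as the nonmonotonically preferred model of the clamped "premise" units. The toy weight matrix, the thresholds, and the function names (`energy`, `settle`) are invented for this example; the paper's exact construction, in particular the correspondence with weight-annotated Poole systems, is not reproduced here.

```python
import numpy as np

def energy(weights, thresholds, state):
    """Hopfield energy E(s) = -1/2 s^T W s + theta . s; lower energy = 'more preferred' state."""
    return -0.5 * state @ weights @ state + thresholds @ state

def settle(weights, thresholds, state, clamped, max_sweeps=100, seed=0):
    """Asynchronously update the unclamped bipolar (+1/-1) units until nothing changes.

    The clamped units act as the 'premises'; the stable state the network
    settles into is read as the preferred (energy-minimal) model.
    """
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):
            if int(i) in clamped:
                continue
            new = 1 if weights[i] @ state - thresholds[i] >= 0 else -1
            if new != state[i]:
                state[i], changed = new, True
        if not changed:
            break
    return state

# Toy network (weights invented for illustration):
# unit 0 = "bird", unit 1 = "penguin", unit 2 = "flies".
W = np.array([[0.0,  0.0,  1.0],   # bird weakly supports flies
              [0.0,  0.0, -2.0],   # penguin strongly inhibits flies
              [1.0, -2.0,  0.0]])
theta = np.array([0.0, 3.0, 0.0])  # bias keeping "penguin" off unless clamped

# Clamping only "bird": the network settles with "flies" active ...
s1 = settle(W, theta, np.array([1, -1, -1]), clamped={0})
print(s1, energy(W, theta, s1))    # [ 1 -1  1] -6.0  -> bird |~ flies

# ... but clamping "bird" and "penguin" retracts that conclusion.
s2 = settle(W, theta, np.array([1, 1, -1]), clamped={0, 1})
print(s2, energy(W, theta, s2))    # [ 1  1 -1]  2.0  -> bird, penguin |~ not flies
```

In this run, clamping only "bird" yields the defeasible conclusion "flies", while additionally clamping "penguin" retracts it, which is the hallmark of a nonmonotonic consequence relation. The link to optimality theory mentioned in the abstract rests on weights that grow exponentially across constraint ranks, so that a higher-ranked constraint outweighs any number of lower-ranked violations; that refinement is not modelled in this sketch.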


References

  • d’Avila Garcez, A., K. Broda, and D. Gabbay: 2001, ‘Symbolic Knowledge Extraction from Trained Neural Networks: A Sound Approach’, Artificial Intelligence 125, 153–205.

  • Balkenius, C. and P. Gärdenfors: 1991, ‘Nonmonotonic Inferences in Neural Networks’, in J. A. Allen, R. Fikes, and E. Sandewall (eds.), Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, CA.

  • Bartsch, R.: 2002, Consciousness Emerging, John Benjamins, Amsterdam & Philadelphia.

  • Bechtel, W.: 2002, Connectionism and the Mind, Blackwell, Oxford.

  • Boersma, P. and B. Hayes: 2001, ‘Empirical Tests of the Gradual Learning Algorithm’, Linguistic Inquiry 32, 45–86. https://doi.org/10.1162/002438901554586

  • Boutsinas, B. and M. Vrahatis: 2001, ‘Artificial Nonmonotonic Neural Networks’, Artificial Intelligence 132, 1–38. https://doi.org/10.1016/S0004-3702(01)00126-6

  • Chomsky, N. and M. Halle: 1968, The Sound Pattern of English, Harper and Row, New York.

  • Cohen, M. A. and S. Grossberg: 1983, ‘Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks’, IEEE Transactions on Systems, Man, and Cybernetics SMC-13, 815–826.

  • Dennett, D. C.: 1995, Darwin’s Dangerous Idea, Simon & Schuster, New York.

  • Derthick, M.: 1990, ‘Mundane Reasoning by Settling on a Plausible Model’, Artificial Intelligence 46, 107–157. https://doi.org/10.1016/0004-3702(90)90006-L

  • Fodor, J. A. and Z. W. Pylyshyn: 1988, ‘Connectionism and Cognitive Architecture: A Critical Analysis’, Cognition 28, 3–71. https://doi.org/10.1016/0010-0277(88)90031-5

  • Frank, R. and G. Satta: 1998, ‘Optimality Theory and the Generative Complexity of Constraint Violability’, Computational Linguistics 24, 307–315.

  • Gabbay, D.: 1985, ‘Theoretical Foundations for Non-monotonic Reasoning in Expert Systems’, in K. Apt (ed.), Logics and Models of Concurrent Systems, Springer-Verlag, Berlin, pp. 439–459.

  • Glymour, C.: 2001, The Mind’s Arrows, The MIT Press, Cambridge & London.

  • Grossberg, S.: 1989, ‘Nonlinear Neural Networks: Principles, Mechanisms, and Architectures’, Neural Networks 1, 17–66.

  • Grossberg, S.: 1996, ‘The Attentive Brain’, American Scientist 83, 438–449.

  • Hendler, J. A.: 1989, ‘Special Issue: Hybrid Systems (Symbolic/Connectionist)’, Connection Science 1, 227–342.

  • Hendler, J. A.: 1991, ‘Developing Hybrid Symbolic/Connectionist Models’, in J. Barnden and J. Pollack (eds.), High-Level Connectionist Models, Advances in Connectionist and Neural Computation Theory, Vol. 1, Ablex Publishing Corp., Norwood, NJ.

  • Hinton, G. E. and T. J. Sejnowski: 1983, ‘Optimal Perceptual Inference’, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Washington, DC, pp. 448–453.

  • Hinton, G. E. and T. J. Sejnowski: 1986, ‘Learning and Relearning in Boltzmann Machines’, in D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, pp. 282–317.

  • Hopfield, J. J.: 1982, ‘Neural Networks and Physical Systems with Emergent Collective Computational Abilities’, Proceedings of the National Academy of Sciences 79, 2554–2558.

  • Karttunen, L.: 1998, The Proper Treatment of Optimality in Computational Phonology, manuscript, Xerox Research Centre Europe.

  • Kean, M. L.: 1975, The Theory of Markedness in Generative Grammar, Ph.D. thesis, MIT, Cambridge, MA.

  • Kean, M. L.: 1981, ‘On a Theory of Markedness’, in R. Bandi, A. Belletti, and L. Rizzi (eds.), Theory of Markedness in Generative Grammar, Estratto, Pisa, pp. 559–604.

  • Kokinov, B.: 1997, ‘Micro-level Hybridization in the Cognitive Architecture DUAL’, in R. Sun and F. Alexander (eds.), Connectionist-Symbolic Integration: From Unified to Hybrid Approaches, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 197–208.

  • Kraus, S., D. Lehmann, and M. Magidor: 1990, ‘Nonmonotonic Reasoning, Preferential Models and Cumulative Logics’, Artificial Intelligence 44, 167–207. https://doi.org/10.1016/0004-3702(90)90101-5

  • McCulloch, W. S. and W. Pitts: 1943, ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, Bulletin of Mathematical Biophysics 5, 115–133.

  • Partee, B. and H. L. W. Hendriks: 1997, ‘Montague Grammar’, in J. van Benthem and A. ter Meulen (eds.), Handbook of Logic and Language, MIT Press, Cambridge, pp. 5–91.

  • Pinkas, G.: 1995, ‘Reasoning, Nonmonotonicity and Learning in Connectionist Networks that Capture Propositional Knowledge’, Artificial Intelligence 77, 203–247. https://doi.org/10.1016/0004-3702(94)00032-V

  • Pinker, S. and A. Prince: 1988, ‘On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition’, Cognition 28, 73–193.

  • Poole, D.: 1988, ‘A Logical Framework for Default Reasoning’, Artificial Intelligence 36, 27–47. https://doi.org/10.1016/0004-3702(88)90077-X

  • Poole, D.: 1996, ‘Who Chooses the Assumptions?’, in P. O’Rorke (ed.), Abductive Reasoning, MIT Press, Cambridge.

  • Prince, A. and P. Smolensky: 1993, Optimality Theory: Constraint Interaction in Generative Grammar, Technical Report CU-CS-696-93, Department of Computer Science, University of Colorado at Boulder, and Technical Report TR-2, Rutgers Center for Cognitive Science, Rutgers University, New Brunswick, NJ.

  • Rumelhart, D. E., J. L. McClelland, and the PDP Research Group: 1986, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volumes I and II, MIT Press/Bradford Books, Cambridge, MA.

  • Rumelhart, D. E., P. Smolensky, J. L. McClelland, and G. E. Hinton: 1986, ‘Schemata and Sequential Thought Processes in PDP Models’, in D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume II, MIT Press/Bradford Books, Cambridge, MA, pp. 7–57.

  • Shastri, L. and V. Ajjanagadde: 1993, ‘From Simple Associations to Systematic Reasoning’, Behavioral and Brain Sciences 16, 417–494.

  • Shastri, L. and C. Wendelken: 2000, ‘Seeking Coherent Explanations: A Fusion of Structured Connectionism, Temporal Synchrony, and Evidential Reasoning’, in Proceedings of Cognitive Science, Philadelphia, PA.

  • Shiffrin, R. M. and W. Schneider: 1977, ‘Controlled and Automatic Human Information Processing: II. Perceptual Learning, Automatic Attending, and a General Theory’, Psychological Review 84, 127–190. https://doi.org/10.1037//0033-295X.84.1.1

  • Smolensky, P.: 1986, ‘Information Processing in Dynamical Systems: Foundations of Harmony Theory’, in D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume I, MIT Press/Bradford Books, Cambridge, MA, pp. 194–281.

  • Smolensky, P.: 1988, ‘On the Proper Treatment of Connectionism’, Behavioral and Brain Sciences 11, 1–23.

  • Smolensky, P.: 1990, ‘Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Networks’, Artificial Intelligence 46, 159–216. https://doi.org/10.1016/0004-3702(90)90007-M

  • Smolensky, P.: 1996, ‘Computational, Dynamical, and Statistical Perspectives on the Processing and Learning Problems in Neural Network Theory’, in P. Smolensky, M. C. Mozer, and D. E. Rumelhart (eds.), Mathematical Perspectives on Neural Networks, Lawrence Erlbaum Publishers, Mahwah, NJ, pp. 1–13.

  • Smolensky, P.: 2000, ‘Grammar-Based Connectionist Approaches to Language’, Cognitive Science 23, 589–613.

  • Smolensky, P. and G. Legendre: to appear, The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar, Blackwell, Oxford.

  • Smolensky, P., G. Legendre, and Y. Miyata: 1992, ‘Principles for an Integrated Connectionist/Symbolic Theory of Higher Cognition’, Technical Report CU-CS-600-92, Department of Computer Science, Institute of Cognitive Science, University of Colorado, Boulder.

  • Tesar, B. and P. Smolensky: 2000, Learnability in Optimality Theory, MIT Press, Cambridge, MA.


Author information


Correspondence to Reinhard Blutner.


About this article

Cite this article

Blutner, R. Nonmonotonic Inferences and Neural Networks. Synthese 142, 143–174 (2004). https://doi.org/10.1007/s11229-004-1929-y

