Neural Network Learning as an Inverse Problem

Logic Journal of the IGPL 13 (5):551-559 (2005)

Abstract

The capability of neural networks to generalize when learning from examples can be modelled using regularization, which was developed as a tool for improving the stability of solutions of inverse problems. Such problems are typically described by integral operators. It is shown that learning from examples can be reformulated as an inverse problem defined by an evaluation operator. This reformulation leads to an analytical description of the optimal input/output function of a network with kernel units, which can be employed to design a learning algorithm based on the numerical solution of a system of linear equations.
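The abstract describes reducing learning to the numerical solution of a linear system whose unknowns are the coefficients of kernel units. A minimal sketch of that idea, assuming Tikhonov regularization with a Gaussian kernel (the kernel choice, the regularization parameter gamma, and all function names below are illustrative assumptions, not the paper's actual construction):

```python
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    # Gram matrix of a Gaussian kernel between rows of X and rows of Z.
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * width ** 2))

def fit_kernel_network(X, y, gamma=1e-2, width=1.0):
    # Coefficients of the kernel units from the regularized linear system
    # (K + gamma * m * I) c = y, with m the number of training examples.
    m = len(X)
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + gamma * m * np.eye(m), y)

def predict(X_train, c, X_new, width=1.0):
    # Network output: a linear combination of kernel units centred at the data.
    return gaussian_kernel(X_new, X_train, width) @ c

# Usage: fit noisy samples of a sine function and evaluate on a test grid.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
c = fit_kernel_network(X, y)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict(X, c, X_test))
```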


Citations of this work

Generalization in Learning from Examples. Věra Kůrková - 2007 - In Wlodzislaw Duch & Jacek Mandziuk (eds.), Challenges for Computational Intelligence. Springer. pp. 343-363.

