Towards a Robuster Interpretive Parsing: Learning from Overt Forms in Optimality Theory

Journal of Logic, Language and Information 22 (2):139-172 (2013)

Abstract

The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution's parameter $T$ is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
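The core idea in the abstract, replacing the single winner candidate with a Boltzmann-weighted mean of the potential winners' violation profiles, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the energy of a candidate is taken here to be the plain sum of its constraint violations, which only approximates strict OT ranking (genuinely lexicographic), and all names and the example candidates are hypothetical.

```python
import math

def boltzmann_weights(energies, T):
    """Boltzmann distribution over candidates: p(c) proportional to exp(-E(c)/T)."""
    # Subtract the minimum energy before exponentiating, for numerical stability.
    m = min(energies)
    ws = [math.exp(-(e - m) / T) for e in energies]
    z = sum(ws)
    return [w / z for w in ws]

def mean_violation_profile(violations, T):
    """Weighted mean violation vector over the potential-winner set.

    violations: one constraint-violation vector per candidate.
    Energy is the sum of violations -- a simplification of OT's
    lexicographic constraint ranking, used only for illustration.
    """
    energies = [sum(v) for v in violations]
    ps = boltzmann_weights(energies, T)
    n_constraints = len(violations[0])
    return [sum(p * v[j] for p, v in zip(ps, violations))
            for j in range(n_constraints)]

# Hypothetical winner set: three candidate parses, three constraints.
candidates = [[0, 2, 1], [1, 0, 3], [2, 1, 1]]
# High T: the profile is close to the unweighted mean of all candidates.
print(mean_violation_profile(candidates, T=10.0))
# Low T: the distribution concentrates on the lowest-energy candidate,
# so the profile approaches that candidate's own violation vector [0, 2, 1].
print(mean_violation_profile(candidates, T=0.01))
```

Gradually decreasing $T$, as the abstract describes, interpolates between these two regimes in the manner of simulated annealing: early in learning every potential winner contributes to the update, and as $T$ falls the weighted mean converges to the violation profile of the best candidate(s).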



