IEEE Transactions on Neural Networks 9:739-755 (1998)
Abstract: Natural language understanding involves the simultaneous consideration of a large number of different sources of information. Traditional methods of language analysis have focused on developing powerful formalisms to represent syntactic or semantic structures, together with rules for transforming language into these formalisms; however, they make use of only small subsets of knowledge. This article describes how to exploit the whole range of information through a neurosymbolic architecture that hybridizes a symbolic network with subsymbol vectors generated from a connectionist network. Besides initializing the symbolic network with prior knowledge, the subsymbol vectors enhance the system's capability in disambiguation and provide flexibility in sentence understanding. The model captures a diversity of information, including word associations, syntactic restrictions, case-role expectations, semantic rules, and context. It attains highly interactive processing by representing knowledge in an associative network on which actual semantic inferences are performed. An integrated use of previously analyzed sentences is another important feature of the model. The model dynamically selects one hypothesis among multiple hypotheses. This notion is supported by three simulations, which show that the degree of disambiguation relies on both the amount of linguistic rules and the semantic-associative information available to support the inference processes in natural language understanding. Unlike many similar systems, our hybrid system is more sophisticated in tackling language disambiguation, using linguistic clues from disparate sources and modeling context effects in sentence analysis. It is potentially more powerful than systems relying on a single processing paradigm.
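The abstract's core mechanism — dynamically selecting one hypothesis among several by combining symbolic rule evidence with subsymbolic (vector-similarity) evidence — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the names `select_hypothesis`, `rule_scores`, and `subsymbols`, and the linear weighting `alpha`, are all assumptions made for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two subsymbol vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_hypothesis(hypotheses, rule_scores, context_vec, subsymbols, alpha=0.5):
    """Pick the hypothesis with the highest combined evidence:
    a symbolic rule score blended with subsymbolic context similarity.
    alpha weights symbolic evidence against connectionist evidence."""
    best, best_score = None, float("-inf")
    for h in hypotheses:
        combined = (alpha * rule_scores[h]
                    + (1 - alpha) * cosine(subsymbols[h], context_vec))
        if combined > best_score:
            best, best_score = h, combined
    return best
```

For instance, disambiguating the word "bank" between a river sense and a money sense: even if the symbolic rules slightly favor one reading, a context vector close to the other sense's subsymbol vector can tip the selection — the kind of interaction between rule-based and associative information the abstract describes.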
Similar books and articles
James Franklin & S. W. K. Chan (1998). Symbolic Connectionism in Natural Language Disambiguation. IEEE Transactions on Neural Networks 9:739-755.
Samuel W. K. Chan, Dynamic Context Generation for Natural Language Understanding: A Multifaceted Knowledge Approach.
Syed S. Ali & Stuart C. Shapiro (1993). Natural Language Processing Using a Propositional Semantic Network with Structured Variables. Minds and Machines 3 (4):421-451.
Stuart C. Shapiro & William J. Rapaport (1992). The SNePS Family. Computers and Mathematics with Applications 23:243-275.
James Franklin (2003). The Representation of Context: Ideas From Artificial Intelligence. Law, Probability and Risk 2:191-199.
Joachim Quantz & Birte Schmitz (1994). Knowledge-Based Disambiguation for Machine Translation. Minds and Machines 4 (1):39-57.
William J. Rapaport (1988). Syntactic Semantics: Foundations of Computational Natural Language Understanding. In James H. Fetzer (ed.), Aspects of AI. Kluwer.
Brian R. Gaines (2009). Designing Visual Languages for Description Logics. Journal of Logic, Language and Information 18 (2):217-250.
Lucja Iwańska (1993). Logical Reasoning in Natural Language: It is All About Knowledge. [REVIEW] Minds and Machines 3 (4):475-510.
Eugen Fischer (1997). On the Very Idea of a Theory of Meaning for a Natural Language. Synthese 111 (1):1-8.
Martha Stone Palmer (2006). Semantic Processing for Finite Domains. Cambridge University Press.
Gerard O'Brien & Jonathan Opie (2002). Radical Connectionism: Thinking with (Not in) Language. Language and Communication 22 (3):313-329.
Added to index: 2010-12-22