CLIP: concept learning from inference patterns

https://doi.org/10.1016/0004-3702(94)00066-A

Abstract

A new concept-learning method called CLIP (concept learning from inference patterns) is proposed that learns new concepts from inference patterns rather than from the positive/negative examples used by most conventional concept-learning methods. The learned concepts enable efficient inference at a more abstract level. We use a colored digraph to represent inference patterns. The graph representation is sufficiently expressive and permits a quantitative analysis of inference-pattern frequency. The learning process consists of two steps: (1) convert the original inference patterns into a colored digraph, and (2) extract the set of typical patterns that appear frequently in the digraph. The basic idea is that the smaller the digraph becomes, the less data must be handled and, accordingly, the more efficient the inference process that uses those data. The graph is reduced by replacing each frequently appearing graph pattern with a single node, and each such node represents a new concept. Experimentally, CLIP automatically generates multilevel representations from a given physical/single-level representation of a carry-chain circuit. These representations involve abstract descriptions of the circuit, such as mathematical and logical descriptions.
