Most existing inductive learning systems form concept descriptions in propositional languages from vectors of basic features. However, many concepts are characterized by the relationships of individual examples to general domain knowledge. We describe a system that constructs relational terms efficiently to augment the description language of standard inductive systems. In our approach, examples and domain knowledge are combined into an inheritance network, and a form of spreading activation is used to find relevant relational terms. Since there is an equivalence between inheritance networks and relational databases, this yields a method for exploring tables in the database and finding relevant relationships among data to characterize concepts. We also describe the implementation of a prototype system on the CM-2 parallel computer and some experiments with large data sets.
Efficiently Constructing Relational Features from Background Knowledge for Inductive Machine Learning
- John Aronis
- Foster Provost
- Venue: AAAI-94 Workshop on Knowledge Discovery in Databases (KDD-94 Workshop)
- 1994
- Type: Other Workshop/Symposium Paper
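The abstract describes spreading activation over an inheritance network that links training examples to background knowledge, with the activated background nodes becoming candidate relational features for a propositional learner. The following Python snippet is a minimal sketch of that idea only; it is not the authors' CM-2 implementation, and the network contents, the `min_support` threshold, and the positive-versus-negative scoring rule are all illustrative assumptions.

```python
from collections import defaultdict, deque

# Hypothetical inheritance network: each node links "upward" to more
# general background-knowledge nodes (e.g. isa / located-in edges).
# All node names are invented for illustration.
NETWORK = {
    "patient-1": ["pittsburgh", "penicillin"],
    "patient-2": ["philadelphia", "penicillin"],
    "patient-3": ["cleveland", "erythromycin"],
    "pittsburgh": ["pennsylvania"],
    "philadelphia": ["pennsylvania"],
    "cleveland": ["ohio"],
    "pennsylvania": ["usa"],
    "ohio": ["usa"],
    "penicillin": ["antibiotic"],
    "erythromycin": ["antibiotic"],
    "antibiotic": ["drug"],
    "usa": [],
    "drug": [],
}

def activate(network, start):
    """Spread activation upward from one example node and return the set
    of background nodes it reaches."""
    reached, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for parent in network.get(node, []):
            if parent not in reached:
                reached.add(parent)
                frontier.append(parent)
    return reached

def candidate_features(network, positives, negatives, min_support=2):
    """Score background nodes by how many positive vs. negative examples
    activate them; well-supported nodes become relational features of the
    form 'example is connected to <node>'."""
    pos_counts, neg_counts = defaultdict(int), defaultdict(int)
    for ex in positives:
        for node in activate(network, ex):
            pos_counts[node] += 1
    for ex in negatives:
        for node in activate(network, ex):
            neg_counts[node] += 1
    features = []
    for node, pos in pos_counts.items():
        if pos >= min_support and pos > neg_counts[node]:
            features.append((node, pos, neg_counts[node]))
    return sorted(features, key=lambda f: f[1] - f[2], reverse=True)

if __name__ == "__main__":
    pos = ["patient-1", "patient-2"]
    neg = ["patient-3"]
    for node, p, n in candidate_features(NETWORK, pos, neg):
        print(f"connected-to({node}): {p} positives, {n} negatives")
```

Each reported node can then be turned into a Boolean column (e.g. `connected-to(pennsylvania)`) and appended to the feature vectors given to a standard propositional inductive learner; how such features are pruned and parallelized in the actual system is described in the paper itself.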