Scope classification: An instance-based learning algorithm with a rule-based characterisation

  • Nicolas Lachiche
  • Pierre Marquis
Instance Based Learning
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1398)


Scope classification is a new instance-based learning (IBL) technique with a rule-based characterisation. Within the scope approach, the classification of an object o is based on the examples that are closer to o than every example labelled with another class. In contrast to standard distance-based IBL classifiers, scope classification relies on partial preorderings ≤_o between examples, indexed by objects o. Interestingly, the notion of closeness to o that is used characterises the classes predicted by all the rules that cover o and are relevant and consistent for the training set. Accordingly, scope classification is an IBL technique with a rule-based characterisation. Since rules do not have to be explicitly generated, the scope approach applies to classification problems where the number of rules prevents them from being exhaustively computed.
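The classification rule described above can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: it assumes one concrete choice of partial preorder ≤_o, namely comparing examples by the set of attributes on which they agree with the query object o (e is strictly closer to o than f when e's agreement set strictly contains f's), and it breaks ties between supported classes by a simple vote.

```python
def agree(example, o):
    """Set of attribute positions on which `example` matches the object o."""
    return {i for i, (a, b) in enumerate(zip(example, o)) if a == b}

def scope_classify(training_set, o):
    """Predict a class for o from the examples that are closer to o
    than every example labelled with another class.

    training_set: list of (attribute_tuple, class_label) pairs.
    Assumption: closeness is the strict-superset order on agreement sets,
    one possible instance of the paper's object-indexed preorder <=_o.
    """
    votes = {}
    for x, cx in training_set:
        ax = agree(x, o)
        # x supports its class cx iff x is strictly closer to o than
        # EVERY training example carrying a different class.
        if all(agree(y, o) < ax            # strict subset: y strictly farther
               for y, cy in training_set if cy != cx):
            votes[cx] = votes.get(cx, 0) + 1
    if not votes:
        return None  # no example dominates all other-class examples
    return max(votes, key=votes.get)

train = [((1, 0, 1), "pos"), ((1, 1, 1), "pos"), ((0, 0, 0), "neg")]
print(scope_classify(train, (1, 0, 1)))  # prints "pos"
```

Note that, in line with the abstract, no rules are ever materialised: the per-object comparison of agreement sets stands in for enumerating all relevant, consistent rules covering o.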





Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Nicolas Lachiche (1)
  • Pierre Marquis (2)
  1. LORIA, Vandoeuvre-lès-Nancy Cedex, France
  2. GRIL, Université d'Artois, Rue de l'Université, Lens Cedex, France
