Tracking Linear-Threshold Concepts with Winnow

  • Chris Mesterharm
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2375)

Abstract

In this paper, we give a mistake bound for learning arbitrary linear-threshold concepts that are allowed to change over time in the on-line model of learning. We use a standard variation of the Winnow algorithm and show that the bounds for learning shifting linear-threshold functions have many of the same advantages that the traditional Winnow algorithm has on fixed concepts. These benefits include a weak dependence on the number of irrelevant attributes, inexpensive runtime, and robust behavior against noise. In fact, we show that the bound for the tracking version of Winnow has even better performance with respect to irrelevant attributes. Let X ∈ [0,1]ⁿ be an instance of the learning problem. In the traditional algorithm, the bound depends on ln n. In this paper, the shifting concept bound depends approximately on the maximum of ln(‖X‖₁) over all instances.
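
To make the setting concrete, the following is a minimal sketch of a tracking variant of Winnow in the spirit the abstract describes: multiplicative promotion and demotion steps, with a lower bound (floor) on the weights so that an attribute demoted during one concept phase can be promoted back quickly after the target shifts. This is an illustrative assumption based on the standard tracking device from the literature, not the paper's exact algorithm; the helper make_winnow and the parameters alpha and epsilon are hypothetical choices.

```python
# Minimal sketch of a tracking variant of Winnow (assumptions: the update
# rule, threshold theta = n, and the weight floor epsilon/n are common
# choices from the tracking literature, not necessarily the exact
# parameters of this paper).

def make_winnow(n, alpha=2.0, epsilon=0.01):
    """Return (predict, update) closures for n attributes with values in [0, 1]."""
    w = [1.0] * n            # one positive weight per attribute
    theta = float(n)         # standard Winnow threshold
    floor = epsilon / n      # lower bound that lets weights recover after a shift

    def predict(x):
        # Linear-threshold prediction on an instance x in [0, 1]^n.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

    def update(x, y):
        # Multiplicative update, applied only on mistakes (on-line mistake model).
        y_hat = predict(x)
        if y_hat != y:
            for i, xi in enumerate(x):
                if xi == 0:
                    continue
                if y == 1:                                  # false negative: promote
                    w[i] *= alpha ** xi
                else:                                       # false positive: demote,
                    w[i] = max(w[i] * alpha ** -xi, floor)  # but never below the floor
        return y_hat

    return predict, update


if __name__ == "__main__":
    predict, update = make_winnow(n=4)
    update([1, 1, 0, 0], 1)       # mistake on a positive example: promote x1, x2
    print(predict([1, 1, 0, 0]))  # now predicts 1
```

Because no weight ever falls below epsilon/n, an attribute that becomes relevant after a shift needs only logarithmically many promotions to regain a large weight, which is the usual intuition behind tracking bounds that avoid restarting the full cost of learning at every concept change.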

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Chris Mesterharm
    1. Rutgers Computer Science Department, Piscataway
