Learning when to trust which experts

  • Conference paper

Computational Learning Theory (EuroCOLT 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1208)

Abstract

The standard model for prediction using a pool of experts rests on the assumption that at least one of the experts performs well. In this paper, we show that this assumption fails to exploit situations where both the outcome and the experts' predictions depend on an input that the learner also observes. In particular, we exhibit a situation where each individual expert performs badly but collectively the experts perform well, and we show that traditional weighted majority techniques perform poorly there.

To capture the notion that 'the whole is often greater than the sum of its parts', we propose measuring the overall competency of a pool of experts with respect to a competency class or structure: a set of decompositions of the instance space, where each expert is associated with a 'competency region' in which we assume it is competent. Our goal is to predict nearly as well as a predictor who knows the best decomposition in the competency class or structure, i.e., the one in which each expert performs reasonably well on its competency region. We present both positive and negative results in this model.
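The gap the abstract describes can be seen in a toy simulation: below is a minimal sketch of the deterministic Weighted Majority algorithm (the multiplicative-update scheme the abstract refers to), run against two experts that each err on half the instance space. The constant experts, the threshold-0.5 decomposition, and all parameter choices are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of deterministic Weighted Majority: predict by weighted
# vote, then multiply the weight of every erring expert by beta.
def weighted_majority(experts, stream, beta=0.5):
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, y in stream:
        preds = [e(x) for e in experts]
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote1 >= vote0 else 0
        mistakes += (guess != y)
        weights = [w * (beta if p != y else 1.0)
                   for w, p in zip(weights, preds)]
    return mistakes

# Target concept: y = 1 iff x >= 0.5.  Expert a always predicts 0 and
# expert b always predicts 1, so each errs on exactly half the stream --
# yet a predictor that trusts a on [0, 0.5) and b on [0.5, 1) never errs.
experts = [lambda x: 0, lambda x: 1]
stream = [(i / 10, int(i / 10 >= 0.5)) for i in range(10)]

wm_mistakes = weighted_majority(experts, stream)
region_mistakes = sum((experts[1](x) if x >= 0.5 else experts[0](x)) != y
                      for x, y in stream)
print(wm_mistakes, region_mistakes)
```

Weighted Majority keeps erring here because whichever expert looked good on the first half of the stream retains a large weight into the second half, while the region-aware predictor, which knows the decomposition, makes no mistakes at all.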


References

  1. P. Auer and M. K. Warmuth. Tracking the best disjunction. In Proc. of the 36th Symposium on the Foundations of Computer Science, pages 312–321. IEEE Computer Society Press, Los Alamitos, CA, 1995.

  2. N. Cesa-Bianchi, Y. Freund, D. P. Helmbold, and M. K. Warmuth. On-line prediction and conversion strategies. Machine Learning, 1995. To appear; an extended abstract appeared in EuroCOLT '93.

  3. N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.

  4. N. Littlestone. Redundant noisy attributes, attribute errors, and linear threshold learning using Winnow. In Proc. 4th Annu. Workshop on Comput. Learning Theory, pages 147–156, San Mateo, CA, 1991. Morgan Kaufmann.

  5. N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.



Editor information

Shai Ben-David


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Helmbold, D., Kwek, S., Pitt, L. (1997). Learning when to trust which experts. In: Ben-David, S. (eds) Computational Learning Theory. EuroCOLT 1997. Lecture Notes in Computer Science, vol 1208. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62685-9_12


  • DOI: https://doi.org/10.1007/3-540-62685-9_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62685-5

  • Online ISBN: 978-3-540-68431-2

  • eBook Packages: Springer Book Archive
