
The Futility of Bias-Free Learning and Search

  • George D. Montañez
  • Jonathan Hayase
  • Julius Lauw
  • Dominique Macias
  • Akshay Trikha
  • Julia Vendemiatti
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11919)

Abstract

Building on the view of machine learning as search, we demonstrate the necessity of bias in learning, quantifying the role of bias (measured relative to a collection of possible datasets, or more generally, information resources) in increasing the probability of success. For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above. Furthermore, we demonstrate that bias is a conserved quantity, such that no algorithm can be favorably biased towards many distinct targets simultaneously. Thus bias encodes trade-offs. The probability of success for a task can also be measured geometrically, as the angle of agreement between what holds for the actual task and what is assumed by the algorithm, represented in its bias. Lastly, finding a favorably biasing distribution over a fixed set of information resources is provably difficult, unless the set of resources itself is already favorable with respect to the given task and algorithm.
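
To make these quantities concrete, the following is a toy numerical sketch (in Python) of the bias and the angle of agreement for a small search space. It assumes notation in the style of the algorithmic search framework the paper builds on: a binary target vector t over a finite space Ω, an averaged per-query distribution induced by each information resource F, and the uniform-sampling baseline p = k/|Ω|. The variable names and numbers below are illustrative assumptions, not code from the paper.

    import numpy as np

    # Assumed setup: a finite search space Omega, a binary target
    # vector t on Omega, and a small collection of information
    # resources, each inducing an averaged per-query probability
    # vector over Omega. All values here are illustrative.
    rng = np.random.default_rng(0)

    omega_size = 10                 # |Omega|
    t = np.zeros(omega_size)
    t[:2] = 1.0                     # target set of size k = 2

    # Five resources, each inducing a distribution over Omega
    # (each row sums to 1).
    P = rng.dirichlet(np.ones(omega_size), size=5)

    # Expected per-query probability of success for each resource:
    # q(t, F) = t . P_F
    q = P @ t

    # Baseline success probability of uniform random sampling:
    # p = k / |Omega|
    p = t.sum() / omega_size

    # Bias of the (here, uniform) distribution D over these
    # resources toward the target: Bias(D, t) = E_F[q(t, F)] - p
    bias = q.mean() - p
    print(f"bias toward target: {bias:+.4f}")

    # Geometric view: the angle of agreement between the target and
    # the algorithm's expected distribution, via their cosine.
    p_bar = P.mean(axis=0)
    cos_theta = (t @ p_bar) / (np.linalg.norm(t) * np.linalg.norm(p_bar))
    print(f"cos(angle of agreement): {cos_theta:.4f}")

Under these assumed definitions, a positive bias means the collection of resources raises the expected per-query probability of success above uniform sampling, and a larger cosine means the algorithm's expected distribution points more nearly at the target.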

Keywords

Machine learning · Inductive bias · Algorithmic search


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. AMISTAD Lab, Harvey Mudd College, Claremont, USA