Hypothesis Tests

  • Luca Lista
Part of the Lecture Notes in Physics book series (LNP, volume 941)


A key task in most physics measurements is to discriminate among two or more hypotheses on the basis of the observed experimental data.
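As a minimal illustration of such a discrimination, the sketch below implements a simple likelihood-ratio test in the spirit of the Neyman–Pearson approach. The two Gaussian hypotheses, their means `mu0` and `mu1`, the common width `sigma`, and the toy sample are all hypothetical choices made here for illustration, not taken from the chapter.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(sample, mu0, mu1, sigma):
    """Likelihood ratio L(H1)/L(H0) for an i.i.d. sample, where both
    hypotheses are Gaussian with the same width sigma."""
    l0 = math.prod(gaussian_pdf(x, mu0, sigma) for x in sample)
    l1 = math.prod(gaussian_pdf(x, mu1, sigma) for x in sample)
    return l1 / l0

# Toy data: decide between H0 (mu = 0) and H1 (mu = 1)
sample = [0.9, 1.2, 0.7, 1.1]
lam = likelihood_ratio(sample, mu0=0.0, mu1=1.0, sigma=1.0)
# A ratio above the chosen cut favours H1; the cut fixes the significance level
print(lam > 1.0)  # prints True
```

Thresholding the likelihood ratio is, by the Neyman–Pearson lemma, the most powerful test of one simple hypothesis against another at a fixed significance level; the cut value is set by the desired probability of wrongly rejecting H0.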



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Luca Lista, INFN Sezione di Napoli, Napoli, Italy
