Multivariate Analysis Algorithms

  • Thomas Keck
Part of the Springer Theses book series


In recent years, the field of multivariate analysis and machine learning has evolved rapidly and produced powerful techniques that are now adopted across all fields of science. Prominent use-cases include image and speech recognition, stock market trading, fraud detection, and medical diagnosis.


References

  1. C.M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics) (Springer-Verlag, New York, 2006). ISBN: 0387310738
  2. T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer-Verlag, New York, 2001). ISBN: 978-0-387-84858-7
  3. V. Vapnik, Principles of risk minimization for learning theory, in NIPS (1991)
  4. L. Rosasco et al., Are loss functions all the same? Neural Comput. 16(5), 1063–1076 (2004)
  5. P. McCullagh, J.A. Nelder, Generalized Linear Models, 2nd edn. (Chapman & Hall, 1989). ISBN: 9780412317606
  6. E. Parzen, On estimation of a probability density function and mode. Ann. Math. Stat. 33(3), 1065–1076 (1962)
  7. R.A. Rigby, D.M. Stasinopoulos, Generalized additive models for location, scale and shape. J. R. Stat. Soc. Ser. C (Appl. Stat.) 54(3), 507–554 (2005)
  8. R.W. Koenker, G. Bassett, Regression quantiles. Econometrica 46(1), 33–50 (1978)
  9. J. Neyman, E.S. Pearson, On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. Ser. A Contain. Pap. Math. Phys. Character 231, 289–337 (1933)
  10. R.A. Fisher, The use of multiple measurements in taxonomic problems. Ann. Eugen. 7(7), 179–188 (1936)
  11. J.H. Friedman, Stochastic gradient boosting. Comput. Stat. Data Anal. 38(4), 367–378 (2002)
  12. J. Bergstra, Y. Bengio, Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012)
  13. D. Maclaurin, D. Duvenaud, R. Adams, Gradient-based hyperparameter optimization through reversible learning, in Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 2113–2122 (2015)
  14. J. Snoek, H. Larochelle, R.P. Adams, Practical Bayesian optimization of machine learning algorithms. Adv. Neural Inf. Process. Syst. 25, 2951–2959 (2012), arXiv:1206.2944 [stat.ML]
  15. O. Behnke, K. Kroeninger, T. Schoerner-Sadenius, G. Schott, Data Analysis in High Energy Physics (Wiley-VCH, 2013). ISBN: 9783527410583
  16. K. Cranmer, I. Yavin, RECAST: extending the impact of existing analyses. JHEP 04, 038 (2011)
  17. M. Feindt et al., A hierarchical NeuroBayes-based algorithm for full reconstruction of B mesons at B factories. Nucl. Instrum. Methods A654, 432–440 (2011)
  18. K. Hornik, Approximation capabilities of multilayer feedforward networks. Neural Netw. 4(2), 251–257 (1991)
  19. H.W. Lin, M. Tegmark, D. Rolnick, Why does deep and cheap learning work so well? J. Stat. Phys. (2017)
  20. P. Baldi, P. Sadowski, D. Whiteson, Searching for exotic particles in high-energy physics with deep learning. Nat. Commun. 5, 4308 (2014)
  21. Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature 521, 436–444 (2015)
  22. I. Goodfellow et al., Generative adversarial nets, in Advances in Neural Information Processing Systems 27, pp. 2672–2680 (Curran Associates Inc., 2014)
  23. G. Louppe, M. Kagan, K. Cranmer, Learning to pivot with adversarial networks, in NIPS (2016), arXiv:1611.01046 [stat.ME]
  24. Y. Bengio, A. Courville, P. Vincent, Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013)
  25. O. Vinyals, A. Toshev, S. Bengio, D. Erhan, Show and tell: lessons learned from the 2015 MSCOCO image captioning challenge. IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 652–663 (2017)
  26. M. Pivk, F.R. Le Diberder, sPlot: a statistical tool to unfold data distributions. Nucl. Instrum. Methods A555, 356–369 (2005)
  27. D. Martschei, M. Feindt, S. Honc, J. Wagner-Kuhr, Advanced event reweighting using multivariate analysis, in Proceedings, 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011), vol. 368, p. 012028 (2012)
  28. T. Keck, FastBDT: a speed-optimized multivariate classification algorithm for the Belle II experiment. Comput. Softw. Big Sci. 1(1) (2017)
  29.
  30. J. Therhaag et al., TMVA – Toolkit for multivariate data analysis. AIP Conf. Proc. 1504(1), 1013–1016 (2012)
  31. S. Nissen, Implementation of a fast artificial neural network library (FANN). Technical report, Department of Computer Science, University of Copenhagen (DIKU) (2003)
  32. M. Feindt, U. Kerzel, The NeuroBayes neural network package. Nucl. Instrum. Methods A559, 190–194 (2006)
  33. F. Pedregosa et al., Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)
  34. T. Chen, C. Guestrin, XGBoost: a scalable tree boosting system, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016)
  35. M. Abadi et al., TensorFlow: a system for large-scale machine learning, in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283 (2016)
  36. F. Chollet et al., Keras (2015)
  37. I.J. Goodfellow et al., Pylearn2: a machine learning research library (2013), arXiv:1308.4214 [stat.ML]
  38. R. Al-Rfou et al., Theano: a Python framework for fast computation of mathematical expressions, arXiv:1605.02688 [cs.SC]
  39. C. Patrignani et al., Review of particle physics. Chin. Phys. C40(10), 100001 (2016)
  40. J.A. Hanley, B.J. McNeil, The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143(1), 29–36 (1982)
  41. M. Gelb, Neutral B meson flavor tagging for Belle II. MA thesis, KIT (2015)
  42. J. Gemmler, Study of B meson flavor tagging with deep neural networks at Belle and Belle II. MA thesis, KIT (2016)
  43. D.M. Asner, M. Athanas, D.W. Bliss et al., Search for exclusive charmless hadronic B decays. Phys. Rev. D 53, 1039–1050 (1996)
  44. G.C. Fox, S. Wolfram, Observables for the analysis of event shapes in \(e^{+}e^{-}\) annihilation and other processes. Phys. Rev. Lett. 41, 1581–1585 (1978)
  45. A.J. Bevan et al., The physics of the B factories. Eur. Phys. J. C 74, 3026 (2014)
  46. D. Weyland, Continuum suppression with deep learning techniques for the Belle II experiment. MA thesis, KIT (2017)
  47. A. Rogozhnikov et al., New approaches for boosting to uniformity. JINST 10(03), T03002 (2015)
  48. M. Feindt, M. Prim, An algorithm for quantifying dependence in multivariate data sets. Nucl. Instrum. Methods A698, 84–89 (2013)
  49. J. Dolen et al., Thinking outside the ROCs: designing decorrelated taggers (DDT) for jet substructure. JHEP 05, 156 (2016)
  50. J. Stevens, M. Williams, uBoost: a boosting method for producing uniform selection efficiencies from multivariate classifiers. JINST 8, P12013 (2013)
  51. B. Lipp, sPlot-based training of multivariate classifiers in the Belle II analysis software framework. BA thesis, KIT (2015)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Institute of Experimental Particle Physics, Karlsruhe Institute of Technology, Karlsruhe, Germany