
Abstract

This chapter classifies machine learning algorithms into domains and provides a formal definition of machine learning. In addition, it briefly describes a common set of classic machine learning techniques, spanning from time series forecasting to various clustering methods, including trees and Bayesian networks. The special domain of deep learning is addressed in the following chapter (Liermann, Li, & Schaudinnus, Deep learning—An introduction, 2019b).


Notes

  1. Typically, semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data.

  2. Unweighted pair group method with arithmetic mean (UPGMA).

  3. Ordering points to identify the clustering structure (see Ankerst, Breunig, Kriegel, & Sander, 1999).

  4. Density-based spatial clustering of applications with noise (see Ester, Kriegel, Sander, & Xu, 1996).

  5. Density-link clustering (see Achtert, Böhm, & Kröger, 2006).

  6. An access method for multi-dimensional data, designed to structure indexed records (see Guttman, 1984).

  7. Dynamic time warping (DTW); see Sakoe and Chiba (1978) and Berndt and Clifford (1994).

  8. See Cuturi (2011).

  9. See Paparrizos and Gravano (2015).

  10. See Tsamardinos, Aliferis, and Statnikov (2003) and Yaramakala and Margaritis (2005).

  11. Hard (or crisp) clustering assigns each data point strictly to a single cluster, so an observation cannot belong to two or more clusters at the same time. Under fuzzy (or soft) partitioning, by contrast, a point may belong to several clusters to differing extents.

  12. The list of CVIs is taken from Sarda-Espinosa (2019).

  13. C stands for criterion.

  14. Interested readers can refer to Desgraupes (2019).

  15. The importance of the deep learning models described in Liermann et al. (2019b) could possibly increase more dynamically.
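The semi-supervised setting of note 1 can be sketched with scikit-learn's LabelPropagation, where -1 marks the unlabeled observations; the toy data below are an assumption for illustration, not from the chapter:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Toy 1-D data: two well-separated groups, one labeled point per group.
X = np.array([[0.0], [0.2], [0.4], [5.0], [5.2], [5.4]])
y = np.array([0, -1, -1, 1, -1, -1])  # -1 = unlabeled

# The single label in each group propagates to its unlabeled neighbours.
model = LabelPropagation(kernel="rbf", gamma=1.0).fit(X, y)
print(model.transduction_)  # labels inferred for all six points
```

The point is only that a handful of labels can steer the assignment of the large unlabeled remainder.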
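UPGMA (note 2) is available in SciPy as hierarchical clustering with average linkage; a minimal sketch on assumed toy data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Four 1-D observations forming two obvious pairs.
X = np.array([[0.0], [0.1], [4.0], [4.1]])

# method="average" is exactly UPGMA: the distance between two clusters is
# the unweighted mean of all pairwise distances between their members.
Z = linkage(X, method="average")

# Cut the resulting tree into two flat clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```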
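DBSCAN (note 4) can be sketched with scikit-learn on assumed toy data; points that fall in no sufficiently dense region receive the noise label -1:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense groups plus one isolated point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [20.0, 20.0]])

# eps: neighbourhood radius; min_samples: density threshold for a core point.
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print(labels)  # the isolated point is assigned the noise label -1
```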
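The DTW recursion behind note 7 can be written as a few lines of dynamic programming; this is a simplified sketch without the band constraint used by Sakoe and Chiba (1978):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A pattern and a time-shifted copy: a lockstep (Euclidean) comparison
# penalises the shift, while DTW aligns the peaks first.
s1 = [0, 1, 2, 3, 2, 1, 0]
s2 = [0, 0, 1, 2, 3, 2, 1]
print(dtw_distance(s1, s2))
```

This elasticity in the time axis is why DTW is popular for comparing time series whose patterns are similar but misaligned.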
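The hard-versus-fuzzy contrast of note 11 can be made concrete with the standard fuzzy c-means membership formula u_ik = 1 / Σ_j (d_ik / d_ij)^(2/(m-1)); the points and centres below are assumptions for illustration:

```python
import numpy as np

points = np.array([0.0, 1.0, 2.0])  # three 1-D observations
centres = np.array([0.0, 2.0])      # two fixed cluster centres
m = 2.0                             # fuzzifier (m -> 1 recovers hard labels)

d = np.abs(points[:, None] - centres[None, :])  # point-to-centre distances
d = np.maximum(d, 1e-12)                        # guard against division by zero

inv = d ** (-2.0 / (m - 1.0))
u = inv / inv.sum(axis=1, keepdims=True)  # fuzzy memberships; rows sum to 1

hard = u.argmax(axis=1)  # collapsing memberships yields a hard clustering
print(u)
print(hard)
```

The middle point belongs to both clusters with membership 0.5 each under fuzzy partitioning, whereas hard clustering must place it exclusively in one of them.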
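Among the CVIs referred to in note 12, the silhouette index is probably the most widely used; a minimal check with scikit-learn, on assumed toy data with k = 2:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Two well-separated groups of three points each.
X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
              [5.0, 5.0], [5.2, 5.0], [5.0, 5.2]])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Silhouette approaches 1 when clusters are compact and far apart.
score = silhouette_score(X, labels)
print(round(score, 3))
```

In practice such an index is evaluated for several candidate values of k, and the value with the best score is retained.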

Literature

  • Achtert, E., Böhm, C., & Kröger, P. (2006). DeLi-Clu: Boosting robustness, completeness, usability, and efficiency of hierarchical clustering by a closest pair ranking. In W.-K. Ng, M. Kitsuregawa, & J. L. Chang (Eds.), Advances in knowledge discovery and data mining (pp. 119–128). Singapore: Springer.

  • Ankerst, M., Breunig, M. M., Kriegel, H.-P., & Sander, J. (1999). OPTICS: Ordering points to identify the clustering structure. Munich: Institute for Computer Science, University of Munich.

  • Babbar, S., & Chawla, S. (2006). On Bayesian network and outlier detection. Sydney: School of Information Technologies, University of Sydney.

  • Berndt, D., & Clifford, J. (1994). Using dynamic time warping to find patterns in time series. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining (pp. 359–370). Palo Alto, CA: AAAI Press.

  • Brockwell, P. J., & Davis, R. A. (2002). Introduction to time series and forecasting. New York: Springer.

  • Chakraborty, C., & Joseph, A. (2017). Staff working paper no. 674—Machine learning at central banks. London: Bank of England.

  • Cuturi, M. (2011). Fast global alignment kernels. In L. Getoor & T. Scheffer (Eds.), Proceedings of the 28th International Conference on International Conference on Machine Learning (pp. 929–936). Bellevue, Washington, USA: Omnipress.

  • Desgraupes, B. (2019, January). Clustering indices. Retrieved from The Comprehensive R Archive Network https://cran.r-project.org/web/packages/clusterCrit/vignettes/clusterCrit.pdf.

  • Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In E. Simoudis, J. Han, & U. M. Fayyad (Eds.), Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (pp. 226–231). Menlo Park, CA: The AAAI Press.

  • Google. (2019). Home. Retrieved from AlphaGo https://deepmind.com/research/alphago/.

  • Guttman, A. (1984). R-trees: A dynamic index structure for spatial searching. Berkeley: University of California.

  • Liermann, V., Li, S., & Dobryashkina, V. (2019). Intraday liquidity—Forecast using pattern recognition. In V. Liermann & C. Stegmann (Eds.), The impact of digital transformation and fintech on the finance professional. New York: Palgrave Macmillan.

  • Liermann, V., Li, S., & Schaudinnus, N. (2019a). Batch processing—Pattern recognition. In V. Liermann & C. Stegmann (Eds.), The impact of digital transformation and fintech on the finance professional. New York: Palgrave Macmillan.

  • Liermann, V., Li, S., & Schaudinnus, N. (2019b). Deep learning—An introduction. In V. Liermann & C. Stegmann (Eds.), The impact of digital transformation and fintech on the finance professional. New York: Palgrave Macmillan.

  • Liermann, V., & Viets, N. (2019). Integrated scenario analysis and integrated planning. In V. Liermann & C. Stegmann (Eds.), The impact of digital transformation and fintech on the finance professional. New York: Palgrave Macmillan.

  • McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. Retrieved from http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

  • Montgomery, D. C., Jennings, C. L., & Kulahci, M. (2008). Introduction to time series analysis and forecasting. Hoboken, NJ: Wiley.

  • Paparrizos, J., & Gravano, L. (2015). k-Shape: Efficient and accurate clustering of time series. In T. Sellis, S. B. Davidson, & Z. Ives (Eds.), Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (pp. 69–76). New York: ACM.

  • Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach. Upper Saddle River, NJ: Prentice Hall.

  • Sakoe, H., & Chiba, S. (1978, February). Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26, 43–49.

  • Sarda-Espinosa, A. (2019, January 29). Package ‘dtwclust’. Retrieved from The Comprehensive R Archive Network https://cran.r-project.org/web/packages/dtwclust/dtwclust.pdf.

  • Schaudinnus, N., & Liermann, V. (2019). Real estate risk—Appraisals capture. In V. Liermann & C. Stegmann (Eds.), The impact of digital transformation and fintech on the finance professional. New York: Palgrave Macmillan.

  • Smith, M. K. (2019, February 2). The University of Texas at Austin—Department of Mathematics. Retrieved from COMMON MISTEAKS MISTAKES IN USING STATISTICS—Overfitting https://web.ma.utexas.edu/users/mks/statmistakes/ovefitting.html.

  • Tsamardinos, I., Aliferis, C. F., & Statnikov, A. (2003, May 12). Algorithms for large scale Markov Blanket discovery. In I. Russell & S. Haller (Eds.), Proceedings of the Sixteenth International Florida Artificial Intelligence Research Society Conference (pp. 376–381). Menlo Park, CA: AAAI Press.

  • Valjanow, S., Enzinger, P. & Dinges, F. (2019). Digital planning—Driver-based planning leveraged by predictive analytics. In V. Liermann & C. Stegmann (Eds.), The impact of digital transformation and fintech on the finance professional. New York: Palgrave Macmillan.

  • Yaramakala, S., & Margaritis, D. (2005). Speculative Markov Blanket discovery for optimal feature selection. In Proceedings of the Fifth IEEE International Conference on Data Mining (ICDM) (pp. 809–812). Washington, DC: IEEE Computer Society.

Author information

Correspondence to Volker Liermann.

Copyright information

© 2019 The Author(s)

About this chapter

Cite this chapter

Liermann, V., Li, S., Dobryashkina, V. (2019). Mathematical Background of Machine Learning. In: Liermann, V., Stegmann, C. (eds) The Impact of Digital Transformation and FinTech on the Finance Professional. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-23719-6_16

  • DOI: https://doi.org/10.1007/978-3-030-23719-6_16

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-030-23718-9

  • Online ISBN: 978-3-030-23719-6

  • eBook Packages: Economics and Finance, Economics and Finance (R0)
