
Big Data Modeling Approaches for Engineering Applications

  • Chapter in: Nonlinear Approaches in Engineering Applications

Abstract

Engineering applies science and mathematics to solve problems in the design, operation, and maintenance of complex systems. Many of these systems contain nonlinear interactions and therefore require tools of varying robustness and power to describe them. Forecasting future states or designing such systems is costly, time consuming, and computationally intensive, given finite project timelines and technical constraints within industry. Daily data production in the modeling and analysis of complex dynamic systems was projected to exceed 2.5 exabytes by 2020, a 44-fold increase over 2010 levels, illustrating the rapid changes in this area. “Big data” is a relatively amorphous term describing data volumes that are difficult to capture, store, manage, process, and analyze using traditional database methods and tools. This new reality has, and shall continue to have, profound implications for modeling, as new and highly valuable information can be extracted from such data for decision-making.

Volume, often considered the primary characteristic of big data, refers to the absolute size of the dataset under consideration. Variety poses additional challenges: given the great diversity of data sources, including sensors, images, video feeds, financial transactions, location data, and text documents, reconciling these sources into unified modeling strategies is not straightforward. Big data modeling strategies therefore typically address three distinct types of data: structured, semi-structured, and unstructured.
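The three data types named above differ mainly in how much schema they carry. A minimal sketch (all file contents and field names are hypothetical) contrasts how each type is typically parsed:

```python
import csv
import io
import json

# Structured data: fixed schema, tabular rows (e.g. a sensor log exported as CSV).
csv_text = "sensor_id,temperature_c\nA1,21.5\nA2,22.1\n"
structured = list(csv.DictReader(io.StringIO(csv_text)))

# Semi-structured data: self-describing but schema-flexible (e.g. JSON records,
# where a field may be present for some records and absent for others).
json_text = '[{"sensor_id": "A1", "meta": {"unit": "C"}}, {"sensor_id": "A2"}]'
semi_structured = json.loads(json_text)

# Unstructured data: no inherent schema (e.g. free-text maintenance notes);
# extracting features requires text processing rather than a fixed parser.
unstructured = "Pump vibration increased after Tuesday's shift; bearing suspected."

print(len(structured), structured[0]["temperature_c"])  # 2 21.5
print("meta" in semi_structured[1])                     # False
print(len(unstructured.split()))
```

Reconciling the three in one model usually means mapping the semi-structured and unstructured sources onto an engineered feature table, which is where the representation learning methods discussed below come in.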

This chapter first reviews classical machine learning methods, including a selection of clustering, classification, and regression techniques. It then details six approaches for applying scalable machine learning solutions to big data: representation learning for data reduction, deep learning for capturing highly nonlinear behavior, distributed and parallel learning, transfer learning for cross-domain and cross-task activities, active learning, and kernel-based learning, each addressing challenges associated with big data.
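Of the six approaches, representation learning for data reduction is the simplest to illustrate. A minimal NumPy sketch of PCA via the singular value decomposition (the sensor matrix and its dimensions are hypothetical) shows how a high-dimensional dataset can be compressed to a few components while retaining most of its variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor matrix: 200 samples of 10 correlated channels,
# generated from two underlying latent factors plus small noise.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

# PCA via SVD of the centered data: the rows of Vt are the principal axes,
# and projecting onto the top-k axes yields the reduced representation.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
X_reduced = Xc @ Vt[:k].T  # 200 x 2 representation of the 200 x 10 data

# Fraction of total variance captured by the retained components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(X_reduced.shape)  # (200, 2)
print(explained > 0.9)  # nearly all variance survives the 10 -> 2 reduction
```

The same pattern, fit a compact representation and hand it to a downstream model, underlies the scalable variants (distributed, kernel-based, and deep autoencoder approaches) surveyed in the chapter.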





Correspondence to Abbas S. Milani.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Crawford, B., Khayyam, H., Milani, A.S., Jazar, R.N. (2020). Big Data Modeling Approaches for Engineering Applications. In: Jazar, R., Dai, L. (eds) Nonlinear Approaches in Engineering Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-18963-1_8


  • DOI: https://doi.org/10.1007/978-3-030-18963-1_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-18962-4

  • Online ISBN: 978-3-030-18963-1

  • eBook Packages: Engineering (R0)
