Dynamic Learning of Multiple Time Series in a Nonstationary Environment


Abstract

This chapter introduces two distinct solutions to the problem of capturing the dynamics of multiple time series and extracting useful knowledge from them over time. Because these dynamics change in a nonstationary environment, the key characteristic of both methods is their ability to evolve their structure continuously over time. The chapter also reviews existing methods for dynamic analysis and modeling of a single time series, such as the dynamic evolving neural fuzzy inference system (DENFIS) and the neuro-fuzzy inference method for transductive reasoning (NFI), which inspired the proposed methods. Finally, it presents a comprehensive evaluation of the proposed methods on a real-world problem: predicting the movement of global stock market indexes over time.
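The abstract's central claim, that the models evolve their structure continuously as new data arrive, can be made concrete with a small sketch of an online, distance-threshold clustering step in the spirit of the evolving clustering method (ECM) that underlies DENFIS (see the Song and Kasabov entries in the reference list). This is a minimal illustrative sketch only: the class name, the single radius threshold dthr, and the exact centre/radius update rule are assumptions made here for exposition, not the chapter's own formulation.

```python
import numpy as np


class EvolvingClusterer:
    """ECM-style online clusterer: centres and radii evolve with every sample,
    so the model structure is never fixed in advance (illustrative sketch)."""

    def __init__(self, dthr=0.15):
        self.dthr = dthr      # assumed maximum cluster radius (hypothetical default)
        self.centres = []     # evolving list of cluster centres
        self.radii = []       # matching list of cluster radii

    def update(self, x):
        """Present one sample; return the index of the cluster it is assigned to."""
        x = np.asarray(x, dtype=float)
        if not self.centres:                     # first sample creates the first cluster
            self.centres.append(x)
            self.radii.append(0.0)
            return 0
        dists = np.array([np.linalg.norm(x - c) for c in self.centres])
        radii = np.array(self.radii)
        inside = np.flatnonzero(dists <= radii)
        if inside.size:                          # already covered: no structural change
            return int(inside[np.argmin(dists[inside])])
        scores = dists + radii                   # twice the radius each cluster would need
        j = int(np.argmin(scores))
        if scores[j] <= 2.0 * self.dthr:         # enlarge cluster j instead of adding one
            new_r = scores[j] / 2.0
            self.centres[j] = x + (self.centres[j] - x) / dists[j] * new_r
            self.radii[j] = new_r
            return j
        self.centres.append(x)                   # otherwise the structure evolves: new cluster
        self.radii.append(0.0)
        return len(self.centres) - 1
```

Feeding such a clusterer a stream of feature vectors (for example, windows of daily index movements) one at a time lets the number and position of clusters change indefinitely, which is the sense in which these models evolve their structure in a nonstationary environment.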

Keywords

Autocorrelation · Malaysia · Egypt · Indonesia


Acknowledgements

The authors would like to thank the Knowledge Engineering and Discovery Research Institute (KEDRI) and all its members for their support, constructive discussions, and inspirational ideas. The authors would also like to thank the School of Computing and Mathematical Sciences at Auckland University of Technology, New Zealand, for the scholarship granted to Harya Widiputra.

References

1. Amari, S.: Mathematical foundations of neuro-computing. Proceedings of the IEEE, 78(9), pp. 1443–1463 (1990)
2. Antoniou, A., Pescetto, G., Violaris, A.: Modelling international price relationships and interdependencies between the stock index and stock index futures markets of three EU countries: A multivariate analysis. Journal of Business Finance & Accounting, 30, pp. 645–667 (2003)
3. Ben-Dor, A., Shamir, R., Yakhini, Z.: Clustering gene expression patterns. Journal of Computational Biology, 6(3/4), pp. 281–297 (1999)
4. Bosnic, Z., Kononenko, I., Robnik-Sikonja, M., Kukar, M.: Evaluation of prediction reliability in regression using the transduction principle. In: The IEEE Region 8 EUROCON 2003, Computer as a Tool, 2, pp. 99–103 (2003)
5. Collins, D., Biekpe, N.: Contagion and interdependence in African stock markets. South African Journal of Economics, 71(1), pp. 181–194 (2003)
6. Friedman, L., Nachman, P.: Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7, pp. 601–620 (2000)
7. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, New York (2009)
8. Holland, J., Holyoak, K., Nisbett, R., Thagard, P.: Induction: Processes of Inference, Learning, and Discovery. Cambridge University Press, Cambridge (1989)
9. Joachims, T.: Transductive inference for text classification using support vector machines. In: Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), pp. 200–209. San Francisco, CA, USA (1999)
10. Joachims, T.: Transductive learning via spectral graph partitioning. In: International Conference on Machine Learning (ICML), pp. 290–297. Washington, DC, USA (2003)
11. Kasabov, N.: Evolving fuzzy neural networks for supervised/unsupervised on-line knowledge-based learning. IEEE Transactions on Systems, Man and Cybernetics, 31, pp. 902–918 (2001)
12. Kasabov, N.: Global, local and personalised modelling and pattern discovery in Bioinformatics: An integrated approach. Pattern Recognition Letters, 28(6), pp. 673–685 (2007)
13. Kasabov, N., Chan, Z., Jain, V., Sidorov, I., Dimitrov, D.: Gene regulatory network discovery from time-series gene expression data: a computational intelligence approach. In: Lecture Notes in Computer Science, vol. 3316, pp. 1333–1353. Springer, Berlin/Heidelberg (2004)
14. Kasabov, N., Pang, S.: Transductive support vector machines and applications in Bioinformatics for promoter recognition. In: Proceedings of the 2003 International Conference on Neural Networks and Signal Processing, 1, pp. 1–6. IEEE Press (2003)
15. Kasabov, N., Song, Q.: DENFIS: dynamic evolving neural fuzzy inference system and its application for time-series prediction. IEEE Transactions on Fuzzy Systems, 10, pp. 144–154 (2002)
16. Kim, T., Adali, T.: Approximation by fully complex multilayer perceptrons. Neural Computation, 15, pp. 1641–1666 (2003)
17. Kukar, M.: Transductive reliability estimation for medical diagnosis. Artificial Intelligence in Medicine, 29(1–2), pp. 81–106 (2003)
18. Lei, Z., Yang, Y., Wu, Z.: Ensemble of support vector machine for text-independent speaker recognition. International Journal of Computer Science and Network Security, 6(5), pp. 163–167 (2006)
19. Li, C., Yuen, P.: Transductive learning: Learning iris data with two labelled data. In: Dorffner, G., Bischof, H., Hornik, K. (eds.) Artificial Neural Networks – ICANN 2001, Lecture Notes in Computer Science, vol. 2130, pp. 231–236. Springer, Berlin/Heidelberg (2001)
20. Li, F., Wechsler, H.: Watch list face surveillance using transductive inference. In: Zhang, D., Jain, A. (eds.) Biometric Authentication, Lecture Notes in Computer Science, vol. 3072, pp. 1–15. Springer, Berlin/Heidelberg (2004)
21. Li, J., Chua, C.: Transductive inference for color-based particle filter tracking. In: Proceedings of the International Conference on Image Processing (ICIP 2003), 3, pp. 949–952 (2003)
22. Lucks, M., Oki, N.: A radial basis function network (RBFN) for function approximation. In: Proceedings of the 42nd Midwest Symposium on Circuits and Systems, 2, pp. 1099–1101 (1999)
23. Masih, A., Masih, R.: Dynamic modeling of stock market interdependencies: An empirical investigation of Australia and the Asian NICs. Review of Pacific Basin Financial Markets and Policies, 4(2), pp. 235–264 (2001)
24. Poggio, F.: Regularization theory, radial basis functions and networks. In: From Statistics to Neural Networks: Theory and Pattern Recognition Applications, pp. 83–104. NATO ASI Series
25. Proedrou, K., Nouretdinov, I., Vovk, V., Gammerman, A.: Transductive confidence machines for pattern recognition. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) Machine Learning: ECML 2002, Lecture Notes in Computer Science, vol. 2430, pp. 221–231. Springer, Berlin/Heidelberg (2002)
26. Psillaki, M., Margaritis, D.: Long-run interdependence and dynamic linkages in international stock markets: Evidence from France, Germany and the U.S. Journal of Money, Investment and Banking, 4, pp. 59–73. EuroJournals Publishing (2008)
27. Rodrigues, P., Gama, J., Pedroso, J.: Hierarchical clustering of time-series data streams. IEEE Transactions on Knowledge and Data Engineering, 20, pp. 615–627 (2008)
28. Song, Q., Kasabov, N.: ECM – a novel on-line, evolving clustering method and its applications. In: Posner, M. (ed.) Foundations of Cognitive Science, pp. 631–682. The MIT Press, Massachusetts, USA (2001)
29. Song, Q., Kasabov, N.: NFI: a neuro-fuzzy inference method for transductive reasoning. IEEE Transactions on Fuzzy Systems, 13(6), pp. 799–808 (2005)
30. Soucy, P., Mineau, G.: A simple kNN algorithm for text categorization. In: Proceedings of the IEEE International Conference on Data Mining (ICDM 2001), pp. 647–649 (2001)
31. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its applications to modelling and control. IEEE Transactions on Systems, Man, and Cybernetics, 15(1), pp. 116–132 (1985)
32. Vapnik, V.: Statistical Learning Theory. Wiley-Interscience, Chichester (2008)
33. Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Scholkopf, B.: Feature selection and transduction for prediction of molecular bioactivity for drug design. Bioinformatics, 19(6), pp. 764–771 (2003)
34. Widiputra, H., Kho, H., Lukas, Pears, R., Kasabov, N.: A novel evolving clustering algorithm with polynomial regression for chaotic time-series prediction. In: Leung, C., Lee, M., Chan, J. (eds.) Neural Information Processing, Lecture Notes in Computer Science, vol. 5864, pp. 114–121. Springer, Berlin/Heidelberg (2009)
35. Widiputra, H., Pears, R., Kasabov, N.: Personalised modelling for multiple time-series data prediction: a preliminary investigation in Asia Pacific stock market indexes movement. In: Proceedings of the 15th International Conference on Advances in Neuro-Information Processing, Part I (ICONIP 2008), vol. 5506, pp. 1237–1244. Springer, Berlin/Heidelberg (2008)
36. Widiputra, H., Pears, R., Kasabov, N.: Dynamic interaction networks versus local trend models for multiple time-series prediction. Cybernetics and Systems, 42, pp. 1–24 (2011)
37. Widiputra, H., Pears, R., Kasabov, N.: Multiple time-series prediction through multiple time-series relationships profiling and clustered recurring trends. In: Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) (2011)
38. Wooldridge, J.: Introductory Econometrics: A Modern Approach, 3rd edn. Cengage Learning Services, South-Western College, Florence, KY, USA (2005)
39. Wu, D., Bennett, K., Cristianini, N., Shawe-Taylor, J.: Large margin trees for induction and transduction. In: Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), pp. 474–483. Morgan Kaufmann, San Francisco, CA (1999)
40. Yamada, T., Yamashita, K., Ishii, N., Iwata, K.: Text classification by combining different distance functions with weights. In: Seventh ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2006), pp. 85–90 (2006)
41. Yang, H., Chan, L., King, I.: Support vector machine regression for volatile stock market prediction. In: Third International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2002), pp. 391–396. Springer (2002)
42. Zadeh, L.: Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man and Cybernetics, 3(1), pp. 28–44 (1973)

Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

1. The Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland, New Zealand
