Multitask Learning for Sparse Failure Prediction

  • Simon Luo
  • Victor W. Chu
  • Zhidong Li
  • Yang Wang
  • Jianlong Zhou
  • Fang Chen
  • Raymond K. Wong
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11439)

Abstract

Sparsity occurs inherently in many real-world datasets. It induces an imbalance in the data that adversely affects machine learning and hence reduces predictability. Previously, domain experts overcame sparsity by imposing strong assumptions on the model parameters based on their experience, but such assumptions are subjective. Instead, we propose a multi-task learning solution that automatically learns model parameters from a common latent structure underlying data from related domains. Although related, these datasets commonly have overlapping but dissimilar feature spaces, so they cannot simply be combined into a single dataset. Our proposed model, a hierarchical Dirichlet process mixture of hierarchical beta processes (HDP-HBP), uses a hierarchical Dirichlet process to learn, across tasks, common model parameters for failure prediction. The model uses recorded failure histories to predict failures in a water supply network, and multi-task learning draws additional information from the failure records of networks managed by other utility companies to improve prediction for a given network. We achieve superior accuracy for sparse predictions compared with previous state-of-the-art models, and we demonstrate the model's applicability to risk management, where critical infrastructure can be repaired proactively.
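
The abstract outlines a two-level generative architecture: a hierarchical Dirichlet process (HDP) shares mixture components across water networks (tasks), and each component carries beta-process parameters governing sparse failure probabilities. A minimal generative sketch of this structure in Python appears below, assuming a truncated stick-breaking approximation; the truncation level K, the concentration values, and the single beta-Bernoulli step standing in for the hierarchical beta process are illustrative choices, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

K = 10                   # truncation level for the shared DP (assumed)
gamma, alpha = 1.0, 1.0  # DP concentration parameters (assumed values)
c = 5.0                  # beta-process-style concentration (assumed value)

# Top-level DP via stick-breaking: global mixture weights shared by all tasks.
v = rng.beta(1.0, gamma, size=K)
w_global = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

# Each shared component carries a base failure rate; low rates reflect sparsity.
base_rate = rng.beta(1.0, 9.0, size=K)

def sample_task(n_pipes):
    """Sample one water network: task-level weights tied to the shared ones,
    then per-pipe failure probabilities drawn around the component rate."""
    pi = rng.dirichlet(alpha * w_global + 1e-6)  # task-level DP (finite approx.)
    z = rng.choice(K, size=n_pipes, p=pi)        # component assignment per pipe
    p = rng.beta(c * base_rate[z], c * (1.0 - base_rate[z]))  # beta-Bernoulli step
    failures = rng.binomial(1, p)                # observed failure record
    return z, p, failures

z, p, failures = sample_task(n_pipes=1000)
print("observed failure rate:", failures.mean())

Because the mixture components and their failure-rate parameters are shared through the top-level process, failure records from one utility's network inform the components used by another, which is the mechanism the abstract relies on to overcome sparsity.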

Keywords

Multi-task learning · Sparse predictions · Dirichlet process · Beta process · Failure predictions


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Simon Luo (1, 2)
  • Victor W. Chu (3)
  • Zhidong Li (2, 4)
  • Yang Wang (2, 4)
  • Jianlong Zhou (2, 4)
  • Fang Chen (2, 4)
  • Raymond K. Wong (5)

  1. The University of Sydney, Sydney, Australia
  2. Data61, CSIRO, Sydney, Australia
  3. Nanyang Technological University, Singapore
  4. University of Technology Sydney, Ultimo, Australia
  5. The University of New South Wales, Kensington, Australia
