A Classical-Quantum Hybrid Approach for Unsupervised Probabilistic Machine Learning

  • Prasanna Date
  • Catherine Schuman
  • Robert Patton
  • Thomas Potok
Conference paper. Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 70).

Abstract

Matrix computation and sample generation are the two key steps in training unsupervised probabilistic machine learning models. While GPUs excel at matrix computation, they rely on pseudo-random numbers to generate samples. Adiabatic Quantum Processors (AQPs), in contrast, use quantum mechanical systems to generate samples accurately and quickly, but are not suited to matrix computation. We present a Classical-Quantum Hybrid Approach for training unsupervised probabilistic machine learning models, leveraging GPUs for matrix computation and the D-Wave quantum sampling library for sample generation. We compare this approach to classical and quantum approaches across four performance metrics. Our results indicate that the hybrid approach, which uses one AQP and one GPU, outperforms the quantum approach and one of the classical approaches, performs comparably to the GPU approach, and is outperformed by the CPU approach, which uses 56 high-end CPUs. Lastly, we compare sampling on the AQP with sampling from the classical library and show that the AQP performs better.

Keywords

Quantum computing · Machine learning · Restricted Boltzmann machines · Deep belief networks · MNIST
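
To make the division of labor concrete, the sketch below illustrates one step of such a hybrid training loop for a restricted Boltzmann machine (RBM): the positive, data-dependent phase is pure matrix computation, while the negative, model-dependent phase draws samples from a QUBO sampler. This is a minimal sketch under stated assumptions, not the authors' implementation: the toy dimensions, learning rate, QUBO encoding, and helper names (qubo_from_rbm, negative_phase, positive_phase) are illustrative, and dimod's SimulatedAnnealingSampler stands in for the D-Wave sampling library and AQP.

import numpy as np
import dimod

rng = np.random.default_rng(0)

n_vis, n_hid = 16, 8                      # toy RBM sizes (hypothetical)
W = 0.01 * rng.standard_normal((n_vis, n_hid))
a = np.zeros(n_vis)                       # visible biases
b = np.zeros(n_hid)                       # hidden biases

def qubo_from_rbm(W, a, b):
    # Encode the RBM energy E(v, h) = -a.v - b.h - v^T W h as a QUBO,
    # so that low-energy samples approximate the model distribution.
    Q = {}
    for i in range(len(a)):
        Q[("v%d" % i, "v%d" % i)] = -a[i]
    for j in range(len(b)):
        Q[("h%d" % j, "h%d" % j)] = -b[j]
    for i in range(len(a)):
        for j in range(len(b)):
            Q[("v%d" % i, "h%d" % j)] = -W[i, j]
    return Q

def negative_phase(W, a, b, num_reads=100):
    # Model samples from a sampler; on hardware this call would target
    # a D-Wave sampler instead of this classical stand-in.
    sampler = dimod.SimulatedAnnealingSampler()
    result = sampler.sample_qubo(qubo_from_rbm(W, a, b), num_reads=num_reads)
    v = np.array([[s["v%d" % i] for i in range(len(a))] for s in result.samples()])
    h = np.array([[s["h%d" % j] for j in range(len(b))] for s in result.samples()])
    return v, h

def positive_phase(data, W, b):
    # Exact hidden-unit conditionals given data: pure matrix computation,
    # the part that maps naturally onto a GPU.
    return 1.0 / (1.0 + np.exp(-(data @ W + b)))

# One illustrative gradient step on random binary "data".
data = (rng.random((32, n_vis)) > 0.5).astype(float)
h_pos = positive_phase(data, W, b)
v_neg, h_neg = negative_phase(W, a, b)
lr = 0.1
W += lr * (data.T @ h_pos / len(data) - v_neg.T @ h_neg / len(v_neg))
a += lr * (data.mean(0) - v_neg.mean(0))
b += lr * (h_pos.mean(0) - h_neg.mean(0))

The matrix products in positive_phase and the weight update are exactly the operations a GPU accelerates, while an annealer's samples only approximate Boltzmann statistics; that trade-off is what the paper's four-metric comparison evaluates.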


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Prasanna Date (1)
  • Catherine Schuman (2)
  • Robert Patton (2)
  • Thomas Potok (2)

  1. Rensselaer Polytechnic Institute, Troy, USA
  2. Oak Ridge National Laboratory, Oak Ridge, USA
