
A Machine Learning Framework for Volume Prediction

  • Umutcan Önal
  • Zafeirakis Zafeirakopoulos
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11544)

Abstract

Computing the exact volume of a polytope is a #P-hard problem, which makes the computation for high dimensional polytopes computationally expensive. Due to this cost, randomized approximation algorithms are an acceptable solution in practical applications. On the other hand, machine learning techniques, such as neural networks, have seen considerable success in recent years. We propose machine learning approaches to volume prediction and volume comparison. We employ various network architectures, such as feed-forward networks, autoencoders, and end-to-end networks. With these architectures we develop different types of models that emphasize different parts of the problem, such as the representation of polytopes, volume comparison between polytopes, and volume prediction. Our results show varying rates of success depending on the model and experimentation parameters. This work intends to start a discussion about applying machine learning techniques to computationally hard geometric problems.
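
To illustrate the kind of model the abstract describes, the following is a minimal sketch, not the authors' implementation: a feed-forward regressor that maps a fixed-size polytope representation (here, flattened vertex coordinates zero-padded to a maximum vertex count) to a predicted volume. The input representation, layer sizes, and training setup are illustrative assumptions; the autoencoder and end-to-end variants mentioned in the abstract are not shown.

```python
import torch
import torch.nn as nn

class VolumeRegressor(nn.Module):
    """Feed-forward network predicting a polytope's volume from its vertices (illustrative)."""
    def __init__(self, dim: int, max_vertices: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * max_vertices, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted volume (a log-volume target is another option)
        )

    def forward(self, vertices: torch.Tensor) -> torch.Tensor:
        # vertices: (batch, max_vertices, dim), zero-padded for polytopes with fewer vertices
        return self.net(vertices.flatten(start_dim=1)).squeeze(-1)

# Illustrative training loop on hypothetical data (X, y); real experiments would use
# polytopes with reference volumes from an exact or randomized volume algorithm.
model = VolumeRegressor(dim=3, max_vertices=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
X = torch.rand(256, 8, 3)   # hypothetical vertex sets
y = torch.rand(256)         # hypothetical reference volumes
for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

A volume-comparison model, as also mentioned in the abstract, could reuse such a network on two polytopes and train on which of the two has the larger volume; the details above are assumptions for illustration only.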

Keywords

Machine learning · Autoencoders · Neural networks · Polytope · Volume

Acknowledgements

This work was supported by the project 117E501 under the program 3001 of the Scientific and Technological Research Council of Turkey.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Gebze Technical University, Gebze, Turkey
  2. Institute of Information Technologies, Gebze Technical University, Gebze, Turkey
