Pattern Recognition and Image Analysis, Volume 28, Issue 4, pp. 676–683

Dataless Black-Box Model Comparison

  • C. Theiss
  • C. A. Brust
  • J. Denzler
Proceedings of the 6th International Workshop

Abstract

At a time when training new machine learning models is extremely time-consuming and resource-intensive, and when selling these models or access to them is more popular than ever, it is important to consider ways of protecting them against theft. In this paper, we present a method for estimating the similarity or distance between two black-box models. Our approach does not depend on knowledge of the specific training data and can therefore be used to identify copies of machine learning models, including stolen ones. It can also be applied to detect license violations regarding the use of datasets. We validate the proposed method empirically on the CIFAR-10 and MNIST datasets using convolutional neural networks, generative adversarial networks, and support vector machines, and show that it clearly distinguishes between models trained on different datasets. Theoretical foundations of our work are also given.
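
Although the full text is not available in this preview, the abstract describes a query-based comparison: both models are treated as black boxes, and a distance between them is estimated from their outputs alone, without the original training data. Below is a minimal sketch of such a dataless comparison, assuming each model exposes only a scikit-learn-style `predict_proba` interface; the uniform probe distribution, the L1 output distance, and the function `black_box_distance` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def black_box_distance(model_a, model_b, input_shape, n_probes=1000, seed=0):
    """Estimate a distance between two black-box classifiers.

    Both models are queried on the same random probe inputs, and the
    distance is the mean L1 difference between their predicted class
    distributions. No weights, gradients, or training data are needed.
    """
    rng = np.random.default_rng(seed)
    # Random probes over a normalized input domain, e.g. images in [0, 1].
    probes = rng.uniform(0.0, 1.0, size=(n_probes, *input_shape))
    flat = probes.reshape(n_probes, -1)  # scikit-learn expects 2-D input

    out_a = model_a.predict_proba(flat)  # shape: (n_probes, n_classes)
    out_b = model_b.predict_proba(flat)

    # Mean L1 distance between output distributions: near 0 for copies,
    # larger for unrelated models.
    return np.abs(out_a - out_b).sum(axis=1).mean()
```

Under this sketch, a distance near zero would flag the two models as likely copies or as trained on the same dataset, matching the theft- and license-detection use cases named in the abstract; the paper's actual probe distribution and distance measure may differ.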

Keywords

model comparison, function space, black-box

Copyright information

© Pleiades Publishing, Ltd. 2018

Authors and Affiliations

  1. Computer Vision Group, Friedrich Schiller University Jena, Jena, Germany
