DeepKey: Towards End-to-End Physical Key Replication from a Single Photograph

  • Rory Smith
  • Tilo Burghardt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11269)

Abstract

This paper describes DeepKey, an end-to-end deep neural architecture capable of taking a digital RGB image of an ‘everyday’ scene containing a pin tumbler key (e.g. lying on a table or carpet) and fully automatically inferring a printable 3D key model. We report key detection performance, describe how detected candidates can be transformed into physical prints, and show an example print opening a real-world lock. The system is described in detail, with a breakdown of all components: key detection, pose normalisation, bitting segmentation and 3D model inference. We provide an in-depth evaluation and conclude by reflecting on limitations, applications, potential security risks and societal impact. We contribute the DeepKey Datasets of 5,300+ images covering a few test keys with bounding boxes, pose and unaligned mask data.
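The four-stage decomposition named in the abstract can be sketched as a simple function composition. Everything below is an illustrative placeholder, not the authors' implementation: the function names, the data passed between stages, and the example bitting depths are all assumptions made for exposition.

```python
# Hypothetical sketch of the four-stage DeepKey pipeline named in the
# abstract. All names, shapes and values are illustrative placeholders.

def detect_key(image):
    """Stage 1: localise the pin tumbler key in the scene.
    A detector of this kind would typically return a bounding box
    and an instance mask for the key candidate."""
    return {"bbox": (10, 20, 110, 60), "mask": "binary-mask"}

def normalise_pose(detection):
    """Stage 2: warp the detected key into a canonical, upright view
    so later stages see a standardised key silhouette."""
    return {"aligned_crop": "canonical-view", **detection}

def segment_bitting(aligned):
    """Stage 3: segment the bitting profile, i.e. the sequence of cut
    depths along the blade that encodes the key."""
    return {"bitting_depths": [5, 3, 6, 2, 4], **aligned}

def infer_3d_model(segmented):
    """Stage 4: map the recovered bitting code onto a printable 3D
    model, e.g. by parameterising a CAD template of the key blank."""
    return {"model": f"key-model({segmented['bitting_depths']})"}

def deepkey_pipeline(image):
    # End-to-end composition: photograph in, printable model out.
    return infer_3d_model(segment_bitting(normalise_pose(detect_key(image))))
```

In practice each stage would be a learned module rather than a stub, but the interface idea is the same: each step consumes the previous stage's output and adds the information the next stage needs.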


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, SCEEM, University of Bristol, Bristol, UK
