Acquire, Augment, Segment and Enjoy: Weakly Supervised Instance Segmentation of Supermarket Products

  • Patrick Follmann
  • Bertram Drost
  • Tobias Böttger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11269)

Abstract

Grocery stores sell thousands of products that are usually identified by scanning barcodes, with a human in the loop. Automated checkout systems instead need to count and classify the groceries efficiently and robustly. One possibility is to use a deep learning algorithm for instance-aware semantic segmentation. Such methods achieve high accuracy, but require a large amount of annotated training data.

We propose a system to generate the training annotations in a weakly supervised manner, drastically reducing the labeling effort. We assume that for each training image, only the object class is known. The system automatically segments the corresponding object from the background. The obtained training data is augmented to simulate variations similar to those seen in real-world setups.
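The abstract does not spell out how the object is separated from the background. One classical possibility, assuming a roughly uniform background, is automatic thresholding in the spirit of Otsu's method; the following is a minimal, self-contained sketch (the function names `otsu_threshold` and `segment_foreground` are illustrative, not taken from the paper):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the 8-bit threshold that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    # Cumulative class weights and means for every candidate threshold.
    weight_bg = np.cumsum(hist)
    weight_fg = total - weight_bg
    cum_mean = np.cumsum(hist * np.arange(256))
    mean_bg = np.divide(cum_mean, weight_bg,
                        out=np.zeros(256), where=weight_bg > 0)
    mean_fg = np.divide(cum_mean[-1] - cum_mean, weight_fg,
                        out=np.zeros(256), where=weight_fg > 0)
    between_var = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
    return int(np.argmax(between_var))

def segment_foreground(gray: np.ndarray) -> np.ndarray:
    """Binary foreground mask from a single grayscale training image."""
    mask = gray > otsu_threshold(gray)
    # Assume the object is the minority class; invert if it covers most pixels.
    if mask.mean() > 0.5:
        mask = ~mask
    return mask
```

In a real acquisition setup one would additionally clean the mask with morphological operations or refine it with an interactive model such as GrabCut; the sketch only shows the core thresholding idea.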

Our experiments show that with appropriate data augmentation, our approach obtains competitive results compared to a fully-supervised baseline, while drastically reducing the amount of manual labeling.
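The augmentation pipeline is likewise only hinted at. A minimal sketch of one common ingredient, compositing a segmented object at random positions onto a background scene, is shown below; the names `paste_object` and `random_augment` are hypothetical, and a realistic setup would also vary rotation, scale, occlusion, and illumination:

```python
import numpy as np

rng = np.random.default_rng(0)

def paste_object(scene: np.ndarray, obj: np.ndarray, mask: np.ndarray,
                 top: int, left: int) -> np.ndarray:
    """Composite a masked object crop onto a background scene at (top, left)."""
    out = scene.copy()
    h, w = obj.shape[:2]
    region = out[top:top + h, left:left + w]
    region[mask] = obj[mask]  # copy only foreground pixels
    return out

def random_augment(scene: np.ndarray, obj: np.ndarray,
                   mask: np.ndarray, n: int = 4):
    """Generate n synthetic scenes, each with the object at a random position.

    Returns (image, (top, left)) pairs; the placement doubles as the
    instance annotation for training."""
    H, W = scene.shape[:2]
    h, w = obj.shape[:2]
    samples = []
    for _ in range(n):
        top = int(rng.integers(0, H - h + 1))
        left = int(rng.integers(0, W - w + 1))
        samples.append((paste_object(scene, obj, mask, top, left), (top, left)))
    return samples
```

Because the object mask is known from the acquisition step, every pasted instance comes with a pixel-accurate annotation for free, which is what makes the weak supervision scale.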

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. MVTec Software GmbH, Munich, Germany
  2. Technical University of Munich, Munich, Germany