
A framework for augmented reality guidance in industry

  • Jon Zubizarreta
  • Iker Aguinaga
  • Aiert Amundarain
ORIGINAL ARTICLE

Abstract

Nowadays, many companies see augmented reality (AR) as an important tool to provide new services related to their products. However, many challenges remain to be solved before this technology can be widely adopted, such as the development of suitable authoring tools and of real-time, robust algorithms to detect and track the objects to which virtual annotations are anchored in the real world. This paper presents a complete framework, called ARgitu, to generate and present virtual and augmented information, including the tools required for the development of new content. To address object detection and tracking in complex industrial environments, we also propose a new monocular method for 3D non-Lambertian object recognition in arbitrary environments. The method builds on current state-of-the-art chamfer matching approaches and reduces their computational effort while maintaining their accuracy.
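
As a rough illustration of the chamfer matching principle the abstract refers to (a generic sketch, not the ARgitu implementation; the function name chamfer_score and its inputs are hypothetical), the following Python snippet scores a set of projected model edge points against the distance transform of an image edge map. It shows only the basic scoring step, without the directional term or the pose search a full method would require.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_score(edge_map, template_points):
        # edge_map: 2D boolean array, True where the image has an edge pixel.
        # template_points: (N, 2) integer array of (row, col) coordinates of the
        # projected model edges for one candidate pose (hypothetical input).
        # Distance transform: distance from every pixel to the nearest edge pixel.
        dist = distance_transform_edt(~edge_map)
        rows = np.clip(template_points[:, 0], 0, edge_map.shape[0] - 1)
        cols = np.clip(template_points[:, 1], 0, edge_map.shape[1] - 1)
        # Chamfer score: mean distance from template points to image edges (lower is better).
        return dist[rows, cols].mean()

    # Example: a synthetic 100x100 edge map with a vertical edge at column 50,
    # scored against a template lying exactly on that edge (score ~ 0).
    edges = np.zeros((100, 100), dtype=bool)
    edges[:, 50] = True
    template = np.stack([np.arange(100), np.full(100, 50)], axis=1)
    print(chamfer_score(edges, template))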

Keywords

Manufacturing · Maintenance and repair · Augmented reality · Authoring · Object recognition


Notes

Acknowledgments

We would like to thank Ekin S. Coop, especially Uraitz Ormaza, Leire Carazo, Kepa Huerta, and Ruben Sordo, for their help and support during the development of this project. We would also like to thank the authors of the EDCircles algorithm for providing us access to their implementation.

Funding information

This research has been supported by the Department of Education, Language Policy and Culture of the Basque Government under the Predoctoral Program and by the ARgitu Hazitek project.

References

  1. Akinlar C, Topal C (2013) EDCircles: A real-time circle detector with a false detection control. Pattern Recogn 46(3):725–740. https://doi.org/10.1016/j.patcog.2012.09.020
  2. Ayad MS, Lee J, Deguet A, Burdette EC, Prince JL (2010) C-arm pose estimation using a set of coplanar ellipses in correspondence. In: 2010 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2010 - Proceedings, pp 1401–1404. https://doi.org/10.1109/ISBI.2010.5490260
  3. Bay H, Ferrari V, Van Gool L (2005) Wide-baseline stereo matching with line segments. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 1:329–336. https://doi.org/10.1109/CVPR.2005.375
  4. Besl PJ, McKay ND (1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14(2):239–256. https://doi.org/10.1109/34.121791
  5. Choi C, Christensen HI (2012) 3D textureless object detection and tracking: an edge-based approach. In: IEEE International Conference on Intelligent Robots and Systems, pp 3877–3884. https://doi.org/10.1109/IROS.2012.6386065
  6. Costa MS, Shapiro LG (2000) 3D object recognition and pose with relational indexing. Comput Vis Image Underst 79(3):364–407. https://doi.org/10.1006/cviu.2000.0865
  7. Damen D, Bunnun P, Calway A, Mayol-Cuevas W (2012) Real-time learning and detection of 3D texture-less objects: a scalable approach. In: Proceedings of the British Machine Vision Conference 2012, pp 23.1–23.12. https://doi.org/10.5244/C.26.23
  8. De Ma S (1993) Conics-based stereo, motion estimation, and pose determination. Int J Comput Vis 10(1):7–25. https://doi.org/10.1007/BF01440844
  9. Douglas DH, Peucker TK (2011) Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Classics in Cartography: Reflections on Influential Articles from Cartographica 10:15–28. https://doi.org/10.1002/9780470669488.ch2
  10. Drummond T, Cipolla R (2002) Real-time visual tracking of complex structures. IEEE Trans Pattern Anal Mach Intell 24(7):932–946. https://doi.org/10.1109/TPAMI.2002.1017620
  11. Ellis T, Abbood A, Brillault B (1992) Ellipse detection and matching with uncertainty. Image Vis Comput 10(5):271–276. https://doi.org/10.1016/0262-8856(92)90041-Z
  12. Fiorentino M, Uva AE, Gattullo M, Debernardis S, Monno G (2014) Augmented reality on large screen for interactive maintenance instructions. Comput Ind 65(2):270–278. https://doi.org/10.1016/j.compind.2013.11.004
  13. Fite-Georgel P (2011) Is there a reality in industrial augmented reality? In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp 201–210. https://doi.org/10.1109/ISMAR.2011.6092387
  14. Garon M, Lalonde JF (2017) Deep 6-DOF tracking. IEEE Trans Vis Comput Graph 23(11):2410–2418. https://doi.org/10.1109/TVCG.2017.2734599
  15. Hanson R, Falkenström W, Miettinen M (2017) Augmented reality as a means of conveying picking information in kit preparation for mixed-model assembly. Comput Ind Eng 113:570–575. https://doi.org/10.1016/j.cie.2017.09.048
  16. Hinterstoisser S, Cagniart C, Ilic S, Sturm P, Navab N, Fua P, Lepetit V (2012) Gradient response maps for real-time detection of textureless objects. IEEE Trans Pattern Anal Mach Intell 34(5):876–888. https://doi.org/10.1109/TPAMI.2011.206
  17. Hinterstoisser S, Lepetit V, Ilic S, Fua P, Navab N (2010) Dominant orientation templates for real-time detection of texture-less objects. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp 2257–2264. https://doi.org/10.1109/CVPR.2010.5539908
  18. Imperoli M, Pretto A (2015) D2CO: Fast and robust registration of 3D textureless objects using the directional chamfer distance. Lecture Notes in Computer Science 9163:316–328. https://doi.org/10.1007/978-3-319-20904-3_29
  19. Klein G, Murray D (2007) Parallel tracking and mapping for small AR workspaces. In: ISMAR - IEEE International Symposium on Mixed and Augmented Reality
  20. Li Y, Wang G, Ji X, Xiang Y, Fox D (2018) DeepIM: Deep iterative matching for 6D pose estimation. In: European Conference on Computer Vision (ECCV). https://doi.org/10.1007/978-3-030-01231-1_42
  21. Liu MY, Tuzel O, Veeraraghavan A, Chellappa R (2010) Fast directional chamfer matching. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp 1696–1703. https://doi.org/10.1109/CVPR.2010.5539837
  22. Liu MY, Tuzel O, Veeraraghavan A, Taguchi Y, Marks TK, Chellappa R (2012) Fast object localization and pose estimation in heavy clutter for robotic bin picking. Int J Robot Res 31(8):951–973. https://doi.org/10.1177/0278364911436018
  23. Liu Z, Marlet R (2012) Virtual line descriptor and semi-local graph matching method for reliable feature correspondence. In: Proceedings of the British Machine Vision Conference 2012, pp 16.1–16.11. https://doi.org/10.5244/C.26.16
  24. Wang L, Neumann U, You S (2009) Wide-baseline image matching using line signatures. In: 2009 IEEE 12th International Conference on Computer Vision (ICCV), pp 1311–1318. https://doi.org/10.1109/ICCV.2009.5459316
  25. Makris S, Karagiannis P, Koukas S, Matthaiakis AS (2016) Augmented reality system for operator support in human–robot collaborative assembly. CIRP Ann Manuf Technol 65(1):61–64. https://doi.org/10.1016/j.cirp.2016.04.038
  26. Micusik B, Wildenauer H (2015) Descriptor free visual indoor localization with line segments. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp 3165–3173. https://doi.org/10.1109/CVPR.2015.7298936
  27. Mur-Artal R, Montiel JM, Tardos JD (2015) ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans Robot 31(5):1147–1163. https://doi.org/10.1109/TRO.2015.2463671
  28. Palmarini R, Erkoyuncu JA, Roy R, Torabmostaedi H (2018) A systematic review of augmented reality applications in maintenance. Robot Comput Integr Manuf 49:215–228. https://doi.org/10.1016/j.rcim.2017.06.002
  29. Peng X (2015) Combine color and shape in real-time detection of texture-less objects. Comput Vis Image Underst 135:31–48. https://doi.org/10.1016/j.cviu.2015.02.010
  30. Pillai S, Leonard J (2015) Monocular SLAM supported object recognition. In: Robotics: Science and Systems (RSS), pp 34–42. https://doi.org/10.15607/RSS.2015.XI.034
  31. Platonov J, Heibel H, Meier P, Grollmann B (2007) A mobile markerless AR system for maintenance and repair. In: Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, pp 105–108. https://doi.org/10.1109/ISMAR.2006.297800
  32. Ragni M, Perini M, Setti A, Bosetti P (2018) ARTool Zero: Programming trajectory of touching probes using augmented reality. Comput Ind Eng 124:462–473. https://doi.org/10.1016/j.cie.2018.07.026
  33. Rambach J, Pagani A, Stricker D (2017) Augmented things: enhancing AR applications leveraging the internet of things and universal 3D object tracking. In: Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct 2017), pp 103–108. https://doi.org/10.1109/ISMAR-Adjunct.2017.42
  34. Schops T, Engel J, Cremers D (2014) Semi-dense visual odometry for AR on a smartphone. In: ISMAR 2014 - IEEE International Symposium on Mixed and Augmented Reality - Science and Technology, Proceedings, pp 145–150. https://doi.org/10.1109/ISMAR.2014.6948420
  35. Tekin B, Sinha SN, Fua P (2017) Real-time seamless single shot 6D object pose prediction. In: Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR.2018.00038
  36. Tombari F, Franchi A, Di Stefano L (2013) BOLD features to detect texture-less objects. In: Proceedings of the IEEE International Conference on Computer Vision, pp 1265–1272. https://doi.org/10.1109/ICCV.2013.160
  37. Usabiaga J, Erol A, Bebis G, Boyle R, Twombly X (2009) Global hand pose estimation by multiple camera ellipse tracking. Mach Vis Appl 21(1):1–15. https://doi.org/10.1007/s00138-008-0137-z
  38. Verhagen B, Timofte R, Van Gool L (2014) Scale-invariant line descriptors for wide baseline matching. In: 2014 IEEE Winter Conference on Applications of Computer Vision (WACV 2014), pp 493–500. https://doi.org/10.1109/WACV.2014.6836061
  39. Wang G, Wu J, Ji Z (2008) Single view based pose estimation from circle or parallel lines. Pattern Recogn Lett 29(7):977–985. https://doi.org/10.1016/j.patrec.2008.01.017
  40. Wang Y, Zhang S, Wan B, He W, Bai X (2018) Point cloud and visual feature-based tracking method for an augmented reality-aided mechanical assembly system. Int J Adv Manuf Technol. https://doi.org/10.1007/s00170-018-2575-8
  41. Wang Y, Zhang S, Yang S, He W, Bai X, Zeng Y (2017) A LINE-MOD-based markerless tracking approach for AR applications. Int J Adv Manuf Technol 89(5-8):1699–1707. https://doi.org/10.1007/s00170-016-9180-5
  42. Wang Z, Liu H, Wu F (2009) HLD: A robust descriptor for line matching. In: Proceedings - 2009 11th IEEE International Conference on Computer-Aided Design and Computer Graphics (CAD/Graphics 2009), pp 128–133. https://doi.org/10.1109/CADCG.2009.5246918
  43. Xiang Y, Schmidt T, Narayanan V, Fox D (2017) PoseCNN: a convolutional neural network for 6D object pose estimation in cluttered scenes. In: Robotics: Science and Systems (RSS). https://doi.org/10.15607/RSS.2018.XIV.019
  44. Zhang L, Koch R (2013) An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J Vis Commun Image Represent 24(7):794–805. https://doi.org/10.1016/j.jvcir.2013.05.006
  45. Zhao C, Zhao H, Lv J, Sun S, Li B (2016) Multimodal image matching based on multimodality robust line segment descriptor. Neurocomputing 177:290–303. https://doi.org/10.1016/j.neucom.2015.11.025
  46. Zhu J, Ong SK, Nee AYC (2012) An authorable context-aware augmented reality system to assist the maintenance technicians. Int J Adv Manuf Technol
  47. Zhu Z, Branzoi V, Wolverton M, Murray G, Vitovitch N, Yarnall L, Acharya G, Samarasekera S, Kumar R (2014) AR-Mentor: augmented reality based mentoring system. In: ISMAR 2014 - IEEE International Symposium on Mixed and Augmented Reality - Science and Technology, Proceedings, pp 17–22. https://doi.org/10.1109/ISMAR.2014.6948404

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Ceit, San Sebastian, Spain
  2. Universidad de Navarra, Tecnun, San Sebastian, Spain
