Encyclopedia of Computer Graphics and Games

Living Edition
Editors: Newton Lee

Augmented Reality for Human-Robot Interaction in Industry

  • Federico Manuri
  • Francesco De Pace
  • Andrea Sanna
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-08234-9_329-1

Definitions

Augmented reality for human-robot interaction in industry is the use of augmented reality technologies to display computer-generated digital content, correctly aligned with real objects, in order to enhance and enrich the communication interface of users who operate robots in an industrial environment.

Introduction

The industry domain has taken advantage of augmented reality (AR) since its origin (Sutherland 1968): technicians are often involved in complex assembly, repair, or maintenance procedures, and they need to refer to instruction manuals to complete their tasks (Caudell and Mizell 1992). However, these activities often require a high cognitive load due to a continuous switch of attention between the physical device and the paper manual (Henderson and Feiner 2007). AR can efficiently overcome this issue by providing the same content in digital form, properly displayed on the physical object involved in the task.

This entry discusses the AR technologies adopted to visualize digital content in industry, the different kinds of robots involved, and the most common tasks that benefit from AR content.

Technologies

Among human sensory inputs, sight, hearing, and touch are currently the senses that an AR system can enhance through digital content. However, since industrial environments may impose different constraints depending on the task, such as the need for anti-noise headphones or protective gloves, the AR content is usually provided visually. Different devices can be adopted to display the AR content, depending on the task and the environment. Overall, an AR system is characterized by three blocks: a tracking system, a content generator, and a combiner.
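
As a rough structural sketch, the three blocks can be seen as stages of a per-frame loop. The Python skeleton below is illustrative only: all class and method names are hypothetical, not a real AR framework.

```python
# Illustrative skeleton of the three AR blocks; all names are hypothetical.

class TrackingSystem:
    def estimate_pose(self, frame):
        """Establish the reference system for the current frame."""
        raise NotImplementedError

class ContentGenerator:
    def render(self, pose):
        """Compute the graphical content for the estimated pose."""
        raise NotImplementedError

class Combiner:
    def compose(self, frame, content):
        """Overlay the virtual content on the user's view."""
        raise NotImplementedError

def process_frame(frame, tracker, generator, combiner):
    pose = tracker.estimate_pose(frame)
    content = generator.render(pose)
    return combiner.compose(frame, content)
```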

Tracking System

The tracking system has the task of establishing an absolute reference system based on some features of the environment: this reference system is fundamental to properly display the augmented reality content with respect to the user's view and the real-world objects. The tracking system is called object dependent if the reference system depends on the position of a real object: this kind of system adopts computer vision algorithms to identify the object in frames of the environment provided by a digital camera. Other, less common solutions consist of inertial, mechanical, and magnetic tracking systems or involve the usage of absolute references such as GPS coordinates (Foxlin 2002).
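
As a concrete illustration, a minimal object-dependent tracker can be built around a printed fiducial marker with OpenCV's ArUco module (the detectMarkers call shown here is the pre-4.7 opencv-contrib API); camera_matrix and dist_coeffs are assumed to come from a prior camera calibration.

```python
import cv2
import numpy as np

# Sketch of marker-based, object-dependent tracking (OpenCV ArUco, pre-4.7 API).
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def track(frame, camera_matrix, dist_coeffs, marker_length=0.05):
    corners, ids, _ = cv2.aruco.detectMarkers(frame, DICTIONARY)
    if ids is None:
        return None  # object not in view: no reference system for this frame
    # 3D corners of the marker in its own reference system (metres).
    h = marker_length / 2.0
    object_pts = np.float32([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]])
    # Pose (rotation, translation) of the marker with respect to the camera.
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```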

Content Generator

The content generator has the task of computing the graphical content to be displayed, based on the coordinates provided by the tracking system and the frames provided by the digital camera.
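
Continuing the tracking sketch above, a minimal content generator can project virtual geometry into the camera frame with cv2.projectPoints, a standard OpenCV call; drawing the reference-system axes stands in for a full 3D renderer.

```python
import cv2
import numpy as np

def draw_axes(frame, rvec, tvec, camera_matrix, dist_coeffs, length=0.05):
    # Project the origin and the tips of the x, y, z axes into the image.
    axes = np.float32([[0, 0, 0], [length, 0, 0],
                       [0, length, 0], [0, 0, length]])
    pts, _ = cv2.projectPoints(axes, rvec, tvec, camera_matrix, dist_coeffs)
    pts = pts.reshape(-1, 2)
    origin = tuple(int(v) for v in pts[0])
    for tip, color in zip(pts[1:], [(0, 0, 255), (0, 255, 0), (255, 0, 0)]):
        cv2.line(frame, origin, tuple(int(v) for v in tip), color, 2)  # BGR
    return frame
```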

Combiner

The combiner has the task of overlaying the virtual assets on the user's view. It acts in different ways according to the AR paradigm used.

Optical see-through devices blend physical and virtual objects holographically using transparent mirrors and lenses (Fig. 1). These devices are wearable and, most of the time, also mobile. Moreover, since they can be worn like glasses, the user's hands remain free, which is usually mandatory in industrial tasks. One disadvantage of this kind of device is the limited field of view, which may cause clipping of virtual images at the edges of the mirrors or lenses. Moreover, it is difficult to occlude a real object, because its light is always combined with the virtual image due to the properties of the lenses. Finally, additional devices (e.g., a physical trackpad, joypad, or keyboard) must be added to the system to provide a proper interaction interface.
Fig. 1 Optical see-through AR system

Handheld technologies include all the devices that display both the physical world, captured through a camera, and the AR content on a screen (Fig. 2). These devices are usually mobile, e.g., smartphones and tablets, since the user needs to move them freely in order to frame the point of interest in the environment and experience the AR content. The most common disadvantages of handheld devices are two: first, one or both hands are needed to hold the device, making them unavailable to perform tasks; second, the user may experience disorientation due to the parallax effect caused by the offset between the camera position and the viewer's true eye location.
Fig. 2 Handheld AR system

Projective devices display AR content directly on physical objects (Fig. 3). They do not require special eyewear and accommodate the user's eyes during focusing. Moreover, they can cover large surfaces, providing a wide field of view. In the industrial domain, these devices are usually adopted to display AR content on large industrial robots, such as robotic arms. Projection surfaces may vary from flat, plainly colored walls to complex scale models. The main limitation of this technology is that the AR content is perceived as two-dimensional instead of three-dimensional. Other common disadvantages include occlusion and the need for additional input devices for interaction. Moreover, projectors need to be recalibrated each time the environment or the distance to the projection surface changes.
Fig. 3 Projective AR system

Robots

AR technologies are usually applied to human-robot interaction (HRI) with two categories of industrial robots: robotic arm manipulators and automated guided vehicles (AGVs).

A robotic arm manipulator is defined as an n-degree-of-freedom (nDoF) arm robot. It is composed of links connected by joints, which are controlled using either DC or brushless electric motors. Joint positions are sampled by means of encoders, and joint velocities are measured with tachometers. Joints are divided into two categories: revolute and prismatic. Revolute joints allow rotations about one local axis (usually the z-axis), whereas prismatic joints allow translations along one local axis (usually the z-axis). Robotic arms are also equipped with different custom tools depending on the task to accomplish: these tools are positioned at the end of the kinematic chain and are called end-effectors.
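
The pose of the end-effector follows from the joint values through forward kinematics. The sketch below uses the standard Denavit-Hartenberg (DH) convention; the two-link planar arm in the example is illustrative, not a specific commercial manipulator.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform between two consecutive links (DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, dh_table):
    """q: joint variables. A revolute joint adds its variable to theta;
    a prismatic joint adds it to d (translation along the local z-axis)."""
    T = np.eye(4)
    for qi, (kind, theta, d, a, alpha) in zip(q, dh_table):
        if kind == "revolute":
            theta += qi
        else:  # prismatic
            d += qi
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Example: planar arm with two revolute joints and 0.5 m / 0.3 m links.
dh_table = [("revolute", 0.0, 0.0, 0.5, 0.0),
            ("revolute", 0.0, 0.0, 0.3, 0.0)]
print(forward_kinematics([np.pi / 4, -np.pi / 6], dh_table)[:3, 3])
```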

AGVs are vehicles that can move along predefined paths and in predetermined directions automatically and autonomously, without human intervention. They are usually equipped with sensors that allow them to identify and possibly avoid obstacles along their path. Their primary task is to transport equipment around a manufacturing facility. AGVs rely on automatic guidance, either electromagnetic or optical; they can follow a predefined path through visual analysis of a familiar environment, or they can use a vision system to determine their location in an unknown environment.

Tasks

Depending on the type of robot considered and its task, various types of AR systems are adopted, and different AR contents are provided to the user. When used with robotic arm manipulators, technicians can benefit from AR systems to visualize one or more of the following features: end-effector path, end-effector direction, object(s) manipulated or involved in the task, workspace involved in the robot task, forces applied by the end-effector, and faults that occur on the industrial manipulator. When used with AGVs, the features visualized through AR usually consist of the path and workspace of the robot. In the following, each feature will be introduced and explained.

Path

Depending on the types of paths and trajectories, different AR systems can be adopted. Trajectories can be divided into 2D paths and 3D paths. The former are usually visualized on 2D areas using projectors mounted directly on the industrial manipulator or on appropriate supports placed near the robotic arm. They normally consist of one or more connected lines of the same color. The 3D paths are visualized in the real environment by means of wearable or handheld devices. It is possible to interact with both types of path using specific tracked devices: this allows users to change the trajectories of the industrial manipulator by interacting with the augmented reality paths.
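
As an illustration, a projected 2D path of this kind reduces to a single-color polyline rendered into the image sent to the projector; in this sketch the waypoints are assumed to be already expressed in projector pixel coordinates.

```python
import cv2
import numpy as np

def render_path(waypoints, size=(720, 1280), color=(0, 255, 255)):
    """Draw connected path segments of one color on a black projector image."""
    canvas = np.zeros((size[0], size[1], 3), dtype=np.uint8)
    pts = np.array(waypoints, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(canvas, [pts], isClosed=False, color=color, thickness=4)
    return canvas

frame = render_path([(100, 600), (400, 500), (800, 520), (1100, 300)])
```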

When displaying the path of an AGV, since these robots move freely around the environment, the AR projection system is commonly placed directly on the mobile robot. Thus, workers can operate without wearing ad hoc AR devices. AGVs are equipped with projectors that visualize the AGV's intentions directly on the floor of the facility. Projectors are mounted on the AGVs at different heights; therefore, the projected area varies. The projected data can represent the future path that the AGV is going to follow, by means of arrows (Fig. 4) and lines, or only the space it will occupy (Matsumaru 2006; Coovert et al. 2014; Chadalavada et al. 2015).
Fig. 4 Example of AR contents used to display the AGV's path (yellow arrows) and the area that it will occupy (in blue). (Reproduced from RoboCup 2013 (CC BY 2.0))
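
To draw such intentions on the floor, the AGV must map ground coordinates to projector pixels. A homography estimated from four floor-to-pixel correspondences is a common minimal approach; the calibration values below are illustrative assumptions.

```python
import cv2
import numpy as np

# Four known correspondences between floor coordinates (metres, in the
# AGV frame) and projector pixels, e.g., measured once during setup.
floor_pts = np.float32([[0, 0], [2, 0], [2, 3], [0, 3]])
pixel_pts = np.float32([[80, 700], [1200, 690], [1150, 60], [120, 75]])
H = cv2.getPerspectiveTransform(floor_pts, pixel_pts)

def floor_to_pixels(points_m):
    """Map floor points (metres) to projector pixel coordinates."""
    pts = np.float32(points_m).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# The AGV's next three waypoints, one metre apart, straight ahead.
print(floor_to_pixels([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]]))
```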

Direction

Direction features help the operator understand, in real time, the direction of the end-effector. These features are commonly represented by 3D virtual arrows placed at the position of the end-effector (Fig. 5), and they are visualized in the real environment using wearable or handheld devices (Michalos et al. 2016).
Fig. 5 Example of AR contents used to display the object to be manipulated (in green), the direction of the end-effector (in blue), and the forces on it (three axes on the end-effector). (Reproduced from RoboCup 2013 (CC BY 2.0))

Object Manipulated

Objects that are going to be manipulated by the robotic arm can be highlighted using both 2D and 3D features. The 2D features are commonly represented by icons or planar geometric shapes projected directly onto the object (Fig. 5). The projectors are mounted directly on the industrial manipulator or on appropriate supports placed near the robotic arm. The 3D features are commonly represented by 3D virtual replicas of the real objects, superimposed on the objects manipulated by the robotic arm (Akan and Çürüklü 2010).

Workspace

It is possible to identify two different workspaces: the first one is the workspace of a robotic arm manipulator, defined as the set of all the positions it can reach; the second one is the "collaborative workspace," defined as the working area in which the human operator and the industrial manipulator work together.

The robot workspace is commonly visualized using handheld or wearable devices. The operating area of the industrial manipulator can be represented as a 3D sphere centered at the base of the manipulator. The diameter of the sphere can also vary, depending on the movements of the end-effector.

The second workspace can be visualized using optical see-through (Makris et al. 2017) or projected AR systems (Vogel et al. 2011). Projectors are placed near the robotic arm, in an elevated position, and the "collaborative workspace" is projected directly on the floor. Optical see-through systems are usually composed of cameras placed at the corners of the environment. Depending on the distance of the user from the robot, at least two different areas are projected: the furthest area, considered the safest operating area for the human worker and commonly colored green, and the closest area, considered the most dangerous operating area and commonly colored red. When the human worker operates in the furthest area, the robotic arm works normally. On the other hand, when the human worker is in the closest area, the robotic arm stops or slows down its motion in order to avoid any possible damage (Fig. 6).
Fig. 6 Example of AR contents used to display the workspace of the robotic arm, with different colors depending on the user's safety based on distance, arrows displaying the direction of movement (in green), and an error state for one joint (in red). (Reproduced from Humanrobo 2009 (CC BY-SA 3.0))
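
The zone logic itself is simple. A minimal sketch of the two-area policy described above follows; the radius value is an illustrative assumption, not a figure from a safety standard.

```python
SAFE_RADIUS = 2.0  # m: assumed boundary between the two projected zones

def safety_state(operator_distance: float):
    """Return the zone color to project and a speed multiplier for the arm."""
    if operator_distance >= SAFE_RADIUS:
        return "green", 1.0  # furthest area: the robot works normally
    return "red", 0.0        # closest area: stop (or slow down) the robot
```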

Forces

Forces applied by the end-effector can be monitored and visualized using wearable or handheld devices. Forces can be represented using 3D virtual vectors applied at the tool center point (Fig. 5). The force components along X, Y, and Z are displayed, as well as the resulting vector. Furthermore, the components are rendered in different colors, depending on the intensity of the force (Mateo et al. 2014).
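
A minimal sketch of this display logic: decompose the measured force into its X, Y, and Z components plus the resultant, and map each intensity to a color. The green-to-red ramp and the 50 N display range are illustrative assumptions.

```python
import numpy as np

def force_colors(force, max_force=50.0):
    """Map each force component and the resultant to an RGB color,
    ramping from green (weak) to red (strong)."""
    f = np.asarray(force, dtype=float)
    vectors = {"X": f[0], "Y": f[1], "Z": f[2],
               "resultant": np.linalg.norm(f)}
    colors = {}
    for name, intensity in vectors.items():
        t = min(abs(intensity) / max_force, 1.0)
        colors[name] = (int(255 * t), int(255 * (1 - t)), 0)
    return colors

print(force_colors([12.0, -3.5, 30.0]))
```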

Faults

When a fault occurs on an industrial manipulator that is working side by side with a human operator, stress and anxiety may increase in the worker, who is not able to identify the cause of the error in real time. Moreover, since the cause of the fault cannot be identified immediately, the time and resources required to solve it increase considerably.

There are at least four different categories of faults that can be visualized using AR technologies: faults of the velocity sensors, faults of the actuation system, faults due to overloading problems, and faults caused by collisions (De Pace et al. 2018). These types of errors can be visualized using both handheld and wearable devices. Each fault is represented using a specific 3D asset, superimposed on the error's location. Faults of the velocity sensors are represented using 3D circular arrows placed near the robotic arm's joints. These arrows rotate as long as the velocity sensor is acquiring correct data from the motor encoder (Fig. 6, green arrow). When a fault occurs on the velocity sensor, the arrows stop moving and change their color to red to highlight the problem (Fig. 6, red arrow). Faults of the actuation system can affect a joint's motor: if an error occurs on it, a 3D model representing the motor is superimposed on the real motor and starts blinking to emphasize the location of the fault. Faults due to collisions can be represented using a 3D sphere centered at the base of the industrial manipulator. A collaborative manipulator is able to foresee collisions, and when it detects an unexpected object, it stops its movements to avoid the collision. Human operators may not understand why the manipulator has stopped, misinterpreting its actions. To avoid these misjudgments, when the manipulator foresees a collision, the 3D sphere starts blinking to highlight the intentions of the industrial robotic arm. Finally, errors due to overloading problems suddenly stop the movements of the manipulator. When this type of fault occurs, a 3D anvil, along with a warning sign, is superimposed on the payload.
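
In code, such a visualization layer amounts to a mapping from fault categories to assets and behaviors. The sketch below paraphrases the four categories above; the asset and behavior labels are illustrative.

```python
from enum import Enum, auto

class Fault(Enum):
    VELOCITY_SENSOR = auto()
    ACTUATION = auto()
    OVERLOAD = auto()
    COLLISION = auto()

# Fault category -> (3D asset superimposed at the error location, behavior).
FAULT_ASSETS = {
    Fault.VELOCITY_SENSOR: ("circular arrow at the joint", "stop and turn red"),
    Fault.ACTUATION: ("3D motor model on the real motor", "blink"),
    Fault.OVERLOAD: ("3D anvil and warning sign on the payload", "show"),
    Fault.COLLISION: ("3D sphere at the manipulator base", "blink"),
}

def visualize_fault(fault: Fault) -> str:
    asset, behavior = FAULT_ASSETS[fault]
    return f"Superimpose {asset}; behavior: {behavior}"

print(visualize_fault(Fault.OVERLOAD))
```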

References

  1. Akan, B., Çürüklü, B.: Augmented reality meets industry: interactive robot programming. In: Proceedings of SIGRAD, vol. 52, pp. 55–58. Linköping University Electronic Press, Västerås (2010)
  2. Caudell, T.P., Mizell, D.W.: Augmented reality: an application of heads-up display technology to manual manufacturing processes. In: Proceedings of HICSS, vol. 2, pp. 659–669. IEEE, Kauai (1992)
  3. Chadalavada, R.T., Andreasson, H., Krug, R., Lilienthal, A.J.: That's on my mind! Robot to human intention communication through on-board projection on shared floor space. In: Proceedings of ECMR, pp. 1–6. IEEE, Lincoln (2015)
  4. Coovert, M.D., Lee, T., Shindev, I., Sun, Y.: Spatial augmented reality as a method for a mobile robot to communicate intended movement. Comput. Hum. Behav. 34, 241–248 (2014)
  5. De Pace, F., Manuri, F., Sanna, A., Zappia, D.: An augmented interface to display industrial robot faults. In: Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, vol. 2, pp. 403–421. Springer (2018)
  6. Foxlin, E.: Motion tracking requirements and technologies. In: Handbook of Virtual Environment Technology, vol. 8, pp. 163–210. Lawrence Erlbaum Associates, Mahwah (2002)
  7. Henderson, S.J., Feiner, S.K.: Augmented reality for maintenance and repair (ARMAR). Technical report, DTIC document (2007)
  8. Humanrobo: TOSY Industrial Robot: Arm Robot. https://commons.wikimedia.org/wiki/File:TI_A620-30.JPG CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en (2009)
  9. Makris, S., Tsarouchi, P., Matthaiakis, A.S., Athanasatos, A., Chatzigeorgiou, X., Stefos, M., …, Aivaliotis, S.: Dual arm robot in cooperation with humans for flexible assembly. CIRP Ann. 66(1), 13–16 (2017)
  10. Mateo, C., Brunete, A., Gambao, E., Hernando, M.: Hammer: an Android-based application for end-user industrial robot programming. In: Proceedings of MESA, pp. 1–6. IEEE, Senigallia (2014)
  11. Matsumaru, T.: Mobile robot with preliminary-announcement and display function of forthcoming motion using projection equipment. In: Proceedings of RO-MAN, pp. 443–450. IEEE, Hatfield (2006)
  12. Michalos, G., Karagiannis, P., Makris, S., Tokçalar, Ö., Chryssolouris, G.: Augmented reality (AR) applications for supporting human-robot interactive cooperation. Procedia CIRP 41, 370–375 (2016)
  13. RoboCup: BvOF RoboCup2013 – Junior Rescue. https://www.flickr.com/photos/robocup2013/9154255582/in/photostream/ CC BY 2.0 https://creativecommons.org/licenses/by/2.0/ (2013)
  14. Sutherland, I.E.: A head-mounted three dimensional display. In: Proceedings of AFIPS, pp. 757–764. ACM, San Francisco (1968)
  15. Vogel, C., Poggendorf, M., Walter, C., Elkmann, N.: Towards safe physical human-robot collaboration: a projection-based safety system. In: Proceedings of IROS, pp. 3355–3360. IEEE (2011)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Federico Manuri (1)
  • Francesco De Pace (1)
  • Andrea Sanna (1)

  1. Dipartimento di Automatica e Informatica, Politecnico di Torino, Turin, Italy