Abstract
Much research is now focusing on how technology is moving away from the traditional computer to a range of smart devices in smart environments, the so-called Internet of Things. With this increase in computing power and decrease in form factor, we are approaching the possibility of a new generation of robotic assistants able to perform a range of tasks and activities to support all kinds of users. However, history shows that unless care is taken early in the design process, the users who may stand to benefit the most from such assistance may inadvertently be excluded from it. This paper examines some of those historical missteps and explores possible ways forward to ensure that next-generation robots support the principles of universal access.
1 Introduction
Technology is moving on apace. Computers have shrunk from being the size of a truck to a credit card in the form of the Raspberry Pi. Computing power has increased simultaneously, following the famous Moore’s Law up until very recently [1]. At the same time, available communication bandwidth has increased substantially with the advent of new communication channels, such as 3G and 4G, offering new opportunities for assistive and/or healthcare applications [2, 3].
Historically, new technologies follow a typical path of development. In the early stages, the focus is on developing the new technology, overcoming the engineering challenges to make something that works [4]. The aim is to develop something that offers an increased level of functionality or something innovative. Users typically get overlooked in this early stage of development [5]. The usual outcome is a product that works best for users who are most like the designer. Those who are notably different, such as those who would benefit most from a universal access-based approach, usually do not fare so well.
Even where products have been developed specifically for users with significant functional impairments, there is no guarantee of a successful outcome. For example, in the 1990s, the EU funded a number of programmes through its TIDE (Telematics for the Integration of Disabled and Elderly people) initiative. Approximately $150 m was invested in this space, looking at the development of solutions ranging from office workstations to wheelchair-mounted robots [6]. However, the success of those robots and others developed under similar initiatives was far from satisfactory [7]. Only the Handy 1 robot arm [8] and the MANUS wheelchair-mounted robot [9] achieved any degree of successful take-up.
2 A Historical Example: The RAID Office Workstation
One example development under the TIDE initiative was the RAID office workstation, shown in Fig. 1. The robot was developed as a project between partners in the UK, Sweden and France.
The robot consisted of a standard RTX robot arm mounted on a gantry so that it could move around a specially prepared office space. A user could approach the desk on the left of the picture and control the robot using the Cambridge University Robotics Language (CURL), software developed specifically for this purpose [10]. The design assumed that the user would want to access books and papers stored on the shelving, and so would use the CURL interface to move the robot arm to pick up the Perspex containers holding them and bring the containers to the desk. The arm would then be used to pick up the contents and place them on the page-turner mounted next to the computer. The user would control the arm through the computer to turn each page so that he or she could read the document.
Only nine units of the robot were produced, one going to each of the research partners. No units were sold commercially. There were several reasons for the workstation's lack of commercial success. First, it was expensive, costing at least $55,000 for the workstation and the robot alone. Second, it needed a dedicated office, pre-adapted to support the workstation, for example with the shelving. Third, the interface was quite clunky and not easy to tailor or customize. Finally, and this was the biggest weakness, technology moved on: CDs and the Internet became commonplace, reducing the need for pieces of paper to be moved around. Other office workstations developed at the same time, such as DeVar and the Arlyn Arm Workstation, did not fare any better [7].
The Handy 1 and MANUS robots did perform respectably well. Handy 1 was created by a small British start-up company with a view to being launched as a commercial product. It consisted of a robot arm mounted on a mobile base, with a simple spoon attached to the arm. The user's food was placed in five segregated sections of a tray and, through a straightforward interface, the user could feed himself or herself. This robot allowed many users to feed themselves independently for the first time in their lives. Thus a real need had been identified and a reasonably cheap solution (c. $6,000) developed. A second variant was introduced allowing users to apply make-up. Approximately 150 units had been sold by 1997 [7].
The MANUS robot was developed in the Netherlands. It was fundamentally a robot arm mounted on the side of a wheelchair. As such, the robot was inherently mobile, albeit with the disadvantage of making the wheelchair notably wider in certain configurations. The cost was significantly more than the Handy 1 ($35,000), but sales were helped by an agreement between the development team and the Netherlands government, which was the largest buyer.
3 A User-Centered Approach to Rehabilitation Robotics
It is not just in the field of robotics where the introduction of new technology has stumbled because of lack of consideration of the needs and capabilities of the users. Early attempts at gesture recognition, for example, focused on the development of the technology rather than evaluating whether the technology actually offered a genuine benefit to the users [11].
There are numerous user-centered design approaches available in the literature. One such approach is the 7-level model, developed from a rehabilitation robotics project called IRVIS – the Interactive Robotic Visual Inspection System. The 7-level model was developed by expanding on a typical engineering design process, such as the following [12]:
- Stage 1 – define the problem – ensure there is a clear understanding of the requirements the product or system needs to meet – for universal access, this will include a statement of who the users are and their needs, wants and aspirations
- Stage 2 – develop a solution – follow a user-centered design approach to create concepts and prototypes – for universal access, this will include consideration of the full range of users, their knowledge, skills and capabilities
- Stage 3 – evaluate the solution – ensure that the finished design meets the specified requirements – for universal access, this will include checking that the finished solution meets the wants, needs and aspirations of all users
To produce a successful universal access design, it is necessary to adopt strongly user-centered design practices. It is important to be able to modify and refine the device and its interface iteratively, combining both the above design steps with usability and accessibility evaluations. These evaluations typically involve measurement against known performance criteria, such as Jakob Nielsen’s heuristic evaluation [13].
Developing a usable product or service interface for a wider range of user capabilities involves understanding the fundamental nature of the interaction. Typical interaction with an interface consists of the user perceiving an output from the product, deciding a course of action and then implementing the response. These steps can be explicitly identified as perception, cognition and motor actions [14] and relate directly to the user’s sensory, cognitive and motor capabilities respectively. Three of Nielsen’s heuristics explicitly address these functions:
- Visibility of system status – the user must be given sufficient feedback to gain a clear understanding of the current state of the complete system;
- Match between system and real world – the system must accurately follow the user's intentions;
- User control and freedom – the user must be given suitably intuitive and versatile controls for clear and succinct communication of intent.
Each of these heuristics effectively addresses the perceptual, cognitive and motor functions of the user. Building on these heuristics, the 7-level approach, shown in Fig. 2, addresses each of the system acceptability goals identified by Nielsen [15].
4 The 7-Level Model and IRVIS
IRVIS (Interactive Robotic Visual Inspection System) was developed to assist in the visual inspection of hybrid microcircuits during manufacture. Such circuits typically undergo up to 50 manual visual inspections to detect faults during manufacture. Each time a circuit is picked up, there is a finite chance of damage being done to it through the action of manually picking it up and manipulating it under a microscope. IRVIS was developed to see if it was possible to inspect the circuits by effectively moving the microscope around the circuits rather than moving the circuits around the microscope. Furthermore, it was considered that, as inspecting the circuits was a fundamentally visual task, someone with unimpaired vision, but perhaps with a motor impairment, may be able to undertake the task. Hence, one of the system requirements was that the robot should be accessible to a user with a motor impairment.
A prototype system was developed, as shown in Fig. 3. It consisted of a high-power CCD camera mounted on a gantry. The tray of microcircuits could be mounted on the robot, and the tray and camera could be moved through five degrees of freedom without the circuits needing to be picked up or handled.
The original interface, shown in Fig. 4, used a variant of the CURL interface developed for the RAID and EPI-RAID workstations. An initial user trial was undertaken, but significant problems were identified and a re-design was required [16]. The account of the re-design is detailed elsewhere [15], so a brief account will be provided here.
4.1 Level 1 – Problem Requirements
The original design requirements, i.e. the basic functionality to be provided, were considered satisfactory. Initially, it was thought that the original user trials failed because the robot was too under-powered and too slow. A counter-position was that the interface was the source of the issues, as the original design team had focused too much on developing the robot and not enough on the UI. The original UI required the users to select each motor in turn to complete an action and enter a numerical value for how far it should rotate. It was felt that this was a very inefficient control method.
4.2 Level 2 – Problem Specification
To resolve the dilemma of whether the robot or the interface was at fault, a series of observation sessions of the manual inspection process was undertaken. These sessions identified a number of key steps common to each manual visual inspection, such as rotation about a point, tilting, translation, zooming and focusing. Under the original interface, each of these actions took multiple steps to complete in a piece-wise fashion. Consequently, it was decided to forego a costly rebuild of the robot and to focus instead on a more user-centered interface design.
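The gap between piece-wise, one-motor-at-a-time control and the task-level actions the inspectors actually performed can be sketched as follows. This is a minimal illustration only: the class and command names are invented, not taken from the IRVIS or CURL software.

```python
class RobotLog:
    """Records each command the user must issue through the interface."""
    def __init__(self):
        self.commands = []

    def move_axis(self, axis, amount):
        self.commands.append((axis, amount))


def rotate_about_point_piecewise(robot, dx, dy, angle):
    # Original UI style: translate to the point, rotate, translate back.
    # Each call below is a separate user interaction, one motor at a time.
    robot.move_axis("x", -dx)
    robot.move_axis("y", -dy)
    robot.move_axis("theta", angle)
    robot.move_axis("x", dx)
    robot.move_axis("y", dy)


def rotate_about_point_task(robot, dx, dy, angle):
    # Redesigned UI style: a single compound action, matching the
    # inspector's intent, that can drive several motors together.
    robot.move_axis("compound", (dx, dy, angle))


piecewise, task = RobotLog(), RobotLog()
rotate_about_point_piecewise(piecewise, 5.0, 3.0, 30.0)
rotate_about_point_task(task, 5.0, 3.0, 30.0)
print(len(piecewise.commands), "vs", len(task.commands))  # 5 vs 1
```

The point of the sketch is simply that one observed inspection action ("rotate about a point") costs five separate interactions under piece-wise control but only one under a task-level command set.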
4.3 Level 3 – Output to the User
To support the user, a virtual model of the robot was developed. A number of views and combinations of views were provided and evaluated to ensure that the users could recognize where they were on a range of circuit layouts and what they were looking at.
4.4 Level 4 – User Mental Model
Having developed an interface layout that afforded sufficient visual feedback to the user, the next step was to add the full functionality of the IRVIS robot to the simulation. The user trials for this stage of the re-design were to ensure that the simulated robot response to user input was consistent with that of the actual hardware. The robot was connected to the computer and the users were initially asked to repeat the same procedure as for Level 3, only this time predicting what the robot would do in response to their actions. Once the users were comfortable controlling the robot, new functionality was added to the interface that replicated the five basic actions that had been seen from the manual inspectors: translation, rotation and so on.
4.5 Level 5 – Input from the User
The final stage of the re-design concentrated on assessing the ease of interaction between the user and the robot, identifying particular aspects of the interface that required modification. The task in the user trials changed from “What will the robot do now?” to “Can you accomplish this goal?” As a result of this level, the final interface design was as shown in Fig. 5.
4.6 Level 6 – Functional Attributes
A series of user evaluation sessions were undertaken with users with a range of moderate to severe motor impairments. All of the users were able to navigate around the circuit tray without difficulty and within the time limit allowed. Likewise, all of the users were able to perform all of the other tasks seen in the manual inspection processes, such as tilting, rotating about a point, etc.
4.7 Level 7 – Social Attributes
Qualitative feedback from all the users was extremely favorable. Each user found the new interface easy and intuitive to use, and all completed the tasks with a minimum of guidance. No user complained that IRVIS's speed of response was too slow. This was an important result, because it had previously been thought that IRVIS was mechanically under-specified. A simple analysis showed why this was so: the original interface only allowed the use of one motor at a time, whereas the new interface allowed potentially all five motors to be used simultaneously. The increased power available to the user significantly improved the overall speed of response.
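That simple analysis can be illustrated numerically. With one motor at a time, a compound move takes the sum of the per-axis move times; with all five motors running simultaneously, it takes only as long as the slowest single axis. The per-axis times below are invented for illustration; only the sum-versus-maximum argument comes from the account above.

```python
# Hypothetical per-axis move times, in seconds (illustrative values only).
axis_move_times = {"x": 2.0, "y": 1.5, "z": 1.0, "tilt": 3.0, "rotate": 2.5}

# One motor at a time: the user waits for each axis move to finish in turn.
sequential = sum(axis_move_times.values())

# All five motors together: the move takes as long as the slowest axis.
simultaneous = max(axis_move_times.values())

print(f"sequential:   {sequential:.1f} s")    # 10.0 s
print(f"simultaneous: {simultaneous:.1f} s")  # 3.0 s
print(f"speed-up:     {sequential / simultaneous:.1f}x")
```

Under these assumed figures, simultaneous control more than triples the effective speed of response without any change to the robot's motors.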
5 Next Generation Robots
The examples given so far in this paper have focused on historical experiences. It is worth looking at how such robotic assistants may develop in the future and what roles they may play, especially in a universal access context. What is clear from the assistive robotic systems from the 1990s is that those designed with a clear purpose and benefit for the users in mind had the most successful take-up, especially the Handy 1. Similarly, the comparatively few examples of commercially successful robots for the home are focused on particular laborious tasks, such as vacuuming or mowing the lawn [17].
Consequently, it is clearly important to consider tasks that are important to users and especially those that support independent living or self-empowerment. Typical areas of life endeavor to consider include [18]:
- Lifelong learning and education
- Workplace
- Real world (i.e. extended activities of daily living)
- Entertainment
- Socialising
It is also important to consider the widest possible range of users [19] and impairment types. A somewhat stereotypical concept of an assistive robot is a robot guide dog for users with visual impairments [20]. However, robots can assist users with a range of other impairments, such as cognitive [21] or communication impairments. Notable progress has been made, for example, in the use of robots to develop communication skills in children with autism [22]. Robotic dogs have also been converted into conversation partners through the use of chatbots [23], as shown in Fig. 6.
Advances in artificial intelligence and natural language processing also offer opportunities for making such robotic systems into genuine communication partners [24]. Furthermore, advances in robotics are helping create a new generation of robots that are very much more anthropomorphic in their appearance and behaviors. One such development is the RoboThespian, shown in Fig. 7 [25, 26].
RoboThespians are capable of simulating human movements from the waist up. They have been designed to emote and come pre-loaded with sample orations from Shakespeare to Terminator. The University of Greenwich has two RoboThespians and uses them for outreach purposes. Their appearance and movement typically evoke a range of responses, from curiosity and amusement to indications of fear and trepidation. We are currently exploring why different people respond to the robot in these ways.
6 Conclusions
Robotic assistants offer a fantastic opportunity to improve the lives of many people, especially those who are getting older or have functional impairments. However, to truly benefit from these opportunities, designers of such robots need to adopt user-centered inclusive design processes to ensure that they meet the needs, wants and aspirations of the users while not putting demands on them that exceed their skills, knowledge and capabilities.
Furthermore, designers of such robots will increase the chances of successful take-up of their products if they focus on supporting tasks and activities that enable independent living, as the Handy 1 did with eating.
References
Bright, P.: Moore’s Law really is dead this time. Ars Technica (2016). https://arstechnica.com/information-technology/2016/02/moores-law-really-is-dead-this-time/
Acharya, D., Kumar, V., Han, H.J.: Performance evaluation of data intensive mobile healthcare test-bed in a 4G environment. In: Proceedings of the 2nd ACM International Workshop on Pervasive Wireless Healthcare (MobileHealth 2012), pp. 21–26. ACM, New York (2012). doi:10.1145/2248341.2248353
Ball, L., Szymkowiak, A., Keates, S., Bradley, D., Brownsell, S.: eHealth and the internet of things. In: Proceedings of the 3rd International Conference on Pervasive Embedded Computing and Communication Systems, pp. 139–142. SCITEPRESS, Barcelona (2013). doi:10.5220/0004336701390142
Keates, S.: A pedagogical example of teaching Universal Access. Int. J. Univ. Access Inf. Soc. 14(1), 97–110 (2015). doi:10.1007/s10209-014-0398-4. Springer
Cooper, A.: The Inmates are Running the Asylum. SAMS Publishing, Indianapolis (1999)
Buhler, C.: Robotics for rehabilitation – A European(?) perspective. In: Proceedings of the 5th International Conference on Rehabilitation Robotics (ICORR 1997), Bath, UK, pp. 5–11 (1997)
Mahoney, R.: Robotic products for rehabilitation: status and strategy. In: Proceedings of the 5th International Conference on Rehabilitation Robotics (ICORR 1997), Bath, UK. pp. 12–17 (1997)
Topping, M.J., Smith, J.K.: The development of Handy 1: a robotic system to assist the severely disabled. Technol. Disabil. 10(2), 95–105 (1999)
Tijsma, H.A., Liefhebber, F., Herder, J.L.: Evaluation of new user interface features for the MANUS robot arm. In: 9th International Conference on Rehabilitation Robotics, ICORR 2005, pp. 258–263. IEEE (2005). doi:10.1109/ICORR.2005.1501097
Dallaway, J.L., Mahoney, R.M., Jackson, R.D., Gosine, R.G.: An interactive robot control environment for rehabilitation applications. Robotica 11(6), 541–551 (1993). doi:10.1017/S0263574700019391
Keates, S., Robinson, P.: Gestures and multimodal input. Behav. Inf. Technol. 18(1), 36–44 (1999). doi:10.1080/014492999119237. Taylor and Francis Ltd.
Blessing, L.T.M., Chakrabati, A., Wallace, K.: A design research methodology. In: Proceedings of International Conference on Engineering Design 1995, Prague, Czech Republic, vol. 1, pp. 50–55 (1995)
Nielsen, J.: Usability Engineering. Morgan Kaufman Publishers, San Francisco (1993)
Card, S.K., Moran, T.P., Newell, A.: The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, Hillsdale (1983)
Keates, S.: Designing for Accessibility: A Business Guide to Countering Design Exclusion. Lawrence Erlbaum Associates/CRC Press, Mahwah (2006)
Keates, S., Clarkson, P.J., Robinson, P.: Designing a usable interface for an interactive robot. In: Proceedings of the 6th International Conference on Rehabilitation Robotics (ICORR 1999), Stanford, CA, pp. 156–162 (1999)
Forlizzi, J., DiSalvo, C.: Service robots in the domestic environment: a study of the Roomba vacuum in the home. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI 2006), pp. 258–265. ACM, New York (2006). doi:10.1145/1121241.1121286
Keates, S., Kozloski, J., Varker, P.: Cognitive impairments, HCI and daily living. In: Stephanidis, C. (ed.) UAHCI 2009. LNCS, vol. 5614, pp. 366–374. Springer, Heidelberg (2009). doi:10.1007/978-3-642-02707-9_42
Keates, S.: Engineering design for mechatronics—a pedagogical perspective. In: Hehenberger, P., Bradley, D. (eds.) Mechatronic Futures, pp. 221–238. Springer, Cham (2016). doi:10.1007/978-3-319-32156-1_14
Galatas, G., McMurrough, C., Mariottini, G.L., Makedon, F.: eyeDog: an assistive-guide robot for the visually impaired. In: Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2011). ACM, New York (2011). doi:10.1145/2141622.2141691
Keates, S., Adams, R., Bodine, C., Czaja, S., Gordon, W., Gregor, P., Hacker, E., Hanson, V., Kemp, J., Laff, M., Lewis, C., Pieper, M., Richards, J., Rose, D., Savidis, A., Schultz, G., Snayd, P., Trewin, S., Varker, P.: Cognitive and learning difficulties and how they affect access to IT systems. Int. J. Univ. Access Inf. Soc. 5(4), 329–339 (2007). doi:10.1007/s10209-006-0058-4. Springer
Robins, B., Dautenhahn, K., Te Boekhorst, R., Billard, A.: Robotic assistants in therapy and education of children with autism: can a small humanoid robot help encourage social interaction skills? Int. J. Univ. Access Inf. Soc. 4(2), 105–120 (2005). doi:10.1007/s10209-005-0116-3. Springer
Keates, S., Bradley, D., Sapeluk, A.: The future of universal access? Merging computing, design and engineering. In: Stephanidis, C., Antona, M. (eds.) UAHCI 2013. LNCS, vol. 8011, pp. 54–63. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39194-1_7
Keates, S., Varker, P., Spowart, F.: Human-machine design considerations in advanced machine-learning systems. IEEE/IBM J. Res. Dev. 55(5), 4:1–4:10 (2011). doi:10.1147/JRD.2011.2163274. IEEE
Hashimoto, T., Kobayashi, H., Polishuk, A., Verner, I.: Elementary science lesson delivered by robot. In: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013), pp. 133–134. IEEE Press, Piscataway (2013)
Engineered Arts: RoboThespian. https://www.engineeredarts.co.uk/robothespian/
© 2017 Springer International Publishing AG
Keates, S., Kyberd, P. (2017). Robotic Assistants for Universal Access. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human–Computer Interaction. Human and Technological Environments. UAHCI 2017. Lecture Notes in Computer Science(), vol 10279. Springer, Cham. https://doi.org/10.1007/978-3-319-58700-4_43
Print ISBN: 978-3-319-58699-1
Online ISBN: 978-3-319-58700-4