1 Introduction

Technology is moving on apace. Computers have shrunk from the size of a truck to the size of a credit card, in the form of the Raspberry Pi. Computing power has increased simultaneously, following the famous Moore’s Law until very recently [1]. At the same time, available communication bandwidth has increased substantially with the advent of new communication channels, such as 3G and 4G, offering new opportunities for assistive and/or healthcare applications [2, 3].

Historically, new technologies follow a typical path of development. In the early stages, the focus is on developing the new technology, overcoming the engineering challenges to make something that works [4]. The aim is to develop something that offers an increased level of functionality or something innovative. Users typically get overlooked in this early stage of development [5]. The usual outcome is a product that works best for users who are most like the designer. Those who are notably different, such as those who would benefit most from a universal access-based approach, usually do not fare so well.

Even where products have been developed specifically for users with significant functional impairments, there is no guarantee of a successful outcome. For example, in the 1990s, the EU funded a number of programmes through its TIDE (Telematics for the Integration of Disabled and Elderly people) initiative. Approximately $150 m was invested in this space, covering the development of solutions ranging from office workstations to wheelchair-mounted robots [6]. However, the success of those robots and of others developed under similar initiatives was far from satisfactory [7]. Only the Handy 1 robot arm [8] and the MANUS wheelchair-mounted robot [9] achieved any degree of successful take-up.

2 A Historical Example: The RAID Office Workstation

One example of a development under the TIDE initiative was the RAID office workstation, shown in Fig. 1. The robot was developed as a collaborative project between partners in the UK, Sweden and France.

Fig. 1. The RAID office workstation consisting of an RTX robot arm mounted on a gantry in a purpose-built office.

The robot consisted of a standard RTX robot arm mounted on a gantry so that it could move around a specially prepared office space. A user could approach the desk on the left of the picture and control the robot using the Cambridge University Robotics Language (CURL), software developed specifically for this purpose [10]. The design assumed that the user would want to access books and papers stored on the shelving, and so would use the CURL interface to move the robot arm to pick up the Perspex containers holding them and bring the containers to the desk. The arm would then be used to pick up the contents and put them on the page-turner mounted next to the computer. The user would control the arm through the computer to turn each page so that he or she could read the document.

Only nine units of the robot were produced, and these went to the research partners; no units were sold commercially. There were several reasons for the workstation’s lack of commercial success. First, it was expensive, costing at least $55,000 for the workstation and robot alone. Second, it needed a dedicated office, pre-adapted to support the workstation, for example with the shelving. Third, the interface was quite clunky and not easy to tailor or customize. Finally, and this was the biggest weakness, technology moved on: CDs and the Internet became commonplace, reducing the need for pieces of paper to be moved around. Other office workstations developed at the same time, such as DeVar and the Arlyn Arm Workstation, did not fare any better [7].

In contrast, the Handy 1 and MANUS robots performed respectably well. Handy 1 was created by a small British start-up company with a view to being launched as a commercial product. It consisted of a robot arm mounted on a mobile base, with a simple spoon attached to the arm. The user’s food was placed in five segregated sections of a tray and, through a straightforward interface, the user could feed themselves. This robot allowed many users to feed themselves independently for the first time in their lives. Thus a real need had been identified and a reasonably cheap solution (c. $6000) developed. A second variant was introduced allowing users to apply make-up. Approximately 150 units had been sold by 1997 [7].

The MANUS robot was developed in the Netherlands. It was fundamentally a robot arm mounted on the side of a wheelchair. As such, the robot was inherently mobile, albeit with the disadvantage of making the wheelchair notably wider in certain configurations. The cost was significantly higher than that of the Handy 1 ($35,000), but sales were helped by an agreement between the development team and the Netherlands government, which was the largest buyer.

3 A User-Centered Approach to Rehabilitation Robotics

It is not just in the field of robotics that the introduction of new technology has stumbled because of a lack of consideration of the needs and capabilities of the users. Early attempts at gesture recognition, for example, focused on developing the technology rather than on evaluating whether it actually offered a genuine benefit to users [11].

There are numerous user-centered design approaches available in the literature. One such approach is the 7-level model, which emerged from a rehabilitation robotics project called IRVIS – the Interactive Robotic Visual Inspection System. The 7-level model was developed by expanding on a typical engineering design process, such as the following [12]:

  • Stage 1 – define the problem – ensure there is a clear understanding of the requirements the product or system needs to meet – for universal access this will include a statement of who the users are and their needs, wants and aspirations

  • Stage 2 – develop a solution – follow a user-centered design approach to create concepts and prototypes – for universal access this will include consideration of the full range of users, their knowledge, skills and capabilities

  • Stage 3 – evaluate the solution – ensure that the finished design meets the specified requirements – for universal access this will include checking to ensure that the finished solution meets the wants, needs and aspirations for all users

To produce a successful universal access design, it is necessary to adopt strongly user-centered design practices. It is important to be able to modify and refine the device and its interface iteratively, combining the above design stages with usability and accessibility evaluations. These evaluations typically involve measurement against known performance criteria, such as Jakob Nielsen’s heuristic evaluation [13].

Developing a usable product or service interface for a wider range of user capabilities involves understanding the fundamental nature of the interaction. Typical interaction with an interface consists of the user perceiving an output from the product, deciding on a course of action and then implementing the response. These steps can be explicitly identified as perception, cognition and motor actions [14] and relate directly to the user’s sensory, cognitive and motor capabilities respectively. Three of Nielsen’s heuristics explicitly address these functions:

  • Visibility of system status – the user must be given sufficient feedback to gain a clear understanding of the current state of the complete system;

  • Match between system and real world – the system must accurately follow the user’s intentions;

  • User control and freedom – the user must be given suitably intuitive and versatile controls for clear and succinct communication of intent.

Each of these heuristics effectively addresses the perceptual, cognitive and motor functions of the user. Building on these heuristics, the 7-level approach, shown in Fig. 2, addresses each of the system acceptability goals identified by Nielsen [15].
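
This correspondence between interaction steps, user capabilities and heuristics can be made concrete with a small sketch, shown below in Python. The mapping simply restates the points above; the data structure itself is purely illustrative and is not taken from the original model.

```python
# Illustrative mapping (not from the original papers) of each interaction
# step to the user capability it draws on and the heuristic addressing it.

INTERACTION_STEPS = {
    "perception":   {"capability": "sensory",   "heuristic": "Visibility of system status"},
    "cognition":    {"capability": "cognitive", "heuristic": "Match between system and real world"},
    "motor action": {"capability": "motor",     "heuristic": "User control and freedom"},
}

for step, info in INTERACTION_STEPS.items():
    print(f"{step}: demands the user's {info['capability']} capability; "
          f"addressed by '{info['heuristic']}'")
```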

Fig. 2. The 7-level model, combining a typical three-stage engineering design process with usability heuristics [15]

4 The 7-Level Model and IRVIS

IRVIS (the Interactive Robotic Visual Inspection System) was developed to assist in the visual inspection of hybrid microcircuits during manufacture. Such circuits typically undergo up to 50 manual visual inspections during manufacture to detect faults. Each time a circuit is picked up and manipulated under a microscope, there is a finite chance of it being damaged. IRVIS was developed to see whether it was possible to inspect the circuits by, in effect, moving the microscope around the circuits rather than moving the circuits around the microscope. Furthermore, since inspecting the circuits is a fundamentally visual task, it was considered that someone with unimpaired vision but a motor impairment might be able to undertake it. Hence, one of the system requirements was that the robot should be accessible to a user with a motor impairment.

A prototype system was developed, as shown in Fig. 3. It consisted of a high-power CCD camera mounted on a gantry. The tray of microcircuits could be mounted on the robot, and the tray and camera could be moved through five degrees of freedom without the circuits needing to be picked up or handled.

Fig. 3. The prototype IRVIS robot [15]

The original interface, shown in Fig. 4, used a variant of the CURL interface developed for the RAID and EPI-RAID workstations. An initial user trial was undertaken, but significant problems were identified and a re-design was required [16]. The re-design is detailed elsewhere [15], so only a brief account is provided here.

Fig. 4. The original IRVIS interface, using CURL [16]

4.1 Level 1 – Problem Requirements

The original design requirements, i.e. the basic functionality to be provided, were considered satisfactory. Initially, it was thought that the original user trials had failed because the robot was too under-powered and too slow. A counter-position was that the interface was the source of the issues: the original design team had focused too much on developing the robot and not enough on the UI. The original UI required the user to select each motor in turn and enter a numerical value for how far it should rotate, which was felt to be a very inefficient control method, as illustrated below.
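
As a rough illustration of that inefficiency, the Python sketch below shows the shape of such a one-motor-at-a-time dialogue. The motor names and the move_motor() function are hypothetical; this is not the actual CURL code.

```python
# Hypothetical sketch of the original one-motor-at-a-time control style;
# motor names and move_motor() are illustrative, not the real CURL API.

def move_motor(motor: str, degrees: float) -> None:
    """Rotate a single motor by the requested amount (one user action)."""
    print(f"{motor}: rotate by {degrees} degrees")

# Even a simple repositioning of the camera meant selecting each motor in
# turn and typing a numerical value for it, one step at a time:
move_motor("x_axis", 12.5)
move_motor("y_axis", -4.0)
move_motor("focus", 1.5)
```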

4.2 Level 2 – Problem Specification

To resolve the dilemma of whether the robot or the interface was at fault, a series of user observation sessions of the manual inspection process was undertaken. These sessions identified a number of key steps common to each manual visual inspection: rotation about a point, tilting, translation, zooming and focusing. Under the original interface, each of these actions took multiple steps to complete in a piece-wise fashion. Consequently, it was decided to forego a costly rebuild of the robot and focus on a more user-centered interface design, in which these task-level actions could be issued directly, as sketched below.
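
A minimal sketch of that idea, using hypothetical axis names and a hypothetical Robot class rather than the real IRVIS software: each task-level action observed in the manual inspections maps onto a coordinated set of axis movements issued as a single user action.

```python
# Minimal sketch of task-level actions built from coordinated axis moves;
# the Robot class and the axis names are illustrative assumptions.

class Robot:
    def move_axes(self, **displacements: float) -> None:
        """Command several axes together, as a single user action."""
        for axis, amount in displacements.items():
            print(f"{axis}: {amount}")

def translate(robot: Robot, dx: float, dy: float) -> None:
    robot.move_axes(x_axis=dx, y_axis=dy)

def rotate_about_point(robot: Robot, angle: float, dx: float, dy: float) -> None:
    # Rotating the view about a chosen point needs the rotation axis and
    # both translation axes to move at the same time.
    robot.move_axes(rotation=angle, x_axis=dx, y_axis=dy)

rotate_about_point(Robot(), angle=15.0, dx=2.0, dy=-1.0)
```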

4.3 Level 3 – Output to the User

To support the user, a virtual model of the robot was developed. A number of views and combinations of views were provided and evaluated to ensure that users could recognize where they were on a range of circuit layouts and what they were looking at.

4.4 Level 4 – User Mental Model

Having developed an interface layout that afforded sufficient visual feedback, the next step was to add the full functionality of the IRVIS robot to the simulation. The user trials for this stage of the re-design were intended to ensure that the simulated robot’s response to user input was consistent with that of the actual hardware. The robot was connected to the computer, and the users were initially asked to repeat the same procedure as for Level 3, this time predicting what the robot would do in response to their actions. Once the users were comfortable controlling the robot, new functionality was added to the interface replicating the five basic actions observed in the manual inspections: translation, rotation and so on.
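
One natural way to structure such a predict-then-verify trial is to keep the simulation and the hardware interchangeable behind a common command interface, so the same user-facing controls drive either one. The sketch below shows this pattern in Python; the class and method names are assumptions for illustration, not the actual IRVIS architecture.

```python
# Sketch of keeping simulation and hardware interchangeable behind one
# interface; class and method names are illustrative assumptions.

from typing import Protocol

class RobotBackend(Protocol):
    def move_axes(self, **displacements: float) -> None: ...

class SimulatedRobot:
    def move_axes(self, **displacements: float) -> None:
        print("simulation:", displacements)   # would update the virtual model

class HardwareRobot:
    def move_axes(self, **displacements: float) -> None:
        print("hardware:", displacements)     # would drive the real motors

def run_trial(backend: RobotBackend) -> None:
    # The same interface code drives either backend, so users can check
    # the hardware's behavior against what the simulation predicted.
    backend.move_axes(x_axis=5.0, tilt=2.0)

run_trial(SimulatedRobot())
run_trial(HardwareRobot())
```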

4.5 Level 5 – Input from the User

The final stage of the re-design concentrated on assessing the ease of interaction between the user and the robot, identifying particular aspects of the interface that required modification. The task in the user trials changed from “What will the robot do now?” to “Can you accomplish this goal?” As a result of this level, the final interface design was as shown in Fig. 5.

Fig. 5. The final IRVIS interface [15]

4.6 Level 6 – Functional Attributes

A series of user evaluation sessions was undertaken with users with a range of moderate to severe motor impairments. All of the users were able to navigate around the circuit tray without difficulty and within the time limit allowed. Likewise, all of the users were able to perform all of the other tasks seen in the manual inspection processes, such as tilting, rotating about a point, etc.

4.7 Level 7 – Social Attributes

Qualitative feedback from all the users was extremely favorable. Each user found the new interface easy and intuitive to use, and all completed the tasks with a minimum of guidance. No user complained that the speed of response of IRVIS was too slow. This was an important result, because it had previously been thought that IRVIS was mechanically under-specified. A simple analysis showed why this was so: the original interface only allowed the use of one motor at a time, whereas the new interface allowed potentially all five motors to be used simultaneously. The increased power available to the user significantly improved the overall speed of response.
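
That analysis can be sketched as follows, using illustrative symbols rather than figures from the original study. Suppose a composite movement requires a displacement $d_i$ on each of the five axes, and axis $i$ moves at speed $v_i$, so that its individual movement takes time $t_i = |d_i| / v_i$. Then

$$T_{\mathrm{sequential}} = \sum_{i=1}^{5} t_i, \qquad T_{\mathrm{simultaneous}} = \max_{1 \le i \le 5} t_i.$$

If the five movements take roughly comparable times, the one-motor-at-a-time interface is therefore up to five times slower than coordinated motion, without any change to the motors themselves.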

5 Next Generation Robots

The examples given so far in this paper have focused on historical experiences. It is worth looking at how such robotic assistants may develop in the future and what roles they may play, especially in a universal access context. What is clear from the assistive robotic systems from the 1990s is that those designed with a clear purpose and benefit for the users in mind had the most successful take-up, especially the Handy 1. Similarly, the comparatively few examples of commercially successful robots for the home are focused on particular laborious tasks, such as vacuuming or mowing the lawn [17].

Consequently, it is clearly important to consider tasks that matter to users, especially those that support independent living or self-empowerment. Typical areas of life endeavor to consider include [18]:

  • Lifelong learning and education

  • Workplace

  • Real world (i.e. extended activities of daily living)

  • Entertainment

  • Socializing

It is also important to consider the widest possible range of users [19] and impairment types. A somewhat stereotypical concept of an assistive robot is a robot guide dog for users with visual impairments [20]. However, robots can assist people with a range of other impairments, such as cognitive [21] or communication impairments. Notable progress has been made, for example, in the use of robots to develop communication skills in children with autism [22]. Robotic dogs have also been converted into conversation partners through the use of chatbots [23]; see Fig. 6.

Fig. 6. A K9 shell converted into a chatbot as an exhibit at the Dundee Science Centre

Advances in artificial intelligence and natural language processing also offer opportunities for making such robotic systems into genuine communication partners [24]. Furthermore, advances in robotics are helping create a new generation of robots that are very much more anthropomorphic in their appearance and behaviors. One such development is the RoboThespian, shown in Fig. 7 [25, 26].

Fig. 7. A RoboThespian

RoboThespians are capable of simulating human movements from the waist up. They have been designed to emote and come pre-loaded with sample orations, from Shakespeare to Terminator. The University of Greenwich has two RoboThespians and uses them for outreach purposes. Their appearance and movement typically evoke a range of responses, from curiosity and amusement to indications of fear and trepidation. We are currently exploring why different people respond to the robot in these ways.

6 Conclusions

Robotic assistants offer a fantastic opportunity to improve the lives of many people, especially those who are getting older or have functional impairments. However, to truly benefit from these opportunities, designers of such robots need to adopt user-centered inclusive design processes to ensure that they meet the needs, wants and aspirations of the users while not putting demands on them that exceed their skills, knowledge and capabilities.

Furthermore, designers of such robots will increase the chances of successful take-up of their products if they focus on supporting tasks and activities that enable independent living, as the Handy 1 did with eating.