1 Introduction

Touchscreens proliferate in modern life, driving the expectation to use them in self-service terminals (SSTs) such as automated teller machines (ATMs), supermarket self-checkouts and airport check-ins. This trend has been evident for the last decade and shows no sign of diminishing [1]. For example, a 2004 survey of 18-34 year olds in the US found that 82 % had used a touchscreen at a self-checkout system and 70 % had used one at an ATM; the majority of respondents (89 %) expected touchscreens to become the standard way of interacting with self-service devices [2]. This is borne out by our experience: touchscreen models accounted for 16.9 % of our ATM sales in 2010 and 24.7 % in 2011.

Self-service technology itself is becoming more widespread and increasing in importance [3]. Through the use of self-service, consumers are being empowered to conduct increasingly complex transactions at a time and in a location that is convenient for them.

This trend, however, particularly when combined with the trend to use touchscreens as the main interaction channel, can create considerable accessibility barriers. Touchscreens can be extremely difficult to use, particularly for people with visual impairment. If care is not taken to make SSTs accessible, the expectation to use self-service technology, in addition to or instead of service provided by humans, can severely limit a disabled person's ability to independently manage daily activities such as finances, shopping, and travel. As one of the world's largest providers of self-service technology, NCR takes its responsibility to integrate accessibility into its products very seriously. In this paper we describe the development of two touchscreen input methods and their applicability to self-service in the travel and financial services domains. Using the two projects as case studies, we discuss the process of applying accessibility research in practice and the particular challenges of integrating accessibility into self-service technology. We also discuss the benefits of integrating accessibility solutions into mainstream products, and the merits of solutions designed from a universal design perspective: bringing benefit to multiple groups of people rather than designing for a single type of physical or cognitive impairment.

2 Accessibility in Self-Service Technology

One of the biggest benefits of self-service is the convenience and availability of services to anyone, anywhere, anytime [4]. However, this is also one of the main challenges of making self-service accessible. SSTs must be accessible to anyone, without instructions, personalization or assistive technologies.

What makes an SST accessible is largely defined in international and country-specific laws, standards and guidelines. These requirements include features such as making input options tactilely discernible without activation and offering speech output to guide the user through the transaction (e.g. [5]). The benefit of these regulations is that they provide measurable details, for example the optimal angle for a keypad, so that meeting the requirements demonstrably improves accessibility.

On the other hand, there are certain disadvantages to having laws and standards regulate so much of the product specifics. Firstly, some requirements, such as height and reach, vary significantly across the world. Secondly, the regulatory process sometimes lags behind technological development, which can mean that we cannot take full advantage of new technologies because they are not yet permitted by law. Thirdly, the regulatory framework expands continuously, which makes it a challenge to monitor: existing regulations are updated, new countries develop their own accessibility laws and standards, and the regulations extend into new domains, such as airport check-in kiosks. In addition to accessibility regulations, certain aspects of the transactions performed on SSTs, for example entering the personal identification number (PIN), are tightly controlled by security standards (e.g. [6]).

While the regulatory framework sets quite specific parameters for accessibility, we know very little about the actual user: the user can be anyone, anywhere, anytime. This further emphasizes the importance of integrating user research and usability testing into the development process to make sure users' needs are understood.

3 Developing a Physical Input Device

Although touchscreens on personal mobile devices have improved greatly in terms of their accessibility to people with visual impairment, these accessibility features do not necessarily scale well to a large screen, particularly on a device that must be usable without any training or learning, as is the case for any self-service terminal. In addition, larger touchscreens can pose accessibility problems in terms of the required physical reach (a person must be able to reach across the expanse of the touchscreen), which can be difficult for smaller people, particularly those in wheelchairs [7]. A solution is therefore required that offers additional tactile features for locating and activating on-screen elements, and that offers reach benefits over a large touchscreen.

To address these challenges, a team of industrial designers, usability and accessibility specialists, and interaction designers embarked on a project to develop a physical input device that could be attached to a touchscreen-based SST. Overall, the development entailed three rounds of testing and gradual refinement of the concept based on user feedback. The development started with concept ideation to explore different input techniques, such as sliders, rotations, and buttons. Five of these concepts were developed into testable prototypes: a 4-way keypad, a capacitive touch-wheel, a scroll wheel, a pre-production sample of a tactile touchscreen, and a commercially available navigation keyboard (EZ Access). These were then evaluated with 25 participants to identify which of the physical movement modes offered the most benefit as an input technique (reported in [8]). Although the tactile touchscreen performed best, it was not a feasible solution: it only gave feedback when an on-screen option was selected, not while the user was attempting to locate and identify the options, which is a key legal requirement in many countries. Therefore, the next preferred concept – the 4-way keypad – was taken forward for further development.

The resulting concept was a physical input device, the Universal Navigator (uNav), which provides tactile keys for navigating through on-screen options. It has four direction keys arranged around a central select button, plus an audio socket and volume button to enable private audio output (Fig. 1). The concept was later refined after an expert review with RNIB (Royal National Institute of Blind People), the leading UK support and research organisation for people with visual impairment. The expert review was particularly useful in giving direction on the auditory interaction that would best support the use of the device.

Fig. 1. The Universal Navigator (uNav)
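
To make the interaction model concrete, the following TypeScript sketch shows one way such keypad-driven navigation could work, with the on-screen options treated as a simple linear list. It is illustrative only: all names (UNavController, speak, and so on) are hypothetical, and this is not the device's actual software.

```typescript
// Minimal sketch of a uNav-style navigation model. All names here are
// hypothetical; the actual device firmware and application interface are
// not described in this paper.

type UNavKey = "up" | "down" | "left" | "right" | "select";

interface ScreenOption {
  label: string;          // text announced over the private audio channel
  activate: () => void;   // action behind the on-screen element
}

class UNavController {
  private focus = 0;

  constructor(
    private options: ScreenOption[],
    private speak: (text: string) => void, // injected text-to-speech function
  ) {
    // Announce the first option so the user has a starting reference point.
    this.speak(this.options[this.focus].label);
  }

  handleKey(key: UNavKey): void {
    switch (key) {
      case "up":
      case "left":
        // Move focus to the previous option, stopping at the first.
        this.focus = Math.max(0, this.focus - 1);
        this.speak(this.options[this.focus].label);
        break;
      case "down":
      case "right":
        // Move focus to the next option, stopping at the last.
        this.focus = Math.min(this.options.length - 1, this.focus + 1);
        this.speak(this.options[this.focus].label);
        break;
      case "select":
        // The central button activates whatever currently has focus.
        this.speak(`Selected: ${this.options[this.focus].label}`);
        this.options[this.focus].activate();
        break;
    }
  }
}
```

Because every focus change is voiced and selection is a separate, deliberate key press, the user can explore all options safely before committing to one, which is the property a touchscreen alone cannot provide.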

To test the uNav, two further rounds of usability evaluations were conducted: one in the UK with RNIB, involving 48 people with different levels of visual impairment (reported in [9]); and another in the US with the Center for the Visually Impaired (CVI) and disABILITY Link in the Atlanta metropolitan area, involving 20 people with physical and/or visual impairment (reported in [10]). An existing airline check-in application was used to test the concepts, chosen for two reasons: firstly, it required complex interaction, such as using an on-screen alphanumeric keyboard and the spatial task of seat selection; secondly, using an existing application allowed us to validate how well a device like this could be retrofitted to existing self-service technology with minimal impact on the existing infrastructure. In a repeated-measures experiment, participants completed the same flight check-in task twice: once with the uNav in a horizontal orientation (13° from horizontal, tilted towards the user, as is common for keyboards) and once in a vertical orientation (65° from horizontal, in line with the display). Those who wished to also attempted the task a third time with a conventional touchscreen, allowing for a comparison with the current method of interaction.

In terms of the main considerations for the accessibility of touchscreen-based SSTs, the project highlighted the importance of involving users and incorporating their feedback continuously throughout development. The results from the usability evaluations showed high success rates and acceptance of the concept, both by people with visual impairment and by people with physical impairment. This was a good example of how an accessibility improvement in one area benefits others as well: the concept was shown to improve accessibility for people who use wheelchairs or have upper-body mobility impairment, as it eliminated the need to reach across a touchscreen.

4 Developing Gesture-Based Input Techniques

As gesture-based touchscreen interaction has become more familiar through personal mobile devices, we wanted to investigate the possibility of using the touchscreen as the only input mechanism, removing the need for a physical keypad altogether. As in the uNav project, the first stage was to create several concepts that utilized gestures such as sliding, swiping, tapping, and combinations of multiple fingers. These early concepts (summarized in Table 1) were evaluated by three experts from RNIB and two visually impaired participants using a simplified PIN entry task. The main findings, which guided the further development of the concepts, were as follows:

Table 1. Early concepts evaluated in the expert review

  • Double-tap was the preferred method for making a selection.

  • Moving a finger directly over an element to hear it vocalized was preferable to swiping to rotate through options (this explore-then-select pattern is sketched in code after this list).

  • It was not clear where the active touchscreen area was.

  • Tactile aids might make it easier to locate on-screen elements but they might also make it easier for outsiders to see what is being entered.

  • The angle of the finger’s movement on the touchscreen can be difficult to gauge; for example, a finger may drift slightly downwards when a horizontal movement was intended.
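
The explore-then-select pattern referenced above can be sketched as follows. This is an illustrative TypeScript sketch under our own assumptions (hypothetical names, a 400 ms double-tap threshold, and an injected speak function), not the actual prototype code: moving a finger over an element announces it without activating it, and a double-tap then activates the last announced element.

```typescript
// Illustrative sketch of the explore-then-double-tap pattern favored in the
// expert review. Hypothetical names throughout; this is not the prototype code.

interface Region {
  label: string;                               // announced when touched
  contains: (x: number, y: number) => boolean; // hit test for this element
  activate: () => void;
}

class ExploreByTouch {
  private focused: Region | null = null;
  private lastTapTime = 0;
  private static readonly DOUBLE_TAP_MS = 400; // assumed threshold

  constructor(
    private regions: Region[],
    private speak: (text: string) => void,
  ) {}

  // Called as the finger moves across the screen: announce the element under
  // the finger without activating it.
  onMove(x: number, y: number): void {
    const hit = this.regions.find(r => r.contains(x, y)) ?? null;
    if (hit && hit !== this.focused) {
      this.speak(hit.label);
    }
    this.focused = hit;
  }

  // Called on each finger lift: two taps in quick succession activate the
  // last element that was announced.
  onTap(): void {
    const now = Date.now();
    if (this.focused && now - this.lastTapTime < ExploreByTouch.DOUBLE_TAP_MS) {
      this.speak(`Selected: ${this.focused.label}`);
      this.focused.activate();
    }
    this.lastTapTime = now;
  }
}
```

Separating exploration (announcement) from activation (double-tap) is what allows a non-sighted user to survey the screen without triggering anything by accident.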

After refinement of the concepts, a usability evaluation with 49 participants with varying levels of visual impairment was conducted in collaboration with RNIB. The concepts were tested using a 10” capacitive touchscreen attached to a pedestal that simulated the actual height and angle of an ATM. Each participant completed three tasks: the first was to enter a PIN, the second was to select a specific item on a menu, and the third was to enter the word ‘SAVE’ using an on-screen keyboard. The latter two tasks were similar in terms of their interaction gestures: there were three concepts for each, demonstrating different interaction techniques (summarized in Table 2). The concepts varied in how the focus was moved between elements, but the method of making a selection was the same in each: double-tap. In both tasks, the user could hear the option in focus and received an auditory (and visual) confirmation of the selection they had made. Each concept provided audio instructions, similar to those currently available at many ATMs. Using a repeated-measures experimental design, each participant used each of the concepts in a randomized order.

Table 2. Concepts evaluated in the usability test

The research is still ongoing; therefore the focus here is to give a general overview of the work so far. Overall, the participants found touchscreen gestures an acceptable method of interacting with an ATM: 10 participants (21 %) said gestures would be an acceptable method of interaction, and 33 participants (69 %) thought they would be acceptable with some changes. The most requested changes were switching the keyboard to a QWERTY layout and improving the color contrast to support partially sighted users. Five participants (10 %) said gestures would not be an acceptable solution at all, mainly because of the difficulty of entering the PIN rather than because of the touchscreen gestures per se.

Although the initial results were promising for the menu selection and text entry tasks, the PIN entry task was extremely challenging, with very low success rates. In the test, four different PIN entry concepts were evaluated, each with a different method for entering the numbers. Due to the stringent security requirements that ensure the privacy of the PIN [6], no feedback that distinguishes individual numbers can be given; the only feedback the user receives is a beep to indicate that a number has been entered, as illustrated in the sketch below. The difficulty of this task further highlights the importance of appropriate voice guidance and audio feedback. More detailed research to develop a PIN entry method that is both accessible and secure is currently being conducted.
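
The feedback constraint can be illustrated with a short, hypothetical TypeScript sketch. In a real ATM the PIN is handled inside a certified encrypting PIN pad rather than application code, so this shows only the feedback rule: whatever digit is entered, the user hears the same non-identifying beep, and spoken guidance is limited to generic prompts.

```typescript
// Sketch of the feedback rule during accessible PIN entry. Hypothetical
// names; only the feedback behavior mandated by the security standards
// (no digit-distinguishing output) is illustrated.

class SecurePinFeedback {
  private digitsEntered = 0;

  constructor(
    private beep: () => void,              // one uniform, non-identifying tone
    private speak: (text: string) => void, // generic guidance only
    private pinLength: number = 4,
  ) {}

  onDigit(_digit: number): void {
    // The digit's value must not influence any feedback path: no spoken
    // label, no per-key tone, nothing an eavesdropper could distinguish.
    this.digitsEntered += 1;
    this.beep();
    if (this.digitsEntered === this.pinLength) {
      this.speak("PIN entry complete. Press Enter to confirm.");
    }
  }
}
```

The sketch makes the usability problem plain: the user receives confirmation that some digit was registered, but never which one, so any mis-location of a key goes unnoticed until the PIN is rejected.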

To improve our understanding of the difficulties people with visual impairments encounter when using touchscreens on self-service terminals, we analyzed the video recordings of the test sessions. We categorized these difficulties as follows:

Lack of feedback: The user needs auditory and/or tactile feedback, firstly to locate and identify each interface element without activating it, and secondly to get confirmation that the desired element has been activated. For certain transactions, such as entering the PIN, the security requirements severely limit the feedback the system is allowed to give. Further research is required to better understand how these limitations can be overcome. We also continue to explore the possibilities of haptic feedback in SST touchscreens.

Disorientation/Reorientation: Since touchscreens often have a smooth edge-to-edge glass surface without any distinguishable tactile features for reference, it is very easy to get disoriented. Maintaining an accurate mental model of the layout and elements on the screen requires continuous effort, and without a permanent tactile reference point the user is forced to re-orient themselves every time the content on the screen changes. Based on our observations, many participants used the physical edge of the touchscreen as their reference point, but this often led to further difficulties. Typically a participant would keep one (non-dominant) hand on the edge of the screen and move the other (dominant) hand on the touchscreen in relation to the reference hand. However, the hand resting on the edge of the screen could easily activate an interface element by accident. Alternatively, the participant would be unaware that the active touch area does not extend all the way to the physical edge of the screen, because the touch-sensitive area is surrounded by a non-active frame that is not tactilely discernible. As a design consideration for self-service touchscreens, it is very important to make the active touch area tactilely discernible from the non-active frame. Because of the severity of the problems caused by disorientation, we will incorporate a tactile landmark, a permanent reference point, in further development of the concepts (Fig. 2).

Fig. 2. Touchscreen interaction is made easier by having tactile markers along the bottom edge of the screen.
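
Hardware aids like the tactile markers in Fig. 2 address the boundary problem directly. As a complementary, purely software-side idea (a sketch under our own assumptions, not a tested design), the boundary could also be made audible: announce when an exploring finger enters an "edge band" just inside the sensed area. All names in the TypeScript sketch below are hypothetical.

```typescript
// Sketch of a possible software-side orientation aid: speak a cue when the
// finger nears the edge of the sensed area, making the otherwise silent
// boundary between the active area and the inert frame audible.

interface ScreenSize { width: number; height: number; }

function inEdgeBand(x: number, y: number, screen: ScreenSize, band = 20): boolean {
  return (
    x < band ||
    y < band ||
    x > screen.width - band ||
    y > screen.height - band
  );
}

// Track whether the previous sample was already in the band so that the
// announcement fires once on entry rather than continuously.
function makeEdgeAnnouncer(screen: ScreenSize, speak: (text: string) => void) {
  let wasInBand = false;
  return (x: number, y: number): void => {
    const nowInBand = inEdgeBand(x, y, screen);
    if (nowInBand && !wasInBand) {
      speak("Edge of active area.");
    }
    wasInBand = nowInBand;
  };
}
```

Note that this only helps within the sensed area; touches on the non-active frame itself register nothing, which is exactly why a tactile distinction between the two remains necessary.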

Unfamiliarity with touchscreen gestures: Although many people with visual impairment use touchscreens on their personal mobile devices, there are those who do not. Half of our participants had very little or no experience of touchscreens, and for them it was very difficult to understand what was expected when the audio guidance instructed them to "swipe" or "double-tap" (Fig. 3).

Fig. 3. Touchscreen gestures, such as double-taps and swipes, are unfamiliar to some users, which leads to accidental activation of functions; this picture illustrates a typical problem of knuckles accidentally grazing the screen surface.

Physical ergonomics: When an SST is designed primarily for touchscreen interaction, the height and angle of the display are often optimized for visual access rather than tactile access. This can lead to awkward postures and uncomfortable hand movements (Fig. 4).

Fig. 4. A display designed for visual access can be less than ideal for visually impaired users.

Situational awareness: The accessibility of an ATM encompasses much more than making the input and output accessible to visually impaired users. Many participants commented on the concerns they would have were they to use this type of ATM in the real world. Two key factors were raised: firstly, the longer a transaction takes, including the time needed to listen to the audio instructions, the more vulnerable users feel; they feel they are holding up the people waiting behind them, or drawing unwanted attention to themselves by taking longer to complete a transaction. Secondly, to use the private audio they would need to wear headphones, which would make them feel exposed and leave them dangerously unaware of their surroundings.

5 Conclusions

In this paper we have described the development of two touchscreen input methods: a physical input device called the Universal Navigator (uNav) and our early research into gesture-based interaction techniques. We presented these two projects as case studies to highlight some of the characteristics of accessibility for self-service technology.

There are four considerations to draw from this work, of particular concern in the self-service environment but also more widely applicable to accessibility research. Firstly, we should make no assumptions about the user's prior knowledge and experience. In the case of touchscreen technology, we should not assume that everyone who walks up to a self-service terminal will be familiar with touchscreens and know what is meant by a tap or a swipe. Even for those who are familiar with touchscreen gestures on their personal mobile devices, the experience will not necessarily transfer to SSTs: the SST touchscreen is larger, which makes disorientation a constant challenge. Furthermore, whereas personal mobile devices can be personalized and the user can spend time learning their features, this is rarely possible on a public SST. Further research is needed to explore the extent to which gestures familiar from personal device touchscreens transfer to public terminal touchscreens.

Secondly, the time and effort it takes to learn a new interface and interaction technique must be minimized. A primary concern should be to make the features easily discoverable by a first-time user, while keeping the user interface consistent so that more experienced users can learn and use shortcuts. To reduce the required learning effort, audio guidance is critically important; the wording and the sequence of instructions must be carefully considered and tested with users. We cannot overemphasize the importance of providing clear, precise and timely audio instructions to help the user form an accurate mental model of the system and how to use it to achieve their goals. Further research is needed into how to help the user maintain a spatial understanding of a primarily visual user interface on a relatively large touchscreen with few tactile landmarks.

Thirdly, both hardware and software contribute to the user experience, and this is particularly important in the context of self-service accessibility. In touchscreen-based SSTs, a primary design consideration is often to ensure optimal visual access to the display, but the resulting height and angle might not be ideal for tactile access. This issue is not adequately addressed by current laws and standards, which most often recommend or require a display angle optimized visually, e.g. 55-70° from the horizontal, and a keypad angle optimized for tactile access, e.g. 10-30° from the horizontal.

Fourthly and finally, the core tenet of user-centered design – to know the user – is vitally important in the context of self-service accessibility. In self-service technology, the most common use case is “anyone, anywhere, anytime”, which makes user research both a challenge and a necessity. For certain aspects of accessibility, such as height and reach measurements, we can – and must – refer to the formal regulations, but these provide only a baseline of accessibility. As we have described in this paper, the goal of our research and development process is to integrate accessibility into mainstream products. In both projects we applied a similar approach: early ideation, followed by rapid testing of the underlying interaction principles to focus development on the most promising ideas, followed by refinement based on user research, expert reviews and usability testing. The process is based on inclusive design: making mainstream technology accessible to as wide an audience as possible. In the uNav project, for example, the physical input device helped both people with visual impairment, by enabling tactilely discernible input, and people with physical impairment, by removing the need to reach across a relatively large touchscreen. The gestural input research, in turn, highlighted the need to support the visual and auditory interfaces simultaneously and in synchrony. Although the prototype used in the test was designed to assess a blind person's auditory-only experience, we acknowledge that providing an accessible experience for all levels of visual ability is a key requirement: users must not be forced to choose between a visual interface and an auditory one.

Future work will continue to address the issues identified so far. We will continue to explore the constraints set by accessibility and security regulations, particularly in the context of PIN entry. This also extends to privacy concerns more generally: users are often asked to handle very personal and sensitive data on a very public terminal. In addition to the technical and legal requirements for handling personal data, perceived privacy is equally important: the user, whoever, wherever and whenever they use a self-service terminal, must be made to feel confident and empowered.