1 Introduction

The Cornell Employment and Disability Institute estimates that there are 7.3 million adults in the U.S. who are blind [1]. Because most digital experiences are built around a screen, they remain largely inaccessible to people with severe visual impairments. While there has been a recent push towards more inclusive design [2] and screen-less interfaces [3], implementations are few and far between. A possible solution lies in Sensory Substitution Devices (SSDs), which have shown promise for making visual experiences accessible [4]. Most SSDs, however, are purpose-built [5,6,7,8] and are thus unfit for other uses. Towards rectifying this inequity, we introduce the Low Resolution Haptic Interface (LRHI), a general-purpose haptic interface for sensory substitution that abstracts 2D haptic patterns into “haptic images”. The LRHI can be controlled using our library: https://github.com/bfakhri/lrhi.

2 Related Work

Dr. Bach-y-Rita showed with his Tactile-to-Vision Sensory Substitution (TVSS) device [4] that individuals who are blind are capable of understanding simple visual scenes with the aid of a sensory substitution device (SSD). His device utilized 400 solenoid actuators placed on a user’s back that were controlled by a camera. Users felt the images captured by the camera on their backs and, with training, were able to distinguish a variety of common objects. The successor to the TVSS was the BrainPort, which used electro-tactile visual sensory substitution (ETVSS), first to augment a user’s sense of balance in order to restore autonomy [9] and later as an alternative means to vision [10], similar to the original TVSS. SSDs are a compelling alternative to devices that require surgical procedures, such as the Cochlear Implant (CI) [11] and retinal prostheses [12], for circumventing the loss of a sensory modality, as these procedures are expensive and invasive.

Examples of more modern SSDs include the Social Interaction Assistant [5] and the VibroGlove [6], in which facial expressions are identified by the system and relayed to the user via haptics. SSDs that make use of the auditory modality have also been developed, such as KASPA (Kay’s Advanced Spatial Perception Aid) [13], the Sonic Pathfinder [14], and the EyeCane for virtual [15] and real environments [16]. SSDs aimed at general vision substitution, such as the vOICe [17, 18] and EyeMusic [19], abstract images into tones or musical notes and instruments to convey visual information. Unfortunately, the usability of auditory SSDs for vision substitution is limited because they obstruct a valuable sensory modality (hearing), which is often counterproductive to sensory substitution [6]. Haptic SSDs, in contrast, convey information without obstructing the modalities that are typically engaged during daily tasks. Because most SSDs are purpose-built, they are difficult to modify for other uses, effectively presenting large design costs to researchers and engineers.

3 Low Resolution Haptic Interface

Building a general-purpose haptic sensory substitution device required a standardized and general interface. Towards this, we propose that haptic patterns be abstracted into “haptic images”: two-dimensional arrays of haptic intensities and frequencies, \(i, f = H[x, y]\), analogous to how a visual image can be modeled as a 2D array of color intensities, \(r, g, b = V[x, y]\) (RGB model), where \(x, y\) are discrete spatial coordinates. A series of haptic images can thus convey patterns that move over time, just as a series of visual images becomes a video. The LRHI is a system that communicates using haptic images and converts them into tactile representations. It consists of a computing platform, which sends haptic images to be displayed; a controller, which interprets the haptic images and converts them into analog signals; and a display, which converts the analog signals into vibrotactile actuation. Figure 1 shows a block diagram of the LRHI.
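To make the abstraction concrete, the sketch below models a haptic image as a NumPy array. The two-channel layout and array shapes are our illustration here, not necessarily the representation used by the LRHI library linked above.

```python
import numpy as np

ROWS, COLS = 4, 4  # resolution of our prototype display

# One haptic image: per-actuator intensity and vibration frequency,
# mirroring i, f = H[x, y] above. Channel 0 holds intensity (0-255),
# channel 1 holds a frequency code (hypothetical encoding).
haptic_image = np.zeros((ROWS, COLS, 2), dtype=np.uint8)
haptic_image[0, 0] = (255, 40)  # top-left actuator: full intensity, 40 Hz

# A moving pattern is a time-ordered sequence of haptic images,
# just as a sequence of visual images forms a video.
haptic_video = np.stack([haptic_image] * 30)  # 30 identical frames
```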

Fig. 1. Block diagram of the LRHI. In red: the computing platform; in green: the controller; in blue: the display (Color figure online)

The computing platform can be any USB-enabled computer; its role is to generate the haptic images in a digital, abstract form. The computing platform may take on a variety of roles in generating the haptic images. In sensory substitution applications, for instance, it converts frames from a video stream into haptic images and sends them to the controller. The actual conversion algorithms are left up to the designers. In our incarnation of the LRHI, the computing platform communicates with the controller by sending \(4\times 4\) 8-bit haptic images.
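As one hypothetical example of such a conversion, the sketch below downsamples a grayscale camera frame to a \(4\times 4\) 8-bit haptic image and writes the 16 intensity bytes to the controller over USB serial. The port name, baud rate, and wire format are assumptions, not the library's documented protocol.

```python
import cv2      # pip install opencv-python
import serial   # pip install pyserial

port = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical port and baud rate
cap = cv2.VideoCapture(0)                     # first attached camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Average-pool the frame down to the display's 4x4 resolution.
    haptic = cv2.resize(gray, (4, 4), interpolation=cv2.INTER_AREA)
    port.write(haptic.tobytes())  # 16 bytes: one 8-bit intensity per motor
```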

The controller consists of an Arduino microcontroller, a TLC5940 16-channel PWM driver, and a collection of high-current Darlington transistor arrays. The Arduino accepts the haptic image and, using the TLC5940, converts it into 16 analog electrical signals (8-bit PWM). These signals are fed to the transistor arrays, where they are amplified to levels suitable for driving the display. A full version of the LRHI would allow haptic images to specify not only an intensity but also a vibration frequency for each actuator on the display.
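For illustration only, the following sketch mirrors the per-frame logic the controller performs, written in Python for readability (the actual firmware runs on the Arduino): unpack the 16 intensity bytes of a \(4\times 4\) haptic image and map each to a PWM duty cycle.

```python
def unpack_frame(frame_bytes: bytes) -> list[int]:
    """Split a 16-byte (4x4, 8-bit) haptic image into 16 channel values."""
    assert len(frame_bytes) == 16, "expected one byte per motor"
    return list(frame_bytes)

def to_duty_cycle(intensity: int) -> float:
    """Map an 8-bit intensity (0-255) onto a PWM duty cycle (0.0-1.0)."""
    return intensity / 255.0

# Each duty cycle would drive one TLC5940 output channel, which in turn
# drives one pancake motor through the Darlington transistor arrays.
duty_cycles = [to_duty_cycle(v) for v in unpack_frame(bytes(range(16)))]
```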

Fig. 2. Motor housing: (a) digital and 3D-printed models; (b) vibration axis

Our prototype display consists of a \(4\times 4\) array of pancake motors housed in custom 3D-printed mounts that orient the motors orthogonal to the user’s back. The housing is shown in Fig. 2a. This accomplishes two objectives: first, the vibration axis is made perpendicular to the user’s back (illustrated in Fig. 2b); second, the contact point is made smaller. Both increase the perceived intensity of the vibrations, which is especially important when the user is wearing thick clothing. The motors and housing are mounted on acoustic foam to provide a malleable surface that conforms to a user’s back while transmitting minimal inter-motor vibration. The haptic display is shown in Fig. 3.

Fig. 3. \(4\times 4\) haptic display mounted on an office chair

The haptic display consumes 50 mA when idle, with a maximum consumption of 412 mA when all motors run at full power (energy consumption is summarized in Table 1). During the non-interactive portion of the user study the LRHI had a mean power consumption of 0.73 W; during the interactive portion, 0.56 W.

Table 1. LRHI energy consumption

4 User Study

In order to assess the LRHI’s potential as an SSD, we performed a preliminary user study with 8 participants to explore its ability to convey information through haptics. The study consisted of a non-interactive and an interactive component. The non-interactive portion comprised three phases in which participants were introduced to a finite set of haptic patterns during “familiarization” (being exposed to each individual pattern only once) and were asked to recall those patterns during “testing”.

During the non-interactive testing portion, participants were given the option to repeat a pattern if they were not confident in their initial assessment. Phase 1 consisted of static patterns (Top Left, Bottom Right, etc.). Phase 2 consisted of patterns that vary across space and time (Left to Right, Top to Bottom, etc.). Phase 3 was similar to Phase 2, but participants were asked to recall how fast the pattern was displayed (Left to Right - Fast, Top to Bottom - Slow, etc.) in addition to the pattern’s identity (Left to Right). The patterns increased in complexity with each phase, beginning with simple single-motor patterns and progressing to patterns that move through space and time. The patterns for each phase are illustrated and described in Appendix A.
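As an illustration of how such dynamic patterns might be generated (the exact patterns and timings used in the study are given in Appendix A), the sketch below renders a “Left to Right” sweep as a sequence of \(4\times 4\) haptic images, with speed controlled by a per-frame duration; the timing values are placeholders.

```python
import numpy as np

def left_to_right(frame_period_s: float):
    """Yield (haptic_image, duration) pairs that sweep a column of
    vibration from the left edge of the display to the right."""
    for col in range(4):
        img = np.zeros((4, 4), dtype=np.uint8)
        img[:, col] = 255  # one full column at maximum intensity
        yield img, frame_period_s

slow_sweep = list(left_to_right(0.25))  # hypothetical "slow" timing
fast_sweep = list(left_to_right(0.10))  # hypothetical "fast" timing
```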

In order to assess the LRHI’s potential in interactive environments, we designed a completely haptic cat-and-mouse game (illustrated in Fig. 7). The user plays as a cat, and the goal is to find a mouse. The cat is presented on the haptic display as a solid vibration, while the mouse is a pulsing vibration. Participants used a computer mouse to control the position of the cat on the haptic display, guiding it towards the mouse; the goal was to catch the mouse as quickly as possible. The time between the start of each game and the capture of the mouse was recorded, and each participant played 60 games in total (results shown in Table 3). Improving performance in this game (decreasing game time) would demonstrate that participants were in fact able to learn to use the LRHI to interact with dynamic environments.
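The sketch below illustrates one way the game’s haptic rendering could work: the cat is drawn as a constant vibration and the mouse as a square-wave pulse. The cell positions, pulse rate, refresh rate, and the send_haptic_image() helper are hypothetical, not taken from our implementation.

```python
import time
import numpy as np

def render(cat, mouse, t, pulse_hz=2.0):
    """Render the game state as a 4x4 haptic image: the cat is a solid
    vibration, while the mouse pulses on and off at pulse_hz."""
    img = np.zeros((4, 4), dtype=np.uint8)
    img[cat] = 255                      # cat: solid vibration
    if (t * pulse_hz) % 1.0 < 0.5:      # "on" half of the pulse cycle
        img[mouse] = 255                # mouse: pulsing vibration
    return img

cat, mouse = (0, 0), (3, 2)  # hypothetical board positions
t0 = time.time()
for _ in range(3):  # render a few consecutive frames
    frame = render(cat, mouse, time.time() - t0)
    # send_haptic_image(frame)  # hypothetical transport to the controller
    time.sleep(1 / 30)          # ~30 Hz refresh
```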

5 Results

For the non-interactive portion of the study, participants were able to identify the patterns with considerable accuracy. Accuracy in Phase 1, which included only static patterns, did not significantly differ from Phase 2 (dynamic patterns). Only when participants were asked to discern both the pattern and the speed at which it was presented did performance suffer slightly. Results are compiled in Table 2; participants achieved an aggregate accuracy of 98.38%.

Table 2. Results for non-interactive phase

For the interactive portion of the study (illustrated in Fig. 7), participants were able to capture the mouse in 4.81 s on average and showed a significant performance increase the longer they played. A comparison of the first third of the gaming session (the first 20 games) with the last third, as well as overall performance, can be seen in Table 3. Figure 4 illustrates the participants’ performance over time.

Table 3. Results for interactive phase

Fig. 4. Normalized mean game times

6 Conclusion and Future Work

In conclusion, we have introduced the LRHI, a general haptic interface for sensory substitution applications, and have performed a preliminary user study demonstrating its efficacy both in conveying information through the sense of touch and as an interface to interactive environments. The LRHI shows promise as a general-purpose sensory substitution device: users learned as they played the cat-and-mouse game, successfully navigating a virtual, non-visual, interactive environment solely through their sense of touch. For future work, we intend to augment the LRHI with peripheral controls and test its effectiveness for visual-to-haptic sensory substitution in 3D environments, both virtual and real-world.