When You Can’t Read It, Listen to It! An Audio-Visual Interface for Book Reading

  • Carlos Duarte
  • Luís Carriço
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5616)


Abstract

This paper presents a prototype of a mobile Digital Talking Book player that combines visual and non-visual means of interaction in pursuit of universal accessibility. Details of the non-visual aspects of the interaction, both input and output, are provided. To assess the validity of the proposed solutions, an experiment evaluates the non-visual operation of the prototype. The results show that users can complete the same tasks with either visual or non-visual interaction. However, some limitations are identified, and the observations prompt a discussion of how multimodal interfaces can improve accessibility and usability.
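The central idea of the abstract, presenting the same content and controls through either a visual or an audio channel so that the same tasks remain achievable non-visually, can be illustrated with a minimal sketch. All class and method names below are hypothetical illustrations, not taken from the paper's prototype:

```python
# Minimal sketch of a multimodal output dispatcher for a talking-book
# player. Every name here is an illustrative assumption; the idea is
# only that one passage can be rendered visually, aurally, or both.

class VisualRenderer:
    def present(self, text):
        return f"[screen] {text}"

class AudioRenderer:
    def present(self, text):
        # A real player would hand this to a text-to-speech engine.
        return f"[speech] {text}"

class BookPlayer:
    def __init__(self, modalities):
        self.modalities = modalities  # active output channels

    def read(self, passage):
        # Render the passage on every active modality, so the task can
        # be completed with visual, non-visual, or combined output.
        return [m.present(passage) for m in self.modalities]

# A sighted user might enable both channels; a blind user only audio.
audio_only = BookPlayer([AudioRenderer()])
print(audio_only.read("Chapter 1"))
```

The design choice sketched here mirrors the paper's premise: accessibility comes from making modalities interchangeable at the dispatch point rather than building a separate non-visual application.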


Keywords: Universal Access · Multimodal Interfaces · Non-visual Interaction · Digital Talking Books


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Carlos Duarte ¹
  • Luís Carriço ¹

  1. LaSIGE, Faculty of Sciences of the University of Lisbon, Lisboa, Portugal