Accessibility and Design for All Solutions Through Speech Technology



The advent of computer-based speech-processing systems, such as speech synthesisers (SS) and speech recognisers (SR), offers a promising way of meeting the fundamental human need for spoken communication by enabling automatic, speech-mediated interaction. Speech over the acoustic medium can implement a high-potential communication channel in an alternative, or augmentative, way that improves access to communication for persons with special needs. This approach, which covers speech processing for both input and output, is now a well-defined and visible trend in communication technology. Moreover, solutions devised for the communication difficulties of disabled persons are expected to benefit non-disabled persons as well, by providing redundancy, and therefore greater comfort, in the use of communication systems.
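
As a concrete illustration of such a speech-mediated channel, the short Python sketch below pairs a speech recogniser (input) with a speech synthesiser (output) to close a purely acoustic interaction loop. It is only an outline of the general idea, not any system described in this chapter: the third-party SpeechRecognition and pyttsx3 packages, the Google recognition backend, and the prompt texts are assumptions made for the example.

import speech_recognition as sr  # pip install SpeechRecognition (plus PyAudio for microphone access)
import pyttsx3                   # pip install pyttsx3

def speak(engine, text):
    """Output side: render a reply acoustically with the speech synthesiser."""
    engine.say(text)
    engine.runAndWait()

def listen(recognizer):
    """Input side: capture one utterance from the microphone and transcribe it."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # simple robustness to background noise
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # assumed cloud SR backend

if __name__ == "__main__":
    tts = pyttsx3.init()
    asr = sr.Recognizer()
    speak(tts, "How can I help you?")
    try:
        heard = listen(asr)
        # Echoing the recognised text back closes the audio-only loop and
        # provides the redundancy that, as argued above, benefits all users.
        speak(tts, "You said: " + heard)
    except sr.UnknownValueError:
        speak(tts, "Sorry, I did not understand that.")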





Projects MULTIVOX and SPEECH-AID were developed in collaboration with LSS member João Paulo Teixeira of the Polytechnic Institute of Bragança, Portugal, in cooperation with the Technical University of Budapest and the Academy of Sciences of Budapest, and with colleagues Géza Németh and Gábor Olaszy.

Project Audiobrowser was developed in collaboration with Fernando Lopes, in a consortium with the Department of Informatics (DI) of the University of Minho, and with colleague António Fernandes.

Projects INFOMETRO and NAVMETRO are being developed by a consortium formed by ACAPO, Metro do Porto, S.A., and FEUP, and are sponsored by the Portuguese Government Program POSConhecimento.

Project SPEAKMATH is sponsored by the Portuguese Government Program POSConhecimento.

Project AQN was developed by Helder Ferreira and Vitor Carvalho (both from FEUP) in collaboration with Dárida Fernandes and Fernando Pedrosa (both from ESE-IPP, Centro Calculus) and was coordinated by the author. It started from an idea by former LSS member Maria Barros.

The author was a member of Action COST 219 ter – Accessibility for All to Services and Terminals for Next Generation Networks, a forum that for many years allowed the author to learn about the “whys” and “hows” of design for all in accessibility to telecommunications.

The chapter’s pictures were designed by Pedro Marcolino Freitas, the author’s elder son, to whom deep gratitude is expressed.



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

Speech Processing Laboratory, Faculty of Engineering, University of Porto, Porto, Portugal