Abstract
This chapter presents the evaluation of our dialogue system. Established methods for evaluating spoken language dialogue systems distinguish between subjective and objective methods, as described in Section 2.2. The aim of the evaluation is to assess user acceptance and the rating of this novel kind of interactive system. The main focus therefore lies on subjective evaluation, for which data was obtained through questionnaires filled out by the participants before and after the data recordings. Usability evaluation is performed using two established methods: AttrakDiff [Hassenzahl et al., 2003] and a modified version of SASSI [Hone and Graham, 2000]. The evaluation is performed across the different recording sessions to detect improvements of the system, and it compares the setups with and without avatar using the data of the Session III dialogues. A technical self-assessment of the participants was also conducted in order to validate the comparison of the different recording sessions. The results of the evaluation are presented in Section 5.1.
© 2010 Springer-Verlag US
Cite this chapter
Strauß, PM., Minker, W. (2010). Evaluation. In: Proactive Spoken Dialogue Interaction in Multi-Party Environments. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-5992-8_5
Print ISBN: 978-1-4419-5991-1
Online ISBN: 978-1-4419-5992-8
eBook Packages: Engineering (R0)