Design and Development Methodology for the Emotional State Estimation of Verbs

  • Georgios Kouroupetroglou
  • Nikolaos Papatheodorou
  • Dimitrios Tsonos
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7946)


The use of words, and particularly of verbs, in Human-Human Interaction reveals significant aspects of a person's social and mental state. This work presents a novel methodology for the emotional assessment of verbs by users. Essentially, we study whether the emotions that users experience are comparable with the corresponding results obtained through a mixture of natural-language and statistical classifiers in SentiWordNet. Following the paper-and-pencil guidelines of the International Affective Picture System (IAPS), we have developed a web-based, unsupervised version of the Self-Assessment Manikin (SAM) test, designed for the emotional assessment of verbs in English and Greek. Thirty-five men and seventeen women participated in an internet survey version of the experiment. In the first part of the process, the participants assessed the emotional state induced while reading each of 75 Greek verbs, on 5-point scales of "Pleasure", "Arousal" and "Dominance". The results show coherence and consistency: as a rule, all verbs obtained low-to-mid-range scores on the Arousal and Dominance axes, and only on the Pleasure dimension do scores approach the extremes of the scale.
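The rating procedure described above can be sketched in code. The following is a minimal, hypothetical illustration (the verbs and ratings are invented, not taken from the study's data) of how per-verb SAM responses on the three 5-point scales might be aggregated into mean and standard-deviation profiles per Pleasure/Arousal/Dominance dimension:

```python
from statistics import mean, stdev

# Hypothetical SAM responses: each verb maps each PAD dimension to the
# list of 5-point ratings collected from participants (illustrative data).
ratings = {
    "run":  {"pleasure": [4, 5, 4], "arousal": [3, 2, 3], "dominance": [3, 3, 2]},
    "lose": {"pleasure": [1, 2, 1], "arousal": [3, 3, 2], "dominance": [2, 2, 3]},
}

def pad_profile(verb_ratings):
    """Return (mean, standard deviation) per PAD dimension, rounded to 2 dp."""
    return {dim: (round(mean(vals), 2), round(stdev(vals), 2))
            for dim, vals in verb_ratings.items()}

# Aggregate every verb's ratings into a PAD profile.
profiles = {verb: pad_profile(r) for verb, r in ratings.items()}
```

Profiles of this form could then be compared against SentiWordNet's positivity/negativity scores for the corresponding lemmas; that comparison step is not shown here.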


Keywords: verbs, emotional state, SentiWordNet, Self-Assessment Manikin test




References

  1. Scherer, K.R.: What are emotions? And how can they be measured? Social Science Information 44(4), 695–729 (2005)
  2. Brave, S., Nass, C.: Emotion in human-computer interaction. In: Sears, A., Jacko, J.A. (eds.) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, pp. 77–92. Taylor and Francis Group (2007)
  3. WordNet, Lexical database of English
  4.
  5. Esuli, A., Sebastiani, F.: SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining. In: 5th Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy, pp. 417–422 (2006)
  6. Esuli, A., Baccianella, S., Sebastiani, F.: SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. In: 7th Conference on International Language Resources and Evaluation (LREC 2010), Malta (2010)
  7. Word frequency lists and dictionary from the Corpus of Contemporary American English
  8. Babylon 9 translation Software and Dictionary Tool
  9. Lang, P.J., Bradley, M.M., Cuthbert, B.N.: International affective picture system (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8, University of Florida, Gainesville, FL (2005)
  10. Bradley, M.M., Lang, P.J.: Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25(1), 49–59 (1994)
  11. Russell, J.A., Mehrabian, A.: Evidence for a three-factor theory of emotions. Journal of Research in Personality 11(3), 273–294 (1977)
  12. Morris, J.D.: Observations: SAM: The Self-Assessment Manikin – An Efficient Cross-Cultural Measurement of Emotional Response. Journal of Advertising Research 35(8), 63–68 (1995)
  13. Grimm, M., Mower, E., Narayanan, S., Kroschel, K.: Combining categorical and primitives-based emotion recognition. In: Proceedings of the 14th European Signal Processing Conference (EUSIPCO), Florence, Italy (September 2006)
  14. Grimm, M., Kroschel, K., Mower, E., Narayanan, S.: Primitives-based evaluation and estimation of emotions in speech. Speech Communication 49(10-11), 787–800 (2007)
  15. Grimm, M., Kroschel, K., Narayanan, S.: Support Vector Regression for Automatic Recognition of Spontaneous Emotions in Speech. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Hawaii, USA, April 15–20, vol. 4, pp. IV-1085–IV-1088 (2007)
  16. Lee, C.M., Narayanan, S.S.: Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing 13(2), 293–303 (2005)
  17. Busso, C., Deng, Z., Grimm, M., Neumann, U., Narayanan, S.: Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis. IEEE Transactions on Audio, Speech, and Language Processing 15(3), 1075–1086 (2007)
  18. Lang, P.J., Bradley, M.M., Cuthbert, B.N.: International Affective Picture System (IAPS): Instruction Manual and Affective Ratings. Technical Report A-6, The Center for Research in Psychophysiology, University of Florida, U.S.A. (2005)
  19. PHP Hypertext Preprocessor
  20. Welling, L., Thomson, L.: PHP and MySQL Web Development, 4th edn. Addison-Wesley Professional (2008)
  21.
  22.
  23.
  24. Derogatis, L.R.: The Symptom Checklist-90-Revised. NCS Assessments, Minneapolis (1992)
  25. Corulla, W.J.: A psychometric investigation of the Eysenck Personality Questionnaire (revised) and its relationship to the I7 Impulsiveness Questionnaire. Personality and Individual Differences 8, 651–658 (1987)
  26. Donias, S., Karastergiou, A., Manos, N.: Standardization of the Symptom Checklist-90-R rating scale in a Greek population. Psychiatriki (1), 42–48 (1991)
  27. Demetriou, E.X.: The Eysenck Personality Questionnaire: Standardization to the Greek population, Adult and Junior. Engefalos 23, 41–54 (1986)
  28. Emotion Markup Language (EmotionML) 1.0
  29. Campbell, N., Hamza, W., Hoge, H., Tao, J., Bailly, G.: Editorial: Special Section on Expressive Speech Synthesis. IEEE Transactions on Audio, Speech, and Language Processing 14(4), 1097–1098 (2006)
  30. Kouroupetroglou, G., Tsonos, D.: Multimodal Accessibility of Documents. In: Pinder, S. (ed.) Advances in Human-Computer Interaction, pp. 451–470. I-Tech Education and Publishing, Vienna (2008)
  31. Stickel, C., Ebner, M., Steinbach-Nordmann, S., Searle, G., Holzinger, A.: Emotion Detection: Application of the Valence Arousal Space for Rapid Biological Usability Testing to Enhance Universal Access. In: Stephanidis, C. (ed.) Universal Access in HCI, Part I, HCII 2009. LNCS, vol. 5614, pp. 615–624. Springer, Heidelberg (2009)
  32. Petz, G., Karpowicz, M., Fürschuß, H., Auinger, A., Winkler, S.M., Schaller, S., Holzinger, A.: On Text Preprocessing for Opinion Mining Outside of Laboratory Environments. In: Huang, R., Ghorbani, A.A., Pasi, G., Yamaguchi, T., Yen, N.Y., Jin, B. (eds.) AMT 2012. LNCS, vol. 7669, pp. 618–629. Springer, Heidelberg (2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Georgios Kouroupetroglou (1)
  • Nikolaos Papatheodorou (1)
  • Dimitrios Tsonos (1)

  1. Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece
