
Conclusion and Future Work

Multimodal Sentiment Analysis

Part of the book series: Socio-Affective Computing (SAC, volume 8)


Abstract

The main aim of this book was to go beyond purely textual sentiment analysis by integrating audio and visual features with textual features for multimodal sentiment analysis. To this end, textual sentiment analysis itself has also been improved by further developing and applying common-sense computing and linguistic patterns, bridging the cognitive and affective gap between word-level natural language data and the concept-level opinions they convey. Several novel linguistic and machine-learning-based frameworks have been developed to accomplish multimodal sentiment analysis. Beyond the sentiment analysis task, the proposed multimodal model is also capable of detecting emotions in videos.
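As an illustration of the feature-level fusion summarized above, the short Python sketch below concatenates per-utterance textual, audio and visual feature vectors and trains a standard classifier on the fused representation. This is only a minimal sketch: the random feature matrices, their dimensionalities and the off-the-shelf SVM are placeholders for illustration, not the extractors or models developed in the book.

    # Minimal sketch of feature-level (early) fusion for multimodal sentiment
    # classification. Random vectors stand in for textual, audio and visual
    # utterance features; an off-the-shelf SVM stands in for the book's models.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_utterances = 200

    # Placeholder per-utterance features (assumed dimensionalities).
    text_feats = rng.normal(size=(n_utterances, 100))   # e.g. sentence embeddings
    audio_feats = rng.normal(size=(n_utterances, 40))    # e.g. prosodic/MFCC statistics
    visual_feats = rng.normal(size=(n_utterances, 60))   # e.g. facial-expression statistics
    labels = rng.integers(0, 2, size=n_utterances)       # 0 = negative, 1 = positive

    # Early fusion: concatenate the modality features into one vector per utterance.
    fused = np.concatenate([text_feats, audio_feats, visual_feats], axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))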




Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Poria, S., Hussain, A., Cambria, E. (2018). Conclusion and Future Work. In: Multimodal Sentiment Analysis. Socio-Affective Computing, vol 8. Springer, Cham. https://doi.org/10.1007/978-3-319-95020-4_8

