Design Towards AI-Powered Workplace of the Future

  • Yujia Cao
  • Jiri Vasek
  • Matej Dusik
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10921)

Abstract

Advances in technology have profoundly improved the way people live and work. However, fast-paced technological development is accompanied by information overload, which can diminish our capacity for cognitive processing and our ability to make quality decisions. We conducted extensive user research to identify the needs and problems of contemporary office workers. Based on these insights, we developed the concept of a system called Cognitive Hub, which supports a new activity-based metaphor for work, user-state adaptation, smart enterprise search, smart transformation between physical and digital content, and multimodal interaction. Konica Minolta is developing Cognitive Hub as a platform that will serve as a nexus for users’ information flows within the digital workplace. Cognitive Hub will also provide AI-based services to improve the work experience and well-being of office workers. A demonstrator was created to show the concept in action and illustrate its benefits and value for users.
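The abstract does not describe how user-state adaptation is implemented in Cognitive Hub. The short Python sketch below only illustrates the general idea of adapting a digital workplace to an inferred user state; all signal names (typing_rate_wpm, meeting_in_progress, focus_minutes) and thresholds are hypothetical and not taken from the paper.

    from dataclasses import dataclass
    from enum import Enum


    class CognitiveLoad(Enum):
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"


    @dataclass
    class UserState:
        """Hypothetical snapshot of signals a workplace assistant might infer."""
        typing_rate_wpm: float      # recent typing speed
        meeting_in_progress: bool   # calendar status
        focus_minutes: int          # uninterrupted time spent on the current task


    def estimate_load(state: UserState) -> CognitiveLoad:
        """Toy heuristic: combine a few signals into a coarse load estimate."""
        if state.meeting_in_progress or state.focus_minutes > 45:
            return CognitiveLoad.HIGH
        if state.typing_rate_wpm > 60:
            return CognitiveLoad.MEDIUM
        return CognitiveLoad.LOW


    def should_deliver_notification(state: UserState, urgent: bool) -> bool:
        """Adaptation rule: hold non-urgent interruptions while load is high."""
        load = estimate_load(state)
        return urgent or load is not CognitiveLoad.HIGH


    if __name__ == "__main__":
        state = UserState(typing_rate_wpm=72, meeting_in_progress=False, focus_minutes=50)
        print(estimate_load(state))                       # CognitiveLoad.HIGH
        print(should_deliver_notification(state, False))  # False: defer until focus ends
        print(should_deliver_notification(state, True))   # True: urgent is always delivered

The same pattern (infer a coarse state, then gate interruptions on it) could in principle be driven by richer signals such as eye activity or speech, but those pipelines are outside the scope of this sketch.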

Keywords

Digital workplace · Artificial intelligence · User-centred design · User state inference · Multimodal interaction

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Konica Minolta Laboratory Europe, Brno, Czech Republic
