Leveraging Machine-Executable Descriptive Knowledge in Design Science Research – The Case of Designing Socially-Adaptive Chatbots

  • Jasper Feine
  • Stefan Morana
  • Alexander Maedche
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11491)

Abstract

In Design Science Research (DSR), it is important to build on state-of-the-art descriptive (Ω) and prescriptive (Λ) knowledge in order to provide a solid grounding. However, existing knowledge is typically made available only via scientific publications. This leads to two challenges: first, scholars have to manually extract relevant knowledge pieces from the unstructured text of scientific publications. Second, different research results can interact with and exclude each other, which makes aggregating, combining, and applying the extracted knowledge pieces complex. In this paper, we present how we addressed both issues in a DSR project that focuses on the design of socially-adaptive chatbots. We outline a two-step approach that transforms phenomena and relationships described in the Ω-knowledge base into a machine-executable form using ontologies and a knowledge base. Following this approach, we can design a system that aggregates and combines existing Ω-knowledge in the field of chatbots. Our work thus contributes to DSR methodology by suggesting a new approach for theory-guided DSR projects that facilitates the application and sharing of state-of-the-art Ω-knowledge.
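To illustrate what making Ω-knowledge machine-executable can look like, the minimal sketch below (not taken from the paper) encodes one hypothetical descriptive finding about a chatbot social cue as RDF triples using Python's rdflib and retrieves it with a SPARQL query. The namespace and all class, property, and instance names (SocialCue, hasEffectOn, Emoticons, PerceivedWarmth) are illustrative assumptions, not the authors' actual ontology.

```python
# Minimal sketch: encoding one descriptive (Ω) finding about a chatbot social cue
# as machine-readable RDF triples. All names below (namespace, classes, properties,
# and the example finding itself) are hypothetical illustrations.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/chatbot-knowledge#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Lightweight schema: social cues can affect outcomes reported in studies.
g.add((EX.SocialCue, RDF.type, RDFS.Class))
g.add((EX.Outcome, RDF.type, RDFS.Class))
g.add((EX.hasEffectOn, RDF.type, RDF.Property))

# One encoded finding (illustrative only): using emoticons increases perceived warmth.
g.add((EX.Emoticons, RDF.type, EX.SocialCue))
g.add((EX.Emoticons, RDFS.label, Literal("Use of emoticons")))
g.add((EX.PerceivedWarmth, RDF.type, EX.Outcome))
g.add((EX.Emoticons, EX.hasEffectOn, EX.PerceivedWarmth))
g.add((EX.Emoticons, EX.effectDirection, Literal("positive")))

# A design tool could now query such triples to aggregate findings, e.g. list
# all cues recorded as having a positive effect on a given outcome.
query = """
SELECT ?cue WHERE {
  ?cue a ex:SocialCue ;
       ex:hasEffectOn ex:PerceivedWarmth ;
       ex:effectDirection "positive" .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.cue)
```

Running the query prints the URI of every cue with a recorded positive effect on perceived warmth; aggregating such statements over a much larger knowledge base is the kind of step the envisioned system would automate.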

Keywords

Design science research · Descriptive knowledge · Prescriptive knowledge · Ontology · Chatbot · Conversational agent

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jasper Feine (corresponding author)
  • Stefan Morana
  • Alexander Maedche

  1. Institute of Information Systems and Marketing (IISM), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
