A Workflow for Network Analysis-Based Structure Discovery in the Assessment Community

  • Grace Teo
  • Lauren Reinerman-Jones
  • Mark E. Riecken
  • Joseph McDonnell
  • Scott Gallant
  • Maartje Hidalgo
  • Clayton W. Burford
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10915)

Abstract

When technology opens up new domains or areas of research, such as human-agent teaming, new challenges in assessment emerge. Assessments may not be conducted systematically while new measures are still being developed, and the research may not be firmly grounded in theory, since theories in newer domains are still being formulated. As a result, research in these domains can be fragmented. To address these challenges, an empirically driven network approach that complements the traditional theory-driven approach is proposed. The network approach seeks to discover patterns and structure in assessment metadata (e.g., constructs and measures) that can provide starting points and direction for future research. This paper outlines the workflow of the network approach, which comprises three steps: (1) Data Preparation; (2) Data Analysis; and (3) Structure Discovery. As most of the work to date has been on Data Preparation, the paper focuses on the complexities and issues encountered in the first step and gives broad overviews of the subsequent steps. Anticipated uses and outcomes of the network approach are also discussed.
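To make the three-step workflow concrete, the following is a minimal sketch, assuming a hypothetical table of (construct, measure) pairs extracted from published assessments. The data, node names, and library choice (Python with networkx) are illustrative assumptions, not the authors' implementation: it prepares the metadata as a bipartite construct-measure network, projects that network onto constructs, and applies community detection as a simple form of structure discovery.

    # Sketch of the network approach on hypothetical assessment metadata.
    import networkx as nx
    from networkx.algorithms import bipartite
    from networkx.algorithms.community import greedy_modularity_communities

    # Step 1: Data Preparation -- assessment metadata as (construct, measure)
    # pairs. These entries are invented for illustration.
    metadata = [
        ("trust", "Trust in Automation Scale"),
        ("trust", "heart rate variability"),
        ("workload", "NASA-TLX"),
        ("workload", "heart rate variability"),
        ("workload", "eye tracking"),
        ("situation awareness", "SAGAT"),
        ("situation awareness", "eye tracking"),
    ]

    # Step 2: Data Analysis -- build a bipartite construct-measure network.
    G = nx.Graph()
    constructs = {c for c, _ in metadata}
    measures = {m for _, m in metadata}
    G.add_nodes_from(constructs, bipartite=0)
    G.add_nodes_from(measures, bipartite=1)
    G.add_edges_from(metadata)

    # Project onto constructs: two constructs are linked (with an edge weight)
    # when they share at least one measure.
    construct_net = bipartite.weighted_projected_graph(G, constructs)

    # Step 3: Structure Discovery -- communities in the projection suggest
    # clusters of constructs that the literature assesses with overlapping
    # instruments, i.e., candidate structure for future theory-building.
    for community in greedy_modularity_communities(construct_net):
        print(sorted(community))

At scale, the same pipeline would ingest many papers' metadata, and the discovered communities would serve as the "starting points and direction for future research" described in the abstract.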

Keywords

Network analysis · Structure discovery · Assessment · Standardization · Data extraction

Acknowledgements

This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-15-2-0100. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Grace Teo (1)
  • Lauren Reinerman-Jones (1)
  • Mark E. Riecken (2)
  • Joseph McDonnell (3)
  • Scott Gallant (4)
  • Maartje Hidalgo (1)
  • Clayton W. Burford (5)
  1. Institute for Simulation and Training, University of Central Florida, Orlando, USA
  2. Trideum, Orlando, USA
  3. Dynamic Animation Systems, Fairfax, USA
  4. Effective Applications Corporation, Orlando, USA
  5. Army Research Laboratory, Orlando, USA
