
A Generalizable Method for Validating the Utility of Process Analytics with Usability Assessments

  • Ryan Mullins
  • Chad Weiss
  • Brent D. Fegley
  • Ben Ford
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 850)

Abstract

Crowdsourcing systems rely on assessments of individual performance over time to assign tasks that improve aggregate performance. We call these combinations of performance assessment and task allocation process analytics. As crowdsourcing advances to include greater levels of task complexity, validating process analytics, which requires replicable behaviors across crowds, becomes more challenging and urgent. Here, we present a work-in-progress design for validating process analytics using integrated usability assessments, which we view as a sufficient proxy for crowdsourced problem-solving. Using the process of developing a crowdsourcing system itself as a use case, we begin by distributing usability assessments to two independent, equally sized, and otherwise comparable subgroups of a crowd. The first subgroup (control) uses a conventional method of usability assessment; the second (treatment), a distributed method. Differences in subgroup performance determine the degree to which the process analytics for the distributed method vary from those of the conventional method.
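
The design described above reduces to a two-sample comparison: usability scores from the control (conventional) and treatment (distributed) subgroups are compared to estimate how far the distributed method's process analytics drift from the conventional baseline. The following is a minimal Python sketch of that comparison, assuming per-participant usability scores on a common scale; the sample data and the choice of Welch's t-test are illustrative assumptions, not the paper's reported analysis.

    import math
    from statistics import mean, stdev

    def welch_t(control, treatment):
        """Welch's two-sample t statistic and degrees of freedom."""
        m1, m2 = mean(control), mean(treatment)
        v1, v2 = stdev(control) ** 2, stdev(treatment) ** 2
        n1, n2 = len(control), len(treatment)
        se2 = v1 / n1 + v2 / n2  # squared standard error of the mean difference
        t = (m1 - m2) / math.sqrt(se2)
        # Welch-Satterthwaite approximation of the degrees of freedom
        df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
        return t, df

    # Hypothetical usability scores (e.g., task success rates) from two
    # equally sized, otherwise comparable subgroups of the same crowd.
    conventional = [0.82, 0.75, 0.91, 0.68, 0.88, 0.79]  # control
    distributed = [0.80, 0.77, 0.85, 0.71, 0.90, 0.76]   # treatment

    t, df = welch_t(conventional, distributed)
    print(f"t = {t:.3f}, df = {df:.1f}")

A small |t| would suggest the distributed method's process analytics track the conventional baseline; a large |t| would flag a divergence worth investigating before adopting the distributed method.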

Keywords

Crowdsourcing · Usability assessment · Process analytics · Performance assessment

Acknowledgements

This research was performed in connection with contract N68335-18-C-0040 with the U.S. Office of Naval Research. We would like to thank Dr. Yiling Chen, Dr. Predrag Neskovic, Dr. James Intriligator, Mr. Roger Barry, Mr. Vilmos Csizmadia, and Ms. Kelsey Loanes for their contributions to this work as thought partners.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Ryan Mullins¹
  • Chad Weiss¹
  • Brent D. Fegley¹
  • Ben Ford¹
  1. Aptima, Inc., Woburn, USA
