A Generalizable Method for Validating the Utility of Process Analytics with Usability Assessments
Crowdsourcing systems rely on assessments of individual performance over time to assign tasks in ways that improve aggregate performance. We call these combinations of performance assessment and task allocation process analytics. As crowdsourcing advances to include greater levels of task complexity, validating process analytics, which requires replicable behaviors across crowds, becomes more challenging and more urgent. Here, we present a work-in-progress design for validating process analytics using integrated usability assessments, which we view as a sufficient proxy for crowdsourced problem-solving. Using the process of developing a crowdsourcing system itself as a use case, we begin by distributing usability assessments to two independent, equally sized, and otherwise comparable subgroups of a crowd. The first subgroup (control) uses a conventional method of usability assessment; the second (treatment), a distributed method. Differences in subgroup performance then indicate the degree to which the process analytics for the distributed method diverge from those of the conventional method.
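The subgroup comparison described above can be sketched in a few lines of Python. The sketch below is illustrative only: the crowd size, the random split, and the simulated usability scores are hypothetical stand-ins for the measurements that the conventional (control) and distributed (treatment) assessments would actually produce.

```python
import random
import statistics

random.seed(7)  # reproducible illustration only

# Hypothetical crowd of 40 participants, randomly split into two
# equally sized subgroups; a real assignment would also balance on
# demographics or prior performance to keep the subgroups comparable.
crowd = [f"p{i:02d}" for i in range(40)]
random.shuffle(crowd)
control, treatment = crowd[:20], crowd[20:]

# Placeholder usability scores (e.g., task success rate in [0, 1]);
# in practice these would come from the conventional assessment
# (control) and the distributed assessment (treatment).
scores = {
    "control":   [random.gauss(0.70, 0.08) for _ in control],
    "treatment": [random.gauss(0.72, 0.08) for _ in treatment],
}

for name, vals in scores.items():
    print(f"{name:>9}: mean={statistics.mean(vals):.3f} "
          f"sd={statistics.stdev(vals):.3f}")

# The gap between subgroup means estimates how far the distributed
# method's process analytics diverge from the conventional baseline.
diff = (statistics.mean(scores["treatment"])
        - statistics.mean(scores["control"]))
print(f"difference in means: {diff:+.3f}")
```

In a full analysis, the mean difference would be accompanied by an appropriate significance test and effect size before drawing conclusions about the distributed method.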
Keywords: Crowdsourcing · Usability assessment · Process analytics · Performance assessment
This research was performed in connection with contract N68335-18-C-0040 with the U.S. Office of Naval Research. We would like to thank Dr. Yiling Chen, Dr. Predrag Neskovic, Dr. James Intriligator, Mr. Roger Barry, Mr. Vilmos Csizmadia, and Ms. Kelsey Loanes for their contributions to this work as thought partners.