Abstract
Amazon Mechanical Turk (AMT), a system for crowdsourcing work, has been used in many academic fields to support research and could be similarly useful for information systems research. This paper briefly describes the functioning of the AMT system and presents a simple typology of research data collected using AMT. For each kind of data, it discusses potential threats to reliability and validity and possible ways to address those threats. The paper concludes with a brief discussion of possible applications of AMT to research on organizations and information systems.
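For readers unfamiliar with how AMT functions in practice, the sketch below illustrates how a requester might post a simple Human Intelligence Task (HIT) programmatically. This is an illustration only, not code from the paper: it assumes the present-day AWS boto3 SDK (which postdates this 2012 chapter) and the MTurk requester sandbox endpoint, and every task parameter shown (title, reward, question text) is a hypothetical placeholder.

```python
# Hedged sketch: posting a HIT to the Amazon Mechanical Turk *sandbox*
# with the AWS boto3 SDK. All titles, rewards, and question text are
# hypothetical placeholders, not values from the paper.
import boto3

# The sandbox endpoint lets requesters test HITs without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A minimal QuestionForm: one free-text question shown to each worker.
question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>label1</QuestionIdentifier>
    <QuestionContent><Text>Is this product review positive or negative?</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Label the sentiment of a short product review",
    Description="Read one review and say whether it is positive or negative.",
    Keywords="labeling, sentiment, research",
    Reward="0.05",                     # USD paid per completed assignment
    MaxAssignments=3,                  # redundant judgments per item
    AssignmentDurationInSeconds=300,   # time a worker has once accepted
    LifetimeInSeconds=86400,           # how long the HIT stays available
    Question=question_xml,
)
print("HIT ID:", hit["HIT"]["HITId"])
```

Requesting several assignments per item, as in this sketch, yields redundant judgments that can be compared against one another, one simple way to address the reliability threats the paper discusses.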
Cite this paper
Crowston, K. (2012). Amazon Mechanical Turk: A Research Tool for Organizations and Information Systems Scholars. In: Bhattacherjee, A., Fitzgerald, B. (eds.) Shaping the Future of ICT Research: Methods and Approaches. IFIP Advances in Information and Communication Technology, vol. 389. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35142-6_14