Encyclopedia of Database Systems

2018 Edition
Editors: Ling Liu, M. Tamer Özsu

Indexing with Crowds

  • Ahmed R. Mahmood
  • Walid G. Aref
  • Saleh Basalamah
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_80656

Synonyms

Crowdsourced indexing; Indexing techniques

Definition

Humans are better than computers at performing certain tasks, e.g., understanding images and videos, estimating the prices of antique items, as well as performing tasks that demand subjective judgement, e.g., ranking flowers or butterflies in terms of beauty, wildness, or smell. Using crowdsourcing, one can delegate these tasks to humans. However, with large numbers of tasks, costs and response times can grow quickly. Database systems typically use a large variety of indexing techniques to speed up search over, and manipulation of (e.g., updating, inserting into, and deleting from), large sets of data items. The question is: are database indexing techniques still applicable when humans are in the loop for tasks that demand subjective judgement? The answer is in the affirmative. Indeed, several applications integrate database indexing alongside crowdsourcing to reduce the overall task cost while still improving the quality of task...
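To make the cost argument concrete, the sketch below (in Python, using hypothetical names such as CrowdIndex and crowd_compare that are not part of this entry) shows one way an ordered index can be maintained when the sort order itself requires human judgement: each comparison during an insert or lookup is treated as a crowdsourcing task, so a logarithmic index traversal issues O(log n) human tasks per operation instead of the O(n) tasks a linear scan would need. The crowd step is simulated by a plain string comparison so the sketch remains runnable.

from typing import Callable, List


def crowd_compare(item_a: str, item_b: str) -> bool:
    """Hypothetical stand-in for a crowdsourced comparison task.

    A real deployment would post a question such as "Which flower is
    more beautiful, A or B?" to a crowdsourcing platform, collect
    several worker answers, aggregate them (e.g., by majority vote),
    and return True if item_a should rank before item_b. Here the
    crowd is simulated by plain string comparison so the sketch runs.
    """
    return item_a < item_b


class CrowdIndex:
    """Minimal ordered index whose sort order is defined by the crowd.

    Items are kept in a sorted list; every insert or search performs
    O(log n) crowd comparisons rather than the O(n) comparisons of a
    linear scan, which is the cost saving that indexing with crowds
    aims for.
    """

    def __init__(self, compare: Callable[[str, str], bool] = crowd_compare):
        self.compare = compare
        self.items: List[str] = []

    def _locate(self, item: str) -> int:
        # Binary search in which each comparison is a (simulated) crowd task.
        lo, hi = 0, len(self.items)
        while lo < hi:
            mid = (lo + hi) // 2
            if self.compare(self.items[mid], item):
                lo = mid + 1
            else:
                hi = mid
        return lo

    def insert(self, item: str) -> None:
        # Place the new item at the position the crowd comparisons indicate.
        self.items.insert(self._locate(item), item)

    def search(self, item: str) -> bool:
        # With a subjective comparator, equality would itself be a crowd
        # question; exact matching is used here only to keep the sketch simple.
        pos = self._locate(item)
        return pos < len(self.items) and self.items[pos] == item


if __name__ == "__main__":
    index = CrowdIndex()
    for flower in ["tulip", "rose", "orchid", "daisy"]:
        index.insert(flower)
    print(index.items)            # order induced by the simulated crowd
    print(index.search("rose"))   # True

In an actual deployment, the comparator would post each question to a crowdsourcing platform and aggregate several worker answers (e.g., by majority vote) to control answer quality, so the per-operation cost is the number of comparisons times the price and latency of one aggregated crowd task.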



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Ahmed R. Mahmood (1)
  • Walid G. Aref (2)
  • Saleh Basalamah (3)
  1. Computer Science, Purdue University, West Lafayette, USA
  2. Purdue University, West Lafayette, USA
  3. Computer Science, Umm Al-Qura University, Mecca, Makkah Province, Saudi Arabia