CrowdAE: A Crowdsourcing System with Human Inspection Quality Enhancement for Web Accessibility Evaluation
Crowdsourcing technology can support manual accessibility testing by soliciting contributions from volunteer evaluators, but crowd evaluators may return inaccurate or invalid results. This paper proposes CrowdAE, an improved crowdsourcing-based web accessibility evaluation system that enhances the crowdsourced manual-testing module of our previous system. Through three main processes, namely a learning system, task assignment, and task review, CrowdAE improves the quality of the evaluation results collected from the crowd. In a comparison across two years of evaluations of Chinese government websites, CrowdAE outperforms the previous version and improves the accuracy of the evaluation results.
Keywords: Web accessibility evaluation · Crowdsourcing
This work is supported by the National Natural Science Foundation of China (Nos. 61173185 and 61173186), the National Key Technology R&D Program of China (Nos. 2012BAI34B01 and 2014BAK15B02), and the Hangzhou S&T Development Plan (No. 20150834M22).
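To make the three-process pipeline concrete, the sketch below shows one plausible way a learning system, reliability-based task assignment, and majority-vote task review could fit together. It is a minimal illustration under stated assumptions, not the paper's implementation; all names, thresholds, and the redundancy level are hypothetical.

```python
# Hypothetical sketch of a three-stage crowdsourcing quality pipeline:
# (1) a learning system that qualifies evaluators on gold-standard questions,
# (2) task assignment favoring the most reliable evaluators,
# (3) task review that aggregates redundant reports by majority vote.
# None of these names or thresholds come from the paper.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Evaluator:
    name: str
    reliability: float = 0.0   # estimated from the learning-system quiz
    passed_training: bool = False

@dataclass
class Task:
    page_url: str
    reports: dict = field(default_factory=dict)  # evaluator name -> verdict

def run_learning_system(evaluator: Evaluator, quiz: list) -> None:
    """Score the evaluator on questions with known ground-truth answers."""
    correct = sum(1 for answer, truth in quiz if answer == truth)
    evaluator.reliability = correct / len(quiz)
    evaluator.passed_training = evaluator.reliability >= 0.6  # assumed cutoff

def assign_task(task: Task, pool: list, redundancy: int = 3) -> list:
    """Assign the task to the most reliable qualified evaluators."""
    qualified = [e for e in pool if e.passed_training]
    qualified.sort(key=lambda e: e.reliability, reverse=True)
    return qualified[:redundancy]

def review_task(task: Task) -> str:
    """Aggregate redundant reports; escalate when no majority emerges."""
    votes = Counter(task.reports.values())
    verdict, count = votes.most_common(1)[0]
    if count <= len(task.reports) / 2:
        return "escalate-to-expert-review"
    return verdict

if __name__ == "__main__":
    pool = [Evaluator("alice"), Evaluator("bob"), Evaluator("carol")]
    quiz = [("violation", "violation"), ("pass", "pass"), ("pass", "violation")]
    for e in pool:
        run_learning_system(e, quiz)
    task = Task("https://example.gov.cn/")
    for e in assign_task(task, pool):
        task.reports[e.name] = "violation"  # stand-in for a real report
    print(review_task(task))  # -> "violation"
```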