The Difficulty of Programming Contests Increases
Abstract
In this paper we give a detailed quantitative and qualitative analysis of how the difficulty of programming contests has evolved over the years. We analyze the topics of past competition tasks, and we examine an entire problem set in terms of the algorithm efficiency it requires. We provide both subjective and objective evidence that contestants are getting better over the years and that the tasks are getting harder. We use an exact, formal method based on Item Response Theory to analyze past contest results.
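As background for readers unfamiliar with Item Response Theory: a standard formulation is the two-parameter logistic model, shown here only as an illustrative sketch and not necessarily the exact variant used in the paper. It models the probability that a contestant with latent skill θ solves task i, where b_i is the task's difficulty and a_i its discrimination:

$$P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

Fitting the parameters a_i and b_i to historical solved/unsolved data places tasks from different years on a common difficulty scale, which is what makes an IRT-based comparison of contest results across years possible.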
Keywords
Item Response Theory · Skill Level · Hard Task · Median Task · Contest Result
Copyright information
© Springer-Verlag Berlin Heidelberg 2010