Abstract

Computing competitions like the International Olympiad in Informatics (IOI) typically pose several problems that contestants are required to solve by writing a program. The program is tested automatically on several sets of input data to determine whether or not it computes the correct answer within specified time and memory limits. We consider the controversy of whether and how to award partial credit for programs that fail some of the tests. Using item response theory, we analyze the degree to which the scores from these automatic tests, separately and in various combinations, truly reflect the contestants’ achievement.
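
The abstract does not say which item response model the authors fit; as a minimal sketch, the two-parameter logistic (2PL) model commonly used in item response theory treats each test case as an item whose probability of being passed rises with contestant ability. The function name and parameter values below are illustrative assumptions, not taken from the paper.

    import math

    def p_correct(theta, a, b):
        """Two-parameter logistic (2PL) item response function.

        theta : contestant ability
        a     : item discrimination (how sharply this test case separates
                contestants of differing ability)
        b     : item difficulty (the ability level at which a contestant
                has a 50% chance of passing this test case)

        Returns the modelled probability that a contestant of ability
        `theta` passes the test case. Parameter values are hypothetical.
        """
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Example: a highly discriminating test case (a = 2.0) of moderate
    # difficulty (b = 0.5), evaluated for three contestants.
    for theta in (-1.0, 0.5, 2.0):
        print(f"theta = {theta:+.1f}  P(pass) = {p_correct(theta, a=2.0, b=0.5):.3f}")

Under a model of this kind, a test case with low discrimination or extreme difficulty conveys little information about a contestant's achievement, which is the sort of property one can examine when deciding whether and how to award partial credit.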

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kemkes, G., Vasiga, T., Cormack, G. (2006). Objective Scoring for Computing Competition Tasks. In: Mittermeir, R.T. (ed.) Informatics Education – The Bridge between Using and Understanding Computers. ISSEP 2006. Lecture Notes in Computer Science, vol 4226. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11915355_22

  • DOI: https://doi.org/10.1007/11915355_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-48218-5

  • Online ISBN: 978-3-540-48227-7

  • eBook Packages: Computer Science, Computer Science (R0)
