TSW: A Web-Based Automatic Correction System for C Programming Exercises

  • Pietro Longo
  • Andrea Sterbini
  • Marco Temperini
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5736)


We present the TSW system (TestSystem Web), a web-based environment, currently under development at the University of Rome “La Sapienza”, for the delivery of C programming exercises and their automatic correction.

The core of the correction system tests the student’s programs automatically, by applying unit tests and/or by comparing the behaviour of the student’s code with that of a reference implementation. Care is taken to avoid error propagation from a faulty function to the procedures that depend on it: calls to the failing function are redirected to the corresponding reference implementation. The system “instruments” the student’s code with a code analyser and rewriter that handles instruction tracing and function-call redirection. The rewriter can easily be extended with further analysis instruments. As examples, we have developed:

  • a code-coverage tool that reports how much of the student’s code was visited during the test;
  • a cyclomatic-complexity evaluator to compare the number of distinct logic paths in the code;
  • a stack-depth tracker to check for the proper implementation of recursive functions;
  • a function/loop execution counter for evaluating execution complexity.

Additional care is taken to capture disruptive errors that would abort the program: “segmentation faults” caused by wrong pointer dereferencing, and time-outs caused by runaway processes.

With these tools, the teacher can write rich unit tests that compare the behaviour of the function under analysis with a reference implementation (e.g. by generating random inputs and comparing the results), submit well-crafted inputs that elicit special cases, or compare the complexity and/or stack-depth counters. Each applied test then explains to the student what problem was found.

TSW is the core component of a planned larger social-knowledge project, in which students will cooperatively/competitively participate in the definition and testing of each other’s programs, sharing ideas and learning from one another.


Keywords: automatic grading · automatic correction · programming exercises





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Pietro Longo (1)
  • Andrea Sterbini (1)
  • Marco Temperini (2)

  1. DI, University of Rome “La Sapienza”, Rome, Italy
  2. DIS, University of Rome “La Sapienza”, Rome, Italy
