Computer graders are in regular use in MOOCs (Massive Open Online Courses). Automatic grading of programs makes it possible to assess large classes and to provide tailored feedback, with benefits such as immediate feedback, unlimited submissions, and a low cost per assessment. This paper compares Algo+, an automatic assessment tool for computer programs, to the automatic grader used in a MOOC at EPFL (École Polytechnique Fédérale de Lausanne, Switzerland). The empirical study explores the practicability and behavior of Algo+ and analyzes whether it can be used to evaluate programs at large scale.

Algo+ is a prototype based on a static analysis approach to the automated assessment of algorithms: programs are not executed but are analyzed by inspecting their instructions. The second tool, the EPFL grader, is used to grade programs submitted by students in the Introductory Programming with C++ MOOCs at EPFL and is based on a compiler approach (dynamic analysis): submissions are assessed via a battery of unit tests in which the student programs are run on standard input and checked for correct output.

The results show the advantages and limits of each approach and indicate how the two tools can be combined to better assess students' learning in computer programming MOOCs. The study also leads to a proposed model of the relationship between the number of submissions and the appearance of the most frequently submitted programs. Algo+ uses this observation to give feedback based only on the n most redundant submissions, which have been annotated by the instructor.
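As a rough illustration of the dynamic analysis approach described above, the following sketch compiles a C++ submission and runs it against a battery of input/output test cases. The compiler invocation, directory layout, and time limit are illustrative assumptions, not details of the EPFL grader.

import subprocess
from pathlib import Path

# Assumed layout: tests/<name>.in holds the stdin for a test case,
# tests/<name>.out holds the expected stdout.
TESTS_DIR = Path("tests")
TIME_LIMIT = 5  # seconds per test case (illustrative)

def grade(source_file: str) -> float:
    """Compile a student's C++ submission and run it against all test
    cases; return the fraction of cases whose output matched exactly."""
    # Compile step: a submission that fails to compile scores zero.
    build = subprocess.run(
        ["g++", "-O2", "-o", "student_bin", source_file],
        capture_output=True,
    )
    if build.returncode != 0:
        return 0.0

    cases = sorted(TESTS_DIR.glob("*.in"))
    passed = 0
    for case in cases:
        expected = case.with_suffix(".out").read_text()
        try:
            # Run the program on the test's standard input.
            result = subprocess.run(
                ["./student_bin"],
                input=case.read_text(),
                capture_output=True,
                text=True,
                timeout=TIME_LIMIT,
            )
        except subprocess.TimeoutExpired:
            continue  # a time-out counts as a failed test
        # A test passes when stdout matches the expected output.
        if result.stdout == expected:
            passed += 1
    return passed / len(cases) if cases else 0.0

if __name__ == "__main__":
    print(f"score: {grade('submission.cpp'):.0%}")

Because such a harness judges only observed output, it scales to any number of submissions, but it cannot comment on how a program computes its answer, which is precisely what a static approach like Algo+ inspects.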
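The redundancy-based feedback mechanism mentioned in the closing sentences can likewise be sketched. The abstract does not give the proposed model's formula, so the code below only illustrates the underlying idea: counting recurring submissions and reusing instructor annotations for the n most frequent ones. The normalization step and all names here are assumptions for illustration, not Algo+'s actual method.

from collections import Counter

def normalize(program: str) -> str:
    """Crude normalization so trivially different copies count as one
    submission (a real system would compare programs more deeply, e.g.
    at the instruction level; this is only an illustrative stand-in)."""
    return "\n".join(line.strip() for line in program.splitlines() if line.strip())

def feedback_table(submissions: list[str], annotations: dict[str, str], n: int) -> dict[str, str]:
    """Map each of the n most frequent submissions to the instructor's
    annotation for it; other submissions get no automatic feedback."""
    counts = Counter(normalize(s) for s in submissions)
    table = {}
    for program, _freq in counts.most_common(n):
        if program in annotations:
            table[program] = annotations[program]
    return table

def feedback_for(program: str, table: dict[str, str]) -> str:
    return table.get(
        normalize(program),
        "No automatic feedback available; awaiting instructor review.",
    )

The appeal of this design, if the proposed model holds, is that a small number of annotated programs covers a disproportionate share of incoming submissions, so instructor effort stays bounded while most students still receive tailored feedback.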