Leveraging Instructor Feedback at Scale

Kristin Stephens-Martinez and Hezheng Yin

IBM and Google

We work on tools that leverage the feedback of (primarily CS) instructors at scale in large residential courses, MOOCs, and hybrid courses.

The AutoStyle project combines automatic analysis with instructor-constructed "strategy hints" to automatically generate feedback on coding style: the tasteful use of language features and idioms to produce code that is not only correct but also concise, elegant, and revealing of design intent. In an initial randomized controlled trial, 70% of students using our system reached the "stylistically best" solution to a coding problem in under an hour, while only 13% of students in the control group did so. Students using our system also showed a statistically significant greater improvement in code style than students in the control group.

We are also building a system that mines students' incorrect (constructed) responses to questions in order to identify possible misunderstandings and target them for remediation. Expert instructors hand-label the top K most popular wrong answers to each question with the misunderstanding they believe those answers signify; co-occurrence statistics then propagate these labels to unlabeled wrong answers.
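To make the notion of "stylistically best" concrete, here is a hypothetical illustration (not taken from AutoStyle itself) of the kind of transformation such feedback encourages: both functions below are correct, but the second uses idioms that reveal design intent.

```python
# Hypothetical example: count words longer than a threshold.
# Verbose but correct: manual indexing and an explicit counter.
def count_long_words_verbose(words, threshold):
    count = 0
    for i in range(len(words)):
        if len(words[i]) > threshold:
            count = count + 1
    return count

# Stylistically better: a generator expression states the intent directly.
def count_long_words(words, threshold):
    return sum(1 for w in words if len(w) > threshold)

words = ["feedback", "at", "scale", "instructor"]
assert count_long_words_verbose(words, 4) == count_long_words(words, 4) == 3
```

A strategy hint in this spirit might point out that an explicit counter over indices can often be replaced by a comprehension over the elements themselves.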
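The label-propagation step can be sketched as follows. This is a minimal illustration under assumed data structures (students mapped to the sets of wrong answers they gave), using a simple co-occurrence vote; the actual system may weight or smooth these statistics differently.

```python
from collections import Counter, defaultdict
from itertools import combinations

def propagate_labels(responses_by_student, instructor_labels):
    """Propagate misunderstanding labels to unlabeled wrong answers.

    responses_by_student: student id -> set of wrong answers given
    instructor_labels: wrong answer -> misunderstanding label
                       (the hand-labeled top-K popular answers)
    Returns a label for every wrong answer that co-occurs with a
    labeled one: each unlabeled answer takes the label of the
    misunderstanding its co-occurring labeled answers vote for most.
    """
    # Count how often each pair of wrong answers appears together
    # in the same student's responses.
    cooc = defaultdict(Counter)
    for answers in responses_by_student.values():
        for a, b in combinations(answers, 2):
            cooc[a][b] += 1
            cooc[b][a] += 1

    labels = dict(instructor_labels)
    for answer, neighbors in cooc.items():
        if answer in labels:
            continue
        # Tally co-occurrence-weighted votes from labeled neighbors.
        votes = Counter()
        for neighbor, count in neighbors.items():
            if neighbor in instructor_labels:
                votes[instructor_labels[neighbor]] += count
        if votes:
            labels[answer] = votes.most_common(1)[0][0]
    return labels

# Toy data: answers "a" and "b" are hand-labeled; "c" co-occurs with "a",
# so it inherits a's label.
students = {"s1": {"a", "c"}, "s2": {"a", "c"}, "s3": {"b"}}
labeled = {"a": "off-by-one", "b": "wrong-operator"}
propagated = propagate_labels(students, labeled)
```

With the toy data above, the unlabeled answer "c" is assigned "off-by-one" because it co-occurs only with the labeled answer "a".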