ACES: Automatic Evaluation of Coding Style
Stephanie Rogers, Dan Garcia, John F. Canny, Steven Tang and Daniel Kang
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2014-77
May 15, 2014
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-77.pdf
Coding style is important to teach to beginning programmers so that bad habits do not become permanent. At the university level this is often done manually, because automated static analyzers cannot accurately grade against a given rubric. Manual analysis of coding style has its own problems, however: we have observed considerable inconsistency among our graders. We introduce ACES (Automated Coding Evaluation of Style), a module that automates grading the composition of Python programs. Given certain constraints, ACES assesses a program's composition through static analysis, conversion of code to an abstract syntax tree (AST), and clustering (unsupervised learning), helping streamline the subjective process of grading on style and identifying common mistakes. We also create visual representations of the clusters so that readers and students can see where a submission falls and what the overall trends are. We have applied this tool to CS61A, a CS1-level course at UC Berkeley experiencing rapid growth in student enrollment, in an effort to expedite the involved process of grading code for composition and to reduce inconsistencies among human graders.
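As a concrete illustration of the pipeline the abstract describes (a minimal sketch, not the authors' implementation), Python's standard ast module can parse each submission, node-type counts can serve as crude style features, and a toy k-means can group stylistically similar submissions. The feature set, the naive seeding, and the cluster count below are illustrative assumptions.

    # Sketch: submission source -> AST node-count features -> k-means clusters.
    import ast
    from collections import Counter

    NODE_TYPES = ["For", "While", "If", "Lambda", "ListComp", "Return", "Call"]

    def ast_features(source):
        """Map a submission's source code to a vector of AST node counts."""
        counts = Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))
        return [counts[name] for name in NODE_TYPES]

    def kmeans(vectors, k=2, iters=20):
        """Tiny k-means over feature vectors; returns a cluster label per vector."""
        centroids = vectors[:k]  # naive seeding, for illustration only
        for _ in range(iters):
            labels = [min(range(k),
                          key=lambda c: sum((v - m) ** 2
                                            for v, m in zip(vec, centroids[c])))
                      for vec in vectors]
            for c in range(k):
                members = [v for v, lab in zip(vectors, labels) if lab == c]
                if members:
                    centroids[c] = [sum(col) / len(members) for col in zip(*members)]
        return labels

    # Two solutions to the same exercise, written in two different styles.
    submissions = [
        "def squares(xs):\n    return [x * x for x in xs]",
        "def squares(xs):\n    out = []\n    for x in xs:\n        out.append(x * x)\n    return out",
    ]
    vectors = [ast_features(src) for src in submissions]
    print(kmeans(vectors, k=2))  # prints [0, 1]: the two styles land in different clusters

A real system would use richer features (e.g., nesting depth, naming, decomposition) and a principled choice of k; this sketch only shows how AST-derived features make stylistic differences clusterable.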
Advisors: John F. Canny and Dan Garcia
BibTeX citation:
@mastersthesis{Rogers:EECS-2014-77,
    Author = {Rogers, Stephanie and Garcia, Dan and Canny, John F. and Tang, Steven and Kang, Daniel},
    Title = {ACES: Automatic Evaluation of Coding Style},
    School = {EECS Department, University of California, Berkeley},
    Year = {2014},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-77.html},
    Number = {UCB/EECS-2014-77}
}
EndNote citation:
%0 Thesis
%A Rogers, Stephanie
%A Garcia, Dan
%A Canny, John F.
%A Tang, Steven
%A Kang, Daniel
%T ACES: Automatic Evaluation of Coding Style
%I EECS Department, University of California, Berkeley
%D 2014
%8 May 15
%@ UCB/EECS-2014-77
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-77.html
%F Rogers:EECS-2014-77