Developing and Evaluating LLM Assistants in Introductory Computer Science

Laryn Qi

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-132

May 17, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-132.pdf

Chatbot interfaces for LLMs enable students to get immediate, interactive help on homework assignments, but deploying them naively may not serve pedagogical goals. In this thesis, we first report on the development and deployment of an LLM-based homework assistant for students in a large introductory computer science course. Our assistant offers both a "Get Help" button within a popular code editor and a "Get Feedback" feature within our command-line autograder, wrapping student code in a custom prompt that supports our pedagogical goals and avoids providing solutions directly. We explore class-wide effects of deploying this AI assistant, showing that students appreciate the hints and that the bot's effect on reducing homework completion time is concentrated among students with above-median completion times, suggesting that these hints can help struggling students make more rapid progress on assignments. Then, we present and evaluate three research questions that measure the effectiveness of the assistant's hints: Do the hints help students make progress? How effectively do the hints capture issues in students' code? Are students able to apply the hints to make progress? Through an analysis of knowledge components, we show that students with access to hints resolve problems with their code and approach a working solution more quickly, that the assistant's hints consistently capture the most pressing errors in students' code, and that hints that address a few issues at once, rather than a single bug, are more likely to lead to student progress.
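
The prompt-wrapping step described in the abstract might look roughly like the sketch below. This is an illustrative assumption only: the preamble wording, helper names, and model choice are not the thesis's actual implementation, and only the OpenAI chat-completions client call reflects a real API.

# Hypothetical sketch of wrapping student code in a pedagogy-oriented prompt.
# The preamble text, function names, and model name are illustrative assumptions,
# not the course's actual prompt or infrastructure.
from openai import OpenAI

PEDAGOGICAL_PREAMBLE = (
    "You are a tutor for an introductory computer science course. "
    "Give a short hint that points the student toward the most pressing issue "
    "in their code. Do not reveal the solution or write corrected code for them."
)

def build_hint_prompt(problem_description: str, student_code: str, autograder_output: str) -> str:
    # Combine the assignment context and the student's work into a single prompt.
    return (
        f"Problem:\n{problem_description}\n\n"
        f"Student code:\n{student_code}\n\n"
        f"Autograder output:\n{autograder_output}\n\n"
        "Hint:"
    )

def get_hint(problem_description: str, student_code: str, autograder_output: str) -> str:
    # Reads OPENAI_API_KEY from the environment.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PEDAGOGICAL_PREAMBLE},
            {"role": "user", "content": build_hint_prompt(
                problem_description, student_code, autograder_output)},
        ],
    )
    return response.choices[0].message.content

In such a design, both the editor's "Get Help" button and the autograder's "Get Feedback" command could call the same wrapping function, so the pedagogical constraints live in one place regardless of where the student asks for help.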

Advisor: John DeNero


BibTeX citation:

@mastersthesis{Qi:EECS-2024-132,
    Author= {Qi, Laryn},
    Title= {Developing and Evaluating LLM Assistants in Introductory Computer Science},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-132.html},
    Number= {UCB/EECS-2024-132},
    Abstract= {Chatbot interfaces for LLMs enable students to get immediate, interactive help on homework assignments, but deploying them naively may not serve pedagogical goals. In this thesis, we first report on the development and deployment of an LLM-based homework assistant for students in a large introductory computer science course. Our assistant offers both a "Get Help" button within a popular code editor and a "Get Feedback" feature within our command-line autograder, wrapping student code in a custom prompt that supports our pedagogical goals and avoids providing solutions directly. We explore class-wide effects of deploying this AI assistant, showing that students appreciate the hints and that the bot's effect on reducing homework completion time is concentrated among students with above-median completion times, suggesting that these hints can help struggling students make more rapid progress on assignments. Then, we present and evaluate three research questions that measure the effectiveness of the assistant's hints: Do the hints help students make progress? How effectively do the hints capture issues in students' code? Are students able to apply the hints to make progress? Through an analysis of knowledge components, we show that students with access to hints resolve problems with their code and approach a working solution more quickly, that the assistant's hints consistently capture the most pressing errors in students' code, and that hints that address a few issues at once, rather than a single bug, are more likely to lead to student progress.},
}

EndNote citation:

%0 Thesis
%A Qi, Laryn 
%T Developing and Evaluating LLM Assistants in Introductory Computer Science
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 17
%@ UCB/EECS-2024-132
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-132.html
%F Qi:EECS-2024-132