Computer-Based Testing using PrairieLearn in BJC
Bojin Yao
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2021-156
May 21, 2021
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-156.pdf
Starting in the spring of 2019, Professor Dan Garcia turned the focus of his computer science education research and development group to proficiency-based learning for CS10: The Beauty and Joy of Computing (BJC) [1]. The long-term goal was for students to have the opportunity to achieve proficiency through formative assessments that they could practice over and over (and perhaps even summative assessments they could take, and continue to retake, until they were above threshold), rather than only being able to showcase their abilities during a few high-stress, high-stakes exams. To achieve this goal, we looked to Question Generators (QGs), implemented on the PrairieLearn (PL) platform [2] from the University of Illinois at Urbana-Champaign. QGs are computer programs that randomly generate different variants of a question based on predefined parameters. According to [3], randomization is an effective tool for deterring collaborative cheating on assessments. This became especially important in the remote-learning environment during the recent pandemic, when all assessments had to be conducted online and exam proctoring became a challenging issue.

As one of the technical leads of the group, I was among the first students to dive deep into PL and set up foundational infrastructure for CS10. I assisted in creating and documenting many of the subsequent QGs, best practices, and tools; later, I also led the first contributions to PL's codebase and collaborated with PL's development team. One of the research contributions during that time was advocating a better way to formally categorize QGs into conceptual and numerical variants [4].

As a result of this work, all of the lectures in CS10 became video-based, with randomly generated quizzes that students could access throughout the semester. This helped us accommodate the schedules of all students and free up the usual lecture times for office hours. The randomized quiz questions incentivized students to pay attention to the content of the video lectures instead of getting answers from their classmates, and the fully auto-graded quizzes came at no additional cost in staff hours. Additionally, for the first time ever, CS10 was able to introduce a long-exam format to alleviate exam pressure and accommodate the needs of all students. This was possible partly because the randomized questions made collaborative cheating more difficult, and partly because we devised new strategies to detect cheating using comprehensive logs. Furthermore, the exams were auto-graded in their entirety, with scores and statistics released minutes after each exam's conclusion, saving many hours of hand-grading and scanning of exam papers.

The purpose of this master's report is to document and share results from three aspects of my program working with PL: (1) software development, (2) CS10 curricular development using QGs, and (3) student mentorship. I will detail many of the things I've learned, best practices we developed, and challenges we faced while working on this project, along with how we resolved them. I will also mention some important research work related to CS10's PL project that might be informative for others looking to transition to computer-based testing. It is hoped that future PL-course TAs, QG authors, PL developers, and others interested in computer-based testing will benefit from the information presented in this report.
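To make the QG idea concrete, below is a minimal sketch of what a randomized question generator can look like on PrairieLearn, written against PL's Python server.py convention in which a generate(data) function fills data["params"] with randomized values and data["correct_answers"] with the matching answer. The specific question (a simple product) and the parameter names are illustrative assumptions for this report, not an actual CS10 QG.

import random

def generate(data):
    # Randomly choose this variant's parameters (illustrative question,
    # not drawn from a real CS10 question generator).
    a = random.randint(2, 9)
    b = random.randint(2, 9)

    # Expose the parameters to the question's HTML template for rendering.
    data["params"]["a"] = a
    data["params"]["b"] = b

    # Record the correct answer so PrairieLearn can auto-grade submissions.
    data["correct_answers"]["product"] = a * b

Because each student who opens the question receives a freshly generated variant, two students working side by side see different numbers, which is what makes this kind of randomization useful for deterring collaborative cheating.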
Advisor: Dan Garcia
BibTeX citation:
@mastersthesis{Yao:EECS-2021-156,
    Author = {Yao, Bojin},
    Editor = {Garcia, Dan},
    Title = {Computer-Based Testing using PrairieLearn in BJC},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-156.html},
    Number = {UCB/EECS-2021-156}
}
EndNote citation:
%0 Thesis
%A Yao, Bojin
%E Garcia, Dan
%T Computer-Based Testing using PrairieLearn in BJC
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 21
%@ UCB/EECS-2021-156
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-156.html
%F Yao:EECS-2021-156