Scalable Requirements Elicitation Education Through Simulated Interview Practice with Large Language Models

Nelson Lojo

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2025-52
May 13, 2025

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-52.pdf

Conducting Requirements Elicitation (ReqEl) interviews is a crucial software engineering skill that involves interviewing a client and then devising a software design based on the interview results. Effectively teaching this inherently experiential skill is incredibly costly—for example, acquiring an industry partner to interview, or training course staff or other students to play the role of a client. As a result, a typical instructional approach is to provide students with transcripts of real or fictitious interviews to analyze. This exercise trains the skill of extracting technical requirements but fails to develop equally important skills to conduct an interview. As an alternative to transcript-based exercises, we propose conditioning a large language model to play the role of the client during a chat-based interview. We devise a scheme to specify this conditioning in order to ensure that the LLM is (1) believable as a client, (2) resistant to simple jailbreaks that can be conducted in a classroom, (3) specific enough for students to glean useful information, and (4) non-technical enough to enable student practice. We implement a web tool to administer and evaluate both chat-based and transcript-based exercises. Using this tool, we perform a between-subjects study (n = 120) in which students construct a high-level application design from either an interactive LLM-backed interview session or an existing interview transcript describing the same business processes. Through both a qualitative survey and quantitative observations of participant work, we find that both chat-based and transcript-based exercises provide sufficient information for participants to construct technically sound solutions and require comparable time on task, but the chat-based approach is preferred by most participants. Importantly, we observe that interviewing the LLM is seen as both more realistic and more engaging, despite the LLM occasionally providing imprecise or contradictory information. These results, combined with the wide accessibility of LLMs, suggest a new way to practice critical ReqEl skills in a scalable and realistic manner without the overhead of arranging live interviews.

Advisor: Armando Fox

\"Edit"; ?>


BibTeX citation:

@mastersthesis{Lojo:EECS-2025-52,
    Author = {Lojo, Nelson},
    Title = {Scalable Requirements Elicitation Education Through Simulated Interview Practice with Large Language Models},
    School = {EECS Department, University of California, Berkeley},
    Year = {2025},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-52.html},
    Number = {UCB/EECS-2025-52},
    Abstract = {Conducting Requirements Elicitation (ReqEl) interviews is a crucial software engineering skill that involves interviewing a client and then devising a software design based on the interview results. Effectively teaching this inherently experiential skill is incredibly costly—for example, acquiring an industry partner to interview, or training course staff or other students to play the role of a client. As a result, a typical instructional approach is to provide students with transcripts of real or fictitious interviews to analyze. This exercise trains the skill of extracting technical requirements but fails to develop equally important skills to conduct an interview. As an alternative to transcript-based exercises, we propose conditioning a large language model to play the role of the client during a chat-based interview. We devise a scheme to specify this conditioning in order to ensure that the LLM is (1) believable as a client, (2) resistant to simple jailbreaks that can be conducted in a classroom, (3) specific enough for students to glean useful information, and (4) non-technical enough to enable student practice. We implement a web tool to administer and evaluate both chat-based and transcript-based exercises. Using this tool, we perform a between-subjects study (n = 120) in which students construct a high-level application design from either an interactive LLM-backed interview session or an existing interview transcript describing the same business processes. Through both a qualitative survey and quantitative observations of participant work, we find that both chat-based and transcript-based exercises provide sufficient information for participants to construct technically sound solutions and require comparable time on task, but the chat-based approach is preferred by most participants. Importantly, we observe that interviewing the LLM is seen as both more realistic and more engaging, despite the LLM occasionally providing imprecise or contradictory information. These results, combined with the wide accessibility of LLMs, suggest a new way to practice critical ReqEl skills in a scalable and realistic manner without the overhead of arranging live interviews.}
}

EndNote citation:

%0 Thesis
%A Lojo, Nelson
%T Scalable Requirements Elicitation Education Through Simulated Interview Practice with Large Language Models
%I EECS Department, University of California, Berkeley
%D 2025
%8 May 13
%@ UCB/EECS-2025-52
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-52.html
%F Lojo:EECS-2025-52