Conversation Regression Testing: A Design Technique for Prototyping Generalizable Prompt Strategies for Pre-trained Language Models
J.D. Zamfirescu-Pereira and Björn Hartmann and Qian Yang
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-16
February 7, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-16.pdf
Pre-trained large language models (LLMs) such as GPT-3 can carry on fluent, multi-turn conversations out-of-the-box, making them attractive materials for chatbot design. Further, designers can improve LLM chatbot utterances by prepending textual prompts -- instructions and examples of desired interactions -- to the model's inputs. However, prompt-based improvements can be brittle; designers face challenges in systematically understanding how a prompt strategy might impact the unfolding of subsequent conversations across users. To address this challenge, we introduce the concept of Conversation Regression Testing. Based on sample conversations with a baseline chatbot, Conversation Regression Testing tracks how conversational errors persist or are resolved by applying different prompt strategies. We embody this technique in an interactive design tool, BotDesigner, that lets designers identify archetypal errors across multiple conversations; shows common threads of conversation using a graph visualization; and highlights the effects of prompt changes across bot design iterations.  A pilot evaluation demonstrates the usefulness of both the concept of regression testing and the functionalities of BotDesigner for chatbot designers.
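To make the core loop of Conversation Regression Testing concrete, the sketch below replays logged sample conversations under candidate prompts and reports which labeled errors persist. It is a minimal illustration under stated assumptions, not BotDesigner's implementation: generate_reply is a hypothetical stand-in for a real LLM call, and the error_check predicates stand in for the archetypal errors a designer would label on sample conversations.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Turn:
    user: str  # user utterance from a logged sample conversation
    error_check: Optional[Callable[[str], bool]] = None  # flags one labeled archetypal error

def generate_reply(prompt: str, history: list[str], user: str) -> str:
    """Hypothetical stand-in for a real LLM call (e.g., a GPT-3 completion)."""
    return f"[reply to {user!r} under prompt {prompt!r}]"

def regression_test(prompt: str, conversation: list[Turn]) -> list[int]:
    """Replay a logged conversation under `prompt`; return the indices of
    turns whose labeled error still occurs."""
    history: list[str] = []
    persisting = []
    for i, turn in enumerate(conversation):
        reply = generate_reply(prompt, history, turn.user)
        history += [turn.user, reply]
        if turn.error_check is not None and turn.error_check(reply):
            persisting.append(i)
    return persisting

# Compare a baseline prompt and a revised prompt strategy on the same sample.
sample = [
    Turn("How do I dice an onion?"),
    Turn("What was my last question about?",
         error_check=lambda r: "onion" not in r.lower()),  # did the bot lose context?
]
for prompt in ("You are a helpful cooking assistant.",
               "You are a helpful cooking assistant. Refer back to earlier turns."):
    print(prompt, "->", regression_test(prompt, sample) or "no persisting errors")

Running the same labeled conversations after every prompt change is what makes the test a regression test: a revised prompt that fixes one archetypal error may silently reintroduce another elsewhere in the conversation, and the replay surfaces that.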
BibTeX citation:
@techreport{Zamfirescu-Pereira:EECS-2023-16,
    Author = {Zamfirescu-Pereira, J.D. and Hartmann, Björn and Yang, Qian},
    Title = {Conversation Regression Testing: A Design Technique for Prototyping Generalizable Prompt Strategies for Pre-trained Language Models},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {Feb},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-16.html},
    Number = {UCB/EECS-2023-16},
    Abstract = {Pre-trained large language models (LLMs) such as GPT-3 can carry on fluent, multi-turn conversations out-of-the-box, making them attractive materials for chatbot design. Further, designers can improve LLM chatbot utterances by prepending textual prompts -- instructions and examples of desired interactions -- to the model's inputs. However, prompt-based improvements can be brittle; designers face challenges in systematically understanding how a prompt strategy might impact the unfolding of subsequent conversations across users. To address this challenge, we introduce the concept of Conversation Regression Testing. Based on sample conversations with a baseline chatbot, Conversation Regression Testing tracks how conversational errors persist or are resolved by applying different prompt strategies. We embody this technique in an interactive design tool, BotDesigner, that lets designers identify archetypal errors across multiple conversations; shows common threads of conversation using a graph visualization; and highlights the effects of prompt changes across bot design iterations. A pilot evaluation demonstrates the usefulness of both the concept of regression testing and the functionalities of BotDesigner for chatbot designers.}
}
EndNote citation:
%0 Report
%A Zamfirescu-Pereira, J.D.
%A Hartmann, Björn
%A Yang, Qian
%T Conversation Regression Testing: A Design Technique for Prototyping Generalizable Prompt Strategies for Pre-trained Language Models
%I EECS Department, University of California, Berkeley
%D 2023
%8 February 7
%@ UCB/EECS-2023-16
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-16.html
%F Zamfirescu-Pereira:EECS-2023-16