Forrest Huang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2022-175

July 18, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-175.pdf

Sketching and prototyping are central to creative activities that improve and advance many aspects of human lives. They enable non-experts to express themselves through drawing, and they help User Interface (UI) designers explore diverse alternatives through low-fidelity prototyping. Generating these sketches and prototypes, however, typically requires significant expertise that casual users might not possess, and it can be effortful and time-consuming even for professional users.

In this dissertation, I introduce multiple deep-learning methods and systems that generate sketches and prototypes, guided by user annotations in familiar modalities (e.g., generating user interfaces from text descriptions). The presented systems and methods include Sketchforme, a system that generates individual sketched scenes from text descriptions; Scones, a system that iteratively generates and refines sketched scenes from a sequence of text instructions; and Words2ui, a collection of methods that create UI prototypes from high-level text descriptions. This research creates unique affordances, advances the state of the art in creativity-support tools, contributes benchmark metrics, and explores novel interaction paradigms in domains ranging from non-expert sketching to professional UI design. These contributions can serve as building blocks for future multi-modal systems that enable more effective and efficient sketching and prototyping for all.

Advisor: John F. Canny


BibTeX citation:

@phdthesis{Huang:EECS-2022-175,
    Author= {Huang, Forrest},
    Title= {Human-Guided Generation of Sketches and Prototypes},
    School= {EECS Department, University of California, Berkeley},
    Year= {2022},
    Month= {Jul},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-175.html},
    Number= {UCB/EECS-2022-175},
}

EndNote citation:

%0 Thesis
%A Huang, Forrest 
%T Human-Guided Generation of Sketches and Prototypes
%I EECS Department, University of California, Berkeley
%D 2022
%8 July 18
%@ UCB/EECS-2022-175
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-175.html
%F Huang:EECS-2022-175