Dreamcrafter: Imagining Future Immersive Radiance Field Editors with Generative AI
Cyrus Vachha
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2024-124
May 17, 2024
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-124.pdf
Authoring 3D scenes is a central task for spatial computing applications. Two competing visions for lowering existing barriers are (1) focusing on immersive, direct manipulation of 3D content, or (2) leveraging recent techniques that capture real scenes (3D radiance fields such as NeRFs and 3D Gaussian Splatting) and modifying them at a higher level of abstraction, at the cost of high latency. We unify the complementary strengths of these approaches and investigate how to integrate generative AI advances into real-time, immersive 3D radiance field editing. We introduce Dreamcrafter, a VR-based 3D scene editing system that (1) provides a modular architecture for integrating generative AI systems; (2) combines different levels of control for creating objects, including natural language and direct manipulation; and (3) introduces proxy representations that support interaction during high-latency operations. Through a user study, we also contribute empirical findings about how and when people prefer different controls when working with radiance fields. Finally, we discuss avenues for future work on interactions and features for multi-modal 3D editors that leverage generative models and radiance fields.
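The proxy-representation idea from the abstract can be illustrated with a minimal sketch. This is not the paper's actual API; names such as `SceneObject`, `generate_radiance_field`, and `place_object` are hypothetical, and the slow generative call is simulated with a short sleep. The point is only the pattern: insert an interactive placeholder immediately, then swap in the full result when the high-latency generation finishes.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class SceneObject:
    """A scene entry that starts as a low-fidelity proxy."""
    prompt: str
    status: str = "proxy"  # "proxy" until generation completes
    representation: str = "bounding-box placeholder"


async def generate_radiance_field(prompt: str) -> str:
    """Stand-in for a slow generative model call (seconds to minutes)."""
    await asyncio.sleep(0.01)  # simulate latency
    return f"radiance field for '{prompt}'"


async def place_object(scene: list, prompt: str) -> SceneObject:
    # 1. Insert an interactive proxy immediately, so the user can keep
    #    positioning and editing while generation runs in the background.
    obj = SceneObject(prompt)
    scene.append(obj)
    # 2. Swap in the full representation once the slow call finishes.
    obj.representation = await generate_radiance_field(prompt)
    obj.status = "final"
    return obj


scene = []
obj = asyncio.run(place_object(scene, "stone fountain"))
```

In a real editor the proxy would remain manipulable (movable, scalable) for the entire duration of the generative call rather than blocking the interface.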
Advisor: Björn Hartmann
BibTeX citation:
@mastersthesis{Vachha:EECS-2024-124,
    Author = {Vachha, Cyrus},
    Title = {Dreamcrafter: Imagining Future Immersive Radiance Field Editors with Generative AI},
    School = {EECS Department, University of California, Berkeley},
    Year = {2024},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-124.html},
    Number = {UCB/EECS-2024-124},
    Abstract = {Authoring 3D scenes is a central task for spatial computing applications. Two competing visions for lowering existing barriers are (1) focus on immersive, direct manipulation of 3D content; or (2) leverage recent techniques that capture real scenes (3D Radiance Fields such as NeRFs, 3D Gaussian Splatting) and modify them at a higher level of abstraction, at the cost of high latency. We unify the complementary strengths of these approaches and investigate how to integrate generative AI advances into real-time, immersive 3D Radiance Field editing. We introduce Dreamcrafter, a VR-based 3D scene editing system that: (1) provides a modular architecture to integrate generative AI systems; (2) combines different levels of control for creating objects, including natural language and direct manipulation; and (3) introduces proxy representations that support interaction during high-latency operations. We also contribute empirical findings about how and when people prefer different controls when working with radiance fields in a user study. Avenues for future work for developing interactions and features for multi-modal 3D editors leveraging generative models and radiance fields are also discussed.},
}
EndNote citation:
%0 Thesis
%A Vachha, Cyrus
%T Dreamcrafter: Imagining Future Immersive Radiance Field Editors with Generative AI
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 17
%@ UCB/EECS-2024-124
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-124.html
%F Vachha:EECS-2024-124