Indoor Scene Augmentation via Scene Graph Priors
Mohammad Keshavarzi
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2022-39
May 8, 2022
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-39.pdf
In spatial computing experiences, augmenting existing scenes with virtual objects requires a contextual approach: geometric conflicts must be avoided, and functional relationships to other objects must be maintained. Yet, due to the complexity and diversity of user environments, automatically predicting contextual and adaptive placements of virtual content remains a challenging task. Motivated by this problem, in this report we introduce SceneGen, a generative contextual scene augmentation framework that predicts virtual object placement within existing scenes. SceneGen takes a scene as input and outputs positional and orientational probability maps for placing virtual content. We formulate a novel spatial Scene Graph representation, which encapsulates explicit topological properties between objects, object groups, and rooms. We use kernel density estimation to build a multivariate conditional knowledge model, trained on prior spatial Scene Graphs extracted from real-world 3D scanned data. To further capture orientational properties, we develop a fast pose annotation tool that extends current real-world datasets with orientational labels. Furthermore, we conduct comparative and user experiments to demonstrate the performance of our system in various indoor scene augmentation scenarios. Finally, to demonstrate our system in action, we develop an Augmented Reality application in which objects can be contextually augmented in real time.
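The core idea — estimating a density over spatial relations observed in prior scenes, then scoring candidate placements in a new room — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the report's implementation: it models only a single pairwise distance feature (target-to-anchor) with a hand-picked bandwidth and illustrative sample values, whereas SceneGen's knowledge model is multivariate and conditioned on richer Scene Graph topology and orientation.

```python
import numpy as np

def make_kde(samples, bandwidth=0.08):
    """Return a 1-D Gaussian kernel density estimator fit to `samples`.
    (Fixed bandwidth chosen for illustration.)"""
    samples = np.asarray(samples, dtype=float)
    norm = len(samples) * bandwidth * np.sqrt(2.0 * np.pi)
    def density(x):
        x = np.asarray(x, dtype=float)[..., None]  # broadcast against samples
        return np.exp(-0.5 * ((x - samples) / bandwidth) ** 2).sum(axis=-1) / norm
    return density

# Illustrative "prior": distances (m) between a target category (e.g. chair)
# and an anchor category (e.g. desk) extracted from scanned scenes.
observed_distances = [0.5, 0.6, 0.55, 0.7, 0.65, 0.8, 0.6]
kde = make_kde(observed_distances)

def positional_probability_map(anchor_xy, room_size, resolution=0.25):
    """Score every grid cell in the room by the KDE likelihood of its
    distance to the anchor object, normalized to a probability map."""
    xs = np.arange(0.0, room_size[0], resolution)
    ys = np.arange(0.0, room_size[1], resolution)
    gx, gy = np.meshgrid(xs, ys)
    dists = np.hypot(gx - anchor_xy[0], gy - anchor_xy[1])
    scores = kde(dists)
    return scores / scores.sum()

prob_map = positional_probability_map(anchor_xy=(2.0, 1.5), room_size=(4.0, 3.0))
best = np.unravel_index(np.argmax(prob_map), prob_map.shape)  # most likely cell
```

In the same spirit, the full system evaluates such conditional densities over many relational features simultaneously, and produces an analogous orientational map for the object's pose.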
Advisor: Björn Hartmann
BibTeX citation:
@mastersthesis{Keshavarzi:EECS-2022-39,
  Author = {Keshavarzi, Mohammad},
  Title  = {Indoor Scene Augmentation via Scene Graph Priors},
  School = {EECS Department, University of California, Berkeley},
  Year   = {2022},
  Month  = {May},
  Url    = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-39.html},
  Number = {UCB/EECS-2022-39}
}
EndNote citation:
%0 Thesis
%A Keshavarzi, Mohammad
%T Indoor Scene Augmentation via Scene Graph Priors
%I EECS Department, University of California, Berkeley
%D 2022
%8 May 8
%@ UCB/EECS-2022-39
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-39.html
%F Keshavarzi:EECS-2022-39