Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach

Paul E. Debevec and Camillo J. Taylor and Jitendra Malik

EECS Department, University of California, Berkeley

Technical Report No. UCB/CSD-96-893

January 1996

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1996/CSD-96-893.pdf

We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometry-based and image-based modeling and rendering techniques, has two components. The first component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is effective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs.
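The view-dependent texture mapping mentioned in the abstract blends the original photographs according to how closely each camera's viewpoint agrees with the novel viewpoint being rendered. A minimal sketch of that weighting idea, assuming unit-length viewing directions and an illustrative vdtm_blend_weights helper (not code from the report):

import numpy as np

def vdtm_blend_weights(virtual_dir, photo_dirs, power=1.0):
    """Blend weights for view-dependent texture mapping.

    virtual_dir : (3,) unit vector from a surface point toward the virtual camera
    photo_dirs  : (N, 3) unit vectors from the same point toward each photograph's camera
    """
    virtual_dir = np.asarray(virtual_dir, dtype=float)
    photo_dirs = np.asarray(photo_dirs, dtype=float)

    # Angle between the virtual viewing direction and each photograph's direction.
    cosines = np.clip(photo_dirs @ virtual_dir, -1.0, 1.0)
    angles = np.arccos(cosines)

    # Smaller angular deviation -> larger weight; the epsilon keeps the weight
    # finite when the virtual view coincides with an original photograph.
    weights = 1.0 / (angles ** power + 1e-6)
    return weights / weights.sum()

# Example: the photograph taken nearest the virtual viewpoint dominates the blend.
print(vdtm_blend_weights([0.0, 0.0, 1.0],
                         [[0.10, 0.0, 0.995],
                          [0.70, 0.0, 0.714]]))

Weighting each photograph by its angular proximity to the virtual view keeps the rendering close to an original image whenever the virtual camera approaches one of the photographed viewpoints.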


BibTeX citation:

@techreport{Debevec:CSD-96-893,
    Author= {Debevec, Paul E. and Taylor, Camillo J. and Malik, Jitendra},
    Title= {Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach},
    Year= {1996},
    Month= {Jan},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1996/5357.html},
    Number= {UCB/CSD-96-893},
    Abstract= {We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometry-based and image-based modeling and rendering techniques, has two components. The first component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is effective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs.},
}

EndNote citation:

%0 Report
%A Debevec, Paul E. 
%A Taylor, Camillo J. 
%A Malik, Jitendra 
%T Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach
%I EECS Department, University of California, Berkeley
%D 1996
%@ UCB/CSD-96-893
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1996/5357.html
%F Debevec:CSD-96-893