3D Multispectral Colorimetry

Anjali Thakrar

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-133

May 17, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-133.pdf

We introduce Multispectral Colorimetry, a novel technique for image color correction that combines 3D reconstruction, object color theory, and photographs captured by multiple cameras to generate a more accurate scene representation. Images captured by consumer cameras typically look similar to, but do not quite match, the colors seen in the real world. In fact, many objects that have distinct spectral reflectances in the physical world appear to have the same color when captured through a camera. This discrepancy is due to fundamental differences between the camera capture pipeline and that of the human eye: cameras capture only a subset of the colors humans can see, and any subsequent image processing introduces further error by approximating its output colors. In this work, we extend and improve current camera processing mechanisms to correct the color representation of any image, making its colors appear more similar to those found in the real world. We use images of the same scene captured by multiple cameras with slightly different response functions to extrapolate multispectral information within the visible spectrum and find a more accurate color mapping. To achieve this, we extend Gaussian Splatting to reconstruct a multispectral 3D scene from RAW captures taken by a set of cameras. This allows us to flexibly capture input images, integrate spectral samples from all cameras in 3D, and then generate multispectral images from arbitrary new views. We use the generated multispectral images to map colors between camera captures with pixel-perfect accuracy. We then use information about the spectra that each camera and the average human eye can functionally capture to construct a mapping between each color value in the image and a set of candidate “real” colors that it may represent in the world. As we introduce more cameras into this pipeline, the set of candidate colors becomes more constrained, and thus more precise. We produce a dataset of spectral response curves and color-corrected images for machine learning researchers, and an underlying processing pipeline that can be used by photographers.
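The abstract's central claim rests on a linear image-formation model: a camera channel's value is the scene spectrum integrated against that channel's spectral response, roughly c_k = ∫ S(λ) R_k(λ) dλ. Under this model, two distinct spectra can produce identical values for one camera (they are metamers for that camera) while a second camera with slightly shifted response curves separates them, which is why each added camera shrinks the candidate color set. The sketch below illustrates this numerically; it is not the report's pipeline, and the Gaussian channel shapes, the two camera definitions, and the capture helper are all invented for demonstration.

import numpy as np

wavelengths = np.linspace(400, 700, 61)            # visible range, 5 nm steps
step = wavelengths[1] - wavelengths[0]

def channel(center, width=30.0):
    """Hypothetical Gaussian spectral response for one color channel."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Two hypothetical 3-channel cameras with slightly shifted channel peaks,
# standing in for the "slightly different response functions" in the abstract.
cam_a = np.stack([channel(610), channel(540), channel(460)])   # R, G, B rows
cam_b = np.stack([channel(600), channel(550), channel(470)])

def capture(cam, spectrum):
    """Channel values: integrate the spectrum against each response curve."""
    return cam @ spectrum * step

# A smooth reference spectrum (arbitrary; non-negativity is not enforced for
# the perturbed copy below, since this is only a numerical illustration).
s1 = 0.5 + 0.3 * np.sin(wavelengths / 40.0)

# Perturb s1 inside camera A's null space, choosing the null direction that
# camera B responds to most strongly, so the pair are metamers for A only.
_, _, vt = np.linalg.svd(cam_a)
null_basis = vt[3:]                                # spans cam_a's null space
scores = np.abs(cam_b @ null_basis.T).sum(axis=0)
s2 = s1 + 0.5 * null_basis[np.argmax(scores)]

print(capture(cam_a, s1) - capture(cam_a, s2))     # ~[0 0 0]: A cannot tell
print(capture(cam_b, s1) - capture(cam_b, s2))     # nonzero: B separates them

In the report's pipeline, the multispectral 3D reconstruction plays the role of aligning these per-camera measurements: once every camera's samples are fused in 3D, corresponding pixel values across cameras are known exactly, so this narrowing of candidate colors can be applied per pixel from arbitrary views.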

Advisors: Ren Ng and Austin Roorda


BibTeX citation:

@mastersthesis{Thakrar:EECS-2024-133,
    Author= {Thakrar, Anjali},
    Editor= {Ng, Ren and Roorda, Austin},
    Title= {3D Multispectral Colorimetry},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-133.html},
    Number= {UCB/EECS-2024-133},
    Abstract= {We introduce Multispectral Colorimetry, a novel technique for image color correction that combines 3D reconstruction, object color theory, and photographs captured by multiple cameras to generate a more accurate scene representation. Images captured by consumer cameras typically look similar to, but do not quite match, the colors seen in the real world. In fact, many objects that have distinct spectral reflectances in the physical world appear to have the same color when captured through a camera. This discrepancy is due to fundamental differences between the camera capture pipeline and that of the human eye: cameras capture only a subset of the colors humans can see, and any subsequent image processing introduces further error by approximating its output colors. In this work, we extend and improve current camera processing mechanisms to correct the color representation of any image, making its colors appear more similar to those found in the real world. We use images of the same scene captured by multiple cameras with slightly different response functions to extrapolate multispectral information within the visible spectrum and find a more accurate color mapping. To achieve this, we extend Gaussian Splatting to reconstruct a multispectral 3D scene from RAW captures taken by a set of cameras. This allows us to flexibly capture input images, integrate spectral samples from all cameras in 3D, and then generate multispectral images from arbitrary new views. We use the generated multispectral images to map colors between camera captures with pixel-perfect accuracy. We then use information about the spectra that each camera and the average human eye can functionally capture to construct a mapping between each color value in the image and a set of candidate “real” colors that it may represent in the world. As we introduce more cameras into this pipeline, the set of candidate colors becomes more constrained, and thus more precise. We produce a dataset of spectral response curves and color-corrected images for machine learning researchers, and an underlying processing pipeline that can be used by photographers.},
}

EndNote citation:

%0 Thesis
%A Thakrar, Anjali 
%E Ng, Ren 
%E Roorda, Austin 
%T 3D Multispectral Colorimetry
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 17
%@ UCB/EECS-2024-133
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-133.html
%F Thakrar:EECS-2024-133