The colors in a digital image depend not only on the scene's reflectances, but also on the lighting conditions and on the characteristics of the imaging device. It is often necessary to separate the influence of these three elements, in particular to separate the contribution of the object's reflection properties from that of the incident light. Both are usually represented by spectral quantities, i.e., defined over the visible spectrum. The retrieval of illuminant and reflectances from an RGB image, which provides three values per pixel, is an under-determined problem and requires additional information or assumptions about the scene's content in order to be solved. In this thesis, we study how redundant information across a set of images can be exploited to recover either illuminant or reflectance knowledge.

We demonstrate that accurate object color can be retrieved as tristimulus values from standard RGB images captured with an unknown camera, under uncontrolled lighting conditions, and without access to raw data, provided the images contain a limited number of reference colors in the form of a specific calibration target. The correction must be constrained to a limited range of colors similar to that of the object of interest; these colors are compared to pre-computed references to derive a linear transform. We implement this method for two consumer-oriented applications, in Makeup and Home Décor respectively, which provide users with color advice. The correction accuracy is ΔE*ab ≃ 1 to 2, depending on the design of the target, and we show that this accuracy is sufficient for the applications at hand.

We also develop a method to retrieve illuminant spectra from a set of images taken with fixed-location cameras, such as panoramic or surveillance cameras. Pictures captured with such devices exhibit changing lighting and dynamic content, but they also contain constant objects. We propose to use these constant objects as reference colors to solve for color constancy; more precisely, we exploit the fact that their reflectances, while unknown, remain constant across images. We invert a series of image formation models in parallel for a set of test illuminants and, by forcing the output reflectances to match, deduce the illuminant under which each image was captured. Simulations on synthetic RGB patches demonstrate that real daylight illuminants can be retrieved with a median angular error under 3° from sets containing as few as four images, using a limited number of reference surfaces.
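As a rough illustration of the calibration-target correction summarized above, the sketch below fits a 3×3 linear transform by least squares between captured patch colors and pre-computed references, then applies it to object pixels. All function names and patch values are hypothetical placeholders for illustration, not data or code from the thesis.

```python
import numpy as np

def fit_linear_correction(measured_rgb, reference_rgb):
    """Least-squares 3x3 matrix M such that measured_rgb @ M ~= reference_rgb.

    measured_rgb, reference_rgb: (N, 3) arrays of patch colors, with the
    patches restricted to a range of colors similar to the object of interest.
    """
    M, *_ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
    return M

# Illustrative, made-up patch values (not from the thesis).
measured = np.array([[0.62, 0.45, 0.38],
                     [0.55, 0.38, 0.30],
                     [0.70, 0.52, 0.44],
                     [0.48, 0.33, 0.27]])
reference = np.array([[0.65, 0.47, 0.40],
                      [0.57, 0.40, 0.31],
                      [0.73, 0.55, 0.46],
                      [0.50, 0.35, 0.28]])

M = fit_linear_correction(measured, reference)
corrected = measured @ M  # the same transform would be applied to object pixels
```

Because the transform is fitted only on colors close to the object of interest, it is a local correction rather than a full camera characterization, which is consistent with constraining the correction to a limited color range.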
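The illuminant-retrieval idea can likewise be sketched under a strongly simplified diagonal (von Kries-style) image formation model, where candidate illuminants are assigned to the images and the assignment that makes the recovered reflectances of the constant surfaces agree best across images is kept. The thesis inverts spectral image formation models and recovers real daylight illuminants; the toy code below, with hypothetical names throughout, only illustrates the matching principle and the angular-error metric.

```python
import numpy as np
from itertools import product

def estimate_illuminants(images_rgb, candidate_illuminants):
    """Assign one candidate illuminant to each image.

    images_rgb: list of (N, 3) arrays, the same N constant surfaces observed
        in each image.
    candidate_illuminants: (K, 3) array of test illuminant responses.
    Returns a tuple of candidate indices, one per image.
    """
    K = len(candidate_illuminants)
    best, best_err = None, np.inf
    for combo in product(range(K), repeat=len(images_rgb)):
        # Invert the simplified model rgb = reflectance * illuminant for each image.
        refl = [img / candidate_illuminants[k] for img, k in zip(images_rgb, combo)]
        # Disagreement: spread of the recovered reflectances across images.
        err = np.var(np.stack(refl), axis=0).sum()
        if err < best_err:
            best, best_err = combo, err
    return best

def angular_error_deg(estimated, truth):
    """Angle between estimated and true illuminant vectors, in degrees."""
    cos = np.dot(estimated, truth) / (np.linalg.norm(estimated) * np.linalg.norm(truth))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The exhaustive search over candidate assignments grows exponentially with the number of images, so this toy version is only practical for the small sets (e.g., four images) mentioned above.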