Scene depth estimation is gaining in importance as more and more AR/VR and robot vision applications are developed. Conventional depth-from-defocus techniques can passively provide depth maps from a single image, which is especially advantageous for moving scenes. However, they suffer from a depth ambiguity problem: two distinct depth planes can produce the same amount of defocus blur in the captured image. We solve this ambiguity and thereby introduce a passive technique that provides a one-to-one mapping between depth and defocus blur. Our method relies on the fact that the relationship between defocus blur and depth is also wavelength-dependent. The depth ambiguity is thus resolved by leveraging (multi-)spectral information. Specifically, we analyze the difference in defocus blur between two spectral channels to separate distinct scene depth regions. This paper provides the derivation of our solution, a robustness analysis, and a validation on consumer lenses.
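As a rough illustration of the idea (not the paper's actual model or calibration), the following Python sketch evaluates the standard thin-lens blur-circle formula with a toy linear dispersion term, so that the focal length, and hence the in-focus plane, shifts with wavelength. For a fixed sensor position, the sign of the blur difference between a blue and a red channel then indicates on which side of the focal plane a scene point lies, which is the disambiguating cue described above. All constants, the dispersion model, and the function names are illustrative assumptions.

    # Illustrative thin-lens sketch of why two colour channels resolve the
    # depth-from-defocus ambiguity. Every constant and the linear dispersion
    # model below are assumptions for illustration, not the paper's values.

    A = 0.010        # aperture diameter [m]
    F0 = 0.050       # focal length at the reference wavelength [m]
    D_FOCUS = 2.0    # depth the camera is focused at (reference channel) [m]

    def focal_length(lam, lam0=550e-9, k=2.0e4):
        # Toy linear dispersion: shorter wavelengths get a shorter focal length.
        return F0 * (1.0 + k * (lam - lam0))

    # Sensor position fixed so the reference (green) wavelength is in focus.
    _f_ref = focal_length(550e-9)
    SENSOR = _f_ref * D_FOCUS / (D_FOCUS - _f_ref)

    def blur_diameter(depth, lam):
        # Blur-circle diameter via similar triangles in the thin-lens model.
        f = focal_length(lam)
        v = f * depth / (depth - f)  # image distance of a point at this depth
        return A * abs(SENSOR - v) / v

    # A single blur value is ambiguous: one near and one far depth can produce
    # the same blur circle. The sign of the blue-red blur difference is not.
    for depth in (1.0, 1.5, 3.0, 8.0):
        diff = blur_diameter(depth, 450e-9) - blur_diameter(depth, 650e-9)
        side = "nearer than focus" if diff < 0 else "farther than focus"
        print(f"d = {depth:4.1f} m, blue-red blur diff = {diff:+.2e} m -> {side}")

With these toy numbers the sign test is only informative outside the narrow band between the two channels' in-focus planes; the derivation and robustness analysis in the paper address how the cue behaves in practice.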