We show that we can effectively fit arbitrarily complex animation models to noisy image data. Our approach is based on least-squares adjustment using a set of progressively finer control triangulations and takes advantage of three complementary sources of information: stereo data, silhouette edges, and 2D feature points. In this way, complete head models, including ears and hair, can be acquired with a cheap and entirely passive sensor, such as an ordinary video camera. They can then be fed to existing animation software to produce synthetic sequences.
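The core idea of stacking residuals from several complementary data sources into a single least-squares adjustment over control vertices can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the data terms, weights, and control-vertex parameterization are all simplified assumptions made purely for illustration.

```python
# Minimal sketch (illustrative assumptions throughout): stack residuals from
# three hypothetical data terms -- stereo points, silhouette samples, and
# 2D feature points -- and solve for a small set of control vertices by
# least-squares adjustment.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical "ground-truth" control vertices of a coarse triangulation (3D).
true_ctrl = rng.normal(size=(6, 3))

# Synthetic noisy observations derived from the true control vertices.
stereo_obs = true_ctrl + 0.01 * rng.normal(size=true_ctrl.shape)    # 3D stereo data
silhouette_obs = true_ctrl[:, :2] + 0.02 * rng.normal(size=(6, 2))  # 2D silhouette samples
feature_obs = true_ctrl[:, :2] + 0.005 * rng.normal(size=(6, 2))    # 2D feature points

# Relative weights of the three information sources (assumed values).
w_stereo, w_sil, w_feat = 1.0, 0.5, 2.0

def residuals(x):
    """Concatenate weighted residuals of all three data terms."""
    ctrl = x.reshape(-1, 3)
    r_stereo = w_stereo * (ctrl - stereo_obs).ravel()
    r_sil = w_sil * (ctrl[:, :2] - silhouette_obs).ravel()
    r_feat = w_feat * (ctrl[:, :2] - feature_obs).ravel()
    return np.concatenate([r_stereo, r_sil, r_feat])

# Start from a perturbed guess and refine by least-squares adjustment.
x0 = (true_ctrl + 0.1 * rng.normal(size=true_ctrl.shape)).ravel()
fit = least_squares(residuals, x0)
print("final cost:", fit.cost)
```

In the actual approach, such an adjustment would be repeated over progressively finer control triangulations, with the solution at each level initializing the next; the sketch only shows a single level.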