The human interface for computer graphics systems is evolving toward a multimodal approach. It is moving from keyboard operation to more natural modes of interaction using visual, audio, and gestural means. This paper discusses real-time interaction using visual input from a human face. It describes the underlying approach to recognizing and analyzing the facial movements of a real performance. The output, in the form of parameters describing the facial expressions, can then be used to drive one or more applications running on the same or on a remote computer. This enables the user to control the graphics system by means of facial expressions. It is used primarily as part of a real-time facial animation system, where the synthetic actor reproduces the animator's expression. This offers interesting possibilities for teleconferencing, as the network bandwidth requirements are low (about 7 kbit/s). Experiments have also been carried out using facial movements to control a walkthrough or to perform simple object manipulation.
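The quoted bandwidth figure can be made plausible with a back-of-envelope calculation: streaming a small set of quantized expression parameters per frame requires only a few kilobits per second. The parameter count, precision, and frame rate below are illustrative assumptions, not figures taken from the paper.

```python
# Back-of-envelope bandwidth estimate for streaming facial expression
# parameters over a network. All constants here are assumptions chosen
# for illustration, not values reported in the paper.
NUM_PARAMS = 35      # assumed number of expression parameters per frame
BITS_PER_PARAM = 8   # assumed quantization precision per parameter
FRAME_RATE = 25      # assumed frames per second

bits_per_second = NUM_PARAMS * BITS_PER_PARAM * FRAME_RATE
print(f"{bits_per_second / 1000:.1f} kbit/s")  # → 7.0 kbit/s
```

Under these assumptions the stream fits comfortably within the stated ~7 kbit/s, which is orders of magnitude below the bandwidth needed for transmitting video frames directly.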
Christophe René Joseph Ecabert
Sabine Süsstrunk, Tong Zhang, Ehsan Pajouheshgar, Yitao Xu