Department of Computer Engineering
Santa Clara University, 2003
The MPEG-4 FBA (Face and Body Animation) standard provides a set of FAPs (Facial Animation Parameters) for animating a talking face with its moods and expressions. A face model driven by this set of FAPs can produce high-quality animation at a bitrate as low as 2 kb/s. Applications for such a face model range from video telephony and video conferencing on portable wireless devices such as PDAs and cell phones to game development and movie production. The success of facial animation depends on tracking facial features and generating FAPs accurately and reliably.
This thesis presents a method for extracting FAPs from a person's face in a video sequence. The goal is to generate FAPs that make the animated face resemble the original face in the video sequence. The proposed method is based on feedback from the render unit during the FAP generation process, which ensures that the animated face stays as close to the original as possible. A penalty function is derived to measure the resemblance between the animated face and the original; the optimization process minimizes this penalty function, which combines a match function with a set of barrier functions. The match function measures how well an animated face matches the original face in the video sequence, while each barrier function quantifies the distortion of a particular part of the face and guides the optimizer away from implausible configurations. To speed up the optimization, unnecessary FAPs are eliminated and the search space is partitioned.
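The penalty-function idea described above can be sketched in a few lines. The code below is a toy illustration only, not the thesis's actual implementation: the target FAP values, limits, barrier weight, and the coordinate-descent minimizer are all hypothetical stand-ins, and the match function substitutes a simple squared error for genuine render-and-compare feedback.

```python
import math

# Hypothetical FAP values "extracted" from a video frame, and the
# allowed range for each FAP (both illustrative, not from the thesis).
TARGET = [0.2, -0.1, 0.4]
LIMITS = [(-1.0, 1.0)] * 3

def match(faps):
    # Stand-in for rendering the face and comparing it to the video
    # frame; here it is just a squared error against the target.
    return sum((f - t) ** 2 for f, t in zip(faps, TARGET))

def barrier(faps):
    # Log barriers: the penalty grows without bound as a FAP nears its
    # limits, steering the optimizer away from distorted face regions.
    total = 0.0
    for f, (lo, hi) in zip(faps, LIMITS):
        total -= math.log(f - lo) + math.log(hi - f)
    return 1e-3 * total

def penalty(faps):
    # Penalty = match term + barrier terms, as described in the text.
    return match(faps) + barrier(faps)

def minimize(faps, step=0.01, iters=2000):
    # Simple coordinate descent; the thesis partitions the FAP search
    # space to speed things up, which this sketch does not reproduce.
    for _ in range(iters):
        for i in range(len(faps)):
            for delta in (step, -step):
                trial = faps[:]
                trial[i] += delta
                lo, hi = LIMITS[i]
                if lo < trial[i] < hi and penalty(trial) < penalty(faps):
                    faps = trial
                    break
    return faps

result = minimize([0.0, 0.0, 0.0])
```

Because the barrier weight is small, the minimizer converges near the target FAP vector while remaining strictly inside the allowed ranges, which is the behavior the barrier functions are meant to enforce.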
The results show that the generated FAPs are accurate and that the proposed method is robust. The generated FAPs can drive animations that are lifelike and faithful to the original sequence, making them suitable for very high-quality applications, including internet agents (avatars) and animation for movies and computer games.