ABSTRACT

This paper addresses the problem of automatically synchronizing computer-generated faces with synthetic speech. The complete process provides a novel form of face-to-face communication and the ability to create a new range of talking, personable synthetic characters. Based on plain ASCII text input, a synthetic speech segment is generated and synchronized in real time to a graphical display of an articulating mouth and face. The key component of the algorithm is the run-time facility that adaptively synchronizes the graphical display of the face to the audio.