Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding
Preprint / Working Paper. Year: 2023

Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding

Abstract

Modeling virtual agents with behavior style is one factor in personalizing human-agent interaction. In this paper, we propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers, including speakers unseen during training. Our model performs zero-shot multimodal style transfer driven by multimodal data from the PATS database, which contains videos of various speakers. We view style as pervasive while speaking: it colors the expressivity of communicative behaviors, while speech content is carried by multimodal signals and text. This disentanglement of content and style allows us to directly infer the style embedding even of speakers whose data are not part of the training phase, without any further training or fine-tuning. The first goal of our model is to generate the gestures of a source speaker based on the content of two input modalities: mel-spectrogram and text semantics. The second goal is to condition the source speaker's predicted gestures on the multimodal behavior style embedding of a target speaker. The third goal is to allow zero-shot style transfer from speakers unseen during training without retraining the model. Our system consists of two main components: (1) a speaker style encoder network that learns to generate a fixed-dimensional speaker style embedding from a target speaker's multimodal data (mel-spectrogram, pose, and text); and (2) a sequence-to-sequence synthesis network that synthesizes gestures based on the content of the input modalities (text and mel-spectrogram) of a source speaker, conditioned on the speaker style embedding. We show that our model is able to synthesize gestures of a source speaker given the two input modalities, and to transfer the knowledge of target speaker style variability learned by the speaker style encoder to the gesture generation task in a zero-shot setup, indicating that the model has learned a high-quality speaker representation. For our evaluation, we convert the generated 2D gestures to 3D poses and produce 3D animations of the generated gestures. We conduct objective and subjective evaluations to validate our approach and compare it with baselines.
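The abstract describes a two-component system: a speaker style encoder that maps a target speaker's multimodal data (mel-spectrogram, pose, text) to a fixed-dimensional style embedding, and a sequence-to-sequence synthesis network that predicts gestures from the source speaker's content modalities conditioned on that embedding. The following is a minimal PyTorch-style sketch of that structure only; all module names, dimensions, and the fusion scheme are hypothetical assumptions for illustration, not the authors' implementation, and the adversarial disentanglement losses are omitted.

```python
# Hypothetical sketch of the two-component architecture described above.
# Dimensions, module choices, and fusion are assumptions, not the paper's model.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Encodes a target speaker's multimodal data (mel-spectrogram, pose, text
    embeddings) into a fixed-dimensional speaker style embedding."""
    def __init__(self, mel_dim=80, pose_dim=50, text_dim=300, style_dim=128):
        super().__init__()
        self.rnn = nn.GRU(mel_dim + pose_dim + text_dim, style_dim, batch_first=True)

    def forward(self, mel, pose, text):
        # Concatenate frame-aligned modalities, then pool over time into one vector.
        x = torch.cat([mel, pose, text], dim=-1)   # (B, T, mel+pose+text)
        _, h = self.rnn(x)                         # (1, B, style_dim)
        return h.squeeze(0)                        # (B, style_dim)

class GestureGenerator(nn.Module):
    """Sequence-to-sequence synthesis network: predicts gesture poses from the
    source speaker's content (mel-spectrogram and text), conditioned on a style
    embedding broadcast over every time step."""
    def __init__(self, mel_dim=80, text_dim=300, style_dim=128, pose_dim=50, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(mel_dim + text_dim + style_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, pose_dim)

    def forward(self, mel, text, style):
        style_seq = style.unsqueeze(1).expand(-1, mel.size(1), -1)  # (B, T, style_dim)
        x = torch.cat([mel, text, style_seq], dim=-1)
        h, _ = self.encoder(x)
        return self.decoder(h)                     # (B, T, pose_dim)

# Zero-shot style transfer: infer the style embedding of an unseen target speaker
# and condition the generator on it, with no retraining or fine-tuning.
style_encoder, generator = StyleEncoder(), GestureGenerator()
B, T = 2, 100  # dummy batch of 2 clips, 100 time steps
target_style = style_encoder(torch.randn(B, T, 80), torch.randn(B, T, 50), torch.randn(B, T, 300))
gestures = generator(torch.randn(B, T, 80), torch.randn(B, T, 300), target_style)  # (B, T, 50)
```

In this sketch the style embedding is simply concatenated to the content features at every time step; the paper additionally uses adversarial disentanglement to keep content information out of the style embedding, which is what makes direct inference on unseen speakers possible.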
Main file
2208.01917(1).pdf (1.3 MB)
Origin: Files produced by the author(s)
License: Copyright (All rights reserved)

Dates and versions

hal-03972415, version 1 (03-02-2023)

Identifiers

Cite

Mireille Fares, Michele Grimaldi, Catherine Pelachaud, Nicolas Obin. Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding. 2023. ⟨hal-03972415⟩