Preprint / Working paper, Year: 2023

Guided Attention for Interpretable Motion Captioning

Karim Radouane
Andon Tchechmedjiev
Sylvie Ranwez
Julien Lagarde

Abstract

While much effort has been invested in generating human motion from text, relatively few studies have addressed the reverse direction, that is, generating text from motion. Much of this research maximizes generation quality with little regard for the interpretability of the architectures, in particular the influence of individual body parts on the generated words and the temporal synchronization of words with specific movements and actions. This study explores the combination of movement encoders with spatio-temporal attention models and proposes strategies to guide the attention during training so that it highlights perceptually pertinent areas of the skeleton over time. We show that adding guided attention with an adaptive gate leads to interpretable captioning while improving performance compared to non-interpretable state-of-the-art systems with higher parameter counts. On the KIT MLD dataset, we obtain a BLEU@4 of 24.4% (SOTA +6%), a ROUGE-L of 58.30% (SOTA +14.1%), a CIDEr of 112.10 (SOTA +32.6) and a BERTScore of 41.20% (SOTA +18.20%). On HumanML3D, we obtain a BLEU@4 of 25.00 (SOTA +2.7%), a ROUGE-L of 55.4% (SOTA +6.1%), a CIDEr of 61.6 (SOTA -10.9%) and a BERTScore of 40.3% (SOTA +2.5%). Our code implementation and reproduction details will soon be available at https://github.com/rd20karim/M2T-Interpretable/tree/main.
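
For readers unfamiliar with the mechanism, the sketch below illustrates one way a guided spatial attention over body parts with an adaptive gate could be written in PyTorch. It is only a minimal illustration under assumed module names, tensor shapes, and a KL-based guidance loss; it is not the authors' released code (see the repository linked above for the actual implementation).

import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedSpatialAttention(nn.Module):
    """Toy spatial attention over body-part features with an adaptive gate
    and an optional guidance loss (illustrative, not the paper's code)."""

    def __init__(self, part_dim: int, hidden_dim: int):
        super().__init__()
        # Scores one attention logit per body part given the decoder state.
        self.score = nn.Linear(part_dim + hidden_dim, 1)
        # Adaptive gate: how much the attended motion context should
        # contribute to predicting the current word.
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, part_feats, dec_hidden, guide_mask=None):
        # part_feats: (batch, n_parts, part_dim) per-body-part motion features
        # dec_hidden: (batch, hidden_dim) decoder state for the current word
        # guide_mask: (batch, n_parts), 1 for annotated relevant parts, else 0
        b, p, _ = part_feats.shape
        h = dec_hidden.unsqueeze(1).expand(b, p, dec_hidden.size(-1))
        logits = self.score(torch.cat([part_feats, h], dim=-1)).squeeze(-1)
        attn = F.softmax(logits, dim=-1)                      # (batch, n_parts)
        context = torch.bmm(attn.unsqueeze(1), part_feats).squeeze(1)
        gate = torch.sigmoid(self.gate(dec_hidden))           # (batch, 1)
        gated_context = gate * context                        # adaptive gating

        # Guidance term: pull attention mass toward the relevant body parts.
        guide_loss = part_feats.new_zeros(())
        if guide_mask is not None:
            target = guide_mask / guide_mask.sum(-1, keepdim=True).clamp(min=1.0)
            guide_loss = F.kl_div(attn.clamp(min=1e-8).log(), target,
                                  reduction="batchmean")
        return gated_context, attn, gate, guide_loss

In such a setup, the guidance term would be added with some weight to the captioning cross-entropy during training so that attention concentrates on the body parts associated with each word, while at inference the per-step attention weights and gate values provide the interpretable signal.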

Dates and versions

hal-04251363, version 1 (20-10-2023)
hal-04251363, version 2 (06-09-2024)

Identifiers

Cite

Karim Radouane, Andon Tchechmedjiev, Sylvie Ranwez, Julien Lagarde. Guided Attention for Interpretable Motion Captioning. 2023. ⟨hal-04251363v1⟩