Guided Attention for Interpretable Motion Captioning
Conference paper, 2024

Abstract

Diverse and extensive work has recently been conducted on text-conditioned human motion generation. However, progress in the reverse direction, motion captioning, has been comparatively limited. In this paper, we introduce a novel architecture design that enhances text generation quality by emphasizing interpretability through spatio-temporal and adaptive attention mechanisms. To encourage human-like reasoning, we propose methods for guiding attention during training, emphasizing relevant skeleton areas over time and distinguishing motion-related words. We discuss and quantify our model's interpretability using relevant histograms and density distributions. Furthermore, we leverage interpretability to derive fine-grained information about human motion, including action localization, body part identification, and the distinction of motion-related words. Finally, we discuss the transferability of our approaches to other tasks. Our experiments demonstrate that attention guidance leads to interpretable captioning while enhancing performance compared to non-interpretable state-of-the-art systems with higher parameter counts. The code is available at: https://github.com/rd20karim/M2T-Interpretable.
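As a rough illustration of what guiding attention during training can look like, the sketch below adds a KL-divergence penalty that pulls a model's attention maps toward reference relevance masks (for example, which body parts matter at each decoding step). This is a minimal, generic sketch in PyTorch; the function name, tensor shapes, and loss form are assumptions for illustration, not the paper's exact formulation, which is in the repository linked above.

# Hypothetical sketch of an attention-guidance term (not the authors' exact
# method): during training it would be combined with the captioning loss,
# e.g. total = caption_cross_entropy + lambda_guid * guidance (lambda_guid
# is a hypothetical weighting hyperparameter).
import torch
import torch.nn.functional as F

def attention_guidance_loss(attn, target_mask, eps=1e-8):
    # attn:        (batch, steps, keys) attention weights, rows sum to 1
    # target_mask: (batch, steps, keys) binary mask marking relevant keys
    #              (e.g., skeleton joints relevant at each time step)
    # Normalize the mask into a reference distribution over keys.
    target = target_mask / (target_mask.sum(dim=-1, keepdim=True) + eps)
    # KL(target || attn); F.kl_div expects log-probabilities as input.
    return F.kl_div((attn + eps).log(), target, reduction="batchmean")

# Toy usage with random attention maps and a fabricated relevance mask.
attn = torch.softmax(torch.randn(2, 5, 6), dim=-1)
mask = torch.zeros(2, 5, 6)
mask[..., :2] = 1.0  # pretend only the first two keys are relevant
print(attention_guidance_loss(attn, mask))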
Main file: Guided_Attention_BMVC2024_with_Supp.pdf (1.53 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04251363, version 1 (20-10-2023)
hal-04251363, version 2 (06-09-2024)


Cite

Karim Radouane, Julien Lagarde, Sylvie Ranwez, Andon Tchechmedjiev. Guided Attention for Interpretable Motion Captioning. BMVC 2024 - The 35th British Machine Vision Conference, Nov 2024, Glasgow, United Kingdom. ⟨10.48550/arXiv.2310.07324⟩. ⟨hal-04251363v2⟩
276 views
155 downloads
