Conference paper, 2023

LSG Attention: Extrapolation of pretrained Transformers to long sequences

Abstract

Transformer models achieve state-of-the-art performance on a wide range of NLP tasks. They nevertheless suffer from a prohibitive limitation of the self-attention mechanism, which induces O(n²) complexity with respect to sequence length. To address this limitation we introduce the LSG architecture, which relies on Local, Sparse and Global attention. We show that LSG attention is fast, efficient and competitive on long-document classification and summarization tasks. Interestingly, it can also be used to adapt existing pretrained models so that they extrapolate efficiently to longer sequences with no additional training. Along with the LSG attention mechanism, we provide tools to train new models and to adapt existing ones based on it.
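
As a rough illustration of the local and global attention pattern described in the abstract, the sketch below (a hypothetical Python/PyTorch example, not the authors' implementation; the function name and parameters are assumptions, and the sparse connections of LSG attention are omitted) restricts each query to a fixed local window of keys plus a few global tokens, so the number of scored positions per query is bounded instead of growing with the full sequence length.

import torch
import torch.nn.functional as F

def local_global_attention(q, k, v, window=4, n_global=2):
    # Each query attends to a local window of keys plus a few global
    # tokens, so at most (2 * window + 1 + n_global) positions are
    # scored per query instead of n.
    # A dense score matrix is materialised here only for clarity;
    # efficient implementations compute the allowed blocks directly.
    n, d = q.shape
    scores = torch.full((n, n), float("-inf"))
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        allowed = sorted(set(range(lo, hi)) | set(range(n_global)))
        idx = torch.tensor(allowed)
        scores[i, idx] = q[i] @ k[idx].T / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Toy usage on a short sequence.
n, d = 16, 8
q, k, v = (torch.randn(n, d) for _ in range(3))
out = local_global_attention(q, k, v)
print(out.shape)  # torch.Size([16, 8])
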
Main file
PAKDD2023 (3).pdf (279.73 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03835159, version 1 (26-06-2023)

Identifiers

Cite

Charles Condevaux, Sébastien Harispe. LSG Attention: Extrapolation of pretrained Transformers to long sequences. PAKDD 2023 - The 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, May 2023, Osaka, Japan. ⟨10.1007/978-3-031-33374-3_35⟩. ⟨hal-03835159⟩