Joint Beamforming and Speaker-Attributed ASR for Real Distant-Microphone Meeting Transcription
Preprint, Working Paper. Year: 2023


Abstract

Distant-microphone meeting transcription is a challenging task. State-of-the-art end-to-end speaker-attributed automatic speech recognition (SA-ASR) architectures lack a multichannel noise and reverberation reduction front-end, which limits their performance. In this paper, we introduce a joint beamforming and SA-ASR approach for real meeting transcription. We first describe a data alignment and augmentation method to pretrain a neural beamformer on real meeting data. We then compare fixed, hybrid, and fully neural beamformers as front-ends to the SA-ASR model. Finally, we jointly optimize the fully neural beamformer and the SA-ASR model. Experiments on the real AMI corpus show that, while state-of-the-art multi-frame cross-channel attention-based channel fusion fails to improve ASR performance, fine-tuning SA-ASR on the fixed beamformer's output and jointly fine-tuning SA-ASR with the neural beamformer reduce the word error rate by 8% and 9% relative, respectively.
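The joint optimization described above (a learnable multichannel front-end whose single-channel output feeds the SA-ASR back-end, with the ASR loss backpropagated through the beamformer) can be illustrated by the sketch below. This is a minimal illustration under stated assumptions, not the paper's implementation: the NeuralBeamformer and SAASR classes, the frame-level cross-entropy objective, and all shapes and hyperparameters are hypothetical stand-ins chosen to make the example self-contained and runnable.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: the paper's actual neural beamformer and
# end-to-end SA-ASR architectures are not specified in the abstract.
class NeuralBeamformer(nn.Module):
    """Maps a multichannel waveform (B, C, T) to one channel (B, T).

    Simplest learnable fusion: per-channel mixing weights. A real
    neural beamformer would estimate time-varying filter weights.
    """
    def __init__(self, num_channels=8):
        super().__init__()
        self.weights = nn.Parameter(torch.full((num_channels,), 1.0 / num_channels))

    def forward(self, x):                         # x: (B, C, T)
        return torch.einsum("c,bct->bt", self.weights, x)

class SAASR(nn.Module):
    """Placeholder speaker-attributed ASR: single-channel audio in,
    per-frame token logits out (a real SA-ASR model also predicts
    speaker labels)."""
    def __init__(self, vocab_size=1000, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, wav):                       # wav: (B, T)
        h, _ = self.encoder(wav.unsqueeze(-1))    # (B, T, hidden)
        return self.head(h)                       # (B, T, vocab)

beamformer, sa_asr = NeuralBeamformer(), SAASR()
optim = torch.optim.Adam(
    list(beamformer.parameters()) + list(sa_asr.parameters()), lr=1e-4
)

# One joint fine-tuning step on fake data: the ASR loss gradient
# flows through the beamformer, so the front-end is optimized for
# the recognition objective rather than a generic enhancement one.
x = torch.randn(2, 8, 400)                        # fake 8-channel batch
targets = torch.randint(0, 1000, (2, 400))        # fake frame-level tokens
logits = sa_asr(beamformer(x))
loss = nn.functional.cross_entropy(logits.transpose(1, 2), targets)
optim.zero_grad()
loss.backward()
optim.step()
```

The key point of the sketch is the single computation graph: because the SA-ASR loss gradient reaches the beamformer's parameters, the front-end learns to suppress noise and reverberation in whatever way most helps transcription, which is the motivation for the joint fine-tuning reported in the abstract.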

Dates and versions

hal-04755558, version 1 (29-10-2024)

Identifiers

  • HAL Id: hal-04755558, version 1

Cite

Can Cui, Imran Ahamad Sheikh, Mostafa Sadeghi, Emmanuel Vincent. Joint Beamforming and Speaker-Attributed ASR for Real Distant-Microphone Meeting Transcription. 2023. ⟨hal-04755558⟩