AHMN: A multi-modal network for long MOOC videos chapter segmentation
Journal article in Multimedia Tools and Applications, 2023

Abstract

This paper proposes a task named MOOC Videos Chapter Segmentation (MVCS), a significant problem in the field of video understanding. To address it, we first introduce a dataset called MOOC Videos Understanding (MVU), which consists of approximately 10k annotated chapters organized from 120k snippets of 400 MOOC videos, where chapters and snippets are the two levels of video unit proposed in this paper for hierarchical representation of videos. We then design the Attention-based Hierarchical bi-LSTM Multi-modal Network (AHMN) around three core ideas: (1) we exploit the features of multi-modal semantic elements, including video, audio, and text, together with an attention-based multi-modal fusion module, to extract video information comprehensively; (2) we focus on chapter boundaries rather than recognizing the content of the chapters themselves, and develop the Boundary Predict Network (BPN) to label boundaries between chapters; (3) we exploit the semantic consistency between snippets and develop Consistency Modeling as an auxiliary task to improve the performance of the BPN. Our experiments demonstrate that the proposed AHMN solves the MVCS task precisely, outperforming previous methods on all evaluation metrics.
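The abstract outlines the pipeline without implementation detail: per-snippet video, audio, and text features are fused by an attention module, and a bi-LSTM then labels each snippet as a chapter boundary or not. The following PyTorch sketch illustrates that general idea only; it is not the authors' implementation. The module names AttentionFusion and BoundaryPredictor, the feature dimension, and the hidden size are all assumptions, and the Consistency Modeling auxiliary task is omitted.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse per-snippet video, audio, and text features with learned attention weights.
    (Hypothetical sketch; the paper's actual fusion module may differ.)"""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar score per modality embedding

    def forward(self, feats):
        # feats: (batch, snippets, modalities, dim)
        weights = torch.softmax(self.score(feats), dim=2)  # attention over the modality axis
        return (weights * feats).sum(dim=2)                # fused: (batch, snippets, dim)

class BoundaryPredictor(nn.Module):
    """Bi-LSTM over the snippet sequence; classifies each snippet as boundary or not.
    (Hypothetical stand-in for the paper's Boundary Predict Network.)"""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.fusion = AttentionFusion(dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # boundary / non-boundary logits

    def forward(self, feats):
        fused = self.fusion(feats)   # (batch, snippets, dim)
        out, _ = self.lstm(fused)    # (batch, snippets, 2 * hidden)
        return self.head(out)        # (batch, snippets, 2)

# Toy usage: 4 videos, 32 snippets each, 3 modalities, 512-d features (all assumed sizes).
model = BoundaryPredictor(dim=512)
x = torch.randn(4, 32, 3, 512)
logits = model(x)
print(logits.shape)  # torch.Size([4, 32, 2])

In this framing, chapter segmentation reduces to per-snippet binary sequence labeling, so standard classification losses apply; chapter spans are then recovered by cutting the video at the predicted boundary snippets.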
No file deposited

Dates and versions

hal-04361312, version 1 (22-12-2023)

Identifiers

Cite

Jiasong Wu, Yu Sun, Youyong Kong, Huazhong Shu, Lotfi Senhadji. AHMN: A multi-modal network for long MOOC videos chapter segmentation. Multimedia Tools and Applications, 2023, ⟨10.1007/s11042-023-17654-2⟩. ⟨hal-04361312⟩