MUG: A General Meeting Understanding and Generation Benchmark
Qinglin Zhang (Alibaba); Chong Deng (Alibaba inc); Jiaqing Liu (Speech Lab, Alibaba Group); Hai Yu (Alibaba); Qian Chen (Speech Lab, DAMO Academy, Alibaba Group); Wen Wang (Alibaba Group); Zhijie Yan (Alibaba Inc.); Jinglin Liu (Zhejiang University); Yi Ren (Bytedance); Zhou Zhao (Zhejiang University)
SPS
Listening to long video or audio recordings from video conferences and online courses to acquire information is extremely inefficient. Even after ASR systems transcribe recordings into long-form spoken language documents, reading the ASR transcripts only partially speeds up information seeking. A range of NLP applications, such as keyphrase extraction, topic segmentation, and summarization, have been observed to significantly improve users' efficiency in grasping important information. The meeting scenario is among the most valuable scenarios for deploying these spoken language processing (SLP) capabilities. However, the lack of large-scale public meeting datasets annotated for these SLP tasks severely hinders their advancement. To promote SLP advancement, we establish a large-scale general Meeting Understanding and Generation Benchmark (MUG) to benchmark the performance of a wide range of SLP tasks, including topic segmentation, topic-level and session-level extractive summarization, topic title generation, keyphrase extraction, and action item detection. To facilitate the MUG benchmark, we construct and release a large-scale meeting dataset for comprehensive long-form SLP development, the AliMeeting4MUG Corpus, which consists of 654 recorded Mandarin meeting sessions with diverse topic coverage, with manual annotations for SLP tasks on the manual transcripts of the meeting recordings. To the best of our knowledge, the AliMeeting4MUG Corpus is so far the largest meeting corpus in scale and supports the widest range of SLP tasks. In this paper, we provide a detailed introduction to this corpus, the SLP tasks and evaluation methods, and baseline systems and their performance.