
Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers

Venue: EMNLP

Shusheng Xu1*, Xingxing Zhang2, Yi Wu1,3, Furu Wei2 and Ming Zhou2

1 IIIS, Tsinghua University, Beijing, China

2 Microsoft Research Asia, Beijing, China

3 Shanghai Qi Zhi Institute, Shanghai, China

xuss20@mails.tsinghua.edu.cn

{xizhang,fuwei,mingzhou}@microsoft.com

jxwuyi@gmail.com


Abstract:

Unsupervised extractive document summarization aims to select important sentences from a document without using labeled summaries during training. Existing methods are mostly graph-based with sentences as nodes and edge weights measured by sentence similarities. In this work, we find that transformer attentions can be used to rank sentences for unsupervised extractive summarization. Specifically, we first pre-train a hierarchical transformer model using unlabeled documents only. Then we propose a method to rank sentences using sentence-level self-attentions and pre-training objectives. Experiments on CNN/DailyMail and New York Times datasets show our model achieves state-of-the-art performance on unsupervised summarization. We also find in experiments that our model is less dependent on sentence positions. When using a linear combination of our model and a recent unsupervised model explicitly modeling sentence positions, we obtain even better results.
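To illustrate the sentence-ranking idea described in the abstract, the snippet below is a minimal sketch, not the authors' implementation. It assumes sentence-level representations are already available (random vectors stand in for the outputs of the pre-trained hierarchical transformer) and scores each sentence by how much sentence-level self-attention it receives, keeping the top-ranked ones. The function name `rank_sentences_by_attention` and all parameter choices are illustrative.

```python
# Minimal sketch (not the paper's code): rank sentences by the sentence-level
# self-attention they receive, then extract the top-k as the summary.
import torch
import torch.nn as nn

torch.manual_seed(0)

def rank_sentences_by_attention(sent_embeds: torch.Tensor, num_heads: int = 4, top_k: int = 3):
    """sent_embeds: (num_sentences, dim) sentence representations."""
    num_sents, dim = sent_embeds.shape
    attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads, batch_first=True)
    x = sent_embeds.unsqueeze(0)          # (1, num_sentences, dim)
    # attn_weights: (1, num_sentences, num_sentences); entry [i, j] is the
    # attention sentence i pays to sentence j (averaged over heads).
    _, attn_weights = attn(x, x, x)
    # A sentence's importance = total attention it receives from all sentences.
    scores = attn_weights.squeeze(0).sum(dim=0)
    return torch.topk(scores, k=min(top_k, num_sents)).indices.tolist()

# Toy usage: 8 random "sentence embeddings" stand in for pre-trained representations.
doc = torch.randn(8, 64)
print(rank_sentences_by_attention(doc, top_k=3))
```

In the paper, the attention comes from a hierarchical transformer pre-trained on unlabeled documents and is combined with pre-training objectives to rank sentences; the untrained attention module above only shows the scoring mechanics.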
