Students are often tasked with reading a document and producing a summary (for example, a book report) to demonstrate both reading comprehension and writing ability. This abstractive text summarization is one of the most challenging tasks in natural language processing, involving understanding of long passages, information compression, and language generation. The dominant paradigm for training machine learning models to do this is sequence-to-sequence (seq2seq) learning, where a neural network learns to map input sequences to output sequences. While these seq2seq models were initially developed using recurrent neural networks, Transformer encoder-decoder models have recently become favored as they are more effective at modeling the dependencies present in the long sequences encountered in summarization.
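To make this setup concrete, here is a minimal sketch of seq2seq summarization with a pre-trained Transformer encoder-decoder. It assumes the Hugging Face transformers library and the t5-small checkpoint, both used purely as illustrative stand-ins; any encoder-decoder summarization model follows the same input-sequence-to-output-sequence pattern.

```python
# A minimal sketch of seq2seq summarization with a Transformer encoder-decoder.
# Assumes the Hugging Face `transformers` library (with PyTorch) and the
# t5-small checkpoint purely as examples; they are not part of the original post.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

document = (
    "Students are often tasked with reading a document and producing a "
    "summary to demonstrate both reading comprehension and writing ability."
)

# Encode the input sequence, generate the output sequence with beam search,
# and decode it back into text.
inputs = tokenizer("summarize: " + document, return_tensors="pt", truncation=True)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```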
Transformer models combined with self-supervised pre-training (e.g., BERT, GPT-2, RoBERTa, XLNet, ALBERT, T5, ELECTRA) have proven to be a powerful framework for general language learning, achieving state-of-the-art performance when fine-tuned on a wide array of language tasks. In prior work, the self-supervised objectives used in pre-training have been somewhat agnostic to the downstream application in favor of generality; we wondered whether better performance could be achieved if the self-supervised objective more closely mirrored the final task.
In “PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization” (to appear at the 2020 International Conference on Machine Learning), we designed a pre-training self-supervised objective (called gap-sentence generation) for Transformer encoder-decoder models to improve fine-tuning performance on abstractive summarization, achieving state-of-the-art results on 12 diverse summarization datasets. Supplementary to the paper, we are also releasing the training code and model checkpoints on GitHub.
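As a rough illustration of the gap-sentence generation objective, the sketch below shows how a single pre-training example might be assembled: selected sentences are removed from the document and replaced by a mask token in the input, and the target is the concatenation of the removed sentences. The mask token name and the sentence-selection heuristic here (simply preferring longer sentences) are placeholders; the actual objective selects sentences by importance, as described in the paper.

```python
# A minimal sketch of constructing a gap-sentence generation (GSG) example.
# The mask token name and the length-based selection heuristic are placeholders,
# not the exact choices made in the PEGASUS implementation.

MASK_TOKEN = "<mask_1>"  # placeholder; the real token is implementation-specific


def make_gsg_example(sentences, gap_ratio=0.3):
    """Build (input_text, target_text) for gap-sentence generation."""
    num_gaps = max(1, int(len(sentences) * gap_ratio))
    # Stand-in importance score: prefer longer sentences.
    ranked = sorted(range(len(sentences)), key=lambda i: len(sentences[i]), reverse=True)
    selected = set(ranked[:num_gaps])

    # Input: the document with selected sentences replaced by the mask token.
    input_sentences = [MASK_TOKEN if i in selected else s for i, s in enumerate(sentences)]
    # Target: the selected sentences, concatenated in document order.
    target_sentences = [s for i, s in enumerate(sentences) if i in selected]
    return " ".join(input_sentences), " ".join(target_sentences)


doc = [
    "Pegasus is a mythical winged divine horse.",
    "It is one of the most recognized creatures in Greek mythology.",
    "Everywhere the winged horse struck his hoof to the earth, a spring burst forth.",
]
source, target = make_gsg_example(doc)
print("input :", source)
print("target:", target)
```

During pre-training, the model learns to generate the target sentences from the masked document, which closely mirrors the final summarization task.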
Link: https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html