Self-Supervised and Controlled Multi-Document Opinion Summarization

Hady Elsahar, Maximin Coavoux, Matthias Gallé, Jos Rozen
EACL 2021 [Paper] [Twitter summary]

We address the problem of unsupervised abstractive summarization of collections of user-generated reviews through self-supervision and control. We propose a self-supervised setup that considers an individual document as a target summary for a set of similar documents. This setting makes training simpler than in previous approaches by relying only on the standard log-likelihood loss. We address the problem of hallucinations through the use of control codes, which steer the generation towards more coherent and relevant summaries. Finally, we extend the Transformer architecture to allow for multiple reviews as input. Our benchmarks on two datasets against graph-based and recent neural abstractive unsupervised models show that our proposed method generates summaries of superior quality and relevance. This is confirmed by our human evaluation, which focuses explicitly on the faithfulness of generated summaries. We also provide an ablation study, which shows the importance of the control setup in controlling hallucinations and achieving high sentiment and topic alignment of the summaries with the input reviews.
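The abstract only sketches the self-supervised setup. As a rough illustration, the Python sketch below builds leave-one-out training pairs (one review used as the pseudo-summary, similar reviews of the same entity used as the multi-document input) and prefixes the input with control codes derived from the target. The `Review` fields, the `<sent=...>`/`<len=...>` codes, and the `<doc>` separator are assumptions made for this example, not the paper's exact implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Review:
    entity_id: str  # product/business the review belongs to
    text: str
    rating: int     # e.g. 1-5 stars

def control_codes(target: Review) -> str:
    # Control codes steer generation; illustrated here with a sentiment
    # bucket and a coarse length bucket derived from the pseudo-summary.
    sentiment = "POS" if target.rating >= 4 else ("NEG" if target.rating <= 2 else "NEU")
    length = "LONG" if len(target.text.split()) > 60 else "SHORT"
    return f"<sent={sentiment}> <len={length}>"

def build_pairs(reviews: List[Review], k: int = 8) -> List[Tuple[str, str]]:
    """Leave-one-out self-supervision: each review becomes the target
    'summary'; up to k other reviews of the same entity form the input."""
    pairs = []
    for i, target in enumerate(reviews):
        sources = [r.text for j, r in enumerate(reviews)
                   if j != i and r.entity_id == target.entity_id][:k]
        if not sources:
            continue
        # Concatenate source reviews with a separator; the paper's
        # multi-input Transformer instead handles multiple reviews natively.
        src = control_codes(target) + " " + " <doc> ".join(sources)
        pairs.append((src, target.text))
    return pairs
```

The resulting (input, target) pairs can then be fed to a standard sequence-to-sequence model trained with log-likelihood loss; at inference time the control codes are set from the input reviews rather than from a target.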
