What is it about?

In today's world, digital communication is an integral part of how we interact with one another, and it has led to a massive increase in textual dialogue across online messaging, IRC, and meeting platforms. When the chat history is extensive, however, reviewing all of the content before joining a conversation becomes difficult and time-consuming. Moreover, unlike news articles and documents with well-structured text, dialogue typically comes from two or more interlocutors. Advances in transfer learning driven by Transformer-based pretrained language models (PLMs) have demonstrated that these models can be used effectively for abstractive dialogue summarization, enabling various real-world applications. For instance, customer service centers and hospitals could save a massive amount of time by summarizing customer service interactions and doctor-patient conversations.

Dialogue summarization is the task of extracting the highlights of a dialogue and presenting them concisely. Summarization follows two basic paradigms: extractive and abstractive. Extractive summarization selects relevant lines or phrases from the original text and assembles them into a summary, whereas abstractive summarization aims to produce a succinct expression that captures the salient ideas of the source text. We propose a simple but effective hybrid approach that consists of two modules and leverages PLMs to generate abstractive summaries using transfer learning.
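
To make this concrete, here is a minimal sketch of abstractive summarization of a dialogue with an off-the-shelf PLM, using the Hugging Face transformers library. The checkpoint name is an illustrative assumption (a BART model fine-tuned on the SAMSum dialogue corpus), not the exact model or the two-module pipeline from the paper.

# Minimal sketch: abstractive dialogue summarization with a pretrained model.
# The checkpoint below is an illustrative choice, not the paper's exact model.
from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

dialogue = (
    "Agent: Hello, how can I help you today?\n"
    "Customer: My order arrived damaged and I'd like a replacement.\n"
    "Agent: I'm sorry to hear that. I'll ship a replacement today.\n"
    "Customer: Great, thank you!"
)

# The model generates a new, concise sentence rather than copying turns verbatim.
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])

An extractive system, by contrast, would simply select salient turns from the transcript instead of rephrasing them.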

Why is it important?

We presented a simple but effective hybrid approach for building an abstractive summarization system for long dialogue transcripts, which can be quite useful in scenarios such as automatic meeting summarization. Experimental results showed the effectiveness of the approach on several public benchmarks, especially the AMI and ICSI datasets. In this study, we demonstrate how our methodology carefully leverages pretrained language models (PLMs), performs transfer learning effectively by adapting them to dialogue scenarios, and can also help with domain adaptation in low-resource settings.
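
As a rough illustration of the transfer-learning step, the sketch below fine-tunes a generic pretrained encoder-decoder on in-domain (dialogue, summary) pairs, which is how a PLM can be adapted to a new dialogue domain even in a low-resource setting. It assumes the Hugging Face transformers and datasets libraries; the base model, toy data, and hyperparameters are illustrative assumptions, not the paper's configuration.

# Illustrative sketch: domain adaptation by fine-tuning a pretrained
# encoder-decoder on in-domain (dialogue, summary) pairs. All names and
# hyperparameters here are assumptions, not the paper's exact setup.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Tiny stand-in corpus; in practice this would be domain transcripts
# (e.g. call-center dialogues) paired with reference summaries.
data = Dataset.from_dict({
    "dialogue": ["Agent: Hi. Customer: My order is late and I want an update."],
    "summary": ["The customer asks for an update on a late order."],
})

def preprocess(batch):
    # Tokenize the dialogue as input and the reference summary as the target.
    model_inputs = tokenizer(batch["dialogue"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = data.map(preprocess, batched=True, remove_columns=data.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="bart-dialogue-adapted",
                                  num_train_epochs=3,
                                  per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()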

Perspectives

Given the wide range of applications of this work, writing this article was a great pleasure, and I hope you find it thought-provoking. In general, NLG-based solutions can be challenging to build, but they can be quite helpful in assisting end users. The approach highlighted in this article was also implemented for a use case in the hotel and hospitality industry, automatically generating a concise abstractive summary of the agent and customer sides of a call after it ended. This saved the business time and money and informed decision-making for improving the customer experience.

Rohit Sroch
Indian Institute of Technology Roorkee

Read the Original

This page is a summary of: Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning, December 2021, ACM (Association for Computing Machinery),
DOI: 10.1145/3508546.3508640.
You can read the full text via the DOI above.
