What is it about?

AI models for text summarization can benefit from learning multiple related tasks. Our research explores how grouping these tasks into "families" (such as reading comprehension or natural language inference) affects a model's summarization ability. We tested three strategies for training with these task families: sequential, simultaneous, and continual learning. Certain combinations of task families, particularly advanced reading comprehension paired with natural language inference, significantly improved summarization performance. Interestingly, we found that the choice and combination of task families matters more than the specific training strategy used, which suggests that carefully selecting task families is crucial for developing more effective AI summarization tools.
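To make the three strategies concrete, here is a minimal Python sketch of how each one could schedule training batches from the task families. The family names, batch placeholders, and train_step function are illustrative assumptions, not the authors' implementation, and the replay step in the continual variant is only one common way to realize continual learning.

import itertools
import random

# Hypothetical task-family batches; real training would use actual datasets.
task_families = {
    "reading_comprehension": ["rc_batch_1", "rc_batch_2"],
    "natural_language_inference": ["nli_batch_1", "nli_batch_2"],
    "summarization": ["sum_batch_1", "sum_batch_2"],
}

def train_step(batch):
    # Placeholder for one gradient update on the model.
    print("update on", batch)

def sequential(families):
    # Finish each task family in turn, ending with summarization.
    for batches in families.values():
        for batch in batches:
            train_step(batch)

def simultaneous(families):
    # Mix batches from every family into one shuffled training stream.
    mixed = list(itertools.chain.from_iterable(families.values()))
    random.shuffle(mixed)
    for batch in mixed:
        train_step(batch)

def continual(families):
    # Train family by family, replaying one earlier batch after each
    # family to limit forgetting (replay is one common mechanism; the
    # paper's exact continual setup may differ).
    seen = []
    for batches in families.values():
        for batch in batches:
            train_step(batch)
        seen.extend(batches)
        train_step(random.choice(seen))

sequential(task_families)

The three schedulers see the same data; only the ordering of updates differs, which is exactly the variable the study found to matter less than the choice of task families themselves.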

Read the Original

This page is a summary of: Analyzing Multi-Task Learning for Abstractive Text Summarization, January 2022, Association for Computational Linguistics (ACL). DOI: 10.18653/v1/2022.gem-1.5.
You can read the full text via the DOI above.
