What is it about?

In this survey, we focus on textual classifiers, language models, transformers, and embeddings. We address the question of whether attention mechanisms are explainable. We also show how such models can be explained after training (post hoc) and how their architectures can be modified to allow for transparency. We identify three stages at which explainability can be introduced: the input level, the learned language representations, and the decisions or outcomes. We conclude with a case study on neural machine translation.
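As a concrete illustration of the kind of post-training inspection the survey discusses, the sketch below extracts and averages a transformer's attention weights for one sentence. It is a minimal example, assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint; neither is prescribed by the paper.

```python
# Minimal sketch: inspecting attention weights as a post-hoc explanation.
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`,
# which are illustrative choices, not the paper's own setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "The movie was surprisingly good."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len). Average the last layer's heads
# to get a rough token-to-token importance map.
last_layer = outputs.attentions[-1].squeeze(0)   # (heads, seq, seq)
avg_attention = last_layer.mean(dim=0)           # (seq, seq)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
# Attention each token receives from the [CLS] position (row 0).
for token, weight in zip(tokens, avg_attention[0]):
    print(f"{token:>12s}  {weight.item():.3f}")
```

Whether such attention maps are faithful explanations, rather than mere visualizations, is exactly the question the survey examines.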


Read the Original

This page is a summary of: On the Explainability of Natural Language Processing Deep Models, ACM Computing Surveys, December 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3529755.
