What is it about?

Paraphasias are speech errors often made by people with aphasia. The Philadelphia Naming Test (PNT) is a "picture-naming test," in which individuals are shown pictures of commonplace objects and asked to name them. Any paraphasias they make are categorized by a clinician. One of those categories is whether the error is semantically similar to the pictured object (e.g., saying "scissors" when the picture is of a ruler). We have software called ParAlg (paraphasia algorithms) that provides tools for categorizing these errors automatically. In this paper, we fine-tune a modern machine-learning language model called BERT to serve as the semantic classifier, replacing the model we originally used (word2vec). With this new model, we reduce incorrect classifications by almost half.
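To give a flavor of how an embedding-based semantic classifier works, here is a minimal sketch in the spirit of the word2vec baseline described above. The word vectors and the similarity threshold are made up for illustration; the real system uses learned embeddings (word2vec) or, in this paper, a fine-tuned BERT model.

```python
# Toy sketch: classify a naming error as "semantically related" or not
# by comparing word embeddings. The 3-dimensional vectors below are
# hypothetical; real embeddings have hundreds of dimensions.
import math

embeddings = {
    "ruler":    [0.9, 0.1, 0.2],
    "scissors": [0.8, 0.2, 0.3],  # desk item, close to "ruler"
    "banana":   [0.1, 0.9, 0.1],  # unrelated concept
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_semantic_error(target, response, threshold=0.5):
    """Label a paraphasia as semantically related if the embeddings
    of the target word and the spoken response are similar enough.
    The threshold here is arbitrary, chosen only for this example."""
    sim = cosine_similarity(embeddings[target], embeddings[response])
    return sim >= threshold

print(is_semantic_error("ruler", "scissors"))  # True  (close in meaning)
print(is_semantic_error("ruler", "banana"))    # False (unrelated)
```

A contextual model like BERT improves on this by taking the surrounding words into account rather than assigning each word a single fixed vector.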


Why is it important?

The PNT provides important information about the speech of a person with aphasia, but it is very time-consuming for a clinician to administer and score. One hope in developing ParAlg is to automate some of that process. This paper improves an important aspect of that automation, the semantic similarity classifier, and brings us closer to establishing ParAlg as a useful tool for assessing the speech of people with aphasia in research and clinical practice.


This paper is really fun because it applies cutting-edge AI (specifically, the language model BERT) to healthcare research. People in the machine learning/AI field are excited about the success of models like BERT, but most of the focus is on applying them to familiar technology products like web search or chatbots. In my opinion, it's exciting to explore how these tools can be used in the sciences, and in this paper we had great success adapting the model to aphasia research.

Research Data Analyst II Alexandra Salem
Oregon Health and Science University Foundation


This page is a summary of: Refining Semantic Similarity of Paraphasias Using a Contextual Language Model, Journal of Speech, Language, and Hearing Research, January 2023, American Speech-Language-Hearing Association (ASHA). DOI: 10.1044/2022_jslhr-22-00277.