What is it about?

Harmful misinformation about COVID-19 spread dramatically during the pandemic. To tackle it, we need computer systems that can sift through vast amounts of text and flag what is false. Many detection techniques have been developed, but they often focus on one type of content or a single platform such as Twitter. Our research evaluated fifteen Transformer models, a family of AI language models, to see how well they spot misinformation across varied sources, including social media, news, and scientific papers. We found that models specialized for COVID-19 misinformation are not much better than general-purpose models. These findings clarify which approaches work best and can guide the development of better tools for combating false information during health crises.
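The comparison described above, scoring several classifiers on test sets drawn from different source types, can be sketched roughly as follows. This is a minimal illustration, not the paper's code: the "models" are simple keyword rules standing in for fine-tuned Transformer classifiers, and the example texts, labels, and model names are invented.

```python
# Minimal sketch of a cross-source evaluation: score several classifiers
# on labeled test sets from different source types and compare per-source
# accuracy. Models and data below are placeholders, not the paper's setup.

def accuracy(preds, labels):
    # Fraction of predictions matching the gold labels.
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Placeholder labeled examples: (text, label), with 1 = misinformation.
test_sets = {
    "social_media": [("5g towers spread covid", 1), ("wash your hands", 0)],
    "news":         [("vaccine is a microchip", 1), ("trials completed", 0)],
    "scientific":   [("bleach cures covid", 1), ("mrna encodes spike", 0)],
}

# Stand-ins for fine-tuned Transformer classifiers (e.g. a general model
# vs. a COVID-specialized one); each maps a text to a 0/1 prediction.
models = {
    "general_bert": lambda t: 1 if ("cures" in t or "microchip" in t) else 0,
    "covid_bert":   lambda t: 1 if any(w in t for w in ("5g", "bleach", "microchip")) else 0,
}

def evaluate(models, test_sets):
    # Returns {model_name: {source: accuracy}} for a cross-source comparison.
    results = {}
    for name, predict in models.items():
        results[name] = {
            source: accuracy([predict(t) for t, _ in data],
                             [l for _, l in data])
            for source, data in test_sets.items()
        }
    return results

results = evaluate(models, test_sets)
for name, scores in results.items():
    print(name, scores)
```

In this toy run the specialized stand-in catches the social-media example that the general one misses, which mirrors the kind of per-source gap the study measured (and found to be small in practice).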

Read the Original

This page is a summary of: Testing the Generalization of Neural Language Models for COVID-19 Misinformation Detection, January 2022, Springer Science + Business Media, DOI: 10.1007/978-3-030-96957-8_33.
