What is it about?

Since the world has shifted online and much of our interaction now happens on social media, we propose a method to make the digital ecosystem safer. We present a growing dataset and build a model that identifies whether a tweet is a threat. The model can also categorize the type of threat as sexist or non-sexist.

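For readers curious about what such a classifier might look like in practice, the sketch below shows a generic transformer fine-tuning setup for labeling tweets. It is not the BREE-HD implementation from the paper: the base model (bert-base-uncased), the three-way label scheme, and the classify helper are illustrative assumptions, and the classification head would need to be fine-tuned on labeled tweets before use.

```python
# Minimal illustrative sketch, not the authors' BREE-HD model.
# Assumptions: bert-base-uncased as the backbone and a three-way label
# scheme (non-threat / non-sexist threat / sexist threat).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["non-threat", "non-sexist threat", "sexist threat"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)  # untrained head; fine-tune on labeled tweets first
)
model.eval()

def classify(tweet: str) -> str:
    """Return the most likely label for a single tweet."""
    inputs = tokenizer(tweet, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("example tweet text goes here"))
```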

Why is it important?

Data are scarce for this issue because threatening tweets and tweets containing derogatory language are often taken down by Twitter or by the user, which limits how they can be used. Our dataset can keep growing through crowdsourced data collection. The other pertinent issue we tackle is the categorization of tweets. Sexist threats are often marked simply as sexist tweets and investigated accordingly; however, a sexist threat demands more serious attention, and our method ensures that such tweets are labeled accurately.

Perspectives

Working on this research has been both a real challenge and an exciting experience. It is our small effort to make online social interaction safer for everyone.

Dr. Sanjay Singh
Manipal Institute of Technology, Manipal

This was a very interesting problem to work on. It was an exciting opportunity to tackle an issue that is important to me and urgently needed in the world today. I hope you find this article thought-provoking as a step towards making digital platforms safer.

Sinchana Kumbale

Read the Original

This page is a summary of: BREE-HD: A Transformer-Based Model to Identify Threats on Twitter, IEEE Access, January 2023, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.1109/access.2023.3291072.
You can read the full text via the DOI above.

Contributors

The following have contributed to this page