What is it about?

Recently, there has been growing research on negative feedback in recommender systems. These studies use a fixed threshold to binarize feedback into positive or negative. However, such an approach has limitations when users differ in how they express disappointment through ratings, or when ratings are noisy. Motivated by the remarkable success of Large Language Models (LLMs), we investigate how an LLM can address this challenge on the fly. To this end, we present ReFINe, Resurrecting Falsely Identified Negative feedback with LLM. ReFINe classifies negative feedback into two distinct types: falsely identified negatives, which carry positive signals, and confirmed negatives, which carry only negative signals. To the best of our knowledge, our work is the first to propose and demonstrate the distinction between these two perspectives on negative feedback. We first leverage an LLM to better separate each user's positive and negative sets, and then implement Re-Weighted BPR, a dedicated Bayesian Personalized Ranking loss function tailored to our perspective on negative feedback. Experimental results show that our model outperforms strong baseline models. The code is available at https://github.com/Chanwoo-Jeong-2000/ReFINe.
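The exact Re-Weighted BPR formulation is not given in this summary, but the idea can be illustrated with a minimal sketch: standard BPR penalizes ranking a negative item above a positive one, and a per-pair weight (here a hypothetical `weight` parameter) lets the model down-weight pairs whose "negative" item the LLM flagged as a falsely identified negative. All names and the weighting scheme below are illustrative assumptions, not the paper's implementation.

```python
import math

def reweighted_bpr_loss(pos_score, neg_score, weight=1.0):
    """Weighted BPR term for one (positive, negative) item pair.

    Standard BPR loss is -log(sigmoid(pos_score - neg_score)).
    The weight scales the term: confirmed negatives keep weight 1.0,
    while falsely identified negatives (per the LLM) get a smaller
    weight so they are pushed down the ranking less aggressively.
    This weighting scheme is an illustrative assumption.
    """
    sigmoid = 1.0 / (1.0 + math.exp(-(pos_score - neg_score)))
    return -weight * math.log(sigmoid)

# Same score gap, but the resurrected (falsely identified) negative
# contributes a smaller loss than the confirmed negative.
confirmed = reweighted_bpr_loss(2.0, -1.0, weight=1.0)
resurrected = reweighted_bpr_loss(2.0, -1.0, weight=0.3)
```

With an identical score gap, the resurrected pair yields a proportionally smaller gradient signal, so items with latent positive signals are not treated as harshly as confirmed negatives.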

Read the Original

This page is a summary of: Leveraging Refined Negative Feedback with LLM for Recommender Systems, May 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3701716.3715538.
