What is it about?
Large language models (LLMs) and LLM-infused applications such as ChatGPT, Copilot, and Gemini can produce convincing yet incorrect outputs, potentially misleading users who may rely on them as if they were correct. We find that when LLMs express uncertainty in natural language, users are less likely to overrely on incorrect outputs. However, perspective matters: uncertainty expressions in the first person (e.g., "I’m not sure, but...") are more effective than expressions from a general perspective (e.g., "There is uncertainty, but...").
Why is it important?
When people overrely on LLMs, the consequences can be disastrous, especially in high-stakes settings. Before implementing approaches for reducing overreliance, however, it is critical to evaluate them carefully with users. Our findings highlight that language choices, such as the use of personal pronouns, influence how people perceive and act upon the outputs of LLMs.
Perspectives
LLMs have improved at an incredible pace and are transforming our everyday lives, both explicitly and behind the scenes. However, relatively few studies have examined how different characteristics and behaviors of LLMs impact users. Understanding this impact is crucial for the responsible development and deployment of LLMs. I hope our work can serve as an example and a guide for such studies.
Sunnie Kim
Read the Original
This page is a summary of: "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust, June 2024, ACM (Association for Computing Machinery).
DOI: 10.1145/3630106.3658941.