What is it about?
This work surveys the growing field of uncertainty estimation in deep neural networks, focusing on how uncertainty estimates can enhance trustworthiness through the reject option. The reject option allows a model to abstain from predicting when it lacks confidence, which is crucial in high-stakes applications such as healthcare or autonomous systems. By reviewing and organizing a wide range of methods, the survey provides a valuable resource for building safer, more reliable AI systems that know when not to answer.
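To make the idea concrete, here is a minimal sketch of the simplest form of the reject option: thresholding the model's softmax confidence and abstaining on low-confidence inputs. This is a generic illustration, not a specific method from the survey; the threshold value and the three-class example are made up for demonstration, and the threshold itself is what trades prediction coverage against risk.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_reject(logits, threshold=0.8):
    """Return the predicted class per input, or None (reject) when
    the top softmax probability falls below the threshold."""
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    prediction = probs.argmax(axis=-1)
    # Abstain on low-confidence inputs so they can be routed to a
    # human expert or a fallback system instead of being answered.
    return [int(p) if c >= threshold else None
            for p, c in zip(prediction, confidence)]

# Example: three inputs, three classes; the second is ambiguous.
logits = np.array([[4.0, 0.5, 0.2],   # confident -> class 0
                   [1.1, 1.0, 0.9],   # uncertain -> reject
                   [0.1, 0.3, 5.0]])  # confident -> class 2
print(predict_with_reject(logits, threshold=0.8))
# -> [0, None, 2]
```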
Why is it important?
It's important because AI systems are increasingly used in critical areas like healthcare, transportation, and finance, where a wrong decision can have serious consequences. A model that is unsure but still forced to make a prediction can cause harmful or costly errors. By teaching AI to recognize and express uncertainty, and even to decline to answer when it is not confident, we make these systems safer, more trustworthy, and more responsible. This builds confidence for both users and developers, especially in real-world scenarios where reliability matters most.
Perspectives
This survey helps unify a scattered field by systematically organizing methods for the reject option. It offers a roadmap for future research aimed at building more accountable and transparent AI systems.
Md Mehedi Hasan
Deakin University
Read the Original
This page is a summary of: Survey on Leveraging Uncertainty Estimation Towards Trustworthy Deep Neural Networks: The Case of Reject Option and Post-training Processing, ACM Computing Surveys, April 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3727633.