What is it about?
When a group of people needs to make a joint decision, such as dividing resources, assigning people to jobs, or choosing a winner, each person is usually asked to state their preferences. But sometimes people lie in order to improve the result for themselves. For example, in an auction, someone might pretend to value an item less than they really do, hoping to pay a lower price.

A widely studied goal is to design systems that guarantee each participant the best possible outcome if they tell the truth. This saves participants time and effort and keeps the process orderly. However, it is formally proven that in many cases it is impossible to design such systems while also satisfying certain fairness properties, for example ensuring that no one envies anyone else.

This paper approaches the problem from a new angle. We assume that people are less likely to lie when lying might harm them, and we look for systems in which lying without risk requires a lot of information about others. Since such information is hard to collect in reality, we argue that the more knowledge is needed to lie safely, the fewer people will actually lie. We therefore propose to rank systems by how many other participants one would need to know (or spy on) in order to lie without any risk of being harmed. Even when we cannot guarantee that truth-telling is always best, we can still identify systems that are more resilient to manipulation. We apply this idea to a wide range of well-known rule-based systems.
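To make the idea of "risk-free lying" concrete, here is a toy illustration (not taken from the paper) using the classic second-price auction, where the winner pays the second-highest bid. In that system, truth-telling is known to be a dominant strategy: no matter what the other bidders do, lying can never make you better off, so you need no information about anyone else to be safe. The small sketch below, with illustrative values, checks this exhaustively on a grid of bids:

```python
import itertools

def second_price_utility(my_bid, my_value, other_bids):
    """Utility in a sealed-bid second-price auction (ties lose)."""
    if my_bid > max(other_bids):
        return my_value - max(other_bids)  # win, pay the second price
    return 0  # lose, pay nothing

# For every combination of the others' bids, no lie ever beats
# bidding your true value -- so lying safely requires zero
# knowledge of the other bidders, but also brings zero gain.
my_value = 5
bids = range(0, 11)
for others in itertools.product(bids, repeat=2):
    truthful = second_price_utility(my_value, my_value, others)
    for lie in bids:
        assert second_price_utility(lie, my_value, others) <= truthful
print("In a second-price auction, no lie ever beats the truth.")
```

Contrast this with a first-price auction, where bidding below your true value can pay off, but doing so without any risk of losing requires knowing the other bids. Systems of that kind demand more knowledge for safe manipulation, which is the sort of distinction the proposed ranking captures.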
Read the Original
This page is a summary of: It’s Not All Black and White: Degree of Truthfulness for Risk-Avoiding Agents, July 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3736252.3742658.