What is it about?

In this paper, we provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce the theoretical framework of the key aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability. To unify the currently available but fragmented approaches toward trustworthy AI, we then organize them into a systematic approach that spans the entire lifecycle of AI systems, from data acquisition through model development, system development and deployment, to continuous monitoring and governance.


Why is it important?

With the widespread application of AI in areas such as transportation, finance, medicine, security, and entertainment, there is rising societal awareness that we need these systems to be trustworthy. This is because the breach of stakeholders’ trust can lead to severe societal consequences given the pervasiveness of these AI systems. Such breaches can range from biased treatment by automated systems in hiring and loan decisions to the loss of human life. By contrast, AI practitioners, including researchers, developers, and decision-makers, have traditionally considered system performance (i.e., accuracy) to be the main metric in their workflows. This metric is far from sufficient to reflect the trustworthiness of AI systems. Various aspects of AI systems beyond system performance should be considered to improve their trustworthiness, including but not limited to their robustness, algorithmic fairness, explainability, and transparency. These facts suggest that a systematic approach is necessary to shift the current AI paradigm toward trustworthiness. This requires awareness and cooperation from multi-disciplinary stakeholders who work on different aspects of trustworthiness and different stages of the system’s lifecycle.

Perspectives

This paper is based on the authors' practical experience in building and operating multiple industrial AI systems. With this paper, we aim to provide practitioners and stakeholders of AI systems not only with a comprehensive introduction to the foundations and future of AI trustworthiness but also with an operational guidebook for constructing trustworthy AI systems.

Bo Li

In this survey, we outlined the key aspects of trustworthiness that we think are essential to AI systems. We further proposed a systematic approach to consider these aspects in the entire lifecycle of real-world AI systems. We recognize that fully adopting this systematic approach requires a shift of focus from performance-driven AI to trust-driven AI. We encourage practitioners to focus on the long-term benefits of gaining the trust of all stakeholders for the sustained use and development of these systems.

Bowen Zhou
Tsinghua University

Read the Original

This page is a summary of: Trustworthy AI: From Principles to Practices, ACM Computing Surveys, January 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3555803.
