What is it about?
Using AI effectively is essential for user companies to remain competitive, yet studies show that many companies are hesitant in this regard. Based on the assumption that people’s ability to act is influenced by a lack of trust, particularly in the context of AI, we conducted a study as part of the TrustKI research project to analyze which factors are relevant to documenting trustworthiness in the context of AI. Our evaluation revealed that users demand holistic transparency: providing relevant information about the AI solution and proof of technical expertise is not sufficient to build trust; users also demand specific information about the respective company. Building on the generally recognized components, we were able to identify further dimensions that allow the required information to be provided even more precisely. The study thus allows us to propose a preliminary set of information requirements for AI providers.
Featured Image
Photo by Kelly Sikkema on Unsplash
Why is it important?
Our findings show that trustworthiness must be addressed as a fundamental element in dealing with the complexity caused by digital transformation and, in particular, by AI. Specifically, we discovered that user companies demand holistic transparency in order to be able to trust an AI provider.
Perspectives
The process of writing this article was very productive, especially the in-depth exploration of the interdependence between trust and trustworthiness, which proved to be a very intriguing subject.
Ulla Coester
Westphalian University of Applied Sciences
Read the Original
This page is a summary of: Trustworthiness needs for the use of AI solutions in business, Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, December 2025, John Benjamins, DOI: 10.1075/is.24051.coe.
You can read the full text:
Contributors
The following have contributed to this page