What is it about?

This study examines current practices, challenges, and lessons in auditing AI systems for bias, with a focus on legal compliance in the US and the EU. It highlights the need for standardized methodologies to ensure AI systems are trustworthy and meet ethical and regulatory standards. The research finds that AI systems are prone to biases arising from data, algorithms, and human oversight, and that existing audits lack standardization, resulting in inconsistent reports. The EU's risk-based approach offers a comprehensive framework, but its effectiveness depends on practical standards and consistent application. The study emphasizes socio-technical perspectives, stakeholder engagement, and robust auditing frameworks as essential to ensuring fairness and mitigating biases, particularly for marginalized groups, and it calls for future research on the effectiveness of audits, standardized methodologies, and automated audit tools to advance equitable AI systems.

This research matters because it addresses the growing concern about bias in artificial intelligence (AI) systems and compares approaches to auditing these systems. By focusing on legal compliance audits in the United States and the European Union, the study shows why standardized methodologies are needed to keep AI systems trustworthy, ethical, and in line with regulatory expectations. The findings contribute to the development of equitable AI systems, which is crucial for promoting social justice, preventing discriminatory outcomes, and closing socioeconomic gaps, particularly for marginalized groups. The research also provides actionable insights for firms, regulators, and auditors, emphasizing the importance of robust governance and risk assessment practices.

Key Takeaways:

1. Standardization needs: The study emphasizes the necessity of standardized methodologies in AI bias auditing to ensure consistent, trustworthy outcomes that align with ethical and regulatory expectations.
2. Legal compliance: The research compares US and EU approaches to AI bias audits, revealing the strengths and limitations of each. The EU's risk-based conformity assessment framework offers a comprehensive approach, but its effectiveness depends on developing practical standards and applying them consistently.
3. Socioeconomic impact: Effective AI bias auditing can help identify and mitigate biases that disproportionately affect marginalized groups, promoting social justice and closing socioeconomic gaps in critical domains such as employment, healthcare, and financial services.

Read the Original

This page is a summary of: Artificial intelligence bias auditing – current approaches, challenges and lessons from practice, Review of Accounting and Finance, March 2025, Emerald. DOI: 10.1108/raf-01-2025-0006.
