What is it about?
This page presents Chapter 2 of “Governing Artificial Intelligence,” in which Chris Draper examines how the artificial intelligence (AI) industry defines and approaches “AI safety,” highlighting the gap between technical reliability and the broader social risks posed by AI systems. Industry leaders such as OpenAI and Anthropic often equate safety with system performance: ensuring models are robust, interpretable, and reliable. OpenAI emphasizes controls such as age gating, privacy protections, and factual accuracy, but internal disputes have raised questions about how consistently these ideals are enforced. Anthropic’s Claude model illustrates the prevailing mindset through “Constitutional AI,” which integrates ethical principles into the model so it can evaluate and adjust its own responses. This approach seeks to reduce harmful or biased outputs without constant human oversight, but it risks lulling users into complacency, much as drivers over-rely on Tesla’s “Autopilot.”
The chapter warns that AI’s ability to act faster than humans can supervise creates dangers beyond technical malfunctions. Draper argues that genuine safety must extend beyond platform reliability to encompass the broader societal impacts of AI deployment, and he presents the Draper Curve as a practical threshold for determining when an AI augmentation is ethical. Without such measures, human oversight will erode as AI systems scale, and the public will face growing hazards that current definitions of safety fail to address.
To order Governing Artificial Intelligence (Wing, Draper, Cooper, Rainey, 2025): https://brill.com/display/title/72670?language=en&srsltid=ARcRdnrezQPkRc-pcBXbFi3-n8rFCFaLXjcatm4AAb3NXdMNA_1AfyQI
Featured Image
Photo by Sajad Nori on Unsplash
Read the Original
This page is a summary of: How the AI Industry Views AI Safety, September 2025, De Gruyter, DOI: 10.1163/9789004737389_004.
Contributors
The following have contributed to this page