What is it about?
AI tools built for civilian purposes often end up being used in harmful or unintended ways, including in military conflicts. This raises a key question: who is morally responsible? The article applies the concept of reasonable foreseeability, the idea that you're responsible not just for what you intended but also for what you could have reasonably predicted. AI developers can't claim innocence simply because they didn't design their systems for harmful uses. If those uses were foreseeable, moral responsibility remains. The article calls on developers to actively assess the full range of ways their tools might be used, not just the intended ones.
Why is it important?
As AI becomes more powerful and flexible, the gap between "designed purpose" and "actual use" will only grow. Without a clear framework for responsibility, developers can hide behind good intentions while their technologies cause real harm. Grounding AI ethics in reasonable foreseeability fills that gap — it's a principle already trusted in law and moral philosophy, and it translates naturally to AI. Establishing this now, before harmful uses become even more widespread, could shape how the industry governs itself and how regulators hold it accountable.
Read the Original
This page is a summary of: Multi-Use AI and Moral Responsibility, Communications of the ACM, February 2026, ACM (Association for Computing Machinery), DOI: 10.1145/3764385.
You can read the full text via the DOI above.