What is it about?
The challenge in deploying autonomous weapon systems (AWS) is not that they can kill people and break things, but ensuring that they kill only the right people and break only the right things. In this paper, we use a hypothetical recently discussed at a military AI conference as a springboard to introduce important dimensions of the artificial intelligence (AI) ‘Alignment Problem’ into the discourse concerning AWS. We consider why it is difficult to specify ‘smart’ goals for autonomous systems, why ‘intelligent’ systems can pursue ‘dumb’ goals, and what the implications of this are for the legal assurance of AWS. We begin with some preliminary remarks about what we mean by ‘intelligence’ and ‘intelligent agents.’ We then outline the Alignment Problem at a conceptual level, introducing the concepts of objective functions and rewards. We then explore what the Alignment Problem implies for AWS testing, and why apparently simple solutions may not be effective. From there, we discuss the implications the Alignment Problem has for international law applicable to AWS, addressing the legal obligations of states to respect and ensure respect for international humanitarian law (IHL) and international human rights law (IHRL).
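To make the idea of objective functions and rewards more concrete, here is a minimal illustrative sketch (not drawn from the paper; all names and numbers are hypothetical) of how a misspecified proxy objective can diverge from a designer's intent: the proxy rewards measurable activity, while the intended objective penalises harm the proxy never sees.

```python
# Illustrative sketch of reward misspecification (hypothetical values only).
from dataclasses import dataclass

@dataclass
class Outcome:
    targets_neutralised: int  # what the designer actually cares about
    civilians_harmed: int     # constraint the proxy objective omits
    shots_fired: int          # what the proxy objective happens to measure

def proxy_reward(o: Outcome) -> float:
    # Misspecified objective: rewards activity, ignores harm.
    return o.shots_fired

def intended_value(o: Outcome) -> float:
    # What the designer meant: effectiveness minus a heavy penalty for harm.
    return o.targets_neutralised - 10 * o.civilians_harmed

candidates = [
    Outcome(targets_neutralised=3, civilians_harmed=0, shots_fired=3),
    Outcome(targets_neutralised=1, civilians_harmed=2, shots_fired=40),
]

best_by_proxy = max(candidates, key=proxy_reward)
best_intended = max(candidates, key=intended_value)
print(best_by_proxy == best_intended)  # False: optimising the proxy selects the wrong behaviour
```

The point of the sketch is only that an agent optimising the proxy perfectly can still pursue a ‘dumb’ goal, because the objective it was given is not the objective its designers intended.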
Read the Original
This page is a summary of: Autonomous Weapons Systems and the ai Alignment Problem, Journal of International Humanitarian Legal Studies, April 2025, De Gruyter,
DOI: 10.1163/18781527-bja10107.