KEY LEARNINGS
- Lethal Autonomous Weapons Systems (LAWS) are designed to select and engage targets without human intervention, effectively delegating life-or-death decisions to algorithms.
- Military autonomy exists on a spectrum: 'human-in-the-loop' (a human decides each engagement), 'human-on-the-loop' (a human supervises and can veto), and 'human-out-of-the-loop' (full autonomy); a minimal sketch of these modes follows this list.
- The 'accountability gap' presents a major legal challenge: if an autonomous weapon commits a war crime, it is unclear whether the programmer, commander, or manufacturer is responsible.
- Strategic risks include 'flash wars,' in which interacting autonomous systems escalate a conflict at machine speed, faster than human diplomats can intervene to de-escalate it.
- Because the underlying technology is dual-use, commercial AI advances in computer vision and navigation transfer readily to military applications.
- 🌐 Human Rights Watch: Stopping Killer Robots. Campaign resources for an autonomous weapons ban.
- 📄 ICRC Position on Autonomous Weapon Systems. Position statement from the International Committee of the Red Cross.
- 📄 Department of Defense Directive 3000.09. US DoD policy on autonomy in weapon systems.
- United Nations Security Council. (2021). Final report of the Panel of Experts on Libya.
- Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W.W. Norton & Company.
- Horowitz, M.C., et al. (2019). A Stable Nuclear Future? The Impact of Autonomous Systems. arXiv.
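
The difference between the three autonomy modes is easiest to see as a question of defaults in the control loop. The sketch below is purely illustrative: `AutonomyMode` and `engagement_permitted` are hypothetical names, and the two booleans stand in for whatever operator interface a real system would have. Its only point is structural: in-the-loop is default-deny, on-the-loop is default-allow with a veto window, and out-of-the-loop has no human checkpoint at all.

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must approve each engagement
    HUMAN_ON_THE_LOOP = auto()      # system proceeds unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human checkpoint at all

def engagement_permitted(mode: AutonomyMode,
                         human_approved: bool,
                         human_vetoed: bool) -> bool:
    """Return whether the control loop allows an action to proceed.

    `human_approved` and `human_vetoed` are hypothetical stand-ins for
    a supervising operator's interface; this is a conceptual sketch,
    not a description of any real system.
    """
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        # Default-deny: nothing proceeds without an explicit human decision.
        return human_approved
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        # Default-allow: the human can only interrupt, so supervision
        # degrades when events outpace human reaction time.
        return not human_vetoed
    # HUMAN_OUT_OF_THE_LOOP: no checkpoint exists in the loop.
    return True

if __name__ == "__main__":
    # Simulate a human who does nothing: neither approves nor vetoes.
    for mode in AutonomyMode:
        print(mode.name, engagement_permitted(mode,
                                              human_approved=False,
                                              human_vetoed=False))
```

With no human input at all, only the in-the-loop mode blocks the action; the other two proceed by default. That default-allow behavior is why the length of the veto window relative to human reaction time matters so much in the 'flash war' scenario above.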





