The Ethics of Autonomous Weapons Systems
As autonomous weapons systems advance, we must consider the moral consequences of removing human judgment from decisions about the use of force.

Drones have proved to be a transformative innovation for 21st-century warfare. Ukraine’s recent strikes on Russian cruise missile carrier aircraft are only the latest of many examples of their utility. The use of drones in warfare has developed significantly since 2014, when cheap first-person view (FPV) drones were used extensively during Russia’s invasion of Crimea and the conflict in eastern Ukraine. Both Ukrainian forces and pro-Russian separatists relied heavily on drones in the later stages of the 2014 fighting, primarily for reconnaissance; their reliability and relatively low cost made them useful. Later, during the 2016 Battle of Mosul, ISIS used drones both for surveillance and as improvised kamikaze weapons.

Given the clear advantages of drone use, the U.S., among other countries, sees autonomous systems as the next step in expanding the role of unmanned systems in combat. As the Department of Defense described in 2024, 70 percent of the Defense Advanced Research Projects Agency’s programs involve artificial intelligence in one way or another, many through the development of unmanned autonomous systems. If developed and deployed at scale, autonomous weapons could permanently alter how countries approach warfare. Although both are unmanned, there are key differences between a drone with a human operator and one with an autonomous system: an autonomous system can maneuver, detect targets, and make combat decisions all without human intervention. This poses new ethical challenges for the existing structure of warfare.
The Defense Innovation Portal, a platform for submitting proposals to the Department of Defense, shows the U.S. military’s push toward autonomous systems. Requests by the U.S. Army for artificial intelligence-based autonomous systems have become increasingly prevalent there, reflecting the programs and doctrine the Army is pursuing. The Army recently allocated $13.2 billion to its Next Generation Combat Vehicle (NGCV) program, which centers on an optionally manned autonomous robotic system. The NGCV is planned to replace the Bradley Fighting Vehicle and “bring transformative flexibility and lethality to the battlefield,” according to the U.S. Army.
Part of the logic behind the Army acquiring autonomous systems is that they would fix many of the issues that plague human-operated drones. Autonomous systems would cut down significantly on the number of personnel required and allow infantry to focus on their already significant tactical responsibilities. Artificial intelligence is also the next step in the arms race over signal jamming, in which combatants disrupt or tamper with the signal relayed to drones and similar devices. Jamming guns emit strong electromagnetic interference to break the link between a drone and its operator, which usually crashes the drone or forces it to return to the operator or a preset location. In Ukraine, both the Ukrainian and Russian militaries use this tactic against FPV and other drones, and it appears in other conflicts around the world as well.
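To make the mechanics concrete, here is a minimal Python sketch of the failsafe logic a remotely piloted drone typically follows when jamming severs its control link; the behavior names and the two-second timeout are illustrative assumptions, not the logic of any particular system.

```python
from dataclasses import dataclass

LINK_LOSS_TIMEOUT_S = 2.0  # assumed failsafe timeout, purely illustrative


@dataclass
class LinkStatus:
    seconds_since_last_packet: float


def failsafe_action(link: LinkStatus, home_point_known: bool) -> str:
    """Decide what a remotely piloted drone does once jamming severs its link."""
    if link.seconds_since_last_packet < LINK_LOSS_TIMEOUT_S:
        return "continue mission under operator control"
    # Link lost: with no operator in contact, the drone can only fall back
    # to a preset behavior, which is exactly the weakness jamming exploits.
    if home_point_known:
        return "return to launch point"
    return "hold position and attempt to land (often ends in a crash)"


if __name__ == "__main__":
    print(failsafe_action(LinkStatus(0.3), home_point_known=True))
    print(failsafe_action(LinkStatus(5.0), home_point_known=True))
    print(failsafe_action(LinkStatus(5.0), home_point_known=False))
```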
Autonomous systems with artificial intelligence can circumvent these issues. Where jamming is employed, an AI-driven drone does not need that control link: it can make decisions on its own and return to friendly territory once the mission is complete. The control signal is also what combatants typically use to track and identify drones operating in an area; once the signal is locked onto, the locations of both the drone and its operator are revealed, making them targets. Artificial intelligence surmounts that problem as well. If autonomous systems used the link only sparingly, to relay mission changes, ammunition counts, and other details, rather than relying on constant human-guided decision making, the stealth of drones would increase dramatically.
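As a rough back-of-envelope illustration of why sparse link use matters, the Python sketch below compares how often a continuously piloted drone and an autonomous one would transmit over a 30-minute mission; the figures (a 50-packets-per-second control and video stream versus one brief status burst every two minutes) are assumptions chosen only to show the order-of-magnitude difference in detectable emissions.

```python
MISSION_MINUTES = 30

# Assumed emission patterns, illustrative only.
PILOTED_PACKETS_PER_SECOND = 50                        # continuous control and video downlink
AUTONOMOUS_BURSTS_PER_MISSION = MISSION_MINUTES // 2   # one brief status burst every ~2 minutes

piloted_transmissions = PILOTED_PACKETS_PER_SECOND * 60 * MISSION_MINUTES
autonomous_transmissions = AUTONOMOUS_BURSTS_PER_MISSION

print(f"Remotely piloted drone: ~{piloted_transmissions:,} transmissions")
print(f"Autonomous drone:       ~{autonomous_transmissions:,} transmissions")
print(f"Reduction factor:       ~{piloted_transmissions / autonomous_transmissions:,.0f}x")
```

Under these assumptions the autonomous drone keys its radio thousands of times less often, giving defenders far fewer chances to detect and localize it or its operator.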
Finally, autonomous drone swarms could contribute significantly to breaking anti-access/area denial (A2/AD), a layered defense posture used to keep enemy forces out of an area. In naval combat, layered defense works by creating several rings of interception: a cruise missile launched at a carrier strike group must get past long-range SM-6s, F/A-18s, and point defense systems before reaching its target. An autonomous drone swarm would upend layered defense through sheer numbers. Instead of a single expensive missile or drone such as a Tomahawk or MQ-9, a swarm of cheap, artificial intelligence-guided drones, with heavy losses built into the plan, could saturate and breach those layers. This is only possible with autonomy, since no force relying on a numbers advantage can assign a human operator to every drone in a swarm of that size.
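A deliberately simplified saturation model illustrates the numbers argument. Assume, purely for illustration, three defensive layers with finite interceptor magazines and a guaranteed kill per interceptor: a lone missile is always stopped, but a large swarm exhausts the magazines and gets through. The Python sketch below uses made-up magazine depths.

```python
def survivors(attackers: int, magazines: list[int]) -> int:
    """Attackers remaining after each defensive layer fires its full magazine.

    Simplifying assumption: every interceptor fired destroys exactly one attacker.
    """
    remaining = attackers
    for magazine in magazines:
        remaining = max(0, remaining - magazine)
    return remaining


# Hypothetical magazine depths for three layers: long-range missiles,
# fighter-launched weapons, and point defense. The numbers are assumptions.
MAGAZINES = [40, 30, 20]

print("Leakers from a single cruise missile:", survivors(1, MAGAZINES))    # 0
print("Leakers from a 200-drone swarm:      ", survivors(200, MAGAZINES))  # 110
```

The exact figures are beside the point; the asymmetry is what matters, because adding cheap attackers to a swarm is far easier than adding interceptors to every defensive layer.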
From a purely tactical point of view, pushing for full autonomy is sound strategy, but the lack of human monitoring of autonomous weapon systems is a point of contention. The debate centers on the ethical principle of the “human in the loop,” which concerns how large a role human decision-making should play in an autonomous system’s path to an attack. The current approach, which requires a human operator to give the green light before a drone can engage, places more responsibility on people but also reduces the efficiency of these systems. A fully autonomous system could have practical combat advantages, but it raises ethical questions regarding possible violations of the rules of engagement, which could constitute war crimes. While fully autonomous systems could help America win the wars of the future, humans should still give the final go-ahead before these systems engage.
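To make the “human in the loop” principle concrete, the following hypothetical Python sketch shows an engagement gate in which the system may detect, track, and classify targets autonomously, but weapon release still waits for an explicit human go-ahead; the structure and names are illustrative, not a description of any fielded system.

```python
from dataclasses import dataclass


@dataclass
class Target:
    track_id: str
    classification: str   # produced autonomously, e.g. "armored vehicle"
    confidence: float


def request_human_authorization(target: Target) -> bool:
    """Stand-in for the operator's decision; in a real system this is a person."""
    answer = input(f"Engage {target.classification} ({target.track_id}, "
                   f"confidence {target.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def engage(target: Target, human_in_the_loop: bool = True) -> str:
    # Detection, tracking, and classification can all happen autonomously...
    if human_in_the_loop and not request_human_authorization(target):
        # ...but weapon release is withheld without an explicit human go-ahead.
        return f"Held fire on {target.track_id}: no human authorization."
    return f"Engaged {target.track_id}."


if __name__ == "__main__":
    print(engage(Target("T-041", "armored vehicle", 0.87)))
```

Removing the single authorization check is all it takes, in this toy model, to turn a supervised system into a fully autonomous one, which is why where that check sits, and whether it exists at all, is the heart of the ethical debate.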