Autonomy or Accountability?
Conversations about the legal, ethical, and technical implications of AI weapons have been raised at the UN for years. Recently, however, when an autonomous drone hunted down human targets in Libya, the UN was at a loss as to what to do. This atrocious lack of accountability can and will quickly snowball into chaos, and until a way to hold perpetrators accountable is agreed upon, the use of LAWS should be suspended.
The level of “human touch,” of human oversight and control, in automated weapons like drones has been extensively debated by international bodies such as the United Nations (UN) and by some of the world’s most prominent engineers and scientists, including Elon Musk and Stephen Hawking. It wasn’t until last year, when the autonomous drone Kargu-2 was reported to have hunted down and targeted retreating Libyan soldiers, that experts’ worries became a reality. In a rapid sequence of events that sounds like the plot of a ’90s sci-fi novel, the UN was at a loss as to who to blame for the attack and said that the drone had “a mind of its own.” Lethal autonomous weapons systems (LAWS) are intended for use in areas with few civilians or little civilian property, where they supposedly allow for more accurate strikes and a smaller risk of collateral damage. The Kargu-2, however, is different: it was the first LAWS to attempt to kill someone of its own accord, spelling a frightening future for warfare and accountability.
Many LAWS developers have recently begun resorting to a method called swarming. Swarm intelligence has existed in the natural world for millions of years; the behaviors of flocks of birds, schools of fish, colonies of ants, and swarms of bees are some examples. These behaviors share three key features: communication between individuals, the absence of a leader, and a common set of simple rules. Swarm intelligence in the context of weaponry works the same way: the drones follow a set of rules dictated by the programmer. But just as birds in a flock can identify a threat and change their flight accordingly, so can drones. “Communication” between the drones lets them carry out complex threat analyses of their surroundings and act on them while signaling the threat to their fellow drones. The beauty and complexity of swarming is that there is no leader. As Sean A. Williams of the U.S. Air Force says, drones can accomplish “all of this without a leader by using a local and decentralized network of communication where each [drone] is only communicating with his neighbor.” These swarms are fully adaptable, allowing them to make decisions on the fly (no pun intended) and “bounce” ideas off of each other while doing so.
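To make the idea of leaderless, neighbor-only coordination concrete, here is a minimal boids-style sketch in Python. It is purely illustrative and not a description of any real drone system: the Agent class, the neighbor radius, and the weighting constants are assumptions chosen for the example. Each agent reads only its nearby neighbors and applies the same three local rules (cohesion, alignment, separation), yet the group stays together and a reaction to a “threat” spreads through the flock without any central controller.

```python
import math
import random
from dataclasses import dataclass


@dataclass
class Agent:
    x: float
    y: float
    vx: float
    vy: float


def neighbors(agent, swarm, radius=5.0):
    """Each agent only 'communicates' with the agents inside a local radius."""
    return [o for o in swarm
            if o is not agent and math.hypot(o.x - agent.x, o.y - agent.y) < radius]


def step(swarm, threat=None, dt=0.1):
    """One update: every agent applies the same local rules; there is no leader."""
    for a in swarm:
        near = neighbors(a, swarm)
        if near:
            # Cohesion: drift toward the average position of local neighbors.
            cx = sum(o.x for o in near) / len(near)
            cy = sum(o.y for o in near) / len(near)
            a.vx += 0.01 * (cx - a.x)
            a.vy += 0.01 * (cy - a.y)
            # Alignment: match the average heading of local neighbors.
            a.vx += 0.05 * (sum(o.vx for o in near) / len(near) - a.vx)
            a.vy += 0.05 * (sum(o.vy for o in near) / len(near) - a.vy)
            # Separation: steer away from neighbors that get too close.
            for o in near:
                d = math.hypot(o.x - a.x, o.y - a.y) or 1e-6
                if d < 1.5:
                    a.vx += 0.05 * (a.x - o.x) / d
                    a.vy += 0.05 * (a.y - o.y) / d
        # Threat response: an agent that perceives the threat steers away from it;
        # the reaction then propagates through the flock via the alignment rule.
        if threat is not None:
            d = math.hypot(threat[0] - a.x, threat[1] - a.y) or 1e-6
            if d < 8.0:
                a.vx += 0.2 * (a.x - threat[0]) / d
                a.vy += 0.2 * (a.y - threat[1]) / d
    for a in swarm:
        a.x += a.vx * dt
        a.y += a.vy * dt


# A 20-agent flock reacting to a single "threat" at (5, 5) over 100 steps.
swarm = [Agent(random.uniform(0, 10), random.uniform(0, 10),
               random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(100):
    step(swarm, threat=(5.0, 5.0))
```

Nothing in this loop designates a leader; the apparent group decision emerges entirely from repeated local updates, which is precisely what makes a swarm both resilient and difficult to attribute decisions to.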
Many leading engineers and scientists believe that these drones are incapable of distinguishing between soldiers and civilians. The UN Security Council, for example, concedes that these drones cannot distinguish between aid packages branded by the United Nations International Children’s Emergency Fund (UNICEF) and packages branded with the Turkish flag. Bombing aid offered by humanitarian organizations would clearly endanger innocent civilians, not to mention the money lost if such a mistake were made. Supporters defend swarm intelligence by saying it makes fewer mistakes than a human soldier, but there is still no adequate risk analysis to back that claim. Without sufficient statistical evidence for the reliability of autonomous drones, the risk is simply not worth the possible reward of reduced civilian casualties.
NATO has recently acknowledged that regulations on LAWS need to be drafted and revised. Key provisions should include accountability when a drone attack goes wrong, checks on how much autonomy a drone has, and, in the context of swarming, how extensive and binding the programmer’s set of rules is. The use of LAWS is not remotely new; Libya’s incident, however, was the first time the world witnessed a lack of accountability, and a possible innocent casualty, resulting from human absence. The “what if” here is very real: wars are becoming more frequent, and many are waged with disregard for civilians, as the war between Russia and Ukraine shows. The possibility that erroneous LAWS pose lethal threats to innocents is not something to be taken lightly.
The clearest way to prevent this problem is to reintroduce human supervision. Stationing soldiers who understand how the drone functions would create a safety net if drones began to target civilians or their decisions began to deviate from initial commands (a rough sketch of what such a supervisory check could look like appears at the end of this piece). However, this raises the question of whether that soldier would be fully responsible for any mishap. The answer isn’t cut and dried; it leads us to look for precedents in which AI technologies were the subject of prosecution, and to ask who exactly was held liable. Sanjay Srivastava, Chief Digital Officer at the professional services firm Genpact, has even admitted that “if you use AI, you cannot separate yourself from the liability or the consequences of those uses.” By this logic, the user of LAWS should stand trial if a calamity occurs. With drones, however, it’s trickier because of the nature of the military: soldiers who control the drones could simply argue that they were following orders from higher-ups, echoing the defenses offered for atrocities of the past, from the Holocaust to the Vietnam War. This ambiguity of accountability is reason enough to outlaw autonomous drones in warfare until responsibility can be clearly traced back to the group that gave the order. Unfortunately, the public has been very quiet about these recent atrocities, and simply spreading awareness of the topic online would help draw attention and spur broader public outrage. That pressure would ensure that LAWS restrictions aren’t put on the back burner in UN conferences and in the greater political and military scheme.
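As a thought experiment only, here is a minimal sketch of what “reintroducing human supervision” could look like in software. Every name in it is hypothetical and invented for illustration (the Decision record, the confidence threshold, the request_human_confirmation hook): the point is simply that the system refuses to act on low-confidence or non-combatant classifications, requires a named operator’s sign-off for everything else, and logs who approved what so responsibility can be traced afterwards.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("engagement_audit")

# Hypothetical bar: below this confidence the system must not act at all.
CONFIDENCE_THRESHOLD = 0.99


@dataclass
class Decision:
    target_id: str
    label: str          # e.g. "combatant" or "civilian"
    confidence: float   # classifier confidence in [0, 1]


def request_human_confirmation(decision: Decision, operator: str) -> bool:
    """Placeholder for a real operator console; here it conservatively denies."""
    log.info("Deferring target %s to operator %s", decision.target_id, operator)
    return False


def authorize_engagement(decision: Decision, operator: str) -> bool:
    """Never engage autonomously: low-confidence or non-combatant calls are
    refused outright, everything else needs a named operator's sign-off, and
    every decision is logged so responsibility can be traced afterwards."""
    if decision.label != "combatant" or decision.confidence < CONFIDENCE_THRESHOLD:
        log.info("Refused: target=%s label=%s conf=%.2f",
                 decision.target_id, decision.label, decision.confidence)
        return False
    approved = request_human_confirmation(decision, operator)
    log.info("target=%s operator=%s approved=%s",
             decision.target_id, operator, approved)
    return approved


# Example: a borderline classification never gets past the human gate.
print(authorize_engagement(Decision("T-01", "combatant", 0.87), operator="Sgt. Doe"))
```

The design choice worth noticing is the audit log tied to a named operator: it is the software equivalent of the traceback this piece argues must exist before LAWS are allowed back into use.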