Killer Robots Are Real, And We Need To Ban Them

Photo by Thomas Galler on Unsplash

Technology is developing more rapidly than it ever has before. As we look to the future, developments like slimmer iPhones, safer cars, and space exploration are certainly exciting. However, there is some cause for concern. Technology in the military is developing at a rapid rate as well. Those “killer robots” that you see in science fiction movies? They’re very much real. 

For example, the KARGU autonomous drone uses facial recognition technology and carries explosives that it can deploy without a human operator. An Israeli loitering munition – the IAI Harop – seeks out hostile radar signals and self-destructs into their source. Lethal autonomous weapons, or “killer robots,” use AI to select and engage targets on their own. And many are asking: what will be the consequence of these weapons becoming more developed and more prevalent?

The United Nations has expressed deep concern over lethal autonomous weapons. When reviewing the UN’s priorities for the upcoming year, Secretary-General António Guterres called for “a total ban on lethal autonomous weapons, the most dangerous dimension that artificial intelligence can bring to the future of war.” 

Why is Guterres so worried? He isn’t concerned about a Terminator-esque scenario, as some might expect. While it’s unlikely that lethal autonomous weapons would turn against humanity, they still pose dreadful possibilities. And those possibilities will become reality if we do not enact a ban now. 

Studies have found that lethal autonomous weapons are likely to be cheaper than human soldiers. Because of this, countries are expected to replace their troops with these weapons, so that in future wars states lose machinery instead of lives. Advocates of lethal autonomous weapons often cite this as a benefit worth pursuing. If we have an opportunity to keep human soldiers out of war, we should take it, right?

Absolutely not. When there is nothing to lose and everything to gain, states will be more willing to go to war. German political scientists Sauer and Schörnig argue that war will become more frequent if states have access to lethal autonomous weapons. This is especially problematic when less developed countries lack the access to these weapons that major powers enjoy, and therefore cannot retaliate at the same level. As major powers dominate with lethal autonomous weapons, inequality in warfare will grow. 

And with more war happening, there will be more bloodshed. While there may be fewer human soldiers on the battlefield, lethal autonomous weapons will mercilessly cause harm to civilians. 

How so? In warfare, international humanitarian law protects civilians through the rules of distinction and proportionality. Distinction “permits direct attacks only against the armed forces of the parties to the conflict, while the peaceful civilian population must be spared and protected against the effects of the hostilities.” (Melzer) And proportionality “prohibits attacks in which expected civilian harm outweighs anticipated military advantage.” (Human Rights Watch)

First, lethal autonomous weapons violate the principle of distinction. Artificial intelligence is incapable of gauging human intention, according to Human Rights Watch. As a result, lethal autonomous weapons will not be able to distinguish between a soldier and a civilian. 

Second, lethal autonomous weapons violate the rule of proportionality. Proportionality must be assessed on a “case-by-case basis.” (Human Rights Watch) Restricted to its programming, a lethal autonomous weapon cannot respond to each situation with the judgment a human can. When it needs to make a literal life-and-death decision, it will likely make the wrong one. 

Because they violate both distinction and proportionality, lethal autonomous weapons are bound to unlawfully kill civilians on a mass scale. 

Furthermore, as stated by UPenn Law, there will be no one to hold accountable when a lethal autonomous weapon commits these violations of international humanitarian law. Lethal autonomous weapons make their own decisions, and – quite obviously – you can’t prosecute a machine. Such impunity is deeply unjust to victims.

There is also the issue of compassion: lethal autonomous weapons cannot have it. Compassion is unique to humans, and AI simply cannot replicate it. Vox recounts an incident in which it was legal for soldiers to kill a six-year-old girl. The human soldiers faced with that decision understood the universal intuition that it is wrong to kill children. A lethal autonomous weapon, harboring no compassion, would not make the decision that way. 

Because lethal autonomous weapons lack compassion, they can become effective tools of oppression for authoritarian regimes and dictators. By allowing this technology to develop, we are opening yet another avenue for oppression. 

It is impossible to replace humans with AI without dire consequences, and we cannot allow for any more development of these weapons. If we want to protect our future, we have to ensure that this method of warfare – death by computer – isn’t one we have to experience. 
