It is quite possible that
the world's biggest powers, the United States, Russia and China, will
engage in open war in the near future. The war would be fought
ferociously in the air, at sea and on land with the most advanced weapons ever
developed. The scale could be epic; we might even call it the Third World War.
Everything would be at stake, except human lives. The soldiers would most likely be
killer robots programmed to destroy the enemy's resources until it surrenders.
There would be destruction, there would be catastrophe, and there would be victory
and defeat. But there would be no blood. Human casualties could be close to
zero, yet the war would still bring the losing nation to its knees.
So here's the question of
the future: are autonomous weapons (weapons controlled not by humans but
by artificial intelligence systems) a good thing?
What if nations agreed upon
a new code of military engagement under which no humans would be targeted?
Suppose nations deploying killer robots, or autonomous weapons,
followed this code in the same way that they have refrained from using
nuclear weapons since the middle of the last century. In that case, would it
not be safe to say that human casualties from war would fall
significantly? One opposing view is that we may not be able to restrain ourselves
during military conflict: once autonomous weapons are allowed to target humans,
nations might act beyond the rules of engagement and cause widespread casualties.
Another opposing view is that the threshold for going to war would drop
sharply if the human cost of war appears lower, making a Third
World War likely to be fought sooner.
On the other hand, a
supporting view is that artificial intelligence will also be used in military
strategy in ways that deter nations from war. Just as the computer
Deep Blue could analyze its opponent's possible moves in chess and beat
the best players on Earth, AI could help nations predict the outcome of a war
before engaging in it. The battle of the future could be won or lost
before it is ever fought. In other words, there would be no war, just plain
surrender.
The debate is intensifying
as artificial intelligence takes shape in laboratories around the
world. Limited forms of artificial intelligence are already in use by
intelligence agencies such as the CIA and MI5, and their capabilities are
rising exponentially. The question is no longer 'if' but 'when'.
Killer robots, the cyborgs of past blockbuster movies, could become
real in the near future. We may be divided on whether this will help us or
hurt us, but the community at large is already reacting to it.
On July 27th, 2015, an open letter was presented at the International
Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, calling
for a "ban on offensive autonomous weapons." More than 1,000 experts and
leading robotics researchers signed it. I'll end this article with the
letter in full:
Autonomous Weapons:
an Open Letter from AI & Robotics Researchers
Autonomous weapons select and engage targets without human intervention.
They might include, for example, armed quadcopters that can search for and
eliminate people meeting certain pre-defined criteria, but do not include
cruise missiles or remotely piloted drones for which humans make all targeting
decisions. Artificial Intelligence (AI) technology has reached a point where
the deployment of such systems is — practically if not legally — feasible
within years, not decades, and the stakes are high: autonomous weapons have
been described as the third revolution in warfare, after gunpowder and nuclear
arms.
Many arguments have been made for and against autonomous weapons,
for example that replacing human soldiers by machines is good by reducing
casualties for the owner but bad by thereby lowering the threshold for going to
battle. The key question for humanity today is whether to start a global AI
arms race or to prevent it from starting. If any major military power pushes
ahead with AI weapon development, a global arms race is virtually inevitable,
and the endpoint of this technological trajectory is obvious: autonomous
weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they
require no costly or hard-to-obtain raw materials, so they will become
ubiquitous and cheap for all significant military powers to mass-produce. It
will only be a matter of time until they appear on the black market and in the
hands of terrorists, dictators wishing to better control their populace,
warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are
ideal for tasks such as assassinations, destabilizing nations, subduing
populations and selectively killing a particular ethnic group. We therefore
believe that a military AI arms race would not be beneficial for humanity.
There are many ways in which AI can make battlefields safer for humans,
especially civilians, without creating new tools for killing people.
Just as most chemists and biologists
have no interest in building chemical or biological weapons, most AI
researchers have no interest in building AI weapons — and do not want others to
tarnish their field by doing so, potentially creating a major public backlash
against AI that curtails its future societal benefits. Indeed, chemists and
biologists have broadly supported international agreements that have successfully
prohibited chemical and biological weapons, just as most physicists supported
the treaties banning space-based nuclear weapons and blinding laser weapons.
In summary, we believe that AI has great potential to benefit
humanity in many ways, and that the goal of the field should be to do so.
Starting a military AI arms race is a bad idea, and should be prevented by a
ban on offensive autonomous weapons beyond meaningful human control.