Killer Robots: A Third Revolution in Warfare?
by Jody Williams
(This is an excerpt of a longer article that appeared in the Georgetown Journal of International Affairs)
Gunpowder first revolutionized warfare, followed by a second far more destructive revolution: nuclear weapons. The nine countries possessing nuclear weapons hold the fate of the world—nearly eight billion lives—in their hands. And now, the marriage of weapons of war and artificial intelligence (AI) into fully autonomous weapons systems, or killer robots, further threatens the world’s future in incalculable ways.
To most, such weapons are science fiction. But for those who contribute to their ongoing development, production, and testing, whether academics and universities, computer scientists and roboticists, or the weapons industry and the militaries that fund them, killer robots are very real. In a five-year period ending in 2023, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) aims to spend $2 billion on AI. Organized movements and coalitions must confront the delegation of life-and-death decisions to algorithm-driven weapons.
One such coalition, the Campaign to Stop Killer Robots, was launched in April 2013. The campaign does not oppose AI and autonomy on principle. Rather, it believes that death by autonomous machines is not only morally and ethically bankrupt, but also that the use of such weapons would violate human rights and humanitarian law—the laws of war. Building on the work of other humanitarian disarmament campaigns that led to the 1997 Mine Ban Treaty, the 2008 Convention on Cluster Munitions, and the 2017 Treaty on the Prohibition of Nuclear Weapons, the Campaign to Stop Killer Robots seeks a new treaty requiring that human beings be meaningfully and directly involved in all target and kill decisions. Such a treaty would effectively “ban” fully autonomous weapons.
Within months of the campaign’s launch, governments responded by opening discussions on autonomous weapons in 2013 at the United Nations in Geneva during meetings of the Convention on Conventional Weapons (CCW). Negotiated at the end of the Vietnam War, the CCW entered into force on December 2, 1983, and bans or restricts the use of weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” The various protocols attached to the Convention each deal with different weapons, and the Convention allows for the addition of new protocols, which could include a new protocol on autonomous weapons.
Unfortunately, more than seven years later, the CCW meetings have done little more than tread water as research and development on weaponized AI continues at a dizzying pace.
When most people hear “autonomous weapons” or “killer robots,” they assume the terms are synonyms for drones. They are not, but drones are a useful point of departure for grasping the concept of fully autonomous weapons. The simplest of these weapons systems would resemble drones with no human pilots and no human control once launched.
Autonomous weapons systems would range from a “human-free drone” to complex swarms of weapons operating independently of humans. On December 15, 2020, the US Air Force carried out its first test of such weapons as part of its Golden Horde program. “The ultimate aim of this effort is to develop artificial intelligence-driven systems that could allow the networking together of various types of precision munitions into an autonomous swarm.” In early 2020, the Pentagon’s Joint Artificial Intelligence Center began work on a project that “would be its first AI project directly connected to killing people on the battlefield.”
Creating and using autonomous weapons that can attack and kill people on their own would cross a moral and ethical Rubicon. Beyond that, such weapons would violate provisions of the laws of war, as well as human rights.
Jody Williams received the 1997 Nobel Peace Prize for her work to ban antipersonnel landmines. She is cofounder of the Campaign to Stop Killer Robots. Williams also chairs the Nobel Women’s Initiative, supporting women’s organizations working in conflict zones to bring about sustainable peace with justice and equality.
StopKillerRobots.org
The campaign operates globally with more than 180 member organizations. Its goals are to:
- Reject autonomous killing by machines in warfare, policing, and other contexts;
- Ensure human control, responsibility, and accountability in any use of force;
- Counter digital dehumanization and protect human rights;
- Build recognition that we are individually and collectively responsible for developing and shaping the technologies that frame how we interact with one another;
- Challenge the inequalities and oppressions in society that are reproduced or exacerbated through technology.
See their Take Action page at StopKillerRobots.org.