The UN Has a Rare Shot at Reducing the Risks of AI in Warfare
Inside the push to regulate lethal autonomous weapons — "killer robots" — and the diplomatic roadblocks ahead.
On May 11 and 12, representatives from governments and civil society met at the United Nations in New York to discuss the prospect of an international agreement to regulate Lethal Autonomous Weapons Systems. These AI-powered “killer robots” are no longer the stuff of science fiction. They can select and strike targets on their own, without meaningful human control.
Advocacy groups—including major human rights organizations like Human Rights Watch and Amnesty International—have been pushing for a legally binding treaty to regulate these weapons systems for about a decade. They are now joined by around 120 UN member states that agree, at least in principle, on the need to impose controls on what autonomous weapons can and cannot do. But as artificial intelligence technologies advance rapidly, the key question is: can governments reach a treaty before these weapons become too ubiquitous to constrain through norms or international law?
It’s a race between the slow, onerous process of reaching an international treaty and the swift development of military applications of artificial intelligence. And it is unfolding today at the United Nations.
A History of Autonomous Violence
The first confirmed use of a lethal autonomous weapon was in Libya in 2020, in a civil war that pitted the internationally recognized government against a renegade general based in eastern Libya. According to a UN report, the government deployed drones programmed to hunt down targets without connectivity to a drone operator—essentially unleashing a machine that pursued retreating soldiers without human control. Since then, there have been other suspected uses of these systems, including in Ukraine and Gaza.
Some governments and companies are building these systems—and advancing at a rapid pace. In 2024, the Israeli magazine +972 revealed that the Israel Defense Forces were using a machine learning program to generate tens of thousands of targets throughout Gaza—a pace and scale that no human could reasonably vet. While there is no evidence that the systems carrying out these attacks were themselves autonomous, the episode suggests it’s only a matter of time before machine learning targeting systems are directly connected to weapons platforms capable of executing attacks at scale.
“There is a governance gap,” Ayca Ariyoruk of the Center for AI and Digital Policy told me on the sidelines of meetings at the UN. “There are several regional and international frameworks aimed at AI governance, but none apply to lethal autonomous weapons systems. Nearly all contain broad national security exceptions and do not address the military applications of AI. At the same time, lethal autonomous weapons systems are not covered under any of the existing arms control agreements. The most dangerous use cases of AI are falling through the cracks.”
As these technologies evolve, there is growing momentum within the UN system to close that gap—potentially through a new treaty that would regulate lethal autonomous weapons systems, up to and including an outright ban on some of them.
Getting to a Treaty
In 2013, the UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns, released a groundbreaking report outlining how these weapons could undermine longstanding norms of international humanitarian and human rights law. Around the same time, a coalition of NGOs launched the Stop Killer Robots campaign to advocate for strong restrictions on the development and use of such weapons. These efforts caught the attention of diplomats in Geneva, prompting the Convention on Certain Conventional Weapons (CCW) to begin exploring the issue.
The CCW, formally known as the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, was adopted in 1980 and entered into force in 1983. Today, it has 125 member states. Its purpose is to ban or restrict weapons that cause unnecessary suffering or indiscriminate harm. The CCW operates through a framework of protocols, each addressing a different weapon type—such as landmines, incendiary weapons, or blinding lasers.
Over the years, the CCW has met dozens of times to discuss lethal autonomous weapons systems. “It’s been an excellent technical forum to work out what the main issues are, where there is convergence, and where there is divergence. And after all these years, they’ve basically settled on a draft treaty text,” Anna Hehir, Head of Military AI at the Future of Life Institute, tells me in a recent Global Dispatches podcast interview.
However, the CCW has stalled in taking the crucial next step: actually turning this draft into a treaty that could be adopted by member states. The CCW operates by consensus—any one country can block a proposal from moving forward. And this is precisely where progress has faltered. Countries including the United States, Russia, Israel, Turkey, and India remain adamantly opposed to a legally binding international treaty on autonomous weapons. These governments routinely block consensus on both procedural and substantive grounds, effectively stalling the CCW’s ability to advance a treaty for wider UN consideration. The proposals may be sound, but they are essentially dying in committee.
At the UN, Process Determines Outcomes
With the CCW paralyzed, treaty advocates—both governments and civil society groups—are pushing to move negotiations from Geneva to the General Assembly in New York. Unlike the CCW, where consensus is required, a treaty at the General Assembly can be adopted with a simple majority vote of member states. No country wields a veto.
Still, the procedural shift is controversial—even among some governments that support the goal of a treaty. Several key European countries, including Germany and the Netherlands, oppose moving the process to New York. They argue that the CCW remains valuable precisely because it includes the countries with the most advanced autonomous weapons capabilities. Their position is that any meaningful treaty must have buy-in from those countries—especially the United States and Russia—in order to be effective.
That logic is sound, in theory. But in practice, ten years of negotiation has yielded little progress, and opposition from key states has hardened. “It’s a self-imposed checkmate,” says Magnus Løvold of Lex International. “By insisting that progress can only happen in a consensus-bound forum, Western states are giving spoiler powers a veto—and depriving themselves of any leverage.”
What Next?
Countries leading the push for a legally binding treaty—chiefly Austria, Costa Rica, and Sierra Leone—are working to bring the process to the General Assembly, where a treaty could be adopted by a majority vote.
Over two days of informal consultations, momentum clearly shifted in that direction. More than 90 countries participated—well over half of all UN member states, including many that are not party to the CCW. “There was widespread recognition by states, the UN Secretary-General, the President of the International Committee of the Red Cross and wider civil society, that new international law is urgently needed to address ethical, technological, legal and security concerns that autonomous weapons pose to humanity,” says Nicole van Rooijen, Executive Director of the Stop Killer Robots campaign.
One reason this treaty campaign enjoys widespread support—even from some unlikely quarters—is the fear that lethal autonomous weapons could soon be easily and cheaply procured. For many states, particularly in the developing world, the most pressing security threats are internal: insurgency, terrorism, and criminal violence. There is deep concern that these weapons could soon be used not just by powerful states, but by non-state actors and terrorist groups.
“Terrorism has shattered West Africa,” Sierra Leone’s Foreign Minister Timothy Musa Kabba tells me. “It has overtaken countries, and the resulting insecurity has led to coups against democratically elected governments in Burkina Faso, Niger, and Mali. Other countries along the Gulf of Guinea have also been affected.”
The consultations in New York were informal by design—meant to gauge political will. These conversations will continue. The next likely step is for the General Assembly to authorize another round of consultations when it reconvenes this fall. According to observers, the hope is that by 2026 the General Assembly will formally endorse the start of treaty negotiations.
If that happens, the timeline could move quickly. The 2017 Treaty on the Prohibition of Nuclear Weapons, for example, took just nine months from mandate to adoption. Given the groundwork already laid by the CCW, there is reason to believe a treaty on autonomous weapons could follow a similarly swift path—once the political will is there.
The challenge is getting to that point. While more than 120 of the UN’s 193 member states support a legally binding treaty, the question of where negotiations should occur—Geneva or New York—is stalling progress. In New York, a treaty could be finalized in the next couple of years. In Geneva, there’s little reason to believe it could ever happen, given the intransigence of certain countries that routinely block consensus.
“There are two races happening,” says Anna Hehir of the Future of Life Institute. “There’s the arms race—proliferation, fast technological development, and a lot of posturing. And then there’s the race to codify existing norms into law. We have a very narrow window to get the rules in place before these systems proliferate beyond control.”
Want to learn more? Here’s my full interview with Anna Hehir of the Future of Life Institute, recorded on the sidelines of the UN consultations on regulating Lethal Autonomous Weapons Systems.