Can a Series of Global AI Summits Actually Shape the Rules That Will Govern the World’s Most Powerful Technology?
How to get AI global governance right
In 2023, the United Kingdom hosted the world’s first major international summit on the risks of advanced artificial intelligence. That Bletchley Park gathering kicked off a series of high-profile summits — in Seoul in 2024, in Paris earlier this year, and with India set to host the next in 2026.
But here’s the big question: are these summits actually building the guardrails the world needs to keep AI safe, or are they falling short?
My Global Dispatches podcast guest today, Robert F. Trager, says the answer depends on whether this summit series is reformed. Trager is Co-Director of the Oxford Martin AI Governance Initiative, which recently published a roadmap for how the AI Summit Series can live up to its potential.
We dig into those recommendations, and we also take stock of what these summits have accomplished so far — and what’s at stake if they fail.
This episode was supported through a grant from the Tarbell Center for AI Journalism. It is freely available on Apple Podcasts, Spotify, or wherever you get your podcasts.
Transcript edited for clarity
Mark Leon Goldberg: Robert, thank you so much for speaking with me today. I’d like to focus our conversation on both what the AI summit series has achieved and how it might be strengthened going forward. But before we get there, could I have you just take me back to Bletchley Park in 2023 and have you explain the impetus for that initial meeting in this series?
Robert Trager:
Back in Bletchley, the conversation around AI was really quite different from what it is now. There was just much less talk about governance of AI, except in kind of niche circles. And I think the UK government thought it was something super cool to work on, and also super important, and they decided to focus on it. They brought the world together. Not everyone, but dozens of countries, including the United States and China. And, actually, the United States and China don’t interact in that many fora outside of the UN. So, that, in and of itself, was kind of a big deal. And when they got together, they released a statement that highlighted potential opportunities and risks from AI. It changed the conversation because before that, people weren’t really talking about AI safety in the same sort of way.
And they weren’t thinking about the security implications of AI. And that’s really what Bletchley wanted to focus on. They debated, in advance, what that summit should focus on. They thought about a very broad definition of governance and a narrower one, and they settled on a relatively narrow one, where they would focus on what, at that time, they called safety. Now, we are often speaking more about security. But at that time, they focused on safety because they thought that those were the issues that the whole globe had in common. So, they wanted to focus on big safety risks that would apply to everybody so that they could potentially get agreement at the summit.
Mark Leon Goldberg:
What were some of those safety risks that were kind of percolating at the time? And it’s funny; we’re talking about this like it’s ancient history, but it was two years ago. But I suppose the pace of AI development is such that what we understood as AI safety two years ago is different from what it is today. But, just at the time, what were they talking about?
Robert Trager:
The risks that they were highlighting were, first of all, upskilling. So, it might be the case that you could release a system that could train lots of people in creating a bioweapon or creating a nuclear weapon or a cyber weapon, etc. And that was something they wanted to worry about. They thought — we need to make sure that these systems aren’t putting dangerous capabilities in the hands of people who would misuse them. And that was often referred to as misuse.
And then there was something, and there’s still something, called loss of control risks. That was the one that some people think sounds science fictiony. It doesn’t sound that science fictiony to me, but this was the idea that you might create a system that would do things that you didn’t intend or want it to do. And as we were able to create more and more sophisticated systems, that kind of risk would become more and more important. And then there was another category of risks that people were always talking about, which was sort of societal risks: the idea that you might just have a technology that changed a social equilibrium for the worse, potentially.
I mean, you might have something that affected the workplace, for instance, and meant that young people had a harder time finding jobs, which, at the moment, looks like it might actually be happening, although I think it’s not certain yet.
Mark Leon Goldberg:
And so that Bletchley Declaration was essentially a recognition that these are risks that ought to be controlled in some way through international cooperation of some sort and through some sort of governance mechanisms. What did that declaration lead to in practice?
Robert Trager:
In practice, I think it changed the tenor of the conversation. The process around the summit led to the creation of the AI Safety Institute; that’s now the AI Security Institute in the UK. It also galvanized action in the space and led to a similar institute in the United States, and now other institutes are popping up to do similar things in many corners of the world. And that was, I think, really a key outcome of that summit process. Now, the UK AI Security Institute is really a very significant institution in the space. It has hundreds of millions of pounds in funding, compared to really just a fraction of that amount in other places, including the U.S.
So, that was sort of a key thing that it did, and then it rhetorically got a large number of actors behind the idea that the security risks were real and that we should have international conversations about AI governance. But I think some of the more concrete effects of the Summit Series really had to wait for the next chapter, if you will, which happened in Seoul.
Mark Leon Goldberg:
Yeah. So, I cover the UN very closely. And, oftentimes, the most important outcomes of meetings are scheduling another meeting. And I don’t mean that in any kind of pejorative way, but it’s a way to maintain and sustain progress towards what you were working on. So, the next iteration came in Seoul the next year. What did that meeting accomplish or set out to accomplish?
Robert Trager:
There were a number of statements that came out of the Seoul meeting by both companies and countries. I think probably the most lasting thing that came out of the Seoul meeting were the commitments that companies made. These were voluntary commitments, but they followed on other voluntary commitments, including the White House commitments that Western companies had made previously. And those were important. Still are important. I think when you look at the kind of follow-on attempts to govern AI, for instance, in the EU, they really took those commitments as their starting place. And I think it’s hard to imagine that companies and governments would have been in the place to sort of think about what became the code of practice in the EU AI Act, regulating frontier AI.
They wouldn’t have been ready to do the things that they did then if there hadn’t been the Seoul commitments previously that kind of socialized some of those ideas and got things started. And I think you’re exactly right that these processes are valuable when there’s a kind of norm of incremental progress, when you make some progress in one meeting and you have the expectation that you’re going to make a similar set of gains at the next meeting. And that can be just a great thing. And I think that was, in some ways, the expectation and the hope of Bletchley and Seoul. But it didn’t exactly turn out that way, in the sense that the French summit then looked a bit different.
Mark Leon Goldberg:
So, the most recent iteration of this summit series was in Paris in February earlier this year. How did the French summit differ, as you alluded to just now?
Robert Trager:
Well, there was a different focus. The French organizers didn’t want to focus on the same set of issues that Bletchley focused on. And they might have just expanded the set of issues, but instead, they focused really on a different set of issues. So, there was really not that much discussion of the so-called safety and security issues in France. There was some. There was an international scientific report that was led by Yoshua Bengio, and that was presented at the summit there, although actually it really was presented at a venue kind of outside the city. So, people had to travel really quite far to see that presentation. And it was at something called the Science Days instead of the summit itself.
So, there was a kind of minimization of some of those issues that had been discussed in Bletchley and Seoul. And there was more discussion of broadening voice in the governance of AI and access to AI, and probably some more discussion of some of the workplace issues. And all of those issues are incredibly important, in my view and, I think, in most people’s views. I would like to see a broader summit that encompassed all of these issues, rather than a narrower summit that encompassed just a subset.
Mark Leon Goldberg:
Yet, big picture, as I understand from your research, you think that this summit series has been a valuable exercise thus far.
Robert Trager:
I think so, yeah. I think it has been. Seoul, in particular, was really very valuable because of the commitments that companies were making and the way that advanced the conversation. And we have lots of other international processes that deal with AI governance. We have things at the ministerial level, and we have things at the UN, and just really all around. There are quite a few processes that deal with international governance. But the Summit Series was a place where world leaders and CEOs gathered. And the fact that they were all gathering there provided a social pressure. Because one CEO does not want to be the CEO left out of some set of commitments, and doesn’t want to have to be there explaining to world leaders on the world stage, “Well, why didn’t you sign the commitments that these other companies did sign?”
So, it’s interesting that the spectacle of this event, the fact that so many people are gathering, that it will be covered on the news, that world leaders are there, is what created some of the leverage that negotiators had in convincing companies to sign a set of commitments. And now, depending on your point of view, that’s good or bad. I think some people feel like, “Well, we don’t want to be pressuring companies into making some of those commitments.” But for those of us who felt like the commitments were actually a pretty good thing and pretty reasonable — commitments could go too far. Regulation can go off the tracks. It’s not that all attempts at governance are good. But those seemed good to many of us. And I think the social process of the summit series is really what allowed it to happen.
Mark Leon Goldberg:
Yeah. In general, having covered many a summit throughout my career, it is a fact that the summit itself, scheduled on a date certain, provides something of a forcing mechanism for individuals, for governments, for companies to make decisions by a certain time and to convene all in one place for the social and political pressures that you described. Yet this summit series, thus far, has been somewhat ad hoc. And I know the Oxford Martin School has done a lot of thinking and research on how to make this summit series a more effective and impactful way of governing AI, or providing a forum for the global governance of AI. So, having had this experience now of these three summits, how might this series be strengthened going forward?
Robert Trager:
We did think about this a little while ago, and we published a report with many, many coauthors, so drawing on the insights of many folks. And I think one core recommendation was the need for continuity, because right now, the summit series really is very ad hoc. There’s no institutionalized procedure, for instance, for choosing the next host. And whoever that next host is is going to have a huge impact on what issues the summit series focuses on. So, whatever the different stakeholders were drilling down into in the previous summit, that may just be out the window, and a whole new set of issues could be focused on in the next summit. And so, creating a bit of continuity, creating a continuation of some of the work plan of previous summits, in between the summits, that could then be announced at a new summit, struck us as a really good idea.
And we have watched different countries agree to host a summit and then try very quickly to upskill on some of these issues, which is hard. We watched the UK do it at the first summit, which was actually announced, and happened, pretty quickly after they decided to do it. And I think many people at the FCDO, the Foreign, Commonwealth and Development Office, did not think that they could get a summit done in that time. It’s hard. It’s really hard to watch these civil service organizations try to upskill that quickly. And it would really help them, and help everybody, and help the summit series if there were something like, for instance, a standing secretariat that could help everybody from summit to summit and provide continuity.
Or if there were working groups that could do something similar and continue the work of the previous summit leading up to the next summit. Or if there were a troika system, like, for instance, they have at the G20, where the previous host, the current host, and the host that will come after the current host get together, form a troika, and actually work together to provide continuity. That’s common in some other international processes. That’s exactly the kind of thing that we’d like to see here. And we’d like to see some institutionalization of the different tracks that could exist from summit to summit.
I think it’s like there’s a fight over what the summit should focus on. And I think many of us would rather see different tracks at each summit, so that there wouldn’t need to be a fight about, okay, should we have this thing or that thing? No. You just have one track for that thing. The people that want to work on that are over there on that track, and the countries that want to focus on that track are over there, and then another track for the folks that want to focus on a different set of issues. So, greater institutionalization, we think, would be a key thing. And right now, we know that the host after India’s upcoming summit in February of next year will be Switzerland. We have a little bit of an idea about what India will focus on now.
We have no idea what Switzerland will focus on. And we have no idea who will come after that. So, it’s hard to really create momentum without some foreknowledge about what will happen.
Mark Leon Goldberg:
Then, as you described earlier, progress happens when there is that kind of momentum where you’re building on what happened in the previous meeting. I think you said incremental progress is a good outcome for a series of meetings like this. You also note in that paper you referenced from the Oxford Martin School that, broadly speaking, instead of a proliferation of things that these summits ought to be focused on, you advocate for two broad thematic focuses: the governance of advanced AI on the one side, and opportunities for leveraging AI for the public interest on the other. Could you just unpack that a little bit?
Robert Trager:
I mean, right now, there’s a huge number of countries around the world that are really trying to figure out how they should interact with this technology that seems to be changing the world. And they don’t know, for instance, how they should develop tests to see if the AI being used in their country is the sort of AI that they want to be used. Does it follow the law, fundamentally, within their country? They’re not sure. Do they need to develop their own tests to do that? Should they coordinate with others and use the tests those others have developed? But maybe those others don’t know what they would want and what their law mandates. So, countries are trying to figure out how they should interact with this new technology that’s having such an impact.
And that’s a kind of thing, as well as thinking about broadening access to the technology, that we thought the public interest group could focus on. The access to the technology, of course, is very uneven at the moment. The development of the technology is very uneven. And, again, lots of people around the world are asking, “Well, how can we participate?” So, thinking through some of those problems and thinking through broad-based voice and access is something that we thought a lot of people wanted to focus on at the French summit. And we think that’s good. They should be focused on that. And that’s what the public interest track could focus on.
And then you have a governance track that is addressing some of the potential risks. Those cover a variety of different things, and that track is more about carrying on the mandate from Bletchley. So, that’s sort of how we thought about it. I don’t think that’s the only way that you could do it, but it’s something we thought would be productive.
Mark Leon Goldberg:
I suppose the value that I see in that approach, as you described it, is, again, having covered these kinds of summits and these kinds of global governance efforts in the past, that the key fault lines in these situations tend to be between the global north and the global south. And by creating that dual-track system, by prioritizing public interest opportunities as much as the governance of AI, which is the province of wealthier countries and the companies of the global north, you seem to be attempting to bridge that gap.
Robert Trager:
I think you’re right. We see that in the climate negotiations. We see that all over the place. And I think there are reasons why there are these tensions. There tends to be a small set of countries whose interests are different from those of the broader, larger set of countries. And sometimes that can be hard to bridge. But I hope that you’re right that this two-track system is a way that, at least in the sense of discussing and finding solutions, that gap can be bridged.
Mark Leon Goldberg:
So, there are a number of existing platforms and forums around AI governance. How might an empowered AI summit series, as you describe it, interact with those other platforms and forums?
Robert Trager:
I do think that it has to adapt as other initiatives gain steam. It’ll have to adapt. I think it would have been potentially a very productive approach if there had been kind of a norm that, okay, the companies are going to get together and they’re going to make incremental progress, as you were saying before. And part of that was, okay, you’re going to develop a frontier safety framework. The companies were committing to do that; that’s one of the commitments. They also thought that they would be presenting their frontier safety frameworks at the French summit, and that really didn’t end up happening. I think it was a surprise to the companies that they weren’t asked to do that.
But that’s the kind of thing: let’s make some commitments based on the needs of the day. Let’s have that discussion. Let’s present the ways that companies are thinking about it. And we can have a kind of global discussion with them, and create incentives globally to converge towards what, hopefully, one day will be a set of global standards.
Mark Leon Goldberg:
On that last point — hopefully, one day there might be a set of global standards — what are the stakes involved in getting this right or getting it wrong, or letting this series languish without the reforms that might make it more impactful?
Robert Trager:
Well, we don’t know the future course of the technology, and it’s hard to say on what timeline different capabilities will emerge. But right now, we are poised to deploy AI agents globally that will have ever more sophisticated world models, will have their own sets of preferences, and maybe, in the near future, will have capabilities that exceed our own. We don’t know if that will happen, but if it does, it will be a revolution in world affairs. So, that’s something that we have to deal with. Now, these agents are having impacts that cross borders. They can use the tools of the digital world to have an impact far beyond where they are created and hosted. And so, we might like them, for instance, to follow the rules and the laws of the place where they are having that impact.
But in order to do that, we probably need to engage with the data centers that are hosting the agents. But those data centers may be across borders and half a world away. So, fundamentally, we don’t know exactly what the stakes are, but it’s now reasonable to think that the stakes could be quite high, and maybe extremely high.
Mark Leon Goldberg:
So, in the coming weeks or months or even years, what will you be looking towards that will suggest to you whether or not some of these meaningful reforms to the AI summit series, or AI governance in general, are actually coming to fruition?
Robert Trager:
I think we’re going to have to look to see if these actors, like companies, like governments, are implementing policies that are in societal interests. And that’s the fundamental thing that we’re going to have to be looking out for. I think the summit series is one place where things can happen. It’s played an important role in the past, and, as with many things, I’m of the view that if it isn’t broken, keep using it until it stops working.
So, the thing about the summit series that we’ll look for is continuity: are we actually seeing incremental progress each time, the needed progress each time, in order to have the appropriate sets of norms, best practices, and, in some cases, regulations to lead to flourishing societies?
Mark Leon Goldberg:
Robert, thank you so much for your time. This was really helpful.
Robert Trager:
Thanks so much.
Mark Leon Goldberg:
Thanks for listening to Global Dispatches. The show is produced by me, Mark Leon Goldberg. It is edited and mixed by Levi Sharpe. If you are listening on Apple Podcasts, make sure to follow the show and enable automatic downloads to get new episodes as soon as they’re released. On Spotify, tap the bell icon to get a notification when we publish new episodes. And, of course, please visit globaldispatches.org to get on our free mailing list, get in touch with me, and access our full archive. Thank you!