“I’ve heard a lot about the AI safety summit in the news. What exactly are they doing?”
Hi there, and welcome back to another week of Ask Trish! I hope you’re all well and having a great start to November.
Thank you so much to this week’s questioner for the fantastic question! We’ve talked a fair bit about AI technologies and AI-related issues here on Ask Trish, and in those posts, I promised y’all that there was a lot more to come, AI-wise…and that’s exactly what we saw last week. Indeed, last week, there was an AI Safety Summit hosted in the UK, attended by countries and heads of state from all over the globe. It got a lot of media attention – and raised a lot of questions about its aims, significance, and impact.
It was a lot to follow and make sense of, but hopefully, in this post, I can break it down for y’all. In this week’s post, I cover 1) what, exactly, happened at the AI Safety Summit – and what’s next from here, 2) what it all means – and the politics of what was going on!, and 3) varying perspectives on the summit’s efficacy and impact. I’ll ultimately leave it to you to make your own judgment about the summit!
Sound good? Let’s get into it!
Let’s start with the basics: what was the AI Safety Summit, otherwise known as the Bletchley Park Summit? The summit, hosted in the UK at Bletchley Park (famous for being the place where Allied codebreakers helped crack enemy codes during World War II!), was the first major global convening of governments to explore how they could work together to harness the benefits and mitigate the risks of artificial intelligence (AI). The summit brought together representatives of countries from 6 continents (including US Vice President Kamala Harris), prominent technology CEOs (including Elon Musk, who is certainly used to making headlines), experts and academics, and other stakeholders. Together, over two days, they worked to produce a broad agreement, now known as the Bletchley Declaration. The declaration is the first international declaration on AI, and lays out a series of underlying principles for engaging with, researching, and building AI. Among them are principles that aim to minimize “catastrophic harm” (something I’m sure we can all agree on!). It may sound pretty basic, but the declaration is a big deal – a sign that governments around the world are taking AI seriously and are willing to work together to ensure it is deployed appropriately in our society. In that vein, this Summit is just the first of several to come – up next will be a virtual summit, hosted by Korea, followed by another in-person summit hosted in France.
Okay, you might be thinking. That sounds pretty cool. But why should I care? Well, the Summit sent several powerful messages, among them that governments care seriously about AI harms, particularly those related to mis/disinformation, and wish to address them. The international nature of the Summit also conveyed that governments agreed that international cooperation would be key to engaging with AI. Thus far, most AI rules and guidelines have been created at a national level – but the Summit opens the door to substantive international agreements and rules, rules that might supersede existing national rules. Indeed, the Bletchley Declaration saw countries that are traditionally on opposing sides of most issues – take the US and China, for instance – agreeing on shared standards. Of course, there’s still considerable politics at play! The US, for instance, announced a number of new AI initiatives – including an AI-related Executive Order by President Biden – in what many commentators have noted was a move to assert its dominance in the AI space. The UK, for its part, aimed to showcase its leadership by hosting the AI Summit. And other countries, including China and European countries, sent top officials to the Summit, in part to show that their nations are “key players” in the AI conversation.
But what to make of the Summit’s efficacy and impact? Some commentators and experts have argued that the Summit was an important first step for the international community in tackling AI-related harms, pointing out that the international order did not adopt a similar approach at the advent of Web 2.0/the internet, with serious negative consequences. These folks point out that the aim of the Summit was not necessarily to produce a substantive agreement (it was the first one, after all, and just 2 days long!), but instead to generate conversation and to begin laying the groundwork for international cooperation on AI. Indeed, many folks have lauded the Summit for doing just that. Other folks are more skeptical. Some feel that the Summit was more “for show” than it was for AI safety. Moreover, several AI scholars I’ve spoken to worry that such summits will make people feel like we have a clear handle on AI-related challenges, when, in fact, many open questions remain. Several other scholars I spoke with argued that while the Summit likely didn’t cause harm, the substantive change it generated is likely limited. “We have much more work to do,” said one scholar. Whatever you make of the Summit, that’s a perspective few would dispute.
I hope you found this post a valuable look at the AI Safety Summit! No doubt – as is the case with anything AI-related – there is much, much more to come, at the next summit and in this space, generally! Once again, I’ll make the promise that I’ll have y’all covered, with recaps and interesting insights, here on Ask Trish. Before I wrap up this post, as always, it’s time for my super #shameless plug to all of you: if you’ve got thoughts, questions, or concerns about the Internet, please go ahead and share them with me here! Your question just might be featured in an upcoming TikTok/blog post. Remember, truly anything on your mind is fair game – so don’t ever fear that your question isn’t “pressing enough” or “normal.” There are definitely other young people wondering the same things! So don’t hesitate – fill out the form. I can’t wait to hear from you – thank you a ton in advance for your contributions!
Have a great week,