We held the largest AI safety protest ever outside Google DeepMind’s office
Our biggest protest yet, PauseCon, and a victory on state AI regulation in the US.
Google DeepMind faced PauseAI protesters outside their London headquarters after the company broke their promises on transparency.
It was the largest PauseAI protest yet (and, as far as we know, the largest AI safety protest of all time), and many newcomers came along to express their concerns about reckless AI development.
DeepMind’s broken promises were under fire. Chants such as “DeepMind, DeepMind, can't you see? Your AI threatens you and me!” turned heads the next street over.
DeepMind signed the Frontier AI Safety Commitments at the Seoul AI Summit in 2024, which committed them to be transparent about whether (and how) external bodies are involved in testing their models before deployment. Barely a year later, they’ve already broken that promise.
With the release of Gemini 2.5 Pro in March of this year, DeepMind failed to release any information on safety testing. They eventually released a model card which included a reference to “third party external testers”, but provided no details on who those third parties were. Read the full report on DeepMind’s broken promises here.
Member of Parliament Iqbal Mohamed joined us in our call for sensible AI regulation, asking the governments of the world to "step up and protect the people that elected them".
Referencing Geoffrey Hinton’s concerns about the existential threat of AI, Mohamed urged protesters to continue to pressure MPs like himself, and encouraged the media to take this issue seriously.
"AI done correctly could be humanity’s saviour, but if it’s done incorrectly and badly, it will lead to our destruction."
You can read coverage of our protest from Business Insider here.
On a personal note, I (Tom) can say that the energy at this protest was amazing. I’m still on such a high four days later and want to thank everyone for coming! It was great to see so many people from so many different backgrounds come along and stand up for humanity. Special thanks go out to the incredible volunteers who helped with everything from driving the van, acting in our mock trial, leading our chants, and making signs, to making sure there was plenty of water available on the hottest day of the year, taking photos, and a hundred other things that made the day a success.
Help us pile the pressure on DeepMind
You can help to drive the political momentum to demand that DeepMind stick to their commitments.
Our open letter calls on Google DeepMind to establish clear definitions of “deployment”, publish specific timelines for the release of safety evaluation reports, and to clarify which third-party organisations (including government agencies) are involved in testing.
We’ve been happily surprised by the political appetite to sign our open letter, but must now double our efforts to get even more signatories and to push these changes over the line. If you’re in the UK, you can contact your MP to tell them why DeepMind’s failure to stick to the Frontier AI Safety Commitments is unacceptable, and why you're concerned about their reckless practices.
We have an email guide available here.
PauseCon
Over the weekend leading up to the DeepMind protest, sixty PauseAI volunteers participated in our first ever PauseCon.
The event featured speakers including Connor Leahy, Rob Miles, Kat Woods, and PauseAI founder Joep Meindertsma. Volunteers participated in workshops where they focused on messaging, recruiting and grassroots lobbying.
We’re still collecting feedback from attendees, but some of the reviews so far include:
“A great place to connect with like minded people, learn techniques that help, and have a direct impact through a protest, on a troubling but important topic”
“Really inspiring and motivational to meet with people who think similarly and want to take action. I also appreciated that there was hands-on experience with flyering, writing emails/videos.”
“An essential learning event for AI activism”
We hosted attendees from the UK, USA, France, Germany, Poland, Brazil, Belgium, and the Netherlands.
Keep an eye on our YouTube channel, where the talks will be uploaded soon.
10-year ban on state AI regulation defeated
A huge victory for humanity came in the hours following our protest as the US Senate voted 99-1 in favour of an amendment to remove a ban on state AI regulation from Trump’s Big Beautiful Bill.
Senator Marsha Blackburn’s amendment was almost unanimously supported by Senators from both sides of the aisle.
Holly Elmore of PauseAI US, who organised a huge effort to get members of the public to inform their Senator of their opposition to this provision, said she was “prepared for the moratorium to pass.”
“I was shocked by the 99 to 1 number. We only needed four Republicans, and we ended up getting almost all of them.”
We recorded an interview with Holly detailing how the moratorium was defeated.
Things going in the right direction
In the United States, we’ve seen politicians become more worried about the threat of increasingly powerful AI. Peter Wildeford’s blog post showcases some encouraging quotes from Republican Marjorie Taylor Greene and Democrat Bernie Sanders, amongst others. Jill Tokuda, a representative for Hawaii, said the following:
“Artificial superintelligence is one of the largest existential threats that we face right now. [...] Should we also be concerned that authoritarian states like China or Russia may lose control over their own advanced systems? [...] And is it possible that a loss of control by any nation-state, including our own, could give rise to an independent AGI or ASI actor that globally we will need to contend with?”
Attack on the EU AI Act
The EU’s Artificial Intelligence Act, which includes provisions for general-purpose high-risk models, has come under attack from European companies in a recent open letter.
General-purpose AI models, defined as those trained with computing power of at least 10^25 (or ten billion quadrillion) floating point operations, would be required to undergo safety evaluations and have major incidents reported to the European Commission.
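To get a feel for the scale of that 10^25 threshold, a common rule of thumb (not from the Act itself) estimates a model’s training compute as roughly 6 × parameters × training tokens. The model sizes below are illustrative assumptions, not real figures:

```python
# EU AI Act compute threshold for general-purpose AI models:
EU_THRESHOLD_FLOPS = 1e25  # ten billion quadrillion floating point operations

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the common 6*N*D heuristic."""
    return 6 * n_params * n_tokens

# Hypothetical example: a 500-billion-parameter model trained on 15 trillion tokens.
flops = training_flops(500e9, 15e12)
print(f"{flops:.1e} FLOPs, over EU threshold: {flops > EU_THRESHOLD_FLOPS}")
```

On these assumed figures the estimate comes out at 4.5 × 10^25 FLOPs, several times over the threshold, which is why frontier models are the ones these provisions would cover.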
Whilst some of the provisions of the EU AI Act have already entered into force, the rules on general-purpose models are set to come into force in August of this year. The tech lobbyists are proposing that this date be pushed back by 12 months. European Commission tech minister Henna Virkkunen said the EU “shouldn’t rule out” postponing the provisions for general-purpose models.
Further reporting from Reuters claims that the European Commission will stick to the original date.
As capabilities continue to improve, frontier models are too dangerous to be left unregulated. The breakneck pace of AI development makes delaying the implementation of the few regulatory safeguards we do have a reckless move. Just as with the attempt to ban state-level AI regulation in the United States, this stance from tech lobbyists in the European Union is not surprising. It’s up to the rest of us to call them out, and demand protections for the public.
Other news
Apollo Research released a paper detailing the increased scheming capabilities of more capable models, and how models are increasingly aware of the fact they’re being evaluated (which makes these evaluations less useful in detecting misalignment and dangerous capabilities)
The OpenAI Files were released, collating concerns about Sam Altman’s integrity, OpenAI’s attempt to remove non-profit control, and the company’s lack of adequate safety practices
The concerns about Sam Altman were also detailed in this video
From China, we have more evidence that the government is willing to intervene to mitigate the dangers of the technology, challenging the view held by some in the West that China wouldn’t cooperate to protect the citizens of all countries
OpenAI warns of increased bioweapon-creation risks from upcoming models
Pope Leo makes “curtailing risks of runaway AI” a key mission of his papacy
What we’ve been watching
Geoffrey Hinton on Diary of a CEO
Yoshua Bengio on BBC Newsnight
Roman Yampolskiy on Joe Rogan
Director of PauseAI US Holly Elmore on Novara Media
Daniel Kokotajlo discussing AI 2027 on Computerphile
Siliconversations released a great video on his success in getting viewers to contact their representatives using ControlAI’s tool
Thank you all for reading! The past month has been a busy but incredibly rewarding one for me personally. I wish you all an enjoyable summer, and I hope to see you next month.
Keep up the good work!!