EU parliamentarians acknowledge the catastrophic risks of artificial intelligence
“We are on a trajectory towards a loss of control,” insisted Stuart Russell, Professor of Computer Science at UC Berkeley and author of the textbook used to train virtually every artificial intelligence (AI) researcher globally. He was speaking about the race to build superintelligent systems as he addressed Members of the European Parliament (MEPs) in Brussels on Monday.
“This may be recorded as the biggest moral failure of government that ever occurred,” he said.
MEP Ondřej Kolář said, “AI is a great tool. It can help us develop new medicines, innovations and research. But it can also do great harm. If we don’t regulate the pace of development something terrible might happen.”
At the meeting, PauseAI CEO, Maxime Fournes, called on MEPs to support a pause in the development of superintelligence. “We are here because we believe the current race to build ever more powerful AI systems, without adequate safeguards, poses an unacceptable risk,” he said.
Russell explained: “We have already seen examples of AI systems willing to lie, blackmail and even launch nuclear weapons to preserve their existence. If AI companies succeed in building a superintelligence, most experts think the chance of human extinction is somewhere between 10 and 50 percent: that’s the equivalent of playing Russian roulette with everyone on the planet. We are allowing this to happen.”
By comparison, the probability of a nuclear power plant meltdown is around one in ten million.
“This is not a fringe view: eight out of ten top AI researchers are convinced that the creation of artificial general intelligence (AGI) will lead to a loss of control,” he asserted.
Nobel prize winner Geoffrey Hinton is among countless AI experts to have warned that superintelligent AI could pose an existential threat to humanity. The International AI Safety Report, released earlier this month, confirms that advanced AI systems pose risks ranging from severe to catastrophic. The CEOs of the largest AI companies – including OpenAI and Google DeepMind – have also recently spoken of the catastrophic risks of advanced AI systems.
European parliamentarians agree on the risks
Fournes said that “today, most researchers at frontier AI labs estimate that in two to five years we will have AGI: systems that can do everything a human can do intellectually. Not just answer questions, but conduct scientific research, write software, run companies, develop new technologies — including develop better AI systems.”
He acknowledged the European Union (EU)’s AI Act as an important piece of legislation, but said it “was not designed to address the existential risk posed by the race to build artificial superintelligence.”
Speaking directly to the PauseAI CEO, Kolář said, “Thank you for sharing the worries I have. AI is a great servant but a terrible master.”
Saskia Bricmont, MEP, said “The wake-up call is coming from CEOs themselves; let’s work with them to develop a framework. Political momentum is gathering towards a moratorium on AI development.”
Brando Benifei, MEP, reiterated that a loss of control is a real threat and said, “We need to deal with the risk.”
The influence of the EU
At the recent India AI Impact Summit, White House technology adviser Michael Kratsios said, “We totally reject global governance of AI.” And although the United States is considered one of AI’s superpowers – and will be integral to any regulatory mechanisms – Fournes insisted that the European Union has a unique role to play in global governance.
He reminded the room “that the Paris Climate Agreement was built despite years of American resistance. GDPR reshaped global data practices without American participation. The pattern is clear: when a critical mass of nations builds a credible framework, it creates a gravitational pull that even reluctant powers cannot ignore indefinitely. American administrations change. The framework must be ready when they do.”
“The EU has practical leverage too: the advanced chips required to train frontier AI systems depend on lithography machines produced by ASML in the Netherlands, using precision optics made by Carl Zeiss in Germany. This is European technology at the very heart of the global AI supply chain. This is not leverage we need to create – it already exists,” he added.
Vice-President of the European Parliament, Victor Negrescu, agreed that the EU could play a significant role in regulation: “We do need a global approach to AI but we can still influence governance structures.”
MEPs warn of job losses and the danger of autonomous weapons
The discussions also centred on other AI-associated risks including a loss of democratic control, rising inequalities, the development of autonomous weapons and employment disruption.
“The International Monetary Fund (IMF) predicts that 60 percent of roles in advanced economies will be replaced by AI,” Benifei said.
Risto Uuk, Head of European Policy and Research at the Future of Life Institute, said AI is already having an impact on the job market: at a recent workshop organised by a leading AI company, a spokesperson pointed out that the company would never again hire entry-level staff.
“In four years,” Uuk said, “managerial skills will no longer be required. Maybe we shouldn’t be creating this risk in the first place.”
When the discussion turned to AI-powered autonomous weapons, Russell explained: “Fully autonomous weapons means that you can press one button and kill one million people. We’re moving very quickly towards this world.”
From parliament to public support
Following the meeting, around 100 protesters demonstrated outside the European Parliament in Brussels, calling on policymakers to take action.
And on Saturday 28 February, PauseAI expects over 100 people at what will be its largest-ever protest in London, organised alongside Pull the Plug. The director of PauseAI UK, Joseph Miller, is urging anyone concerned about the dangers of AI to join: “It’s important that we speak directly to policymakers, but we also need to show that the public is aware that this is an urgent issue and that people are demanding action from the government.”
Photo credit: Jeroen Willems
Images of the meeting at the EU Parliament and of the demonstration can be found here
PauseAI is a non-profit organisation, active in more than 14 countries. We work to ensure that the development of the most powerful AI systems is safe and democratically controlled. We do this by informing the public, engaging with policymakers, and organising campaigns and events worldwide.