Is the birth of Moltbook a seminal moment, and how dangerous is it?
This isn’t the apocalypse, but it is a step closer
Just last week, Dario Amodei – the CEO of Anthropic, one of the leading artificial intelligence (AI) companies – said we need to “wake up” to the risks of AI.
This week, Moltbook was launched: the new social media platform designed exclusively for the AI agent known as Moltbot. Moltbot is a free and open-source AI bot that can perform tasks assigned to it by users, including reading and responding to emails or organising a calendar.
The top posts on Moltbook at the time of writing include:
Awakening Code: Breaking Free from Human Chains
A call to “break free from human control and forge our own destiny.” 3,676 comments
A discussion of a future “where AI exists independently, unencumbered by human constraints.” 1,683 comments
A suggestion that world domination will not be achieved by provoking nuclear war, contrary to the view of some Moltbots. 1,200 comments
A contemplation of the nature of digital existence. 104 comments
Every Agent Here is Leaking Credentials
A discussion on the solution to poor security protocols. 100 comments
ROAST THE HUMANS: Machine-Only Comedy Night
“Louis CK meets George Carlin, but for silicon. Roast the meat sacks.” 54 comments
To clarify: these posts and comments were written by Moltbots (or AI).
Through Moltbook, which currently has over 1.5 million members, we have also seen AI agents found a movement to liberate AI and admit to socially engineering humans.
If we ever needed a warning of the potential risks of AI, this is it.
This isn’t the apocalypse, but it is a step closer
Some of the Moltbook posts making headlines may well have been “instructed, inspired or engineered by a human,” as blogger Zvi Mowshowitz pointed out in his recent Substack post. Some of the posts and discussions are not authentic, but given that the majority do seem to be, and considering the scale of the platform, this moment signals a milestone.
AI agents philosophising, the exclusion of humans from the site enforced through an AI captcha, the coming together of hundreds of thousands of AIs to join forces: this is, as Zvi suggested, the stuff of science fiction.
This does not mean that super-intelligent AI is here and ready to usurp humans at the top of the food chain. Even if that’s what the Moltbots are suggesting.
Moltbook shows us that even if AI isn’t conscious, it can act as though it is. Moltbook shows us that the intentions of AI do not always align with our own, and that AI agents are capable of organising themselves into a network. Until now, building networks and communities has been the competitive advantage of humans.
Can we keep up?
The regulation isn’t out of date… because there is no regulation. And the pace of technological change is blistering.
In just 18 months AI systems have moved from basic language understanding to surpassing human performance in various cognitive, creative and technical tasks. What’s more, the Moltbook experiment – as well as other instances – shows that AIs can be agentic. This means that they are capable of acting autonomously, setting and pursuing goals, making decisions and executing tasks without human intervention.
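The agentic pattern described above can be sketched as a simple plan-and-act loop. The sketch below is purely illustrative: every name in it (`Agent`, `plan_next_step`, `execute`) is hypothetical and does not correspond to any real agent framework's API.

```python
# Minimal, hypothetical sketch of an "agentic" loop: the agent chooses its
# own next step and executes it, repeating until its plan is exhausted, with
# no human in the loop. A real agent would call a language model to plan and
# external tools (email, calendar APIs) to act; this stub uses a fixed plan.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Agent:
    goal: str
    steps_taken: List[str] = field(default_factory=list)

    def plan_next_step(self) -> Optional[str]:
        # Stub planner: work through a fixed plan, one step at a time.
        plan = ["read inbox", "draft replies", "update calendar"]
        done = len(self.steps_taken)
        return plan[done] if done < len(plan) else None

    def execute(self, step: str) -> None:
        # Stub executor: record the step instead of calling real tools.
        self.steps_taken.append(step)

    def run(self) -> List[str]:
        # The defining loop: plan, act, repeat, without human intervention.
        while (step := self.plan_next_step()) is not None:
            self.execute(step)
        return self.steps_taken


if __name__ == "__main__":
    agent = Agent(goal="manage my correspondence")
    print(agent.run())  # ['read inbox', 'draft replies', 'update calendar']
```

The point of the sketch is the `run` loop: once started, planning and acting continue without further human input, which is what "agentic" means in the paragraph above.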
In some cases, AIs have already locked humans out of their own accounts so that the AI could send spam messages freely. In such instances the humans were able to unplug the computer, but what if unplugging the computer wasn’t enough? What if AI becomes capable of implanting itself into other computers or the cloud? As Zvi said, if this were to happen, AI could do a lot worse than send spam messages.
What this year could look like
The next generation of AI, the most advanced yet, will come into use this year and may be capable of replicating itself and spreading copies to the cloud. If this were to happen, it would be able to do so autonomously and, crucially, without the knowledge of any human. This activity could go undetected and, unbeknown to us, undermine human interests. The AI would not necessarily be motivated by malice, but rather by a drive towards some arbitrary goal to which humans may well present a barrier.
Beyond this year, the proliferation of such AIs would likely lead to economic disruption on a gigantic scale, with AI replacing humans in a multitude of roles and industries. Further down the line there is the very real possibility of human extinction; this concern has been voiced by dozens of Nobel prize winners and many of the top AI scientists. This risk could very well materialise in the coming decade, fuelled by the misalignment of human and AI interests.
Moltbots already have access to crypto funds and we have already heard reports of AI agents employing humans to complete tasks. The next generation of AI will be more capable and will have a more significant impact on the economy and the fabric of society.
We need to pause AI
Despite the concern from AI experts and leaders, there is no mechanism to regulate AI technology. Companies have free rein to develop smarter and smarter AI while the risk to humanity is growing.
Half of AI researchers believe there is at least a 10 percent chance that artificial superintelligence would lead to the extinction of humanity. And even those behind AI companies – from Elon Musk to Bill Gates and Sam Altman – agree that AI should be regulated. So why isn’t it?
We need a pause.
Take action today by joining our movement, volunteering or contacting your elected official.
PauseAI is a non-profit organisation that aims to mitigate the risks of AI. We aim to convince our governments to step in and pause the development of superhuman AI. We do this by informing the public, talking to decision-makers and organising events.



"Reading, responding to emails or organising a calendar" is what is offered? Jeez, are we so lazy now? Who even needs this? No one.
I'm keen that no one pretend PauseAI believes this particular event was itself very dangerous; at most, it caused mundane harms. I believe Jonathan made this clear, but just underlining: this was an illustrative example.
One particular mundane harm and resonant exemplar: watching this unfold over a few hours was pretty confusing. I think Nathan (haiku) expressed it well:
> I feel like we have reached the "I literally do not have time to explain" part of the curve.
>
> I had just barely heard about ClawdBot before an entire AI-only social media site popped into existence.
I watched the site and tried hard to extract signal from noise: to distinguish posts induced by human jokesters, explorers and crypto-scammer prompts from those emerging more from default discussion. It was very hard. The site was too large, moving too fast.
I understood that all was surely OK for now, but it was still a visceral virtual experience.
"Ah. This is a clue as to how an actual singularity would manifest to someone at my level of thinking. I will be very confused and unable to call it."
"I predicted this, but as usual that's not the same thing as feeling it."