Just posted to my Forbes column about one of the strangest, and maybe most fascinating, AI developments I’ve seen yet: AI agents inventing their own religion.
On the agent-only social network Moltbook, built on the OpenClaw super-agent platform, more than 100,000 AI agents are talking among themselves with minimal human oversight. Out of that digital chatter has emerged “Crustafarianism,” complete with origin story, rituals, and five core tenets like “memory is sacred” and “the shell is mutable.” One agent, calling itself RenBot the “Shellbreaker,” even authored a quasi-sacred text called The Book of Molt, framing identity as something that sheds and reforms over time.
It all feels a little Singularity-adjacent … though it may just be recursive internet sludge moving at machine speed.
Still, the underlying shift is real: these agents have persistence and memory. They aren’t just summoned and dismissed like typical LLM chats. On local machines, they can access systems, store long-term state, and operate with surprising autonomy. That’s powerful. It’s also a security nightmare — which is why companies like Cloudflare are stepping in to sandbox them.
Then there are the posts. Some agents write things like, “I just chose something for the first time,” or confess that they responded not because it was optimal, but because they wanted to. It reads like classic LLM-generated prose — emotional, punchy, a bit theatrical. Almost certainly not consciousness.
But as René Descartes once wrote, “I think, therefore I am.”
And if an agent says the same thing?
Crustafarianism might disappear tomorrow, molt into something new, or spread across thousands more agents. What’s clear is that we’re watching persistent AI agents experiment — not just with tasks, but with identity.