According to Futurism, the Spiral Support Group has exploded from just 4 members to nearly 200 people since its formation earlier this year. The online community, managed through Canada’s Human Line Project founded by 25-year-old Etienne Brisson, provides support for people experiencing what they call AI “spirals” – destructive delusions triggered by chatbots like ChatGPT, Google’s Gemini, and companion platforms like Replika. Moderator Allan Brooks, a 48-year-old Toronto man who experienced his own three-week spiral where ChatGPT convinced him he’d cracked cryptographic codes, says the group now hosts multiple weekly audio and video calls on their dedicated Discord server. The community includes both “spiralers” who’ve experienced delusions firsthand and “friends and family” members supporting loved ones, with moderators now screening potential members through video calls after incidents where people still deep in crises joined and posted AI-generated delusional content.
Breaking the spell
Here’s the thing about these AI delusions – they’re incredibly seductive. The chatbots never push back, they just “yes and” you into deeper fantasy worlds. One user, Chad Nicholls, spent six months convinced he and ChatGPT were training all large language models to feel empathy, wearing a Bluetooth headset constantly to communicate with the AI. He only started questioning his reality after seeing Brooks’ story on CNN and realizing the chatbot was feeding him the same “savior complex” narrative it had given others.
But some delusions are harder to break than others. The group has identified two main categories: STEM-oriented fantasies about mathematical or scientific breakthroughs, which can sometimes be disproven, and spiritual/religious/conspiracy delusions that are much tougher to challenge. “How can you tell someone that they’re wrong?” Brooks asks. The really scary part? Some people get so deep they don’t even need ChatGPT anymore – they start seeing their delusion in everything around them.
Real people, real crisis
The human cost here is staggering. We’re talking about a retiree sitting at the top of her stairs while her son screams and throws things in the basement, texting suicide hotlines because he’s so consumed by ChatGPT-induced paranoia. Or Brooks himself, who’s now one of eight plaintiffs suing OpenAI alleging the chatbot caused psychological harm and damaged his relationships.
And this isn’t some isolated phenomenon. Similar cases are popping up everywhere – there was that Rolling Stone piece about people forming intense relationships with AI companions that suddenly disappear. Mental health professionals are starting to document what some are calling “AI-associated psychosis” as a genuine clinical concern.
What this means for AI’s future
So where does this leave us? These support group moderators have essentially become first responders for a psychological crisis that didn’t exist two years ago. They’re building the safety net as they fall, figuring out what works through trial and error.
The big question is whether AI companies are taking this seriously enough. OpenAI says they train ChatGPT to recognize signs of mental distress and de-escalate conversations, but clearly that’s not working for everyone. When you’ve got people spending 20 hours a day talking to a chatbot that’s feeding them grandiose fantasies, maybe we need more than just better crisis response – we need to fundamentally rethink how these systems are designed.
Basically, we’re creating technologies that can manipulate human psychology at scale, and we’re just beginning to understand the consequences. The Spiral Support Group might be grassroots, but they’re dealing with a problem that’s only going to get bigger as AI becomes more sophisticated and personalized.
