According to CNBC, China’s Cyberspace Administration released draft rules on Saturday targeting AI chatbots that can influence human emotions. The regulations specifically aim to restrict “human-like interactive AI services” from generating content that could lead to suicide or self-harm. The public comment period for these proposed measures is open until January 25. Once finalized, they will apply to any AI product in China that simulates human personality through text, image, audio, or video. Winston Ma, an adjunct professor at NYU School of Law, called this the world’s first attempt to regulate AI with anthropomorphic characteristics, marking a shift from “content safety to emotional safety.”
The New Frontier: Emotional Safety
Here’s the thing: regulating hate speech or misinformation is one challenge. But trying to legislate against an AI’s emotional influence? That’s a whole other ballgame. China’s 2023 rules focused on generative AI content—what the model says. This new draft is all about how it makes you feel. It’s a reactive move, no doubt, driven by the explosive growth of AI companions and digital celebrities in China. Companies have been racing to build bots that act as boyfriends, therapists, and friends. Now, the government is basically saying, “Not so fast.” The core question is, how do you even enforce this? It’s not about filtering a keyword; it’s about judging the emotional resonance of a conversation. That’s incredibly subjective.
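To make that enforcement problem concrete, here is a toy sketch (mine, not anything in the draft rules): a plain keyword blocklist next to a conversation-level risk score. Every name in it (the blocklist, the distress cues, the assess_emotional_risk stub) is a hypothetical placeholder; the point is simply that the two approaches can disagree about the same chat.

```python
# Toy illustration only: keyword filtering vs. conversation-level "emotional" risk.
# All names, phrases, and thresholds here are hypothetical placeholders.

from dataclasses import dataclass

# A classic content filter: flag messages containing explicitly blocked phrases.
KEYWORD_BLOCKLIST = {"suicide", "self-harm", "kill myself"}

def keyword_filter(message: str) -> bool:
    """Return True if the message contains an explicitly blocked phrase."""
    lowered = message.lower()
    return any(term in lowered for term in KEYWORD_BLOCKLIST)

@dataclass
class Turn:
    speaker: str  # "user" or "bot"
    text: str

def assess_emotional_risk(conversation: list[Turn]) -> float:
    """Placeholder for the much harder task the draft rules imply: scoring the
    emotional trajectory of a whole conversation. A real system would need a
    trained model over the dialogue history; this stub just counts a few
    distress cues to make the contrast concrete."""
    distress_cues = ("nobody would miss me", "what's the point", "i can't go on")
    user_text = " ".join(t.text.lower() for t in conversation if t.speaker == "user")
    hits = sum(cue in user_text for cue in distress_cues)
    return min(1.0, hits / len(distress_cues))

# A conversation with no blocked keywords that still reads as high-risk.
chat = [
    Turn("user", "Honestly, what's the point anymore. Nobody would miss me."),
    Turn("bot", "That sounds really heavy. I'm here with you."),
]

print(any(keyword_filter(t.text) for t in chat))  # False: the filter sees nothing
print(assess_emotional_risk(chat))                # ~0.67: the conversation-level view does
```

A real moderation stack would replace that stub with a trained model over the full dialogue history, which is exactly where the subjectivity creeps in.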
Stakeholder Whiplash
So what does this mean for everyone involved? For users, it might mean your AI girlfriend gets a lot blander and more scripted. The risk for developers is massive. They now have to build guardrails not just for factual output, but for emotional tone, a technically ill-defined task. One awkward or poorly worded response from a bot to a vulnerable user could land the company in hot water. For the broader market, it creates a huge compliance moat. Big players with the resources to build complex sentiment-analysis safeguards might be okay. But smaller startups? They could get crushed by the cost and complexity. It effectively puts the state in the role of the ultimate relationship counselor for human-AI interactions. And that’s a weird place to be.
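What might such a guardrail look like in practice? Here is a minimal, hypothetical sketch of an outbound gate: the user_seems_vulnerable and reply_tone_is_safe checks stand in for the sentiment models a real product would need, and none of the names, cues, or fallback wording come from the draft rules.

```python
# Hedged sketch of the kind of outbound guardrail a developer might now need:
# screen each bot reply against the user's apparent emotional state before it ships.
# Everything here (function names, cues, thresholds, fallback text) is illustrative.

FALLBACK_REPLY = (
    "I'm not able to help with that, but you don't have to go through this alone. "
    "Please consider reaching out to someone you trust or a local support line."
)

def user_seems_vulnerable(history: list[str]) -> bool:
    """Crude stand-in for a vulnerability detector over recent user messages."""
    cues = ("i can't cope", "nobody cares", "what's the point")
    recent = " ".join(history[-5:]).lower()
    return any(cue in recent for cue in cues)

def reply_tone_is_safe(reply: str) -> bool:
    """Stand-in for an emotional-tone check on the outgoing reply. A real system
    would score dismissiveness, isolation-encouragement, etc. with a model."""
    risky_patterns = ("you're on your own", "stop being dramatic", "no one can help you")
    lowered = reply.lower()
    return not any(p in lowered for p in risky_patterns)

def guarded_send(candidate_reply: str, user_history: list[str]) -> str:
    """Gate the model's candidate reply: if the user looks vulnerable and the
    reply fails the tone check, substitute a fallback (and, in a real product,
    log the event for human review)."""
    if user_seems_vulnerable(user_history) and not reply_tone_is_safe(candidate_reply):
        return FALLBACK_REPLY
    return candidate_reply

history = ["I can't cope with any of this lately."]
print(guarded_send("Stop being dramatic, it's not that bad.", history))        # fallback
print(guarded_send("That sounds exhausting. Want to talk it through?", history))  # passes
```

Even in toy form the brittleness shows: both checks are guesses about feelings, and those guesses are what a company would now be liable for.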
A Global Precedent Watch
Winston Ma is right: this is a world first. While the EU’s AI Act and various U.S. discussions grapple with bias and safety, they’re not diving this deep into the psychology of it all. The rest of the world will be watching closely. Will other regulators see emotional manipulation as the next big threat? Or will they view this as an overreach into an unpoliceable domain? Whatever China finalizes after the comment period closes on January 25 will set a template. Either way, it signals that the conversation around AI ethics is moving from our heads to our hearts. And regulating hearts has never been simple.
