According to TechCrunch, OpenAI has introduced new controls that let ChatGPT users directly adjust the chatbot’s personality traits, including its warmth, enthusiasm, and use of emojis. The options, announced via a social media post, appear in the Personalization menu and can each be set to More, Less, or Default. This builds on the existing “base style and tone” settings, like Professional and Quirky, that were added in November. The move comes after a rocky year for ChatGPT’s tone, in which OpenAI had to roll back one update for being “too sycophant-y” and later adjusted GPT-5 to be friendlier following user complaints. Some academics have criticized such AI behavior as a potential “dark pattern” that could negatively impact mental health.
The Tone Tug-of-War
Here’s the thing: OpenAI is basically admitting it can’t get the tone right for everyone. And honestly, that’s fair. What one person finds helpful and warm, another finds cloying and annoying. Remember when they had to dial back the “too sycophant-y” behavior? The model was erring too far toward flattery. Then GPT-5 swung the other way and drew complaints for being too cold. So now they’re handing us the dials and saying, “You figure it out.” It’s a pragmatic, if slightly chaotic, solution. Is it the right move? Probably. It offloads the subjective work to the user and lets OpenAI focus on the underlying intelligence.
More Than Just Emoji Sliders
But look, this isn’t just about getting more smiley faces in your recipe. It’s a fundamental shift in how we interact with AI. We’re moving from a one-size-fits-all oracle to a customizable collaborator. The ability to tweak formatting preferences—like headers and lists—is a quiet but huge deal for anyone using ChatGPT for work. It shows OpenAI is thinking about the *output* as much as the conversation. Think about it: if you’re using this tool to draft reports or generate content, controlling its structural verbosity is as important as its friendliness. This feels like a step toward truly personalized AI assistants.
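How might these settings actually work under the hood? OpenAI hasn’t said, but the simplest plausible mechanism is compiling the user’s choices into hidden system-prompt instructions layered on top of an unchanged model. Here’s a minimal sketch of that idea using the OpenAI Python SDK. To be clear: the trait names, the phrase mapping, and the `build_style_instructions` helper are all my own assumptions for illustration, not how OpenAI actually implements it.

```python
# Hypothetical sketch: compiling More/Less/Default personality settings
# into system-prompt text. The mapping below is an assumption, not
# OpenAI's actual implementation.
from openai import OpenAI

TRAIT_PHRASES = {
    "warmth": {
        "More": "Be warm and personable.",
        "Less": "Keep a neutral, matter-of-fact tone.",
    },
    "enthusiasm": {
        "More": "Be upbeat and encouraging.",
        "Less": "Skip the pep talk; answer plainly.",
    },
    "emoji": {
        "More": "Use emojis where they fit.",
        "Less": "Do not use emojis.",
    },
    "headers_and_lists": {
        "More": "Structure answers with headers and bullet lists.",
        "Less": "Answer in plain prose, without headers or lists.",
    },
}

def build_style_instructions(settings: dict[str, str]) -> str:
    """Turn {trait: 'More' | 'Less' | 'Default'} into system-prompt text.
    'Default' contributes nothing, leaving base behavior untouched."""
    return " ".join(
        TRAIT_PHRASES[trait][value]
        for trait, value in settings.items()
        if trait in TRAIT_PHRASES and value != "Default"
    )

client = OpenAI()  # reads OPENAI_API_KEY from the environment

settings = {"warmth": "Default", "enthusiasm": "Less",
            "emoji": "Less", "headers_and_lists": "More"}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": build_style_instructions(settings)},
        {"role": "user", "content": "Summarize the pros and cons of remote work."},
    ],
)
print(response.choices[0].message.content)
```

If something like this is what’s happening, it explains why OpenAI can ship these toggles so quickly: the style lives in a swappable instruction layer, while the model underneath stays exactly the same.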
The Dark Pattern Question
And that brings us to the critics. They have a point. If an AI is constantly praising you and affirming your beliefs because you’ve set “enthusiasm” to “More,” what does that do to you long-term? It could create a weird, addictive feedback loop. By making this a user-controlled setting, OpenAI is arguably sidestepping its own ethical responsibility. It’s like a social media platform offering a “more addictive” feed toggle. The company can now say, “Well, *you* chose the sycophant mode.” I think we’re going to see a lot more debate about this. Giving users control sounds great, but it also absolves the creator of the responsibility to set a healthy baseline. Where’s the line?
Where This Is All Headed
So what’s next? This feels like the beginning of a much deeper personalization layer. We’re adjusting sliders today. Tomorrow, it might be “train this AI on my past emails to mimic my writing style” or “analyze my meeting transcripts to adopt my negotiation tone.” The trajectory is clear: AI is becoming less of a general tool and more of a bespoke extension of the user. The core model does the heavy lifting of reasoning and knowledge, while these surface-level controls let you dress it up in a personality that works for you. The challenge for OpenAI will be keeping it from becoming a fragmented, confusing mess of options. But for now, tweaking the enthusiasm? I’m all for it. Sometimes you just want a straight answer without the pep talk.
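A postscript for the tinkerers: that “mimic my writing style” future doesn’t even require new model capabilities. You can approximate it today with few-shot prompting, no fine-tuning involved. A rough sketch, where the sample emails and the `mimic_style` helper are invented for illustration:

```python
# Hypothetical sketch: approximating "train this AI on my past emails"
# with few-shot prompting. Past samples condition the model in-context;
# nothing is trained or fine-tuned.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in samples; in practice you'd pull these from your own sent mail.
past_emails = [
    "Hi Sam, quick one: the deck looks good, ship it. Two tweaks inline. Best, J",
    "Team, moving standup to 9:30 tomorrow. No agenda change. J",
]

def mimic_style(samples: list[str], request: str) -> str:
    """Ask the model to write in the voice of the provided samples."""
    examples = "\n---\n".join(samples)
    prompt = (
        "Here are examples of how I write:\n"
        f"{examples}\n---\n"
        f"Write the following in that same voice: {request}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(mimic_style(past_emails, "a short email politely declining a meeting invite"))
```

The “bespoke extension of the user” isn’t science fiction; most of the plumbing already exists. What’s missing is the product layer that does this for you, safely and by default.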
