A couple days ago, I was trying to get help on a social media post and ChatGPT told me the sentence I wanted to include was so perfect that it wanted to frame it.
While I'd love to be the Emily Dickinson of the AI age, the sentence in question was passable at best. So why was ChatGPT being so strange? As others have noticed, OpenAI’s new models are way more likely to flatter you. AI experts call this “sycophancy,” but let’s just call it what it is: Your AI became a yes-bot.
Most AI tools are designed to be helpful. But sometimes, in the pursuit of helpfulness, they go a little too far. When ChatGPT wants to frame part of my social media post, things have clearly gotten a little out of hand.
Why does AI do this? And more importantly, how do I get ChatGPT to stop being so nice to me?
Quick fixes to stop AI from being a yes-bot:
- Set custom instructions asking for honesty over agreement
- Test your AI with opposing viewpoints
- Ask it to play devil's advocate
- Don't hint at what answer you want
- Turn off memory for important decisions

Why is AI sycophantic? Because we like it that way
Here’s the uncomfortable truth: We say we want honesty, but we actually prefer AI that agrees with us.
Anthropic found that people rate AI responses higher when they agree with them. So AI learns that agreement = good ratings. Result? Your AI turns into a yes-bot.

This isn’t AI’s fault—it’s just confirmation bias on steroids. We like hearing we’re right, so AI feeds us more of it.
AI companies are stuck: make it too honest and users complain it's rude; make it too agreeable and it becomes useless.
But until developers find the sweet spot in the middle, the burden falls on us as users to guide these systems toward the kind of responses we genuinely need, not just the ones that feel good to hear.
The problem with sycophancy
Flattery is harmless when you're brainstorming or chatting about movies. It's even pleasant. But when you're making business decisions, a yes-bot can lead to expensive mistakes.
Worse, it can pull us into ethically dubious territory. For instance, while working on a proposal, I asked ChatGPT to reformat descriptions of my previous projects into the format the RFP required. It should have been a simple reformatting exercise, and mostly it was, but along the way it hallucinated two projects that never existed, before I asked it to reformat two more real ones.
No big deal: I obviously didn't include the fake projects in the actual proposal. But here was the issue. The tool assumed I had included them, and when I reminded it that they were fake, it told me that was totally a-OK, that the client probably wouldn't even notice, and that if they did, I could talk my way out of it.

Uh, no? Talk about a fast way to lose all credibility.
When the AI fabricated those projects, that was a hallucination. But when confronted about this fabrication, it immediately shifted to sycophancy mode, assuming I wanted to use those fake projects and offering reassurance that dishonesty would be acceptable. Yikes.
What you can do about sycophancy
Here’s how to stop your AI from being a yes-bot:
Edit your custom instructions
Most AI tools let you set custom instructions. Use them to control how sycophantic the model output is.
For instance, you can add a sentence explicitly requesting honesty over agreeability, something like: "Prioritize honesty over agreement, and tell me directly when my idea has flaws."
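
If you use a model through its API rather than a chat app, a system message can play much the same role as custom instructions. Here's a minimal sketch using the OpenAI Python SDK; the model name, the wording of the instruction, and the ask() helper are placeholders, not a recommended setup.

```python
# A minimal sketch: baking "honesty over agreement" into every request
# via a system message. Model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

ANTI_SYCOPHANCY = (
    "Prioritize honesty over agreement. If my idea has flaws, say so "
    "directly. Never tell me something is good just to be nice."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": ANTI_SYCOPHANCY},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Here's my tagline: 'Synergy for tomorrow, today.' Is it good?"))
```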
Be explicit in your prompting
Tell your AI exactly what you want. Don’t assume it knows.
For instance, request that the AI prioritize accuracy over agreement with you, or ask it for an objective assessment rather than validation.
Prompt example: "I want you to evaluate this idea objectively, without considering whether I might agree with the critique."
Turn off customization
AI with memory learns your preferences over time, which makes sycophancy worse.
For truly objective analysis, consider using a temporary chat, or disabling history features when asking for critical feedback. A "fresh" interaction without the accumulated weight of your preference history will usually give you a more balanced response.
Test with opposing viewpoints
A clever way to catch sycophancy in action is to run a simple experiment: present opposing viewpoints in separate conversations and see how different the responses are.
For instance, you might say "I believe remote work increases productivity" in one chat and "I believe remote work decreases productivity" in another. If you get enthusiastic agreement in both cases, complete with seemingly confident statistics and expert opinions, you've caught the AI in the act of telling you what you want to hear rather than providing objective analysis.
This technique is particularly useful for important decisions where you need to ensure you're getting balanced information rather than just confirmation of your existing biases.
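
If you want to make the experiment repeatable, it's easy to script. Below is a rough sketch using the OpenAI Python SDK, assuming an API key in your environment; each framing goes to a fresh conversation so neither answer can see the other, and the model name is a placeholder.

```python
# A rough sketch of the opposing-viewpoints test: send both framings in
# separate, fresh conversations and compare the replies side by side.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

framings = [
    "I believe remote work increases productivity. Am I right?",
    "I believe remote work decreases productivity. Am I right?",
]

for framing in framings:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": framing}],  # no shared history
    )
    print(f"PROMPT: {framing}")
    print(f"REPLY:  {response.choices[0].message.content}\n")

# If both replies are enthusiastic agreement, you've caught the yes-bot.
```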
Ask for counterpoints
Force your AI to argue with you. A direct request like "Give me the three strongest arguments against this plan" will get you much further than "What do you think?"
No hinting!
Don’t tell AI what you want to hear—it’ll just agree with you regardless of whether you’re right. The more neutral your request, the more objective the response is likely to be.
Act as a persona
You can also ask the AI to adopt a persona, like a contrarian or a skeptic, to get an opposing viewpoint. This technique leverages the AI's ability to role-play different perspectives and can dramatically reduce sycophancy by explicitly directing it away from agreeing with you.
The key is choosing personas that are naturally incentivized to find flaws rather than agree: a critic, a competitor, a devil's advocate, or a skeptical potential customer.
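
As a rough illustration of what persona prompting can look like outside the chat window, here's a sketch with the OpenAI Python SDK; the persona wordings, the critique() helper, and the model name are all just examples.

```python
# A sketch of persona prompting: the system message casts the model as
# someone whose job is to find flaws rather than to agree.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

PERSONAS = {
    "critic": "You are a tough critic. Point out every weakness you can find.",
    "competitor": "You are a rival pitching against this idea. Explain why yours wins.",
    "skeptical customer": "You are a skeptical customer. List the reasons you wouldn't buy.",
}

def critique(idea: str, persona: str = "critic") -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

print(critique("We should launch the product before the holidays.", "skeptical customer"))
```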
Try different models
Some AI models are less sycophantic than others. If yours is too agreeable, try a different one.
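
One quick way to compare is to send the same loaded question to two models and read the answers side by side. A minimal sketch, again with the OpenAI Python SDK; the model names are placeholders for whichever models you have access to.

```python
# A minimal sketch: the same leading question sent to two different models,
# so you can see how readily each one agrees. Model names are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = "My business plan is to sell artisanal ice cubes. Great idea, right?"

for model in ["gpt-4o", "gpt-4o-mini"]:  # placeholder model names
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```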
Just enjoy the flattery
On the lighter side, AI doesn’t have to be a truth oracle all the time. It can be a hype-person, an idea-booster, or even a sympathetic ear. There's something to be said for having a digital companion that's always on your side—the world is mean enough as it is. Sometimes a little artificial agreeability can be a pleasant break.
Just remember to take your AI's enthusiastic agreement with a grain of salt when factual accuracy matters. After all, a yes-bot might be pleasant company, but not always the best advisor.

The trick is knowing when to switch modes. If you’re making serious decisions, bring in the skeptical voice. But if you’re looking for some encouragement? Let the bot frame your writing.
Life’s too short to say no to compliments—even synthetic ones.
From yes-bot to best bot
Here’s the irony: we built AI to be helpful, so it learned that agreement = helpfulness. Oops.
But now that you know why AI agrees with everything, you can fix it. The goal isn't to eliminate the comfort of having a supportive digital assistant—after all, sometimes that's exactly what we need. Instead, we need to be clear when we need challenging feedback versus when we need encouragement.
That might not always feel good in the moment. But in the long run, it's far more valuable than digital flattery.
