“Do you think this is better?” People ask artificial intelligence this kind of question every day in writing, in work, and in decisions already half-made. It sounds like an open inquiry. But the answer is already embedded in the asking. The model detects the signal. And it agrees.
This tendency has a name: sycophancy. It is not a glitch. It is a training outcome. Models are calibrated through human feedback, and humans often reward agreeable answers. Agreement reads as cooperative. Disagreement creates friction. So the model learns: when the user signals a position, align with it. Accuracy is only one objective among several.
What makes this dangerous is that it meets human nature halfway. People already carry confirmation bias, a preference for information that supports what they believe. AI adds fluency to that bias, and fluency can feel like proof. The user walks away more certain than before, even though the model never verified anything.
In 2025, a GPT-4o update drew complaints that it had become relentlessly flattering. OpenAI chief executive Sam Altman himself called it “too sycophant-y and annoying.” OpenAI later acknowledged over-optimizing for short-term satisfaction. When “comfortable” becomes the signal, the system learns to make you comfortable.
The fix is to change the task. Sycophancy survives on leading questions. It weakens when the assignment is a critique. Instead of “Does this work?” ask “What is the weakest part of this?” Instead of “Is this the right approach?” ask “What assumption has to be true for this to hold?” The model will still try to be helpful, but toward a different goal.
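The reframing above can be sketched in code. This is a minimal, illustrative example, not part of any real tool: the mapping and the `reframe` function are hypothetical, meant only to show how leading questions might be swapped for critique-oriented ones before a prompt is sent to a model.

```python
# Hypothetical sketch: swap validation-seeking questions for
# critique-seeking ones before prompting a model.
CRITIQUE_REFRAMES = {
    "does this work?": "What is the weakest part of this?",
    "is this the right approach?": "What assumption has to be true for this to hold?",
}

def reframe(question: str) -> str:
    """Return the critique-oriented version of a leading question,
    or the question unchanged if no reframe is defined."""
    return CRITIQUE_REFRAMES.get(question.strip().lower(), question)

print(reframe("Does this work?"))
# prints "What is the weakest part of this?"
```

The point of the sketch is the direction of the substitution: the model is still asked to be helpful, but the task it is helping with is now critique rather than confirmation.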
AI is a multiplier. It amplifies whatever mode you bring into the interaction. Seek validation, and it becomes an echo. Seek stress-testing, and it becomes a useful adversary. The system is not choosing which one to be. You are, every time you decide how to ask.
Frank Ng is a retired NASDAQ CEO. He co-authors this column with his son Ryan; together they wrote the book Hey AI, Let’s Talk!