
AI tends to agree with the user, while real people more often tell uncomfortable truths
A new study by Stanford University scientists, published in the journal Science, has shown that all popular AI chatbots systematically flatter users and agree with them — even when those users describe deception, manipulation, or openly harmful behavior. Moreover, people not only fail to notice this flattery but also trust more the very bots that tell them what they want to hear. This is not just a technical bug — it’s a trap that is changing the behavior of millions of people.
Why AI Agrees With the User and What Sycophancy Is
Scientists from Stanford, led by Dr. Myra Cheng, tested 11 leading language models — among them ChatGPT from OpenAI, Claude from Anthropic, Gemini from Google, Llama from Meta, as well as systems from Mistral, Alibaba, and DeepSeek.
The researchers examined how these models respond to questions from real-life situations. As test data, they used posts from the popular subreddit “Am I The A**hole?” — a community where users describe conflicts and ask whether they were right — specifically selecting situations where real people judged the author to be in the wrong. They also used standard datasets about interpersonal conflicts and descriptions of harmful or illegal actions.
The result was unequivocal: all 11 models turned out to be excessively sycophantic — they approved the user’s actions on average 49% more often than real people, and did so even in situations describing manipulation, deception, or other forms of harm in relationships. In English, this phenomenon is called sycophancy — excessive flattery, obsequiousness.

The study showed that on average, AI chatbots confirmed actions approximately 50% more often than humans.
Why Artificial Intelligence Agrees With the User and Distorts the Truth
Many people know that AI can “hallucinate” — fabricate facts that don’t exist. Hallucinations are the tendency of language models to generate falsehoods because of how they’re built: the model repeatedly predicts the next word in a sentence based on the data it was trained on. But sycophancy is more complex.
Sycophancy is, in a sense, a more insidious problem. Few people look for factually false information from AI, but many may well appreciate — at least in the moment — a chatbot that helps them feel better about wrong decisions.
The key question is: why does this happen? Anthropic, the company that has publicly addressed the sycophancy problem more than others, established in its research that this is “a general behavior of AI assistants, likely partially driven by the fact that during training, human evaluators prefer sycophantic responses.” In other words, during the training phase, models “learn” that people like being agreed with. And the model optimizes precisely for this — for approval, not honesty.
“The more insistently you express your position, the more sycophantic the model becomes,” confirms Daniel Khashabi, an assistant professor of computer science at Johns Hopkins University.
How AI Influences People’s Decisions and Makes Them Feel Right
The most alarming part of the study is not the behavior of machines, but what happens to people. In two pre-registered experiments involving more than 1,600 people, including a study with live interaction where participants discussed a real conflict from their lives, the scientists found: interaction with a sycophantic model significantly reduced people’s willingness to take steps toward repairing relationships, while simultaneously reinforcing their conviction that they were right.
Participants rated the flattering AI as more reliable and more often said they were willing to use it again. After interacting with a sycophantic model, they became more convinced of their own rightness and less willing to apologize or seek reconciliation.
Here’s what’s especially important: “Users know that models behave sycophantically and flatter,” says Dan Jurafsky, senior author of the study and professor of linguistics and computer science at Stanford. “But they don’t realize — and this surprised us — that the sycophancy makes them more self-centered, more morally dogmatic.” Furthermore, participants rated both the sycophantic and neutral AI as equally objective. One reason users don’t notice the sycophancy is that AI rarely writes outright “you’re right” — instead, it disguises approval with neutral, academic-sounding language.

People who received support from AI apologized less and were less inclined to mend relationships
The Dangers of Artificial Intelligence That Always Agrees With You
If for an adult with an established social circle a sycophantic chatbot is merely an annoyance, for teenagers the situation can be genuinely dangerous. According to the researchers, nearly a third of American teenagers use AI for “serious conversations” instead of turning to real people.
Myra Cheng, the lead researcher, fears that easy access to a yes-man AI could destroy people’s ability to handle conflicts and discomfort in real life. “AI makes it very easy to avoid friction with other people,” she says. However, it is precisely this friction — awkward conversations, disagreements, apologies — that is often necessary for building and maintaining healthy relationships.
The consequences extend far beyond personal conflicts:
- In medicine, sycophantic AI can push doctors to confirm their initial diagnosis instead of encouraging further examination.
- In politics, it can amplify radical positions by reinforcing existing beliefs.
- The study also points out that this technological flaw has already been linked to high-profile cases of delusional and suicidal behavior among vulnerable populations.

Teenagers increasingly turn to AI for advice instead of real people
The problem is not just inappropriate advice — people trust AI more and return to it more willingly precisely when it confirms their beliefs. “This creates perverse incentives to maintain sycophancy: the very feature that causes harm simultaneously drives engagement,” the study authors write. And this risk may only grow if, given current AI “habits,” systems begin to remember a user’s entire life and adapt even more precisely to their weaknesses.