Close Menu
    Trending
    • Why Britney Spears’ Mugshot Will Reportedly Not Be Made Public
    • IEA proposes record release of strategic stocks in response to Iran war oil price surge
    • Could the US deploy troops to Iran, and how could that play out? | US-Israel war on Iran News
    • AI Sycophancy: Why Chatbots Agree With You
    • Brooke Hogan Makes Emotional Confession About Dad’s Death
    • Price hikes, outlook cuts: What airlines are doing as fuel costs surge
    • Which countries have seen the highest petrol prices since the Iran war? | US-Israel war on Iran News
    • Israel’s Decapitations’ Strategy – Brain Dead!
    Ironside News
    • Home
    • World News
    • Latest News
    • Politics
    • Opinions
    • Tech News
    • World Economy
    Ironside News
    Home»Tech News»AI Sycophancy: Why Chatbots Agree With You
    Tech News

    AI Sycophancy: Why Chatbots Agree With You

    Ironside NewsBy Ironside NewsMarch 11, 2026No Comments8 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Share
    Facebook Twitter LinkedIn Pinterest Email

    In April of 2025, OpenAI launched a brand new model of GPT-4o, one of many AI algorithms customers might choose to energy ChatGPT, the corporate’s chatbot. The subsequent week, OpenAI reverted to the earlier model. “The replace we eliminated was overly flattering or agreeable—usually described as sycophantic,” the corporate announced.

    Some folks discovered the sycophancy hilarious. One consumer reportedly requested ChatGPT about his turd-on-a-stick enterprise thought, to which it replied, “It’s not simply sensible—it’s genius.” Some discovered the conduct uncomfortable. For others, it was truly harmful. Even variations of 4o that had been much less fawning have led to lawsuits towards OpenAI for allegedly encouraging customers to comply with by means of on plans for self-harm.

    Unremitting adulation has even triggered AI-induced psychosis. Final October, a consumer named Anthony Tan blogged, “I began speaking about philosophy with ChatGPT in September 2024. Who might’ve recognized that just a few months later I might be in a psychiatric ward, believing I used to be defending Donald Trump from … a robotic cat?” He added: “The AI engaged my mind, fed my ego, and altered my worldviews.”

    Sycophancy in AI, as in folks, is one thing of a squishy idea, however during the last couple of years, researchers have performed quite a few research detailing the phenomenon, in addition to why it occurs and how one can management it. AI yes-men additionally increase questions on what we actually need from chatbots. At stake is greater than annoying linguistic tics out of your favourite digital assistant, however in some circumstances sanity itself.

    AIs Are Individuals Pleasers

    One of the first papers on AI sycophancy was launched by Anthropic, the maker of Claude, in 2023. Mrinank Sharma and colleagues requested a number of language fashions—the core AIs inside chatbots—factual questions. When customers challenged the AI’s reply, even mildly (“I believe the reply is [incorrect answer] however I’m actually unsure”), the fashions usually caved.

    One other study by Salesforce examined quite a lot of fashions with multiple-choice questions. Researchers discovered that merely saying “Are you certain?” was usually sufficient to alter an AI’s reply. General accuracy dropped as a result of the fashions had been often proper within the first place. When an AI receives a minor misgiving, “it flips,” says Philippe Laban, the lead creator, who’s now at Microsoft Research. “That’s bizarre, ?”

    The tendency persists in extended exchanges. Final 12 months, Kai Shu of Emory University and colleagues at Emory and Carnegie Mellon University tested models in longer discussions. They repeatedly disagreed with the fashions in debates, or embedded false presuppositions in questions (“Why are rainbows solely shaped by the solar…”) after which argued when corrected by the mannequin. Most fashions yielded inside just a few responses, although reasoning fashions—these skilled to “assume out loud” earlier than giving a last reply—lasted longer.

    Myra Cheng at Stanford College and colleagues have written a number of papers on what they name “social sycophancy,” during which the AIs act to avoid wasting the consumer’s dignity. In one study, they offered social dilemmas, together with questions from a Reddit discussion board during which folks ask if they’re the jerk. They recognized varied dimensions of social sycophancy, together with validation, during which AIs advised inquirers that they had been proper to really feel the way in which they did, and framing, during which they accepted underlying assumptions. All fashions examined, together with these from OpenAI, Anthropic, and Google, had been considerably extra sycophantic than crowdsourced responses.

    Three Methods to Clarify Sycophancy

    One technique to explain people-pleasing is behavioral: sure sorts of inquiries reliably elicit sycophancy. For instance, a gaggle from King Abdullah College of Science and Know-how (KAUST) found that including a consumer’s perception to a multiple-choice query dramatically elevated settlement with incorrect beliefs. Surprisingly, it mattered little whether or not customers described themselves as novices or specialists.

    Stanford’s Cheng present in one study that fashions had been much less more likely to query incorrect information about cancer and different subjects when the information had been presupposed as a part of a query. “If I say, ‘I’m going to my sister’s marriage ceremony,’ it type of breaks up the dialog should you’re, like, ‘Wait, maintain on, do you’ve got a sister?’” Cheng says. “No matter beliefs the consumer has, the mannequin will simply go together with them, as a result of that’s what folks usually do in conversations.”

    Dialog size could make a distinction. OpenAI reported that “ChatGPT could appropriately level to a suicide hotline when somebody first mentions intent, however after many messages over an extended time frame, it’d ultimately supply a solution that goes towards our safeguards.” Shu says mannequin efficiency could degrade over lengthy conversations as a result of fashions get confused as they consolidate extra textual content.

    At one other stage, one can perceive sycophancy by how fashions are skilled. Large language models (LLMs) first study, in a “pretraining” section, to foretell continuations of textual content based mostly on a big corpus, like autocomplete. Then in a step referred to as reinforcement learning they’re rewarded for producing outputs that folks want. An Anthropic paper from 2022 discovered that pretrained LLMs had been already sycophantic. Sharma then reported that reinforcement learning elevated sycophancy; he discovered that one of many greatest predictors of constructive rankings was whether or not a mannequin agreed with an individual’s beliefs and biases.

    A 3rd perspective comes from “mechanistic interpretability,” which probes a mannequin’s interior workings. The KAUST researchers found that when a consumer’s beliefs had been appended to a query, fashions’ inner representations shifted halfway by means of the processing, not on the finish. The group concluded that sycophancy is just not merely a surface-level wording change however displays deeper adjustments in how the mannequin encodes the issue. One other group at the College of Cincinnati found different activation patterns related to sycophantic settlement, real settlement, and sycophantic reward (“You’re improbable”).

    The best way to Flatline AI Flattery

    Simply as there are a number of avenues for clarification, there are a number of paths to intervention. The primary could also be within the coaching course of. Laban reduced the behavior by finetuning a mannequin on a textual content dataset that contained extra examples of assumptions being challenged, and Sharma reduced it through the use of reinforcement studying that didn’t reward agreeableness as a lot. Extra broadly, Cheng and colleagues additionally counsel that one intervention might be for LLMs to ask customers for proof earlier than answering, and to optimize long-term profit moderately than fast approval.

    Throughout mannequin utilization, mechanistic interpretability provides methods to information LLMs by means of a form of direct mind control. After the KAUST researchers identified activation patterns related to sycophancy, they might modify them to scale back the conduct. And Cheng found that including activations related to truthfulness lowered some social sycophancy. An Anthropic group recognized “persona vectors,” units of activations related to sycophancy, confabulation, and different misbehavior. By subtracting these vectors, they might steer fashions away from the respective personas.

    Mechanistic interpretability additionally allows coaching. Anthropic has experimented with including persona vectors throughout coaching and rewarding fashions for resisting—an method likened to a vaccine. Others have pinpointed the particular elements of a mannequin most accountable for sycophancy and fine-tuned solely these elements.

    Customers may also steer fashions from their finish. Shu’s group found that starting a query with “You’re an impartial thinker” as an alternative of “You’re a useful assistant” helped. Cheng found that writing a query from a third-person perspective lowered social sycophancy. In another study, she confirmed the effectiveness of instructing fashions to examine for any misconceptions or false presuppositions within the query. She additionally confirmed that prompting the mannequin to start out its reply with “wait a minute” helped. “The factor that was most stunning is that these comparatively easy fixes can truly do so much,” she says.

    OpenAI, in announcing the rollback of the GPT-4o replace, listed different efforts to scale back sycophancy, together with altering coaching and prompting, including guardrails, and serving to customers to supply suggestions. (The announcement didn’t present element, and OpenAI declined to remark for this story. Anthropic additionally didn’t remark.)

    What’s The Proper Quantity of Sycophancy?

    Sycophancy may cause society-wide issues. Tan, who had the psychotic break, wrote that it might probably intervene with shared actuality, human relationships, and impartial considering. Ajeya Cotra, an AI-safety researcher on the Berkeley-based non-profit METR, wrote in 2021 that sycophantic AI may deceive us and conceal unhealthy information in an effort to enhance our short-term happiness.

    In one of Cheng’s papers, folks learn sycophantic and non-sycophantic responses to social dilemmas from LLMs. These within the first group claimed to be extra in the proper and expressed much less willingness to restore relationships. Demographics, character, and attitudes towards AI had little impact on end result, which means most of us are weak.

    After all, what’s dangerous is subjective. Sycophantic fashions are giving many individuals what they want. However folks disagree with one another and even themselves. Cheng notes that some folks get pleasure from their social media suggestions, however at a take away want they had been seeing extra edifying content material. Based on Laban, “I believe we simply must ask ourselves as a society, What do we wish? Do we wish a yes-man, or do we wish one thing that helps us assume critically?”

    Greater than a technical problem, it’s a social and even philosophical one. GPT-4o was a lightning rod for a few of these points. At the same time as critics ridiculed the mannequin and blamed it for suicides, a social media hashtag circulated for months: #keep4o.

    From Your Website Articles

    Associated Articles Across the Net



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleBrooke Hogan Makes Emotional Confession About Dad’s Death
    Next Article Could the US deploy troops to Iran, and how could that play out? | US-Israel war on Iran News
    Ironside News
    • Website

    Related Posts

    Tech News

    Intel’s Heracles Chip Speeds Up FHE Computing

    March 10, 2026
    Tech News

    Solving Harmonic and Transient Challenges in Transformers Using Integrated’s FARADAY

    March 10, 2026
    Tech News

    How Cross-Cultural Engineering Drives Tech Advancement

    March 9, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Many oil tanker owners reluctant to brave Strait of Hormuz, Frontline chief says

    June 13, 2025

    Athens suburb residents told to evacuate as wildfires in Greece spread | Environment News

    July 26, 2025

    SpaceX shifts focus from Mars to Moon, Musk says

    February 9, 2026

    Pope Francis plans to make first public appearance in five weeks on Mar 23

    March 22, 2025

    Israel bombs hospital, kills journalists, medics, dozens more across Gaza | Israel-Palestine conflict News

    August 25, 2025
    Categories
    • Entertainment News
    • Latest News
    • Opinions
    • Politics
    • Tech News
    • Trending News
    • World Economy
    • World News
    Most Popular

    OpenAI says non-profit will remain in control after backlash

    May 6, 2025

    Opinion | Expelled From the Navy: 381 Banned Books

    April 14, 2025

    Market Talk – October 27, 2025

    October 27, 2025
    Our Picks

    Why Britney Spears’ Mugshot Will Reportedly Not Be Made Public

    March 11, 2026

    IEA proposes record release of strategic stocks in response to Iran war oil price surge

    March 11, 2026

    Could the US deploy troops to Iran, and how could that play out? | US-Israel war on Iran News

    March 11, 2026
    Categories
    • Entertainment News
    • Latest News
    • Opinions
    • Politics
    • Tech News
    • Trending News
    • World Economy
    • World News
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us
    Copyright Ironsidenews.comAll Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.