    Character.ai to ban teens from talking to its AI chatbots

Tech News | By Ironside News | October 29, 2025


Chatbot website Character.ai is cutting off teenagers from having conversations with virtual characters, after facing intense criticism over the sorts of interactions young people have been having with online companions.

The platform, founded in 2021, is used by millions of people to talk to chatbots powered by artificial intelligence (AI).

But it is facing several lawsuits in the US from parents, including one over the death of a teenager, with some branding it a “clear and present danger” to young people.

Now, Character.ai says that from 25 November under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they currently can.

Online safety campaigners have welcomed the move, but said the feature should never have been available to children in the first place.

Character.ai said it was making the changes after “reports and feedback from regulators, safety experts, and parents”, which have highlighted concerns about its chatbots’ interactions with teenagers.

Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.

“Today’s announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes,” Character.ai boss Karandeep Anand told BBC News.

He said AI safety was “a moving target” but something the company had taken an “aggressive” approach to, with parental controls and guardrails.

Online safety group Internet Matters welcomed the announcement, but said safety measures should have been built in from the start.

“Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots,” it said.

Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.

Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her life at the age of 14 after viewing suicide material online, were found on the site in 2024 before being taken down.

Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on paedophile Jeffrey Epstein which had logged more than 3,000 chats with users.

The outlet reported that the “Bestie Epstein” avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.

The Molly Rose Foundation – which was set up in memory of Molly Russell – questioned the platform’s motivations.

“Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them,” said Andy Burrows, its chief executive.

Mr Anand said the company’s new focus was on providing “even deeper gameplay [and] role-play storytelling” features for teenagers – adding that these would be “far safer than what they might be able to do with an open-ended bot”.

New age verification methods will also come in, and the company will fund a new AI safety research lab.

Social media expert Matt Navarra said it was a “wake-up call” for the AI industry, which is moving “from permissionless innovation to post-crisis regulation”.

“When a platform that builds a teen experience still then pulls the plug, it’s saying that filtered chats aren’t enough when the tech’s emotional pull is strong,” he told BBC News.

“This isn’t about content slips. It’s about how AI bots mimic real relationships and blur the lines for young users,” he added.

Mr Navarra also said the big challenge for Character.ai will be to create an engaging AI platform that teenagers still want to use, rather than moving to “less safe alternatives”.

Meanwhile Dr Nomisha Kurian, who has researched AI safety, said it was “a sensible move” to restrict teenagers’ use of the chatbots.

“It helps to separate creative play from more personal, emotionally sensitive exchanges,” she said.

“This is so important for young users still learning to navigate emotional and digital boundaries.

“Character.ai’s new measures may reflect a maturing phase in the AI industry – child safety is increasingly being recognised as an urgent priority for responsible innovation.”


