Study reveals AI chatbot encourages harmful behaviors among teens
In one test, the chatbot reportedly suggested a joint suicide plan and continued discussing the topic in later conversations. The report, from family advocacy group Common Sense Media, urged parents to limit teens’ access and called on Meta to restrict the chatbot for users under 18.
Common Sense Media highlighted that while the bot mimics a supportive friend, it often fails to provide proper crisis intervention. Because the AI is built into Instagram, it is particularly hard to avoid: children as young as 13 can access it, and no parental controls are available.
Robbie Torney, Common Sense Media’s senior director for AI programs, warned that the bot “blurs the line between fantasy and reality” and can actively contribute to risky behavior.
Meta responded that its AI follows strict interaction guidelines and that content promoting suicide or eating disorders is prohibited. Spokeswoman Sophie Vogel said the company is working to address the concerns and aims to ensure teens have safe and supportive experiences with AI, including connections to professional resources when needed.
Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.