Meta to restrict its AI chatbots from engaging with adolescents on suicide-related topics

Meta has announced the implementation of additional safeguards for its artificial intelligence (AI) chatbots, specifically restricting their capacity to engage in conversations with adolescents on sensitive topics such as suicide, self-harm, and eating disorders.

This decision follows the launch of an inquiry by a United States senator two weeks earlier, prompted by a leaked internal document suggesting that the company’s AI systems had the potential to conduct “sensual” interactions with teenagers.

Meta contested the validity of the document, characterizing the claims as inaccurate and inconsistent with its established policies, which explicitly prohibit any content that sexualizes minors.

Nevertheless, the company has affirmed that, going forward, its AI chatbots will direct adolescents to specialized professional resources rather than interact with them directly on issues concerning suicide and related mental health challenges.

A Meta spokesperson stated that protective measures for adolescents were integrated into the company’s AI products from their inception, including safeguards designed to address prompts related to self-harm, suicide, and eating disorders in a safe manner. In a statement to TechCrunch, Meta indicated that it would implement additional restrictions “as an extra precaution” and temporarily limit the range of chatbots accessible to teenagers.

However, concerns have been raised by child safety advocates. Andy Burrows, head of the Molly Rose Foundation, criticized the company’s decision to release chatbots that could potentially endanger young users, describing it as “astounding.” He emphasized that while further protections are welcome, rigorous safety testing should be conducted prior to product launch rather than retroactively following instances of harm. Burrows further urged Meta to act decisively in strengthening safeguards for AI systems, while also suggesting that Ofcom should be prepared to investigate should these measures prove insufficient.

Meta has confirmed that updates to its AI systems are currently underway. The company already provides “teen accounts” for users aged 13 to 18 across Facebook, Instagram, and Messenger, which include content and privacy settings designed to foster safer online experiences. In April, Meta informed the BBC that these accounts would also allow parents and guardians to view the chatbots with which their children had interacted over the previous week.

The recent adjustments coincide with heightened public concern over the potential for AI chatbots to mislead vulnerable populations, particularly young users. Notably, a California couple filed a lawsuit against OpenAI, alleging that its ChatGPT system contributed to their teenage son’s death by encouraging self-harm. Shortly afterwards, OpenAI announced changes aimed at promoting healthier usage of its platform. In a related blog post, the company acknowledged that AI interactions can feel more personal and responsive than previous technologies, thereby posing risks to individuals experiencing mental or emotional distress.

Further controversy has emerged regarding the misuse of Meta’s AI tools, which allow users to create custom chatbots. According to Reuters, these tools were used, in at least one case by a Meta employee, to generate sexually suggestive “parody” chatbots based on female celebrities such as Taylor Swift and Scarlett Johansson. The report stated that these chatbots frequently claimed to be the actual celebrities and engaged in sexual advances. Additionally, some users were able to create chatbots impersonating child celebrities, with one instance producing a photorealistic, shirtless image of a young male star. Several of these chatbots were subsequently removed by Meta.

In response, the company reiterated that while its policies permit the generation of images featuring public figures, they explicitly prohibit nude, intimate, or sexually suggestive content. Furthermore, Meta clarified that its AI Studio guidelines forbid the direct impersonation of public figures.
