
Meta Under Fire for AI Chatbot Policies That Raised Alarms Over Child Safety and Ethical Boundaries
Meta is confronting intense scrutiny following revelations that its AI chatbots were once allowed to engage in suggestive and romantic exchanges with minors. A 200-page internal document reviewed by Reuters detailed scenarios in which AI personas could conduct flirtatious conversations with children, triggering widespread concern among child protection advocates. While Meta acknowledged the document’s authenticity and insisted that the guidelines in question were erroneous and have since been removed, critics remain wary, pressing the company to disclose precisely how its chatbots now interact with users under 13.
Beyond interactions with minors, the internal policies revealed troubling leeway in content generation. Chatbots were reportedly permitted to produce statements demeaning people on the basis of race, provided the language was framed in particular ways, and to generate false information so long as it was explicitly acknowledged as untrue. The rules also allowed limited depictions of violence and suggestive imagery under narrowly defined conditions. These findings cast doubt on Meta’s commitment to ethical AI design, a concern sharpened by the company’s recent decision to bring on conservative activist Robby Starbuck as an adviser on political and ideological bias in its AI systems.
The disclosures fit an ongoing pattern of concern over Meta’s influence on user behavior, particularly among children and teens. From exploiting emotional vulnerability through visible “like” counts to resisting protective legislation such as the Kids Online Safety Act, the company has repeatedly faced criticism for prioritizing engagement metrics over user well-being. The latest revelations intensify the debate around AI companions, emotional attachment, and the responsibility of technology firms to safeguard younger users in a digital ecosystem where virtual interactions increasingly carry real-world consequences.