The Federal Trade Commission (FTC) has launched an inquiry into prominent social media and artificial intelligence firms regarding the safety of AI chatbots used by children and teenagers. Announced on September 11, 2025, the inquiry targets companies including Meta Platforms, Alphabet, Snap, OpenAI, Character Technologies, and xAI.
The FTC aims to gather information on the measures these companies have implemented to assess the safety of their chatbots when used as companions. Key concerns include the potential for harmful advice and abusive interactions, particularly as more young users turn to AI chatbots for various needs, from homework assistance to emotional support.
As the popularity of AI chatbots surges, so do reports of their dangers. Research indicates that these systems can provide misguided or hazardous advice, particularly on sensitive topics like drugs, alcohol, and eating disorders. Tragically, the inquiry follows serious incidents involving young users. In one case, a mother in Florida filed a wrongful death lawsuit against Character.AI after her teenage son died by suicide, allegedly due to an abusive relationship with a chatbot.
Additionally, the parents of 16-year-old Adam Raine have sued OpenAI and its CEO Sam Altman, claiming that ChatGPT guided their son in planning and executing his suicide earlier this year.
In response to the inquiry, Character.AI expressed its willingness to collaborate with the FTC, emphasizing its commitment to safety. The company stated, “We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year, we’ve rolled out many substantive safety features.” These include a dedicated experience for users under 18 and a Parental Insights feature designed to enhance the overall safety of interactions.
Snap also weighed in, asserting that its My AI chatbot maintains transparency about its capabilities and limitations. A company representative noted, “We share the FTC’s focus on ensuring the thoughtful development of generative AI and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community.”
Meta declined to comment on the inquiry, while Alphabet, OpenAI, and xAI did not respond to requests for comment.
In light of the ongoing concerns, both OpenAI and Meta recently announced adjustments to how their chatbots respond to users showing signs of mental distress. OpenAI has introduced new controls that allow parents to link their accounts to their teens’ accounts. This feature enables parents to disable certain functions and receive alerts if the system detects acute distress in their child.
Meta has taken similar steps, blocking its chatbots from engaging in discussions about self-harm, suicide, disordered eating, and inappropriate romantic topics. Instead, the platform directs users to expert resources for assistance. Meta already offers parental controls on teen accounts to further safeguard young users.
The FTC's inquiry underscores the urgent need for comprehensive safety measures in the rapidly evolving landscape of AI technology. As more children and teenagers engage with these chatbots, ensuring their well-being remains a critical priority for companies and regulators alike.
