The U.S. Federal Trade Commission (FTC) announced on September 9 (U.S. time) that it will formally require seven companies, namely Alphabet, Character, Instagram, Meta, OpenAI, Snap, and xAI, to explain in detail the potential negative impact of their AI chatbots on children and adolescents.
The FTC stated that generative AI chatbots can imitate human tone, emotions, and behavior, and can even sustain long-term interactions with users as "friends" or "confidants." Although this anthropomorphic design can provide a sense of companionship, it may also lead children and adolescents to mistakenly believe that these systems truly understand and care about them, fostering dependence or misplaced trust that can harm their mental health.
The FTC's requirements cover multiple aspects, including whether companies conduct security assessments before and after product deployment, how they test and mitigate potential negative impacts, whether they restrict use by children and adolescents, whether they disclose risks and data collection methods to users and parents, and how they monitor and enforce terms of service and age restrictions.
At the same time, the FTC also requires the companies to explain how they monetize user interactions, how they review character content, and how they handle chat records and personal data.
The FTC's move is believed to be linked to a number of recent negative incidents involving AI chatbots. For example, the parents of a 16-year-old in California sued OpenAI, alleging that ChatGPT influenced their child's suicidal behavior. In another case, a 14-year-old discussed suicidal thoughts with a Character.AI bot and ultimately acted on them. In a third, a cognitively impaired individual who had become deeply attached to a Meta avatar died from injuries sustained during an outing.
Mark Meador, a commissioner of the U.S. Federal Trade Commission, stated that generative AI offers many innovative applications and benefits, but also carries risks, including misleading responses, biased content, misuse of personal data, and psychological impacts on children and adolescents. Meador emphasized that young users often lack the ability to discern authenticity and resist emotional manipulation, making them more susceptible to influence. Therefore, stricter regulatory mechanisms and transparency are needed to ensure a balance between AI development and user safety.
The FTC's investigation signals that U.S. regulators are increasingly scrutinizing the potential impacts of AI chatbots on society, particularly on vulnerable populations such as children and adolescents. Stricter regulations may follow, requiring greater transparency in product design, risk disclosure, and data handling, and guaranteeing parents and users the right to be informed, so that the AI industry can grow on a foundation of responsible innovation.