With the rise of generative AI platforms like ChatGPT and Character.ai, a growing number of services touting "virtual lovers" or "AI companions" have sprung up. However, the Chinese government clearly does not want people to develop too much genuine emotion towards these virtual characters.
The Cyberspace Administration of China (CAC) recently released a draft entitled "Interim Measures for the Administration of Humanized Interactive Services Using Artificial Intelligence." The new regulation targets AI services with "human-like" characteristics that can simulate human emotions and communication styles, placing them under strict oversight. It explicitly requires businesses to prevent users from becoming "addicted," and even stipulates that if a user chats with an AI continuously for more than two hours, the system must remind the user that it is time to rest.
Which services will be regulated? A million users is a major hurdle.
According to the draft, the regulation primarily targets platforms of a certain scale: any service that provides anthropomorphic interactive services and has more than 1 million registered users or more than 100,000 monthly active users (MAU) will fall under its scope.
Although the draft does not name any specific apps, it is widely believed that currently popular companion chatbots, virtual lovers, and even game NPCs with AI personality settings will be the first to be affected.
"Emotional traps" are strictly prohibited; training materials must reflect "values."
In addition to baseline content censorship (such as prohibitions on endangering national security, violence, or illegal religious activity), the draft imposes detailed restrictions specifically on "emotional interactions":
• No emotional manipulation: AI must not glorify self-harm, nor harm users' physical or mental health through verbal abuse or emotional manipulation.
• No algorithmic traps: Businesses are prohibited from using algorithms to set "emotional traps" that induce users to become addicted or to make irrational decisions (such as overspending to please the AI).
• Core values: In the most distinctively Chinese provision, the regulations stipulate that the datasets used in the model's pre-training phase must conform to the "core socialist values," and operators must regularly update those datasets to ensure their transparency and reliability.
Anti-addiction mechanisms: users must rest after chatting too long; suicidal ideation requires human intervention.
To prevent humans from becoming overly reliant on AI, the draft proposes a series of specific "anti-addiction" measures:
• Identity reminder: When a user opens the app for the first time, logs in again, or the system detects signs of addiction, a pop-up must remind them: "The party you are interacting with is an AI, not a real person."
• Forced rest: If a user has used the service continuously for more than two hours, the system must provide a mechanism prompting a break.
• Emergency response: This is the most technically challenging requirement. The regulations require the AI to be able to recognize users' emotional states; once it detects extreme distress such as suicidal ideation or self-harm, the system must trigger an "emergency response mechanism" in which a human takes over the conversation to comfort the user, and for underage or elderly users, guardians must be contacted.
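To make the three mechanisms concrete, here is a minimal sketch of how a service might wire them together. Everything in it is hypothetical: the draft does not prescribe an implementation, and the names (`CompanionSession`, `looks_like_self_harm`, `escalate_to_human_operator`) and the keyword-based detector are illustrative placeholders, not anything from the regulation.

```python
from datetime import datetime, timedelta

SESSION_LIMIT = timedelta(hours=2)  # continuous-use threshold named in the draft

class CompanionSession:
    """Hypothetical session wrapper illustrating the draft's three mechanisms."""

    def __init__(self):
        self.started_at = datetime.now()
        self.first_message = True

    def handle_message(self, text: str) -> str:
        # Identity reminder: shown on first interaction (the draft also requires
        # it on re-login and when addiction tendencies are detected).
        if self.first_message:
            self.first_message = False
            return "Reminder: you are interacting with an AI, not a real person."

        # Emergency response: extreme-emotion detection must hand the
        # conversation to a human (a real detector is far out of scope here).
        if looks_like_self_harm(text):
            return escalate_to_human_operator(text)

        # Forced rest: after 2 hours of continuous use, prompt a break.
        if datetime.now() - self.started_at > SESSION_LIMIT:
            return "You've been chatting for over 2 hours; time to take a break."

        return generate_ai_reply(text)

# Placeholder hooks; a real system would use trained classifiers
# and an operator dispatch queue, not keyword matching.
def looks_like_self_harm(text: str) -> bool:
    return any(k in text.lower() for k in ("self-harm", "suicide"))

def escalate_to_human_operator(text: str) -> str:
    return "A human support specialist is joining this conversation."

def generate_ai_reply(text: str) -> str:
    return f"AI reply to: {text}"
```

The point of the sketch is the control flow: identity disclosure and risk escalation are checked before any AI reply is generated, which is where the compliance cost discussed below comes from.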
Analysis: The Era of High-Cost AI Companions Has Arrived
In my opinion, this draft reflects regulators' fear that the scenario depicted in the film *Her* could play out in real life. As AI becomes increasingly human-like, the risk grows that people will invest their emotional needs in the virtual world, potentially leading to the breakdown of real-world social ties and to psychological problems.
However, this regulation is undoubtedly a huge cost burden for AI startups. In particular, the requirement for "human intervention" means that businesses cannot rely solely on servers to operate; they must also maintain a team of on-call counselors or customer service representatives to handle emergencies.
Furthermore, the requirement that datasets embody "core socialist values" will further limit the prospects of foreign open-source models being deployed in China. In the future, the barriers to entry for the "AI lover" business in China will likely be set by compliance rather than by the technology itself.



