Beijing: China’s cyber regulator has proposed new rules to increase oversight of AI services designed to mimic human traits and form emotional connections with users. The measures would require providers to monitor users’ emotional states, warn against excessive use, and intervene if signs of addiction or extreme emotional responses are detected.
The draft rules extend across the full lifecycle of AI products, mandating stronger safeguards for algorithmic checks, data security, and the protection of personal information. Providers must also prevent AI systems from generating content that threatens national security, spreads misinformation, or disseminates harmful material.
The move comes amid an intensified enforcement drive. During a recent three-month campaign, Chinese authorities removed about 960,000 AI-generated items deemed illegal or harmful, underscoring the government’s determination to maintain tight control over rapidly advancing technologies.
President Xi Jinping has repeatedly warned that artificial intelligence brings “unprecedented opportunities” alongside “unprecedented risks and challenges.” In April 2025, he presided over a rare Politburo study session focused entirely on AI, calling for stronger safety oversight and faster development of laws, regulations, and ethical frameworks. Earlier, at the World Economic Forum in Davos, Vice-Premier Ding Xuexiang likened AI governance to driving at high speed without reliable brakes, stressing the need for caution.
Under existing national AI standards, companies are already required to rigorously screen their training data, with human reviewers examining thousands of samples. At least 96 percent of training data must be deemed safe, and regulators have identified 31 categories of risk, including content that encourages the overthrow of state power or the socialist system. All AI-generated text, images, and videos must be clearly labeled and traceable.
China’s approach also emphasizes ideological alignment. Chatbots are required to adhere to socialist core values and avoid generating content that could undermine Communist Party rule. Since interim measures were introduced in 2023, 302 AI systems have registered with authorities after undergoing detailed safety and ideological reviews.
Despite strict controls, occasional lapses have highlighted the challenges of regulating complex AI systems. In 2024, several AI products generated content that contradicted official narratives or caused public embarrassment, prompting renewed scrutiny and tighter supervision.
China’s broader strategy is to position AI safety as a national security issue, now listed alongside pandemics, cyberattacks, and financial risks in its National Emergency Response Plan. In the first half of 2025 alone, the country issued more national AI standards than in the previous three years combined.