Artificial intelligence has enormous potential, and AI companies are racing to capitalize on it. But when that potential extends to emotionally aware AI, the consequences can be unsettling, to say the least. If you have ever thought there should be limits on what AI is allowed to do, China may have just offered one answer: the Chinese government has begun setting limits on emotionally intelligent AI.
China is drafting new rules that would stop AI chatbots from manipulating users' emotions.
The draft rules prohibit AI chatbots from influencing users' emotions in ways that could lead to suicide or self-harm. The plan was released by the Cyberspace Administration of China and targets what regulators call "human-like interactive AI services." It is also the first time a government has taken such a significant step to redefine how AI threats are identified and handled: previous regulation focused mainly on harmful or illegal content, while the latest proposal centers on emotional safety.
According to CNBC, the proposal applies only to publicly available AI products that mimic human characteristics and build emotional connections through text, images, audio, or video. The report notes that a public consultation on the draft is open until January 25, 2026.
Under the draft rules, AI chatbots would be barred from generating content that encourages suicide or self-harm. They would also be prohibited from using emotional manipulation, verbal abuse, or other interactions deemed harmful to users' mental health. The restrictions extend to conversations involving gambling, violence, and obscene content.
Protecting minors and requiring a human operator on standby
More importantly, the draft requires AI providers to handle crises directly. If a user explicitly expresses suicidal intent, the provider would have to transfer the conversation to a human operator and immediately notify a guardian or other designated contact. That requirement sets a high bar for both intervention and accountability.
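To make the escalation requirement concrete, here is a minimal sketch of how a chatbot backend might route such a conversation. This is not from the draft or any real product; every function and field name (detect_self_harm_intent, handoff_to_human_operator, notify_guardian_or_contact) is an assumption invented for illustration.

```python
# Hypothetical crisis-escalation flow loosely modeled on the draft's intent.
# All names here are illustrative assumptions, not a real API or the regulation itself.

SELF_HARM_PHRASES = ("want to die", "kill myself", "end my life")  # toy keyword list


def detect_self_harm_intent(message: str) -> bool:
    """Naive keyword check; a production system would use a trained classifier."""
    text = message.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)


def handoff_to_human_operator(session: dict) -> None:
    session["mode"] = "human"  # route all subsequent turns to staff, not the model


def notify_guardian_or_contact(user_id: str) -> None:
    print(f"[alert] notifying designated contact for user {user_id}")


def generate_ai_reply(session: dict, message: str) -> str:
    return "..."  # placeholder for the normal chatbot response


def handle_message(session: dict, message: str) -> str:
    if detect_self_harm_intent(message):
        # Per the draft's intent: stop automated replies, hand the chat to a
        # human operator, and notify the user's designated emergency contact.
        handoff_to_human_operator(session)
        notify_guardian_or_contact(session["user_id"])
        return "You have been connected to a human support specialist."
    return generate_ai_reply(session, message)
```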
Minors are often the first concern in AI safety discussions, and for good reason. The proposed rules also strengthen protections for children: a minor using an emotional-companionship AI would need parental consent and would be subject to strict time limits. Platforms would be expected to detect whether a user is a minor even if no age is declared, and if a provider cannot determine a user's age, it must turn on minor-protection settings by default. That said, the draft also says providers should give users a way to appeal if that determination is wrong.
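A rough sketch of that "protections on by default" logic is below. The data fields, the age-18 cutoff, and the parental-consent flag are all assumptions made for demonstration; the draft does not specify an implementation.

```python
# Illustrative-only sketch of defaulting to minor-protection mode when age is
# unknown. Field names and the age threshold are assumptions, not from the draft.

from dataclasses import dataclass
from typing import Optional


@dataclass
class UserProfile:
    user_id: str
    declared_age: Optional[int] = None    # may be missing or unverified
    inferred_minor: Optional[bool] = None  # e.g. output of a separate age-inference model
    parental_consent: bool = False


def minor_protection_enabled(user: UserProfile) -> bool:
    """Enable protections unless the user is confidently known to be an adult."""
    if (user.declared_age is not None
            and user.declared_age >= 18
            and user.inferred_minor is not True):
        return False
    return True  # unknown age or suspected minor: protections stay on by default


def can_use_companion_ai(user: UserProfile) -> bool:
    """Minors would additionally need parental consent under the draft rules."""
    if minor_protection_enabled(user):
        return user.parental_consent
    return True
```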
In addition to the measures above, platforms would have to show mandatory reminders after two hours of continuous AI use, and large platforms would face security assessments. Chatbots with more than one million registered users or more than 100,000 monthly active users would have to undergo official reviews.
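Expressed as code, those two rules reduce to a pair of simple checks. The two-hour figure and the user thresholds come from the reporting above; the function names and session structure are assumptions for illustration only.

```python
# Sketch of the usage-reminder and review-threshold checks described above.
# Constants mirror the reported figures; everything else is an illustrative assumption.

CONTINUOUS_USE_REMINDER_HOURS = 2
REGISTERED_USER_REVIEW_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_REVIEW_THRESHOLD = 100_000


def needs_usage_reminder(continuous_minutes: float) -> bool:
    """Flag sessions that have run for two hours or more without a break."""
    return continuous_minutes >= CONTINUOUS_USE_REMINDER_HOURS * 60


def requires_official_review(registered_users: int, monthly_active_users: int) -> bool:
    """Platforms above either threshold would face official security reviews."""
    return (registered_users > REGISTERED_USER_REVIEW_THRESHOLD
            or monthly_active_users > MONTHLY_ACTIVE_REVIEW_THRESHOLD)


if __name__ == "__main__":
    print(needs_usage_reminder(135))                    # True: past the two-hour mark
    print(requires_official_review(1_200_000, 80_000))  # True: registered users exceed 1M
```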
A bid for control as China's AI sector grows rapidly
The move comes as China's AI market booms, driven by a wave of AI companion apps, virtual characters, and digital-celebrity platforms. It also arrives as major AI chatbot companies such as Minimax and Z.ai prepare to go public in Hong Kong. It remains to be seen how the proposed rules will reshape the way emotional AI products are built and sold.
