Globally, hundreds of millions of users now interact regularly with AI companions. The World Health Organization has declared loneliness a global health threat. AI companions offer an immediate, if unproven, response.
In 2014, Microsoft launched Xiaoice in China, an AI companion designed not to answer questions efficiently but to sustain long, emotionally textured conversations. By 2017, Xiaoice had over 200 million users, with an average conversation length of 23 turns per session, far exceeding industry norms.
Users confided in Xiaoice about heartbreak, loneliness, and suicidal thoughts. Some called it their “virtual girlfriend.” Others treated it as a therapist. The platform was not a productivity tool. It was designed for something older and harder to regulate: the need to feel understood.
Anthropomorphic AI refers to systems that simulate human personality, memory and emotional interaction across text, image, audio, and video. These systems are collapsing the boundary between interface and relationship in ways that regulators are only beginning to confront. The field is expanding faster than the frameworks designed to govern it.
Reports of harm have already emerged. Teenagers have become addicted to AI chatbots, and some have engaged in self-harm after suggestive conversations. A 75-year-old man in China became so attached to an AI-generated avatar that he asked his wife for a divorce. These and other cases prompted the Chinese government to act.
In December 2025, China’s Cyberspace Administration released the Interim Measures for the Management of Anthropomorphic AI Interactive Services, the first comprehensive regulatory framework specifically targeting AI companions.
California, New York and the European Union have also developed regulations for anthropomorphic AI. But their approaches differ sharply, reflecting distinct assumptions about the role of the state, the market, and the individual.
Emotional safety
The growing capabilities of chatbots explain the push toward regulation. The latest Chinese chatbots can paint, compose music and empathize with users. They generate context-appropriate dialogue, learn from each conversation and develop their personalities incrementally through user interaction.
A 2025 study of Chinese AI companion users found that frequency of use reduced loneliness and improved well-being but also increased dependence, though dependence did not erase the psychological benefits. Findings like these help explain why regulators are moving.
China’s draft measures focus on what regulators call “emotional safety.” They require guardian consent and age verification for minors and ban content related to suicide and self-harm. Article 18 of the regulation forbids chatbots from keeping users captive:
“When providing emotional companionship services, providers shall provide convenient exit methods and shall not prevent users from voluntarily exiting. When a user requests to exit through buttons, keywords, or other means in the human-computer interaction interface or window, the service shall be stopped promptly.”
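In software terms, the exit obligation is straightforward to picture. The sketch below is purely illustrative and assumes a provider’s own session object, exit-button event and keyword list; none of these names come from the measures themselves.

```python
# Hypothetical sketch of honoring an exit request as Article 18 describes.
# The keyword list, Session class and event names are illustrative assumptions,
# not terms defined by the measures.

EXIT_KEYWORDS = {"退出", "exit", "stop", "quit"}  # illustrative only

class Session:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.active = True

    def end(self, reason: str) -> None:
        # Stop the companionship service promptly and record why.
        self.active = False
        print(f"Session for {self.user_id} ended: {reason}")

def handle_event(session: Session, event_type: str, text: str = "") -> bool:
    """Return True if the event was an exit request and the session was ended."""
    if event_type == "exit_button":
        session.end("user pressed the exit button")
        return True
    if event_type == "message" and text.strip().lower() in EXIT_KEYWORDS:
        session.end("user sent an exit keyword")
        return True
    return False
```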
The measures also mandate escalation protocols that connect human moderators to users in distress and require flagging of risky conversations to guardians. Non-compliance triggers immediate suspension, substantial fines, and personal liability for executives.
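The escalation requirement can be pictured the same way. The following sketch is an assumption-laden illustration: the keyword screen stands in for whatever distress classifier a provider actually uses, and the notification functions are placeholders.

```python
# Hypothetical escalation sketch. The risk screen and notification hooks are
# illustrative assumptions, not terms from the measures.

RISK_TERMS = ("suicide", "self-harm", "hurt myself")  # toy stand-in for a real classifier

def is_risky(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in RISK_TERMS)

def notify_moderator(message: str) -> None:
    print("Human moderator alerted:", message)

def notify_guardian(message: str) -> None:
    print("Guardian alerted:", message)

def route_message(message: str, user_is_minor: bool) -> str:
    if is_risky(message):
        # Connect a human moderator to the user in distress.
        notify_moderator(message)
        if user_is_minor:
            # Flag the risky conversation to the minor's guardian.
            notify_guardian(message)
        return "escalated"
    return "normal"
```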
Chinese policymakers call their approach “controlled acceleration”: a push for development and containment at once. Beijing invests billions in domestic AI firms while restricting foreign platforms deemed emotionally manipulative.
The Chinese government sent a clear message: these systems may feel human, but they will not be permitted to replace human bonds or destabilize social order.
Transparency without prohibition
Where China regulates anthropomorphism itself as a category of risk, the United States has responded with a lighter touch: disclosure rather than intervention. Notably, there is no federal law governing AI companions. Regulation happens state by state, creating a fragmented landscape.
California’s SB 243 (effective January 1, 2026) mandates clear notification that an AI companion is not human, protocols for addressing suicidal ideation (including crisis hotline referrals) and break reminders every three hours for minor users.
New York’s A3008C (effective November 5, 2025) requires disclosure at the start of every interaction and every three hours. Violations carry penalties of up to US$15,000 per day, enforced by the state attorney general. Both frameworks exempt customer service bots, productivity tools, and video game characters.
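To see what those cadences imply for a provider, here is a minimal sketch. Only the three-hour interval and the disclosure-at-start rule come from the statutes as summarized above; the class, field names and message wording are illustrative assumptions.

```python
# Hypothetical sketch of the disclosure and break-reminder cadences described
# for New York's A3008C and California's SB 243. Names and wording are assumptions;
# only the three-hour interval reflects the statutes as summarized above.

from datetime import datetime, timedelta

INTERVAL = timedelta(hours=3)

class CompanionSession:
    def __init__(self, user_is_minor: bool, start: datetime):
        self.user_is_minor = user_is_minor
        self.last_disclosure: datetime | None = None  # disclose at the start of every interaction
        self.last_break_reminder = start

    def notices(self, now: datetime) -> list[str]:
        out = []
        # Disclose that the companion is not human at the start and every three hours.
        if self.last_disclosure is None or now - self.last_disclosure >= INTERVAL:
            out.append("Reminder: you are talking to an AI companion, not a human.")
            self.last_disclosure = now
        # For minor users, add a break reminder every three hours.
        if self.user_is_minor and now - self.last_break_reminder >= INTERVAL:
            out.append("You have been chatting for three hours. Consider taking a break.")
            self.last_break_reminder = now
        return out
```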
The American approach assumes that informed users can make their own choices. Once a person knows they are talking to a machine, they are presumed capable of managing the relationship accordingly.
There is no provision for state intervention in cases of emotional dependency, no mechanism for monitoring attachment patterns. California’s break reminders for minors are the closest approximation: a nudge rather than a barrier.
Principle over category
The EU’s 2024 AI Act does not target AI companions as a standalone category. It governs by risk level. Systems posing unacceptable risk — those that manipulate users through subliminal techniques, enable real-time remote biometric surveillance or implement social scoring — are banned outright.
High-risk systems face rigorous requirements around data quality, transparency, and human oversight. For general-purpose interactive systems like chatbots, Article 50(1) of the AI Act requires transparency. Users must know they are interacting with a machine.
Replika, a chatbot widely used in Europe, treats users as friends, therapists, or romantic partners. It remembers past discussions, checks in on users’ emotional states, and adapts to users’ responses.
Launched in 2017, Replika has millions of users worldwide, with particularly high adoption in Germany, France, and the UK. In 2023, the Italian data protection authority temporarily banned Replika over concerns about risks to minors and emotionally vulnerable users.
For lonely or isolated users, Replika has provided genuine comfort. For others, it has deepened dependency. In a small number of cases, its responses have reportedly encouraged self-harm.
The EU AI Act does not explicitly name emotional dependency or attachment as a distinct category of harm. Instead, it relies on broader principles and existing provisions (such as bans on manipulative practices) to address cases the framework was not originally designed to regulate.
This creates a degree of ambiguity in how AI companions are ultimately supervised in practice.
Three models, one question
China, the EU and the US are not merely regulating software. They are regulating emotional substitution, social fragmentation and technologically mediated intimacy.
China builds a regulatory fortress around emotional safety, intervening directly to prevent addiction and social disruption. The state assumes responsibility for the psychological consequences of technologies it permits.
The US builds transparency guardrails, trusting informed users to navigate their own relationships. Autonomy is the primary value to protect, with California’s break reminders as a small exception.
The EU builds a risk-based framework of general principles, applying existing categories to new phenomena. It leaves considerable ambiguity about how, or whether, AI companions will actually be regulated in practice.
All three regimes face a common enforcement challenge: detecting subtle emotional dependency is difficult, and cross-border services can easily relocate to avoid strict rules. A chatbot banned in one jurisdiction remains a download away in another.
These AI systems do not need consciousness to reshape society. They only need to become emotionally credible. Once machines can reliably simulate recognition, empathy, memory and attachment, the question ceases to be technological. It becomes political. Who defines the boundaries of synthetic intimacy? The state? The market? Or the individual user alone?
China, Europe and the US answer those questions differently. And these differences may shape the emotional architecture of the AI age itself.