The alarming ease with which AI can clone a person's voice is fueling a surge in rights infringement and fraud.
With just a few seconds of audio, sophisticated models can generate synthetic voices nearly indistinguishable from the originals, opening the door to widespread misuse.
A prominent deceptive practice involves impersonating celebrities for commercial gain.
The cloned voices of Olympic champions like Quan Hongchan, Sun Yingsha, and Wang Chuqin have been used in viral videos to promote agricultural products, tricking fans into purchasing goods they believe are endorsed by the athletes.
In one case, a video using Quan’s AI-cloned voice drove sales of 47,000 units of a product she had never authorized.
Beyond product promotion, some influencers use cloned celebrity voices, such as actor Jin Dong’s, in 24/7 live streams to attract donations and rapidly inflate follower counts. This practice forms a gray industry chain built on deception.
Voice actors are also prime targets: professionals have discovered their voices cloned without permission and used in commercials and other content.
When confronted, infringers often delay and evade responsibility, making it difficult for victims to protect their rights. Many are now pursuing legal action to establish that infringement carries a significant cost.
The technology behind this is readily accessible. A simple search on major platforms yields numerous tutorials and software for “voice cloning.”
A cybersecurity expert notes that open-source models require only a clear sample to replicate a voice and make it say anything.
The risks extend beyond infringement. In the hands of scammers, cloned voices combined with deepfake video can create convincing “digital humans” for sophisticated fraud schemes.
Legally, China’s Civil Code explicitly protects an individual’s voice under the rules applied to portrait rights (Article 1023). Unauthorized use constitutes infringement if the public can identify the person from the voice.
Furthermore, online platforms can bear joint liability if they fail to take necessary action against known infringements.
To combat this, experts call for stricter source controls on AI technology and better platform supervision.
New regulations, including the “Measures for Labeling Artificial Intelligence-Generated and Synthetic Content,” effective September 2025, require AI-generated content to be clearly labeled.
This is part of a broader regulatory effort to curb AI misuse through improved detection and source governance.