Chinese streaming giant iQiyi Inc. saw its push into AI-generated content backfire after an April 20 announcement of an “AI Artist Library” was met with immediate public denials from the studios of at least three major actors, igniting a firestorm over digital rights and threatening to accelerate regulatory oversight.
The company, a subsidiary of Baidu Inc., claimed the technology would let actors increase their output from four productions a year to 14, but actors Zhang Ruoyun, Yu Hewei, and Wang Churan swiftly issued statements denying they had authorized the use of their likenesses. The backlash on social media, where the hashtag #iQiyiIsCrazy trended on Weibo, reportedly contributed to a sharp drop in the company's stock price and forced CEO Gong Yu to issue multiple clarifications.
The incident in China mirrors the central conflict of the 2023 Hollywood actors' and writers' strikes, where a key sticking point was a studio proposal to pay background actors for a single day of work to scan their likenesses, then own and reuse the resulting digital replicas in perpetuity. While the Screen Actors Guild secured some protections, the underlying drive for cost reduction remains: one production house reports that AI short dramas can be produced in just four days, cutting costs more than tenfold compared with the millions of yuan required for traditional shoots.
This push for efficiency is now on a collision course with creators and regulators. The controversy has expanded beyond celebrities: a popular blogger discovered that his face had been used without permission for a villain in an AI-produced short drama that garnered over 40 million views. In response, China's National Radio and Television Administration is reportedly drafting new rules for AI-generated series, building on its "Qinglang" online governance campaign, which has already banned certain content formats and now requires AI-generated comic dramas to be officially filed for review.
Regulation Catches Up to Technology
The iQiyi incident underscores a familiar pattern in China’s tech sector: rapid, large-scale deployment of new technology followed by swift regulatory intervention. The digital human industry in China was valued at approximately $600 million in 2024, an 85 percent year-over-year increase, according to Xinhua News Agency. Now, authorities are moving to establish guardrails.
The Cyberspace Administration of China has already circulated draft rules that would mandate clear labeling of AI-generated content and require explicit consent to create a digital replica of a person. These moves suggest that while China continues to champion AI development, it intends to maintain tight control over its societal and ethical impacts, particularly concerning identity, consent, and the potential for misuse in scams or misinformation.
A Question of Artistry and Rights
While production companies are drawn to the cost savings and control offered by AI actors—who are available around the clock and cannot cause scandals—the creative community and audiences are raising alarms. The debate centers on whether AI can replicate genuine performance, and on what happens to the talent pipeline if emerging actors are replaced by digital versions before they can be discovered.
The unauthorized use of a Hanfu enthusiast's face for a villain in the AI drama "Peach Blossom Hairpin" highlights the risks for ordinary individuals. The victim faced a difficult and time-consuming process to have the infringing content removed, a challenge likely to become more common as AI content production scales. The incident has prompted calls for more robust and accessible legal recourse for individuals whose digital likeness is stolen and misused.

For companies like iQiyi and its competitors, the path forward involves navigating a complex landscape of public opinion, actor relations, and an evolving regulatory framework that could significantly reshape the economics of content production.
This article is for informational purposes only and does not constitute investment advice.