A Chinese company's experiment in digital immortality for a former employee has ignited a firestorm over data ownership, AI ethics, and the very definition of a person in the workplace.
A Shandong-based game media firm has created a digital twin of one of its former employees using their work data, an experiment that raises profound questions for the $500 billion global AI industry. The AI avatar, trained on the departed staffer's data with his consent, is now performing basic HR functions, blurring the line between tool and person and triggering widespread debate across Chinese social media.
"This is a classic 'uncanny valley' effect," one AI ethics researcher noted. "When an AI is close enough to mimic a person but is clearly not human in key ways, it creates a natural sense of unease. The technology itself isn't revolutionary, but the application touches a raw nerve about human identity and value."
The digital employee, based on a former HR specialist, can currently handle simple queries, schedule meetings, and generate basic presentations. The underlying tech, similar to a viral open-source project called "Colleague.Skill," is more of a sophisticated prompt-and-script engine than a true artificial general intelligence. It lacks memory of past interactions and cannot replicate the nuanced judgment or "soft skills" of its human predecessor.
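The "prompt-and-script engine" described above can be pictured as little more than keyword routing over canned responses. The sketch below is a hypothetical illustration, not code from the actual project or the "Colleague.Skill" repository; the scripts and function names are invented. Note that the function keeps no state between calls, which is exactly why such a bot cannot remember past interactions.

```python
# Minimal sketch of a stateless "prompt-and-script" responder.
# All scripts and names here are hypothetical illustrations,
# not taken from the actual project.

SCRIPTS = {
    "leave": "To request leave, submit form HR-01 to your manager.",
    "meeting": "I can book a room; reply with a date and time.",
    "payslip": "Payslips are published on the 5th of each month.",
}

def reply(message: str) -> str:
    """Route a query to a canned script by keyword.

    No state is kept between calls, so the bot cannot recall
    earlier turns of the conversation or exercise judgment.
    """
    text = message.lower()
    for keyword, script in SCRIPTS.items():
        if keyword in text:
            return script
    return "Sorry, I can only answer basic HR questions."
```

Because `reply` holds no conversation history, a follow-up question is routed from scratch each time, which is the gap between this kind of engine and the "nuanced judgment" of the human it imitates.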
The experiment's viral spread highlights a growing anxiety among white-collar workers about "surveillance capitalism," in which personal data, communication styles, and even thought processes are harvested into corporate assets. Taken to its conclusion, this trend could see employees reduced to "digital fuel," their value extracted and replicated while the individual is left behind, posing significant challenges to data privacy law and the future of knowledge work.
Data Rights and Legal Gray Areas
The creation of AI employee clones immediately raises legal questions. Legal experts note that under China's Personal Information Protection Law, work communications and habits count as personal information; using such data to train an AI without explicit, informed consent could directly infringe on an individual's rights.
The country's regulations on generative AI also require service providers to obtain personal consent for data used in model training. A significant gray area remains, however: while private emails are clearly personal, what about messages in a public work group chat, or contributions to a company-wide report? The ownership of work product created on company time is difficult to untangle from the personal data embedded within it.

For corporations, the most valuable data is not an employee's conversational style but their repeatable processes, decision-making logic, and accumulated experience: assets that companies feel they have a right to retain after an employee's departure.
The Human Response to AI Duplication
The online reaction to the digital twin was a mix of dark humor and genuine fear, with users commenting that their colleague was "alchemized" into "digital fuel." This anxiety points to a deeper fear that the most automatable jobs—those that are mechanical, highly standardized, and reliant on established procedures—are the most vulnerable to being "distilled" by AI.
This event serves as a stark reminder for professionals to proactively adapt. The challenge is not to resist AI, but to master it. By using AI tools to automate the replicable parts of their own workflows, employees can focus on developing uniquely human capabilities: creativity, critical thinking, and complex problem-solving. The mantra for the modern workplace is shifting: AI will not take your job, but a person who knows how to use AI will. The ultimate defense against being defined by data is to become someone whose value cannot be captured in a dataset.
This article is for informational purposes only and does not constitute investment advice.