(Yicai) Oct. 21 -- ByteDance, which owns TikTok, has confirmed that an intern disrupted the training of one of the Chinese tech giant’s artificial intelligence models by using malicious code.
The intern interfered with an AI model training task of the technology commercialization team he was working on, the Beijing-based company said in a statement on Oct. 19. The incident had no impact on the project or on the firm's business operations.
According to recent online rumors, the intern, dissatisfied with how the team's resources were allocated, used malware on the Hugging Face platform to insert broken code into the AI model, causing losses in the tens of millions of US dollars.
That figure was exaggerated, ByteDance said, though the exact amount was not disclosed. The intern was dismissed in August and the company has shared details of the incident with industry partners and the intern's school, it added.
An insider at the company told Yicai that the incident happened at the end of June and that Doubao, the chatbot developed by ByteDance’s cloud computing unit Volcano Engine, was unaffected.
The breach exposed gaps in ByteDance's security management of AI model training, including a lack of permission isolation. Permission isolation helps protect a company's core data and intellectual property, prevent data leaks, and improve overall data and system security, an industry insider told Yicai.
Real-time monitoring can detect permission abuse and abnormal operations, the person noted, though implementing it is difficult, requiring significant cross-department coordination and investment of resources.
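For readers unfamiliar with the terms, the sketch below illustrates the two ideas the insider describes: permission isolation (each role is granted only the actions it needs) and audit logging of every attempted operation so abuse can be flagged. The roles, actions, and logging rule are purely hypothetical illustrations, not ByteDance's actual setup.

```python
# Hypothetical sketch of permission isolation and audit logging for training jobs.
# Role names, actions, and the audit rule are illustrative assumptions only.
from dataclasses import dataclass

# Least privilege: each role is granted only the actions it needs.
ROLE_PERMISSIONS = {
    "intern": {"read_metrics"},
    "engineer": {"read_metrics", "submit_job"},
    "admin": {"read_metrics", "submit_job", "modify_checkpoint"},
}

@dataclass
class User:
    name: str
    role: str

def is_allowed(user: User, action: str) -> bool:
    """Return True only if the user's role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(user.role, set())

def audit(user: User, action: str) -> None:
    """Record every attempt so abnormal operations can be flagged in real time."""
    status = "ALLOWED" if is_allowed(user, action) else "DENIED"
    print(f"[audit] {user.name} ({user.role}) -> {action}: {status}")

if __name__ == "__main__":
    audit(User("alice", "intern"), "read_metrics")       # permitted action
    audit(User("alice", "intern"), "modify_checkpoint")   # denied and logged
```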
Editor: Futura Costaglione