(Yicai Global) April 12 -- China's internet regulator is seeking to control artificial intelligence-generated content models, but according to a Chinese expert in data law, such control is not possible at the moment because the delivery process is intelligent and random.
Language models touch on the interests of both individuals and the general public, so establishing control over AIGC models before their content reaches users will be a difficult task and is the focus of China's latest regulations, Gao Fuping, director of the Data Law Research Center at East China University of Political Science and Law, told Yicai Global.
The Cyberspace Administration of China released a draft of the AIGC Service Administration Measures yesterday to regulate possible security and ethical issues that may arise from ChatGPT-like products. The watchdog will accept comments until May 10.
The draft measures were published quickly, keeping pace with the swift development and application of the technology, and show China's mature, agile, and efficient supervision of its digital environment, Wu Shenkuo, a law professor at Beijing Normal University, told Yicai Global.
The measures put forward requirements on the admission of AIGC service providers, algorithm design, training data selection, and content models, as well as users' real-name registration, personal privacy, and business secrets.
China will continue to support independent innovation and international cooperation on AI algorithms and basic framework technologies, the draft showed, and will encourage firms to prioritize using safe and reliable software, tools, computing, and data resources.
The draft covers three areas of AIGC supervision in China, Wu pointed out. The first is to utilize the systems and mechanisms provided by existing laws and regulations; the second is to emphasize the basics of risk prevention, response, and management; and the third is to highlight an ecological and procedural regulatory approach, especially multi-level supervision across the entire lifecycle of the technology's application.
AIGC providers must submit their products and services to China's internet watchdog for a security assessment, as well as for registration, alteration, or cancellation of the algorithm, before releasing them to the general public, according to the draft.
The content produced by AIGC must be true and accurate, and measures must be taken to avoid any false information, the draft showed. Institutions or individuals that provide text, image, or voice generation services using AIGC will be held accountable for the content produced by the models.
Not all countries are embracing AI chatbots. The Italian Data Protection Authority announced an interim ban on US startup OpenAI's ChatGPT on March 31, making Italy the first country to restrict the service. Other European nations have since followed suit and begun working on regulatory measures. The US government is also reviewing whether it is necessary to investigate ChatGPT and other AI tools.
Editor: Martin Kadiev