(1) At this year's two sessions (the National People's Congress and the Chinese People's Political Consultative Conference), nearly ten deputies and committee members have put forward suggestions on cracking down on AIGC fake news, improving the management mechanism for AI video dissemination, and preventing AI fraud and infringement.
(2) False information generated by AI in various forms is spreading frequently, the industry is concerned about AI fraud and infringement, and the relevant rules and regulations still need to be improved.
(3) As artificial intelligence technology is applied more deeply, the security challenges facing large models are becoming increasingly severe.
Financial Associated Press, March 4 (Reporter Fu Jing). The debut of the domestic large model DeepSeek has amazed the industry, and artificial intelligence technology is advancing by the day, but schemes that use AI to do harm are also emerging without end.
As of press time, nearly ten deputies and committee members have written AI security concerns, including cracking down on AIGC fake news, improving the management mechanism for AI video dissemination, and preventing AI fraud and infringement, into their suggestions for the two sessions, offering solutions to the important question of how artificial intelligence can balance technological innovation against security risks.
AI is frequently used to spread false information, causing concern
A Financial Associated Press reporter learned in earlier interviews that a series of false information has recently spread on a large scale through AI Q&A and AI content generation, and that people with ulterior motives have used it to manufacture social controversy. Harmful information "dressed" in the cloak of AI spreads more covertly, appears more "credible", and is more destructive: it not only misleads the public and disrupts public opinion, but may even cause users to worry about the security of large-model technology itself.
For example, Gan Huatian, a member of the National Committee of the Chinese People's Political Consultative Conference (CPPCC) and a professor at West China Hospital of Sichuan University, has put forward a proposal to strengthen the governance of AI-generated false information; Hainizat Tohuti, a CPPCC member and vice president of the Xinjiang New Social Strata Friendship Association, plans to submit proposals including "Strengthening the Governance of AI-Generated False Information".
Earlier, during the 2025 Shanghai two sessions, Liu Chen, a deputy to the Shanghai Municipal People's Congress and board secretary of INTSIG (688615.SH), also put forward suggestions: a multi-party alliance to jointly combat AIGC fake news; encouraging enterprises to develop AI defense and anti-forgery technologies; building an ethics training and dataset system; and using technology to strengthen the legal "weapons" of intellectual property protection.
According to observations by Financial Associated Press reporters, false information is spreading frequently, and its form is no longer limited to text.
"AI technology has lowered the barrier to entry for video synthesis and production, making it easy to generate fake content. Some of the various AI-generated videos that appear on the Internet are spread by imitating the image and voice of celebrities or experts, while others are fictitious characters used to promote products, mislead consumers to buy unnecessary goods, cause property damage, and even create a crisis of trust. There are also videos that monetize traffic and exacerbate the chaos. "Deputy to the National People's Congress, Midea Group (000333. SZ) vice president and chief financial officer Zhong Zheng told the Financial Associated Press.
She believes that the current legal and regulatory system for generative AI remains imperfect, especially in defining the copyright ownership of AI-generated content, determining liability for infringement, and setting privacy protection standards. In addition, the technical means for reviewing generative AI videos are still limited and need continued improvement.
This year, Zhong Zheng plans to put forward "Suggestions on Improving the Management Mechanism for the Dissemination of Generative Artificial Intelligence Videos". First, revise the Copyright Law and other relevant laws and regulations to strengthen the protection of original copyright and privacy. Second, platforms should strengthen the review and supervision of AI-generated videos to ensure the authenticity of content; AI-generated videos should carry mandatory warning labels reminding consumers to stay alert. Third, use AI technology to review AI-synthesized video content, invest more resources in developing efficient AI review tools, and combine them with manual review so that double verification improves recognition accuracy. Fourth, strengthen industry self-discipline and government supervision, severely punish violations of laws and regulations, and oversee the healthy development of the industry.
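Zhong Zheng's third point, machine screening combined with manual review, can be illustrated with a minimal sketch. Everything below is hypothetical: the score_video stub, the thresholds, and the routing outcomes are placeholders invented for illustration, not any platform's actual review system.

```python
from dataclasses import dataclass

# Hypothetical thresholds: scores at or above AUTO_BLOCK are rejected,
# scores at or below AUTO_PASS are published, the band between goes to humans.
AUTO_BLOCK = 0.90
AUTO_PASS = 0.20

@dataclass
class Video:
    video_id: str
    ai_generated: bool  # declared by the uploader or detected upstream

def score_video(video: Video) -> float:
    """Placeholder for an AI-synthesis detector returning a risk score in [0, 1]."""
    return 0.5  # stub: a real system would run a trained classifier here

def route(video: Video) -> str:
    score = score_video(video)
    if score >= AUTO_BLOCK:
        return "blocked"                       # clear-cut fake: reject outright
    if video.ai_generated:
        return "published_with_warning_label"  # mandatory AI-content warning
    if score <= AUTO_PASS:
        return "published"                     # low risk, no AI declaration
    return "manual_review_queue"               # ambiguous: human reviewers decide

print(route(Video("v1", ai_generated=True)))   # published_with_warning_label
```

The point of the middle band is that ambiguous scores are escalated to human reviewers rather than decided automatically, which is the "double verification" the suggestion describes.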
What is the solution to the problem of AI fraud and infringement?
During this year's two sessions, the prevention of AI deepfake fraud has also attracted attention.
Li Dongsheng, a deputy to the National People's Congress and founder and chairman of TCL, brought "Suggestions on Strengthening the Management of AI Deepfake Fraud", noting that deepfake technology has developed rapidly alongside generative artificial intelligence. To regulate misuse of this emerging technology, deep synthesis service providers should be required to label AI-generated content, which would reduce malicious abuse and help clarify responsibility and pursue accountability for illegal and criminal acts. Although China's legislation in recent years has paid attention to this issue, the rules and regulations promulgated so far are not yet systematic and lack operational detail and clear punishment standards.
"Recently, China also participated in the AI Action Summit in Paris for 61 countries, including France, and we also signed the Paris Declaration on Artificial Intelligence, which is an important attempt at global AI governance and marks a new stage in global AI governance. Taking into account technological innovation and related regulatory needs, the punishment system for AI deep synthesis service providers that fail to perform their marking obligations can be appropriately postponed, and the corresponding punishment measures can be clarified in advance, which is conducive to the preparation of relevant enterprises for compliance as soon as possible. Li Dongsheng said in an interview with a reporter from the Financial Associated Press.
Li Dongsheng suggested: first, speed up the introduction of rules for managing the labeling of AI deep synthesis content. Second, clarify the punishment system for deep synthesis service providers that fail to perform their labeling obligations, and improve the rules for defining and classifying failures to label as required, along with the corresponding punishment standards. Third, strengthen the management of technical standards for labeling deep synthesis content and of its publication: introduce technical standards to ensure labels are effective, and require content platforms to label deeply synthesized videos, audio, and other content when publishing them. Fourth, strengthen international cooperation to form effective supervision of AI-generated synthetic content.
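Li Dongsheng's first and second points turn on labels that can be checked after the fact. As a minimal sketch (not any mandated standard; the key, field names, and provider value are invented for illustration), the code below attaches an "AI-generated" label to a content record and signs it with an HMAC so a platform can detect a stripped or altered label. A production scheme would more plausibly use asymmetric signatures and provenance metadata bound into the file itself.

```python
import hashlib
import hmac
import json

# Hypothetical platform-held secret; a real scheme would likely use
# asymmetric signatures (e.g. C2PA-style provenance) rather than a shared key.
SIGNING_KEY = b"platform-demo-key"

def label_content(content_hash: str, provider: str) -> dict:
    """Attach a signed 'AI-generated' label to a piece of synthetic content."""
    record = {
        "content_sha256": content_hash,
        "provider": provider,
        "label": "AI-generated",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(record: dict) -> bool:
    """Check that the label has not been altered or stripped."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

rec = label_content(hashlib.sha256(b"demo video bytes").hexdigest(), "demo-provider")
print(verify_label(rec))   # True
rec["label"] = "original"  # tampering with the label...
print(verify_label(rec))   # ...is detected: False
```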
Lei Jun, a deputy to the National People's Congress and chairman and CEO of Xiaomi Group, also put forward a suggestion: strengthen governance where the abuse of "AI face-swapping and voice cloning" is most severe.
"The improper abuse of 'AI face-swapping onomatopoeia' has caused the hardest hit areas of illegal infringements, which can easily lead to criminal acts such as infringement of portrait rights, infringement of citizens' personal information, and fraud, and even cause irreparable large-scale image and reputation damage, bringing risks such as social governance. The characteristics of AI deep synthesis technology, such as the convenience of obtaining materials, the low threshold for the use of technology, and the strong concealment of infringing entities and their methods, have brought great challenges to governance. Lei Jun said.
Lei Jun believes that, first, the legislative process for dedicated legislation should be accelerated and the level of legislation raised, giving equal weight to security and development; second, industry self-discipline and co-governance should be strengthened, consolidating the responsibilities and obligations of platform enterprises and other parties; and finally, legal education and publicity should be broadened to raise the public's vigilance and powers of discernment.
Large models still face many security challenges
At last year's two sessions, "artificial intelligence +" was written into the government work report and quickly became a buzzword in the technology industry. A number of communications and computer companies told Financial Associated Press reporters that artificial intelligence has become a new tool of production and one of the important drivers of new quality productive forces. The rapid development of China's AI industry over the past year has been plain to see, especially the emergence of the domestic large model DeepSeek, which has attracted worldwide attention. But as AI technology is applied more deeply, the security challenges facing large models are becoming increasingly severe.
"As soon as Deepseek came out, targeted and high-intensity cyber attacks broke out, sounding the alarm bell for the AI industry to stick to the bottom line of security. We found that more and more enterprises and individuals have begun to build the private deployment of the DeepSeek large model, and nearly 90% of the servers have not taken security measures, and the service operation and data security are in jeopardy. Member of the 14th National Committee of the Chinese People's Political Consultative Conference, Vice Chairman of the All-China Federation of Industry and Commerce, Qi Anxin (688561. SH) Group Chairman Qi Xiangdong said in an interview with a reporter from the Financial Associated Press.
On the eve of this year's two sessions, Zhou Hongyi, a CPPCC member and founder of 360 Group, also discussed these issues in an interview with reporters. He believes that, for DeepSeek, AI security issues fall into at least four layers: security of the base model, user-side security, data security risks arising from the knowledge base, and "model-to-model" problems.
Qi Xiangdong told reporters that the security problems facing the development of artificial intelligence fall roughly into three categories: the security of large models themselves, the use of artificial intelligence to mount cyberattacks, and the "explosion of cyberattacks" triggered by attacks on artificial intelligence.
Focusing on the security of the large model itself, Qi Xiangdong analyzed: "Four major risks stand out: development risk, data risk, application risk, and basic environment risk. In development, open source models must guard against code defects and planted backdoors. In application, it is necessary to prevent insiders from poisoning training data, tampering with models, and tampering with configurations. In data, beware of database exposure caused by internal configuration errors and weak passwords. In the basic environment, attention must focus on the cloud, APIs, vulnerabilities in traditional devices, and so on. Our team has identified a critical vulnerability in DeepSeek that could be used to obtain back-end private data, API keys, and operational metadata; although such vulnerabilities are patched quickly, in our experience new ones are constantly being discovered."
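Two of the risks Qi Xiangdong names, weak passwords and configuration errors that expose databases, lend themselves to simple automated checks. The sketch below is a hypothetical audit of a deployment configuration; the field names, weak-password list, and rules are invented for illustration and are not Qi Anxin tooling.

```python
WEAK_PASSWORDS = {"", "admin", "password", "123456", "deepseek"}

def audit_config(cfg: dict) -> list[str]:
    """Flag common misconfigurations in a (hypothetical) deployment config."""
    findings = []
    if cfg.get("db_password", "") in WEAK_PASSWORDS:
        findings.append("database password is empty or on the weak-password list")
    if cfg.get("bind_address") == "0.0.0.0" and not cfg.get("auth_enabled", False):
        findings.append("service bound to all interfaces without authentication")
    if not cfg.get("tls_enabled", False):
        findings.append("TLS disabled: API keys and data travel in cleartext")
    return findings

demo = {"db_password": "123456", "bind_address": "0.0.0.0", "auth_enabled": False}
for issue in audit_config(demo):
    print("FINDING:", issue)
```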
Zhou Hongyi said that in order to ensure the healthy development of large model technology, it is necessary to make efforts from three directions: large model security, network security and data security.
In response to the above large model security issues, Qi Xiangdong told reporters that efforts are needed in three areas: technical support, institutional guarantees, and application of achievements. That means building a defense-in-depth system adapted to large models to lay a solid security foundation for artificial intelligence; formulating mandatory compliance requirements for large model security to consolidate the institutional guarantees for AI's safe development; and promoting the implementation of "AI + security" innovations, the essential path to improving security capabilities.
(Financial Associated Press reporters Wang Biwei and Lu Tingting also contributed to this article)