(Yicai) March 25 -- Developers of artificial intelligence should allocate 10 percent of their funds to studying risks to ensure a future in which humans and machines can safely coexist, according to an AI professor at Tsinghua University.
The risks of AI models are still manageable, but once information intelligence extends to physical and biological intelligence, those risks will increase rapidly, so scientists, technologists, and entrepreneurs need to "simultaneously develop and govern," Zhang Yaqin, academician of the Chinese Academy of Engineering and former president of Baidu, said at the China Development Forum in Beijing yesterday.
In the next five years, AI models will enter a stage of large-scale application across various fields, Zhang said, adding that this will bring three layers of risk. The first layer lies in the realm of software. Most AI models are still confined to the virtual world, but as they ingest massive amounts of data, errors and false information will emerge, per Zhang.
The second layer emerges as people develop better robots, drones, and brain-computer interface technologies, merging software with the physical world. Software can even be linked with biological entities such as the human brain, scaling up the risks, per Zhang. The third layer arrives when AI is extensively applied to economic, financial, and military systems, he added.
To mitigate these risks, China could step up research and development in AI security and promote international cooperation, Xue Lan, dean of Schwarzman College at Tsinghua University, said at the same forum. In particular, he called for global governance to shield technological development from the negative impact of geopolitics.
Editors: Dou Shicong, Emmi Laine