(Yicai Global) Oct. 25 -- The European Commission issued a regulatory proposal in April that aims to create a common legal framework for artificial intelligence within the EU. Commentators are decrying the draft as vague and warning that it will saddle the trade bloc’s AI developers with billions of euros in added costs.
Styled the ‘Artificial Intelligence Act,’ the draft particularly addresses safety-critical applications, a category that also includes AI healthcare applications. An assessment of the proposal and its consequences therefore needs to be conducted from a hands-on AI perspective, one that diverges from earlier analyses by lawyers and policymakers, The Yuan, an open forum seeking to make AI also stand for ‘All-Inclusive,’ reported on October 18.
This policy statement is based on oral evidence heard in May 2021 by a joint session of the EU affairs committees of Germany’s federal parliament, the Bundestag, and the French National Assembly, per The Yuan report.
“The Commission’s impact assessment concludes that the [AI Act] will cause an additional 17 percent of overhead on all AI spending,” Washington, DC-based think tank the Center for Data Innovation found in a study published in July, the Brussels-based Centre for European Policy Studies reported last month. “By 2025, the [AI Act] will have cost the European economy more than EUR 30 billion,” per the study.
Before the AI Act, the EC published an AI white paper in early 2020 as an interim AI strategy for the trade bloc. The white paper, however, omits crucial points: for example, it does not benchmark its proposed actions against those of countries leading in AI. Surprisingly, the word ‘China’ does not appear once in the document, and its proposed investments appear negligible compared with China’s.
Existing regulations, laws, standards, and technology norms are usually vertically structured and target systems that perform certain safety-critical functions, such as in healthcare, aviation, or nuclear power. They also typically pay little or no attention to how these systems are actually deployed. AI applications are usually only a tiny fraction of such larger systems.
Ranking Risk
The AI Act attempts to regulate AI horizontally -- that is, independently of use cases. This approach seems impractical given that the large majority of AI use cases are low-risk. It is reasonable to assume that an additional layer of horizontal regulation would create unclear and disputed responsibilities wherever it overlaps with existing rules, e.g., in medical technology. Added regulations should therefore address only novel use cases that existing vertical regulation does not yet cover.
The application areas prohibited by the AI Act are also too broad. Defining specific prohibited use cases would be much more effective. It must also be examined whether these would have to be explicitly banned at all, or whether they are already outlawed today under other laws, such as criminal codes or the General Data Protection Regulation.
For “high-risk applications,” the AI Act also defines documentation requirements (technical documentation), registration requirements (an EU database for stand-alone high-risk AI systems), and notice requirements (reporting of serious incidents and malfunctions). These rules treat AI’s development or use in safety-critical application areas like the operation of nuclear power plants or the development of aircraft. Most likely, this will stifle innovation. To implement the procedures for access to data and documentation, the EU would have to concentrate the EU-wide AI expertise of all companies, universities, and experts in the relevant authorities, and invest hundreds of billions of euros in its own infrastructure.
The requirements for the AI regulatory sandbox tests state that all intellectual property -- consisting of datasets and AI systems -- will have to be shared with authorities. This seems unnecessary and simply infeasible. Questions about liability also remain open, such as whether third parties may access intellectual property via sandbox testing. The AI Act would thus make the use or development of relevant safety-critical AI applications, such as safety-critical assistance systems in healthcare, nearly impossible in the EU.
The proposed AI Act is clearly too broad -- more of a ‘Software Act’ -- and will lead to overregulation. It will make the use or development of most AI applications, above all in healthcare, almost impracticable in the bloc. Chinese and US firms would then face less competition from Europe as they vie for AI leadership in the foreseeable future. The EC should instead shift its focus to AI’s added value for citizens, define a reasonable AI strategy, and empower residents to contribute to a Europe that stays competitive in AI.
Editor: Kim Taylor
Yicai Global is pleased to announce its cooperation with The Yuan (https://www.the-yuan.com) and looks forward to future feature articles from it authored by luminaries such as Hippo.AI founder Bart de Witte and the many other leading lights of the AI sector who are its regular contributors. The Yuan provides an open community that aims to avert the bias and the social inequities and inequalities arising from uneven access to AI; as such, its philosophy closely aligns with Yicai Global’s own stance.