(Yicai Global) Aug. 31 -- Unlike in other sectors, AI's use in medicine poses problems before it realizes its potential. When something goes wrong in other settings, the harm is usually remediable. In medicine, an AI trained on flawed data might kill.
Thorny issues wreathing AI in medicine include regulating it, determining what it should and does do, ensuring its training rests on standards, and deciding who devises these. The World Health Organization (WHO) has now donned this mantle, albeit belatedly, in its June 28 'Ethics and Governance of Artificial Intelligence for Health: WHO Guidance,' The Yuan, an AI-centric open community on a mission to put the 'All-Inclusive' into AI, reported on August 20.
This article's first part addressed what this guidance gets right. This part will focus on what it misses.
Medical AI has loomed large for over a decade and exploded in recent years, but the WHO waited until 2019 to "be informed by developments at the frontier of new scientific disciplines… which pose… risks to global health," per The Yuan report. The WHO expert committee formed in 2019 is top-notch, and had it joined the debate years ago it would have been ready with rules as the pandemic struck, forestalling the proliferation of AI applications professing to work miracles against COVID-19.
The guidance’s six ethical guidelines are a crash course in the issues surrounding AI use and a basis for future rules, but the WHO must do better if it is to shape AI's course.
Clashing Clauses
Some items contradict others, e.g., “…introduction of AI for healthcare…could reduce the size of the workforce...” The WHO later reverses course, saying the effect on the workforce will instead be constant training to keep pace with developments, and that medical AI may even swell employment with all the new hands it will need.
Next, AI “should satisfy regulatory requirements of safety, accuracy, and efficacy… and… ensure quality control and quality improvement.” For the first clause, the WHO can formulate regulations. The second is up to governments, monitored by the WHO or United Nations (UN).
The WHO will update these guidelines; I offer a few suggestions:
With AI, speed is everything. The WHO's delay has caused no great harm, yet. Talk to everyone in government and reassure them they have someone to turn to if they get stuck. Update this guidance every six months, like a software update. Standardize the data on which devices that detect diseases from X-rays and CT scans are tested. Set norms for AI wearables that do not fall under the medical rubric, e.g., the Apple Watch. Set differing rules for low- and middle-income countries and for richer ones, since AI technology trickles down. Lastly, monitor the marriage of AI with robotics, genomics -- particularly gene editing -- and bio-robotics.
The report prompts the question of who will craft global rules for AI in medicine. The answer is that the WHO alone can do this. This is of utmost importance, and not just to save lives: the tendrils of AI now coil around every living thing on this planet. The fate of Earth itself may depend on it.
Author: Wang Shifeng
Editor: Kim Taylor
Yicai Global is pleased to announce its cooperation with The Yuan (https://www.the-yuan.com) and looks forward to future feature articles from it authored by luminaries of the ilk of Hippo AI founder Bart de Witte and the many other leading lights of the AI sector who are its regular contributors. The Yuan provides an open community with the aim of averting the emergence of bias and of social inequities and inequalities arising from uneven access to AI and, as such, its philosophy closely aligns with Yicai Global's own stance.