(Yicai) Sept. 9 -- Artificial intelligence-related medical models garnered a lot of attention at the 2024 Inclusion Conference on the Bund in Shanghai, as improved algorithms lead to better applications and generate more interest among participants, an exhibitor told Yicai.
Chinese e-commerce giant Alibaba Group Holding's think tank, the DAMO Academy, showcased its AI medical imaging analysis tool at the three-day conference, which ended on Sept. 7. The model can screen for eight types of cancer, including pancreatic, oesophageal, intestinal and stomach cancer, as well as a series of chronic diseases, using ordinary CT scans.
The screening program has an accuracy rate of over 90 percent for liver and lung cancer, staff at the booth told Yicai. It enables doctors to work more efficiently and helps address issues arising from the uneven distribution of medical resources.
Best Covered’s large language model, which is supported by huge amounts of data, can detect Alzheimer’s disease and generate a personalized brain health training plan for users, staff said. Many elderly people visited the exhibition area.
AI medical tools still serve an auxiliary role and cannot replace personalized diagnoses by doctors, company representatives said. Many AI healthcare products still lack publicly available data or validation from authoritative third-party institutions.
The healthcare sector is undergoing a profound transformation thanks to the continuous development of new technologies, such as AI, said Zhou Hanmin, a member of the Standing Committee of the National Committee of the Chinese People's Political Consultative Conference. AI is able to greatly improve accuracy and efficiency, provide personalized treatment and streamline data processing.
The application of AI in the healthcare sector should focus on improving accuracy, applicability and safety, Zhou said. Attention should also be paid to whether the increased use of AI will lead to an erosion of skills among healthcare professionals, misjudgments due to an overreliance on AI and information overload.
Preventing the misuse of technology was another hot topic at the conference, as face-swapping fraud and other forms of abuse cause economic losses and reputational damage and become global challenges.
Developing detection algorithms is one way to curb the misuse of technology, but this approach is underused, said Yao Weibin, chief technology officer at face recognition firm ZOLOZ.
Regulators should require the establishment of a universal identifier for AI-generated content to curb fraudulent activities, Yao said. For example, each AIGC video or image should carry a signature or watermark. This would provide a more solid basis for platform management and enhance the credibility of AIGC.
“The formation of an AI ethics committee is vital,” said Han Bicheng, founder and chief executive officer of brain-computer interface company Zhejiang BrainCo. “When a technology emerges that has the potential to change the world, we should first use it to help those who need it the most.”
Editor: Kim Taylor