(Yicai) Feb. 26 -- Generative artificial intelligence has huge potential in clinical diagnosis and treatment, but AI tools can still ‘hallucinate,’ generating false or misleading information, so they should be used with caution in medical diagnosis, a number of medical experts told Yicai.
Disease diagnosis using AI models, such as Shanghai East Hospital’s ‘Med-Go,’ is on the rise in China. And an increasing number of patients are now bringing ‘diagnostic reports’ generated by AI tools, such as chatbot DeepSeek, directly to their consultations with doctors.
While there are cases of AI tools accurately diagnosing diseases, it is still not feasible to apply these models to a wider patient group, the developer of a medical information system said. Although large language models could take over some tasks from general practitioners in community clinics, ongoing research and training of specialized models is still needed for clinical diagnosis in large hospitals, he added.
‘Med-Go,’ which has already passed the National Qualification Examination for Medical Practitioners and won several medical knowledge competitions, was recently used to diagnose a rare autoimmune disease in an 11-year-old boy. Normally, such a diagnosis would take a year, but it was reached much faster with AI.
"‘Med-Go’ is like a medical professor, providing auxiliary decision-making support for doctors but not replacing them," said inventor Zhang Haitao.
"Even though 90 percent of DeepSeek’s conclusions are accurate, that doesn’t mean it can replace a doctor’s judgment," said a cardiologist.
AI tools are only as good as the data fed into them, the director of the intensive care unit at a large tertiary hospital in Shanghai told Yicai. Only valid data can lead to valid results. While building a medical AI model is not difficult, given the many open-source models available, the real challenge lies in processing local data across various scenarios, the director added.
‘Hallucinations’ are the biggest barrier to the widespread use of AI in medicine, said Guo Lehang, deputy director of the ultrasound department at Shanghai’s 10th People’s Hospital and leader of an ‘ultrasound + AI’ research team. They could lead to incorrect diagnoses, treatment advice, or medical decisions, with serious consequences for patients’ health, he added.
Overcoming AI ‘hallucinations’ requires solving many technical and ethical challenges, said Chen Runsheng, a scholar at the Chinese Academy of Sciences.
Editor: Kim Taylor