Recently, the National Health Insurance Administration issued the "Guidelines for the Establishment of Radiological Examination Medical Service Price Projects (Trial)", which unifies and standardizes the current radiological examination items. The guidelines note that, in order to support the clinical application of artificial intelligence-assisted diagnosis without adding to patients' burden, an "artificial intelligence-assisted diagnosis" extension is uniformly placed under the main radiological examination project: if a hospital uses artificial intelligence to assist in diagnosis, the price is the same as that of the main project, but it cannot be charged on top of the main project.
This is the first time that AI-assisted diagnosis has been included in the pricing framework. Buoyed by the news, related concept stocks such as Libon Instruments (300206.SZ), Berry Gene (000710.SZ) and Seili Medical (603716.SH) rose on November 25.
However, human-computer interaction in medical scenarios is still some distance from being realized. At a press conference on November 23, the National Health Insurance Administration said that, after research, all parties generally believe that artificial intelligence can help doctors improve diagnostic efficiency to a certain extent, but that at this stage it cannot completely replace doctors.
AI-assisted diagnosis has been included in the project guidelines of the National Health Insurance Administration
On November 23, the National Health Insurance Administration said at the press conference that 17 batches of price project guidelines have been compiled and released to guide the standardization of price projects at the local level. To support relatively mature artificial intelligence (AI)-assisted technologies entering clinical application without adding to patients' burden, the administration analyzed the potential application scenarios of artificial intelligence and set up an "artificial intelligence-assisted" extension in radiological examination, ultrasound examination and rehabilitation projects.
According to statistics from Securities Times and Databao, more than 30 listed companies are currently involved in AI medical business. Mindray Medical (300760.SZ, share price 261.36 yuan, market value 316.884 billion yuan) previously disclosed on its investor interaction platform that the company began working with Tencent a few years ago to jointly develop AI algorithms for slide readers, and that this product has become the first AI product in the domestic in vitro diagnostics industry to enter the special review procedure for Class III innovative medical devices.
Lifted by the above news, AI medical concept stocks rose sharply on November 25.
A person at An Biping (688393.SH, share price 20.09 yuan, market value 1.88 billion yuan) told the Daily Economic News reporter that the cervical cytology artificial intelligence-assisted diagnostic products developed by the company are in the process of applying for Class III registration certificates. In terms of function, the current analysis of pathological sections by artificial intelligence falls into three categories: first, detection and segmentation of tissues and cells; second, extraction of image-related features; and third, classification and grading of pathological images. Pathologists can then make further diagnoses based on the results of the computer-aided algorithms.
"The AI workflow can be roughly divided into: data pre-processing - image segmentation, feature extraction, selection, classification, recognition, and result output." The above-mentioned person said that from the perspective of the carrier realized by artificial intelligence, most of the products are in the form of digital pathology image processing software, which is loaded on the computer terminal of the pathology department and connected to the hospital information system for use, and a small number of products directly integrate the AI analysis algorithm into the microscope, which can complete real-time analysis and calculation when the pathologist reads the film and display it in the eyepiece field of view.
In his view, pathological AI is expected to ease the shortage of pathologists and greatly reduce their workload. In traditional pathology reading, lesions often occupy less than 1% of the slide area, and pathologists have to spend their effort on millions of pixels of negative area. If pathological AI is put into clinical use, it is expected to take over 65%~75% of the "negative screening" reading work, so that clinicians need only focus their attention on suspicious sites.
On November 25, a person at Libon Instruments also told the media that, in the field of AI medicine, the company will apply AI modules to its ultrasound business.
There are more and more scenarios where "AI + medical care" is landing
With the continuous progress of technology, AI tools have broad prospects in the medical field. Ultrasound, CT, magnetic resonance and other imaging examinations are the most common areas where AI medicine has landed, and domestic medical equipment manufacturers have added AI functions to essentially all of their new models in recent years.
Take obstetric ultrasound as an example. As the sonographer sweeps the probe across the pregnant woman's abdomen, the AI ultrasound quickly identifies the multiple standard planes used to observe fetal development, automatically selects the best standard plane, and rapidly measures indicators such as fetal head circumference, abdominal circumference, femur length and humerus length. As long as the doctor finds the corresponding position, the AI can automatically identify the standard plane, which saves a great deal of time.
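As a rough illustration of that flow, the following hedged Python sketch scores incoming frames with a placeholder "standard plane" quality function, keeps the best frame, and estimates head circumference from a crude ellipse fit; the quality proxy, threshold and pixel-to-millimetre calibration are invented for the example and do not reflect any vendor's product.

```python
# Hedged sketch of the obstetric-ultrasound flow described above: pick the best
# "standard plane" frame from a probe sweep, then run a crude biometry measurement.
import numpy as np

PIXEL_SPACING_MM = 0.3  # assumed calibration: millimetres per pixel

def plane_quality(frame: np.ndarray) -> float:
    """Placeholder for a trained standard-plane classifier (higher = better)."""
    return float(frame.std())  # toy proxy: contrast-rich frames score higher

def best_standard_plane(frames: list[np.ndarray]) -> np.ndarray:
    """Keep the frame most likely to be the standard measurement plane."""
    return max(frames, key=plane_quality)

def head_circumference_mm(frame: np.ndarray, threshold: float = 0.5) -> float:
    """Crude biometry: threshold bright structures, then estimate the perimeter
    of the bounding ellipse using Ramanujan's approximation."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return 0.0
    a = (xs.max() - xs.min()) / 2.0   # semi-axis in x (pixels)
    b = (ys.max() - ys.min()) / 2.0   # semi-axis in y (pixels)
    perimeter_px = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
    return float(perimeter_px * PIXEL_SPACING_MM)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = [rng.random((128, 128)) for _ in range(30)]  # stand-in for a probe sweep
    plane = best_standard_plane(frames)
    print(f"estimated head circumference: {head_circumference_mm(plane):.1f} mm")
```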
In addition, AI's application scenarios in ultrasound include quality control and physician training. In the past, the growth of young doctors depended mainly on guidance from experienced doctors, which took a long time and was highly subjective; AI has now solved this problem to a certain extent, and many screening tasks that are difficult for junior doctors to do well are being taken over by AI.
Wang Ruili, deputy director of the ultrasound department of Henan Provincial People's Hospital, told the Daily Economic News reporter that each pregnant woman's ultrasound examination requires saving at least 22 planes and 24 images, but the images obtained clinically are usually double that number or more. The quality control team therefore spends more than ten minutes per case on average selecting images, whereas AI can shorten this to less than a minute, or even a few seconds.
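A minimal sketch of that picture-selection step: assuming each saved image already carries a predicted plane label and a quality score (both hypothetical fields here, presumably produced by upstream models), the selection itself reduces to keeping the best image per required plane.

```python
# Sketch of QC image selection: from many saved images, keep the highest-quality
# image for each required plane. Plane names and scores are illustrative only.
from collections import defaultdict

REQUIRED_PLANES = {"head", "abdomen", "femur", "four_chamber_heart"}  # assumed subset of the 22

def select_best_images(images: list[dict]) -> dict:
    """images: [{'id': ..., 'plane': ..., 'quality': ...}] -> best image id per plane."""
    best = defaultdict(lambda: {"id": None, "quality": float("-inf")})
    for img in images:
        plane = img["plane"]
        if plane in REQUIRED_PLANES and img["quality"] > best[plane]["quality"]:
            best[plane] = img
    return {plane: rec["id"] for plane, rec in best.items() if rec["id"] is not None}

if __name__ == "__main__":
    saved = [
        {"id": "img_01", "plane": "head", "quality": 0.82},
        {"id": "img_02", "plane": "head", "quality": 0.91},
        {"id": "img_03", "plane": "femur", "quality": 0.77},
        {"id": "img_04", "plane": "abdomen", "quality": 0.88},
    ]
    print(select_best_images(saved))  # {'head': 'img_02', 'femur': 'img_03', 'abdomen': 'img_04'}
```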
Since large models such as ChatGPT arrived last year, the application of AI in the medical field has accelerated and extended further into more advanced areas of diagnosis and treatment, such as delivering greater value in clinical surgery. At the 2024 China International Fair for Trade in Services (CIFTIS), ROPA, billed as the world's first orthopedic surgical robot equipped with artificial-intelligence deep-learning technology, attracted the attention of merchants from all over the world.
Professor Liu Hongbin, director of the AI Center of the Hong Kong Innovation Institute of the Chinese Academy of Sciences, told the Daily Economic News reporter that, in terms of surgical vertical models, Chinese teams can be said to be running neck and neck with the leading international groups. Moreover, China has a data advantage, and if it can make full use of that advantage it will have plenty of staying power.
In March this year, the AI Center of the Hong Kong Innovation Institute of the Chinese Academy of Sciences released the AI multimodal model CARES Copilot 1.0, which can automatically identify lesions and anatomical structures during surgery. On November 23, the large model was iterated to version 2.0, which focuses mainly on intraoperative human-computer interaction.
Liu Hongbin explained that, in the past, surgeons had to consult textbooks to plan an operation, and once the surgical approach was decided there was a series of points to note, such as where the tumor is located and how to open the access path; the large model can now help complete this part of the work, and surgical planning was the function clinicians used most in version 1.0. In addition, neurosurgery takes a long time, and doctors run into questions during the operation that need to be answered.
Version 2.0 also embeds modules such as surgical navigation to give doctors real-time prompts, such as which part to look at, which area of the MRI image to focus on, and how to guide blood transfusion during surgery.
"We hope to use large models to guide doctors, and we are combining the ability to make large models with the application of robots." Liu Hongbin said that neurosurgeons look at the images of the endoscope to perform surgery, and if the adjustment of the endoscopic field of view is done by a robot, the doctor can free up one hand. In addition, the intraoperative difficulties that young doctors may encounter are positioning and intraoperative labyrinth, and the large model learns to understand the surgical steps, which can locate and partition some important anatomical structures, light up the danger area, and provide guidance on the surgical steps.
The future direction is human-computer interaction in medical scenarios
Although more and more AI-assisted technologies have entered clinical applications, there is still a long way to go before human-computer interaction can be realized in medical scenarios.
According to the director of the ultrasound department of a tertiary hospital in China, AI can already meet the need for rapid plane identification in prenatal screening, and standard planes can be displayed quickly during real-time examination. However, the key step that comes first is sweeping out the plane, known in the industry as "drawing the image", which tests the doctor's technique and experience. There is also the problem that AI needs to be trained and cannot recognize what it has not seen, so doctors cannot rely entirely on AI.
Liu Hongbin has also encountered this clinical pain point in his exchanges with clinicians. He told the reporter that ultrasound images are noisy, and that where the probe makes contact and how much pressure is applied both affect image quality; if a medical vertical model could be brought in and combined with robotic-arm scanning, ultrasound examination could in the future become as simple for doctors as taking pictures with a mobile phone.
He also gave an example: in a traditional operating room, the doctor wants a good assistant, but such assistants are hard to find. A feasible path is for the robot to become familiar with the doctor's operating style and habitual language through interactive learning and then play the assistant's role. They are already working with clinicians to develop this capability, which also requires continuous-learning ability.
As for the future direction of AI medicine, Liu Hongbin said it is foreseeable that AI technology will take part in more serious medical treatment, but this breakthrough requires close cooperation among clinicians, academic researchers and industry; only with three-way collaboration can truly meaningful innovation be achieved.
Cover image source: Visual China-VCG211374266925