【Hualong AI Industry Topic】AI Industry Series Tracking Special Research Report: Looking at the "Late-mover Advantage" of Domestic AI from DeepSeek
DATE: Feb 28, 2025

(Source: Hualong Securities Research).

Summary:

With low pricing, performance comparable to the world's top models, and open-source availability, DeepSeek has attracted global attention. On January 20, 2025, DeepSeek officially released the DeepSeek-R1 model and simultaneously open-sourced the model weights. The model applies reinforcement learning at scale in the post-training stage, and its performance is comparable to the official OpenAI o1 release on mathematics, code, natural-language reasoning, and other tasks. On API pricing, the DeepSeek-R1 API is priced at 1 yuan per million input tokens (cache hit) / 4 yuan (cache miss) and 16 yuan per million output tokens, about 1.8%, 3.6%, and 3.7% of OpenAI o1's corresponding prices, respectively. On the technical path, DeepSeek-R1 departs from the traditional supervised fine-tuning recipe: by combining reinforcement learning with cold-start data, it matches the OpenAI o1 series on reasoning tasks, offering the domestic AI industry a cost-effective option.

Computing power: optimistic about long-term growth in demand for domestic computing power; cloud services remain the most direct beneficiary. (1) Computing chips: long-term demand is expected to grow strongly; watch major vendors' capital expenditure. DeepSeek breaks the assumption that AI performance can only be improved by raising the intensity of computing-power investment, proving that algorithms can drive performance gains as well. In the short term this has hit the computing-power hardware side hard. In the long run, however, DeepSeek will sharply reduce the cost of large models, give companies of all sizes "equal rights" in AI, and open a new path for AI development, a historic boost for artificial intelligence that should stimulate growth in computing-power demand. As DeepSeek updates and promotes its large models, the competition for computing power between domestic and foreign technology leaders may intensify, driving overall demand growth. Since DeepSeek's January release, major overseas companies have felt stronger pressure to raise investment in response: the combined 2025 capital expenditure of Microsoft, Alphabet (Google's parent), Amazon, and Meta is expected to reach at least $320 billion, up significantly from $246 billion in 2024. According to the Financial Times, the continued increase will focus mainly on data-center construction and cloud services, to stay ahead in the competition with China's large models. Domestically, major cloud vendors such as Tencent and Alibaba have kept capital expenditure high in recent years. (2) Cloud services: still the most direct beneficiary. According to Synergy Research Group, the global cloud infrastructure services market grew 22% to $330 billion in 2024; through new GenAI platform services, GPU-as-a-Service, and enhancements to other cloud offerings, generative AI contributed at least half of that revenue growth. Among overseas vendors, the logic of AI driving cloud vendors' earnings growth has been preliminarily validated. From a long-term perspective, although DeepSeek lowers the cost of deploying any single model, that lower deployment cost should attract more vendors to adopt AI technology. Integrating DeepSeek lets cloud vendors offer more diverse computing-power leasing options, streamline model deployment, popularize cloud services, and provide value-added services. We believe cloud-service vendors are the direct beneficiaries after DeepSeek and are likely to be the first to enter the earnings-realization phase.

End-side: the beneficiary direction under computing-power equality. Edge AI combines large models with intelligent hardware, spanning algorithms, chips, hardware, and the entire industry chain upstream and downstream. Advances in DeepSeek's model algorithms have sharply reduced inference cost, making lightweight models easier to deploy on devices; device-side intelligence across application scenarios is therefore expected to develop rapidly, and deployment timelines are likely to be pulled forward.

AI applications: under the trend of AI inclusiveness, large-scale product rollouts are expected. DeepSeek's open-source, low-cost nature should help universalize AI technology and make AI applications more explosive. We believe AI can deeply empower traditional SaaS and IaaS business models, create product differentiation, clear the "last mile" demand bottlenecks in vertical scenarios, and stimulate consumer willingness to pay. Meanwhile, some overseas AI-application stocks have begun to realize earnings, and the global AI-application industry is expected to enter a resonance period as cutting-edge use cases land.

Investment suggestion: We believe DeepSeek's open-source, low-cost, high-performance model should help universalize domestic AI technology, and its algorithmic "late-mover advantage" has the potential to be transmitted along the domestic AI industry chain. The mapping effect of the overseas AI industry and comprehensive localization opportunities in the domestic AI industry will inject new growth momentum into domestic AI; we maintain our "Recommended" rating on the TMT industry. It is recommended to pay attention to the domestic computing-power chain: Digital China (000034.SZ), Inspur Information (000977.SZ), Sugon (603019.SH), iSoftStone (301236.SZ), Yunsai Zhilian (600602.SH); end-side AI: Bose Glasses (300622.SZ), Emdoor Information (001314.SZ), Arcjet-U (688220.SH), Espressif Technology (688018.SH), Hengxuan Technology (688608.SH), Zhongke Lanxun (688332.SH), Xingchen Technology (301536.SZ); AI applications: Kingsoft Office (688111.SH), iFLYTEK (002230.SZ), Keyuan Wisdom (002380.SZ), Dingjie Digital Intelligence (300378.SZ), Venture Huikang (300451.SZ), Yonyou Network (600588.SH), Zhiyuan Internet (688369.SH), EZVIZ Network (688475.SH), INTSIG (688615.SH).

Risk warning: risk of errors in cited data; AI investment falling short of expectations; intensified competition among AI products; covered companies' performance falling short of expectations; policy and standards advancing more slowly than expected; risk of stock-price corrections from short-term overvaluation of some companies.

1 DeepSeek launched an open-source model benchmarked against o1, and domestic large models are gaining ground

With low pricing, performance comparable to the world's top models, and open-source availability, DeepSeek has attracted global attention. On January 20, 2025, DeepSeek officially released the DeepSeek-R1 model and simultaneously open-sourced the model weights. The model applies reinforcement learning at scale in the post-training stage, and its performance is comparable to the official OpenAI o1 release on mathematics, code, natural-language reasoning, and other tasks. On API pricing, the DeepSeek-R1 API is priced at 1 yuan per million input tokens (cache hit) / 4 yuan (cache miss) and 16 yuan per million output tokens, about 1.8%, 3.6%, and 3.7% of OpenAI o1's corresponding prices, respectively.
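As a quick check of those ratios (assumptions, not from the report: OpenAI o1 list prices of $7.50 / $15 per million input tokens for cache hit / miss and $60 per million output tokens, at an assumed exchange rate of about 7.25 CNY per USD):

```python
# Rough reproduction of the pricing ratios quoted above.
# The o1 prices and the exchange rate below are assumptions for
# illustration, not figures taken from the report.
FX = 7.25  # CNY per USD (assumed)

deepseek_r1_cny = {"input_hit": 1.0, "input_miss": 4.0, "output": 16.0}
openai_o1_usd = {"input_hit": 7.5, "input_miss": 15.0, "output": 60.0}

for k in deepseek_r1_cny:
    ratio = (deepseek_r1_cny[k] / FX) / openai_o1_usd[k]
    print(f"{k}: {ratio:.1%}")
```

Note that the cache-miss and output ratios coincide mathematically (4/15 = 16/60), so the report's 3.6% vs 3.7% likely reflects different exchange-rate rounding.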

DAU has risen rapidly, making DeepSeek a phenomenal product in the industry. QuestMobile data released on February 8, 2025 showed that DeepSeek first surpassed Doubao in daily active users (DAU) on January 28, then broke the 30-million mark on February 1, becoming the fastest app in history to reach that milestone.

On the technical path, traditional large-model training relies on supervised fine-tuning (e.g., manually labeled data), whereas DeepSeek-R1-Zero is a large language model trained purely with reinforcement learning (RL): without supervised data, it improves its reasoning through self-reflection and strategies optimized by interaction with the environment, demonstrating that a model can learn and generalize effectively through RL alone. For example, with majority voting, DeepSeek-R1-Zero's score on the AIME benchmark rose from 71.0% to 86.7%, surpassing OpenAI-o1-0912.
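The "majority vote" evaluation mentioned above (often called self-consistency) samples several answers per problem and keeps the most frequent one. A minimal sketch, illustrative only and not DeepSeek's evaluation code:

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among sampled completions."""
    return Counter(answers).most_common(1)[0][0]

# E.g., six sampled answers to one AIME problem; the consensus wins even
# though individual samples disagree:
samples = ["42", "42", "17", "42", "17", "42"]
print(majority_vote(samples))  # 42
```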

Based on DeepSeek-R1-Zero and DeepSeek-V3, DeepSeek-R1 introduces cold-start data and a multi-stage training process to further improve model performance.

(1) Application of cold-start data

A small amount of high-quality Chain-of-Thought (CoT) data is used as a cold start to improve the initial performance and convergence speed of the model. Cold-start data is designed with readability and human preference in mind, such as adding summaries to the output format.

(2) Multi-stage training process

Phase 1: Fine-tune the base model with cold-start data.

Phase 2: Reasoning-oriented reinforcement learning, focused on improving the model's performance on reasoning-intensive tasks such as mathematics, programming, and scientific reasoning.

Phase 3: Collect new training samples via rejection sampling, covering both reasoning and non-reasoning data, and run another round of supervised fine-tuning.

Phase 4: Reinforcement learning across all scenarios, combining rule-based rewards with human-preference reward models to further optimize the model's helpfulness and safety.
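The four phases above can be sketched as a training-loop skeleton. Every function here is a trivial stub standing in for a real training step (illustrative only, not DeepSeek's training code):

```python
# Skeleton of the four-phase post-training pipeline described above.
# The stubs just record which step ran, so the control flow is visible.

def sft(model, data):
    return model + ["sft"]            # supervised fine-tuning step

def rl_train(model, reward):
    return model + [f"rl:{reward}"]   # reinforcement-learning step

def rejection_sample(model):
    return ["reasoning_samples"]      # filter model outputs into an SFT corpus

def train_r1(base_model):
    model = sft(base_model, "cold_start_cot")              # Phase 1
    model = rl_train(model, "rule_based")                  # Phase 2
    corpus = rejection_sample(model) + ["non_reasoning"]   # Phase 3
    model = sft(model, corpus)
    model = rl_train(model, "rule_based+preference")       # Phase 4
    return model

print(train_r1([]))
```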

In addition, DeepSeek verified that the reasoning patterns of a large model can be transferred to small models by distillation, significantly improving the small models' reasoning ability. For example, RL training alone brings DeepSeek-R1-Zero-Qwen-32B to performance comparable to QwQ-32B-Preview, yet DeepSeek-R1-Distill-Qwen-32B, distilled from DeepSeek-R1, significantly outperforms DeepSeek-R1-Zero-Qwen-32B on all benchmarks. At the same time, DeepSeek argues that while distillation is both economical and effective, pushing beyond the current frontier of intelligence may still require stronger base models and larger-scale reinforcement learning.
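For context on what "distillation" means here: DeepSeek's distillation is reportedly plain supervised fine-tuning on samples generated by the large model, while the classic textbook recipe (Hinton et al.) trains the student to match the teacher's temperature-softened output distribution. The classic objective can be sketched as:

```python
import math

# Classic knowledge-distillation loss: KL divergence between the teacher's
# and student's temperature-softened output distributions. Shown only to
# illustrate the general idea; it is not DeepSeek's recipe.

def softmax(logits, T=1.0):
    scaled = [z / T for z in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over temperature-T softmaxes, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q)) * T * T

print(distill_kl([4.0, 1.0, 0.5], [3.5, 1.2, 0.4]))  # > 0; exactly 0 when logits match
```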

By combining reinforcement learning with cold-start data, DeepSeek-R1 achieves performance comparable to the OpenAI o1 series on reasoning tasks. Through model distillation, R1's capabilities have also been successfully transferred to small dense models, significantly improving their reasoning ability. This demonstrates the breakthrough and potential of DeepSeek-R1 in language-model reasoning and offers the domestic AI industry a cost-effective option.

2 Computing power: Optimistic about the long-term growth of domestic computing power, cloud services are still the most direct beneficiary direction

2.1 Computing chips: long-term demand expected to grow; watch major vendors' capital expenditure

The scaling law is one of the core concepts behind the progress of large models, especially LLMs: it is an empirical formula describing how an LLM's test loss falls as certain variables grow, and its meaning can be summarized simply as "a model performs better when trained at larger scale on more data." Three main variables govern performance: the number of model parameters, the size of the dataset, and the amount of training compute. Since OpenAI proposed the formula in a 2020 paper, the scaling law has been an important foundation for the major tech giants' large-model development. DeepSeek's breakout, however, has led the market to ask: is the scaling law still valid?
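As commonly cited from that 2020 OpenAI paper (Kaplan et al.), the formula takes a power-law form in each of the three variables just listed:

```latex
% Power-law scaling of pre-training test loss (Kaplan et al., 2020), with
% N the number of non-embedding parameters, D the dataset size in tokens,
% and C the training compute; N_c, D_c, C_c and the exponents \alpha_N,
% \alpha_D, \alpha_C are empirically fitted constants.
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}
```

Each relation holds when the other two variables are not the bottleneck; loss falls smoothly but with diminishing returns as any one of them grows.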

In essence, the scaling law measures the model's test loss during pre-training, while ordinary downstream users care more about the LLM's real-world performance, i.e., its reasoning ability. In other words, the scaling law tells developers how to reduce an LLM's test loss, not how to obtain an LLM that performs better in real applications; these are genuinely two different questions. Moreover, by its very definition, the long-run flattening of the curve is part of the scaling law's own conclusion: further long-term performance gains require other methods, which does not conflict with the scaling law itself.

In this context, the emergence of DeepSeek is seen as a kind of "breakthrough" around the scaling law: its MoE architecture, reinforcement learning (RL) on top of supervised fine-tuning, and deep CoT reasoning are all productive attempts outside pre-training, where important breakthroughs have been achieved. Specifically, under hardware constraints on computing chips, DeepSeek achieved a significant lift in model performance through algorithmic-framework updates and systems-engineering optimization, and in the process showed that reinforcement learning and deep reasoning under a suitable reward mechanism still follow scaling behavior, offering another direction for advancing AI under the scaling law. The scaling law therefore remains valid.

However, the market is divided on whether a cheaper, equally performant, open-source model is good or bad for upstream computing power. The Jevons Paradox reveals the complex relationship between technological progress and demand: the economist William Stanley Jevons observed a contradiction between improved energy efficiency and reduced energy demand. When technological progress makes use of a resource more efficient, it tends not to reduce consumption of that resource but to stimulate broader demand for it, raising overall consumption.

Applying this paradox to artificial intelligence: DeepSeek breaks the path of relying solely on computing-power investment intensity to improve AI performance, proving that algorithms can drive performance gains as well. In the short term this hit the computing-power hardware side hard: on January 27, Nvidia, the U.S.-listed leader in computing chips, fell 17%, losing roughly $600 billion in market value; Broadcom fell more than 17%; and related stocks such as TSMC dropped sharply, reflecting market concern about GPU and ASIC chip makers and drawing intense mainstream-media attention to DeepSeek. In the long run, however, DeepSeek will sharply reduce the cost of large models, give companies of all sizes "equal rights" in AI, and open a new path for AI development, a historic boost for artificial intelligence that should stimulate growth in computing-power demand. DeepSeek may therefore weigh on computing power in the short term but is expected to drive exponential growth in computing-power demand over the long term.

As DeepSeek updates and promotes its large models, the competition for computing power between domestic and foreign technology leaders may intensify, driving overall demand growth. According to the Financial Times, overseas tech giants' capital expenditure has risen sharply year after year and is expected to keep growing rapidly in 2025. Since DeepSeek's January release, major overseas companies have felt stronger pressure to raise investment in response: the combined 2025 capital expenditure of Microsoft, Alphabet (Google's parent), Amazon, and Meta is expected to reach at least $320 billion, up significantly from $246 billion in 2024. The Financial Times reports that the continued increase will focus mainly on data-center construction and cloud services, to stay ahead in the competition with China's large models. Domestically, major cloud vendors such as Tencent and Alibaba have kept capital expenditure high in recent years.

At the same time, DeepSeek's equalizing effect should stimulate the development of domestic computing power even more strongly, driving growth in domestic GPUs, software ecosystems, and data centers; domestic substitution is expected to accelerate with greater certainty.

2.2 Cloud services: Domestic cloud vendors are fully adapted to DeepSeek, and cloud services are still the most direct beneficiary direction under overseas mapping

According to Synergy Research Group, the global cloud infrastructure services market grew 22% to $330 billion in 2024; through new GenAI platform services, GPU-as-a-Service, and enhancements to other cloud offerings, generative AI contributed at least half of that revenue growth. Among overseas vendors, the logic of AI driving cloud vendors' earnings growth has been preliminarily validated. Specifically, Google Cloud's fourth-quarter 2024 revenue was $12 billion, up 30% year over year, driven mainly by growth in core GCP (Google Cloud Platform) products, AI infrastructure, and generative-AI solutions. Revenue from Microsoft Azure and other cloud services grew 31% in the fourth quarter of 2024, with AI services contributing 13 percentage points of that growth; AI-services revenue grew 157% year over year, beating expectations, largely on the back of the Azure-OpenAI partnership.

DeepSeek has dramatically reduced the input/output pricing of reasoning models through algorithmic innovation. In the short term, this strengthens the substitution logic of domestic training chips for high-end overseas training chips, makes domestic chips more cost-effective on the inference side, opens up potential market space for domestic computing chips, and accelerates their deployment for both training and inference. In addition, the large-model price war at home and abroad is in full swing. On one hand, the price war intensified after DeepSeek's release, and the industry is expected to shift from "closed-source algorithm competition" to "open-source ecosystem building." On the other hand, domestic computing platforms are now fully compatible with DeepSeek, which should accelerate full localization of the AI industry and may further stimulate B-end/G-end AI demand. The new DeepSeek models were recently launched on Huawei's Ascend community; according to Huawei's official website, deploying the DeepSeek-V3 model requires four Atlas 800I A2 (8 x 64 GB) servers. JD Cloud, Tencent Cloud, Volcano Engine, Alibaba Cloud, and others have also officially announced availability of the DeepSeek models.
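As a rough sanity check on the Huawei figure, a back-of-envelope memory budget can be sketched. The assumptions here are not from the report: a 671B total parameter count (per the DeepSeek-V3 technical report) stored at roughly 1 byte per parameter in 8-bit form, with the remaining memory absorbed by KV cache, activations, and parallelism overhead.

```python
# Back-of-envelope memory budget for the quoted deployment of
# four Atlas 800I A2 servers, each with 8 x 64 GB of accelerator memory.
# Parameter count and bytes-per-parameter are assumptions for illustration.
cluster_gb = 4 * 8 * 64        # total accelerator memory across the cluster
weights_gb = 671 * 1           # ~671B parameters at ~1 byte each (8-bit)
headroom_gb = cluster_gb - weights_gb  # left for KV cache, activations,
                                       # and tensor-parallel duplication
print(cluster_gb, weights_gb, headroom_gb)
```

On these assumptions the weights alone fill about a third of the cluster's 2048 GB, which is why a single 8 x 64 GB server cannot host the model and a multi-server cluster is quoted.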

From a long-term perspective, although DeepSeek lowers the cost of deploying any single model, that lower deployment cost should attract more vendors to adopt AI technology. Integrating DeepSeek lets cloud vendors offer more diverse computing-power leasing options, streamline model deployment, popularize cloud services, and provide value-added services. We believe cloud-service vendors are the direct beneficiaries after DeepSeek and are likely to be the first to enter the earnings-realization phase.

3 End-side: the beneficiary direction under the equality of computing power

The rapid iteration of AI has improved efficiency across industries and in turn spurred large-model performance gains, but cloud-hosted large models have always faced challenges such as data dependence, data quality and security, energy and compute-efficiency bottlenecks, algorithm optimization, and network latency, which limit their expansion to the device side. Edge AI pushes large models from the cloud into specific application scenarios, close to where data is collected and where the business loop closes, effectively improving compute-carrying capacity and reducing cost.

Chen Ning, founder of Yuntian Lifei, argues that three distinctive characteristics give edge AI its competitiveness. First, lower cost: edge AI lets large models be deployed in everyday scenarios, including phones, PCs/tablets, automobiles, and wearables, improving user experience. Second, data security and privacy: being closer to the user, data can be processed on the device without uploading to the cloud, removing the worry of data leakage. Third, ultra-low latency: processing data nearby avoids transmission delay and delivers faster, more stable service. The global edge AI market was about $23.5 billion in 2024 and is expected to grow to $143.6 billion by 2032, with manufacturing, automotive, government, and IT/communications leading demand.

Edge AI combines large models with intelligent hardware, spanning algorithms, chips, hardware, and the entire industry chain upstream and downstream. Advances in DeepSeek's model algorithms have sharply reduced inference cost, making lightweight models easier to deploy on devices; device-side intelligence across application scenarios is expected to develop rapidly, and deployment timelines are likely to be pulled forward.

AI phones

Smartphones are among the most mature scenarios for device-side AI deployment, and with DeepSeek's breakthrough the rollout schedule for AI phones is expected to move forward across the board. 2024 can be regarded as the "first year of the AI phone": mainstream manufacturers including OPPO, vivo, Huawei, Xiaomi, Honor, Apple, and Samsung all launched AI phones that year, deploying large models on handsets, with domestic brands moving slightly faster than overseas leaders.

Although phone makers have taken the first step in deploying large models, truly mature commercial AI phones still face challenges, cost reduction chief among them. To achieve intelligence, phones need to deploy models with larger parameter counts, which greatly increases compute cost; improving quality while cutting cost is the core of this business model. Moreover, unlike training, which demands high compute performance, device-side deployment centers on the inference stage and user experience: a trained model runs inference to serve different users' needs. DeepSeek's innovations not only sharply reduce large-model cost but also make major progress on inference, and are expected to gradually resolve the pain points of device-side large models.

AI phone shipments at home and abroad are growing rapidly, penetration keeps rising, and growth is expected to stay fast as large models iterate. In the third quarter of 2024, domestic AI phone shipments surged 591% year over year, and penetration jumped to 22% from 3% a year earlier. IDC forecasts that China's AI phone shipments will reach 118 million units in 2025, up 59.8% year over year, for a 40.7% share. According to Canalys, generative AI had been deployed on 16% of global smartphone shipments by the end of 2024. Driven by AI agents and device-side intelligence, global AI phone shipments are projected to grow at a 63% compound annual growth rate from 2023 to 2028, with penetration reaching 54% in 2028.

Wearables

Wearable devices are diverse and their application scenarios keep expanding, giving them broad growth prospects. According to IDC, global wearable shipments reached 538 million units in 2024, up 6.1% year over year. Over the next few years the global wearables market is expected to keep growing steadily, with growth staying above 2% through 2028.

By product category, headphones account for more than 60% of the wearables market, the largest category; driven by new and replacement demand in emerging economies, they are expected to keep growing steadily, with shipments approaching 400 million units in 2028. Smartwatch shipments, heavily affected by the Indian market, fell 3% in 2024 and are expected to rebound to 4.8% growth in 2025, reaching 175 million units in 2028. As the two largest existing categories, headphones and smartwatches are projected at 2024-2028 CAGRs of 3.9% and 2.9% respectively, remaining stable. Meanwhile, consumer demand for intelligent headphones and watches is rising, and scenarios such as sports and health monitoring are driving device-side intelligence, so penetration should keep climbing.

Because their demand scenarios are the most natural fit for deploying large models, smart glasses are regarded by the industry as the first stop for device-side intelligence. Global smart-glasses shipments reached 1.8 million units in 2024, up 73.1% year over year, while smart rings grew even faster (88.4%), well above other categories. At CES in early 2025, the "hundred-glasses war" reflected major manufacturers' optimism about this track, with many leading technology companies at home and abroad launching smart-glasses products. IDC forecasts global shipments of 2.3 million smart glasses and 3.1 million smart rings in 2028, 2024-2028 CAGRs of 7.6% and 17% respectively, making them the two fastest-growing categories over the next few years and a significant driver of device-side intelligence.

At present, device-side computing power is one of the main obstacles limiting the interactive experience of wearables and restricting large-scale commercialization. The main solutions include higher-performance integrated chips (SoC, NPU), stronger co-processing between device-side small models and larger cloud models, and better device battery life. With DeepSeek delivering model miniaturization and low cost, progress in integrated chips becomes all the more important, and model innovation has also lowered the difficulty of integrating and iterating computing chips for specific needs. Many domestic SoC companies have already launched new-generation chips that support device-side AI and integrate multiple inference capabilities on top of their respective specialties, pushing device-side intelligence toward earlier deployment in new interactive form factors such as glasses and rings.

4 AI applications: It is expected to enter the era of "late-mover advantage", waiting for the birth of super applications

Amid the transition from the Internet era to the AI era, AI empowering traditional applications is the general trend. AI is expected to deeply empower traditional SaaS and IaaS business models, create product differentiation, and stimulate consumer willingness to pay. Overseas, some AI-application stocks have begun to realize earnings, and the global AI-application industry is expected to enter a resonance period as cutting-edge use cases land.

DeepSeek's open-source, low-cost nature should help universalize AI technology and make AI applications more explosive. On February 16, 2025, WeChat's search feature (Souyisou) officially began gray-scale testing with DeepSeek integration. WeChat Mini Programs, one of the main carriers of WeChat traffic, illustrate the scale: according to QuestMobile, Mini Programs had 949 million monthly active users in October 2024, averaging 1.7 hours of use, nearly 70 sessions, and 20 active days per user per month, enormous scale, activity, and stickiness. On this basis, data, AI algorithms, and user volume should quickly form a flywheel, greatly improving traffic-monetization efficiency. China has a large base of traditional applications with broad user groups; the integration of the "national-level" app WeChat with DeepSeek is an important exploration for the domestic AI ecosystem, and similar "AI + traditional application" forms are expected to keep blossoming. We believe DeepSeek has proved that a "late-mover advantage" can be achieved in AI through algorithmic optimization, and the "AI equality" it brings opens imaginative space for domestic AI applications, paving the way for that advantage to be transmitted downstream in the domestic AI industry.

Specifically, domestic models represented by DeepSeek will help a large number of traditional apps transform and upgrade, accelerate the spread of AI into vertical scenarios, and clear the "last mile" blockages in users' real needs. Combining China's conditions with overseas mapping, we expect AI to land first in vertical scenarios such as healthcare, education, e-commerce, office work, and programming, with AI agents and multimodal AI as the key branches of AI development.

AI Agent: supports natural-language input, automatically decomposes tasks from user requests, and executes them across apps, making it one of the most important areas of current AI application. In the DeepMind team's paper "Position: Levels of AGI for Operationalizing Progress on the Path to AGI," the AI agent corresponds to the fifth and highest level of automation. Viewed from AI's final form, AI that can participate in decisions and take actions on top of recognition, understanding, and reasoning is the necessary path to AGI.

The model-algorithm upgrades represented by DeepSeek are expected to accelerate AI-agent development and adoption, letting agents handle more complex tasks and scenarios, including office work, marketing, healthcare, and industrial manufacturing. In B-end scenarios, DeepSeek can fully empower AI agents in applications such as intelligent customer service and office assistants to improve efficiency and user experience; for example, through deep integration with enterprise applications, AI agents can automate business processes and support decisions.

After DeepSeek took off, many listed companies integrated it into their AI agents and software products, which are now deeply empowered by DeepSeek; the combination of AI agents and DeepSeek has already landed in multiple application scenarios. We believe this collision of technology and ecosystem should drive both the optimization of B-end business lines and breakthroughs in C-end AI experiences.

Multimodal AI: AI products today are dominated by chatbots, and single-modality input and output greatly limit use scenarios, making multimodality an inevitable branch. Multimodal AI is expected to push AI technology into smart devices. Recently, several A-share AI-application companies officially announced DeepSeek-R1 integration to empower their own multimodal models or products. For example, Wondershare Technology completed deep adaptation of DeepSeek-R1, integrating its capabilities into its video-creativity, drawing-creativity, and document-creation products, and Danghong Technology integrated its BlackEye multimodal audio-visual model with DeepSeek-R1 and DeepSeek Janus-Pro.

5 Investment Advice

We believe DeepSeek's open-source, low-cost, high-performance model should help universalize domestic AI technology, and its algorithmic "late-mover advantage" has the potential to be transmitted along the domestic AI industry chain. The mapping effect of the overseas AI industry and comprehensive localization opportunities in the domestic AI industry will inject new growth momentum into domestic AI; we maintain our "Recommended" rating on the TMT industry. It is recommended to pay attention to the domestic computing-power chain: Digital China (000034.SZ), Inspur Information (000977.SZ), Sugon (603019.SH), iSoftStone (301236.SZ), Yunsai Zhilian (600602.SH); end-side AI: Bose Glasses (300622.SZ), Emdoor Information (001314.SZ), Arcjet-U (688220.SH), Espressif Technology (688018.SH), Hengxuan Technology (688608.SH), Zhongke Lanxun (688332.SH), Xingchen Technology (301536.SZ); AI applications: Kingsoft Office (688111.SH), iFLYTEK (002230.SZ), Keyuan Wisdom (002380.SZ), Dingjie Digital Intelligence (300378.SZ), Venture Huikang (300451.SZ), Yonyou Network (600588.SH), Zhiyuan Internet (688369.SH), EZVIZ Network (688475.SH), INTSIG (688615.SH).

6 Risk warning

(1) Risk of errors in cited data. The data in this report are drawn from public sources; errors in them could affect the results of the analysis.

(2) AI investment falling short of expectations. Relevant technological breakthroughs are closely tied to investment intensity.

(3) Intensifying competition among AI products. Intensified competition could lead to price wars.

(4) Covered companies' performance falling short of expectations. The covered companies' results are subject to many factors; if performance misses expectations, their share prices would be affected.

(5) Policy and standards advancing more slowly than expected. The sustainable development of AI requires policy guidance.

(6) Risk of stock-price corrections from short-term overvaluation of some companies. Industry and company conditions change quickly, and market volatility may bring significant stock-price swings.

This article is excerpted from the report "AI Industry Series Tracking Special Research Report: Looking at the 'Late-mover Advantage' of Domestic AI from DeepSeek".

Report release date: 28/02/2025

Report issued by: Hualong Securities

Analyst: Sun Bowen, Practicing Certificate No. S0230523080004

Analyst: Jing Danyang, Practicing Certificate No. S0230523080001
