LLMs Are Far From Reaching Technical Limits, Chinese AI Experts Say
Liu Xiaojie

(Yicai) Feb. 25 -- Large language models are not yet approaching their technological ceiling, with ample room for further advancements, experts said at the recent Global Developer Conference.

LLMs are in a rapid development phase, Liu Hua, vice president of Shanghai-based artificial intelligence startup MiniMax, said during the AI-themed GDC that wrapped up in the eastern city on Feb. 23.

The launch of OpenAI's o1 model late last year and DeepSeek's open-source release of DeepSeek R1 in January exemplify this progress, Liu said. In the next two to three years, technological advances comparable to the leap from GPT-3.5 to GPT-4 are likely to occur twice more, he added.

Industry insiders are speculating about how close LLM developers are to the limit of the scaling laws: the point beyond which adding model parameters, training data, or computing power yields only diminishing returns in performance, so that further resource investment is largely wasted.
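As a rough illustration of what such a limit means (a sketch drawing on published research, not on the conference remarks), the empirical scaling laws of Kaplan et al. (2020) model test loss as a power law in each resource, so every doubling of scale buys a smaller absolute improvement:

\[
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
\]

Here L is the model's test loss, N the number of parameters, D the number of training tokens, and N_c, D_c, α_N, α_D empirically fitted constants; the paper reports exponents of roughly α_N ≈ 0.076 and α_D ≈ 0.095. With exponents that small, doubling the parameter count cuts loss by only about five percent, which is the quantitative sense in which returns diminish as models grow.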

Developers need more corpus data, that is, text written or speech recorded by native speakers of a language or dialect. An industry practitioner told Yicai that the supply of this fundamental raw material has not grown in proportion with the scale of LLMs, limiting how much new knowledge models can learn.

However, He Conghui, a scientist at the Shanghai AI Laboratory, said that the available data have not been exhausted and that there is still room to improve their quality. Better data quality also raises training efficiency, meaning future models may need less data, which could further cut computing costs and draw more participants into model optimization.
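His point about quality substituting for quantity can be made concrete with the compute-optimal scaling analysis of Hoffmann et al. (2022, the "Chinchilla" paper), cited here as an illustration rather than anything presented at GDC. That work fits loss jointly in parameters N and training tokens D:

\[
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

with reported fits of roughly E ≈ 1.69, α ≈ 0.34, and β ≈ 0.28, implying that compute-optimal training grows N and D in near-equal proportion, around 20 tokens per parameter. The fit treats all tokens as equally informative; read through this lens, raising the information content per token acts like increasing the effective D without gathering more raw data, though that interpretation is this sketch's assumption, not the paper's.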

Qiao Yu, assistant director of the Shanghai AI Laboratory, noted at the conference that LLMs still face numerous challenges in industrial implementation, including costs, efficiency, reliability, stability, and security.

Beginning this year, LLMs will enter their next phase, where innovation and application will become crucial for overcoming development bottlenecks, Qiao said. DeepSeek has made significant progress through systematic innovation in model architecture, training methods, and high-speed parallel frameworks. This has greatly improved efficiency and reduced costs, providing valuable insights, the expert added.

Looking ahead, Qiao believes this year will bring advancements in multimodal intelligence and AI-assisted scientific discoveries.

Editor: Emmi Laine

Keywords: GDC, developer, AI, AI models, DeepSeek, scaling law, OpenAI, ChatGPT, DeepSeek R1, scaling law limit, LLM, large language models, data corpus, model training, China