(Yicai) March 7 -- The growing trend of Chinese tech companies open-sourcing their artificial intelligence models is expanding to video generation tools.
Tencent Holdings' large language model team yesterday released an open-source image-to-video AI model that enables users to generate five-second videos from a single image. On Feb. 25, Alibaba Group Holding made its video- and image-generating AI tool Wan 2.1 publicly available.
In addition, AI startup StepFun said it plans to open-source the text-to-video and text-to-speech models that the Shanghai-based firm co-developed with Chinese carmaker Geely Auto later this month, following in the footsteps of industry pioneer DeepSeek.
Tencent's new open-source model has 13 billion parameters and can produce realistic footage, anime-style characters, and other content, according to the Shenzhen-based company. The Hunyuan LLM team has open-sourced the tool, including its weights, inference code, and low-rank adaptation training code, on GitHub and Hugging Face, it added.
Using Tencent's new model, users can specify the mood of a scene and the camera angles in their five-second video, and can also input text and audio to produce lip-synced speech. The example videos show subjects moving smoothly and speaking quite realistically.
Because video generation requires far more computing power and data than image generation, the industry has been reluctant to open-source models that are costly to develop, the head of the Hunyuan team told Yicai last December. As a result of this closed-door development, many people never get to use the best tools, he added.
Tencent also unveiled the open-source video generator HunyuanVideo last December, noting that, with more than 13 billion parameters, it was the largest publicly available tool of its kind.
The technical path for AI video generation models is not yet settled, and the industry is still exploring, experts told Yicai. At this stage of technical refinement, open-source tools can help drive progress, they pointed out.
On the VBench Leaderboard for video generation models, Wan 2.1 ranked first with a score of just over 86 percent, surpassing OpenAI's Sora. HunyuanVideo ranked 12th and open-source CogVideoX1.5-5B, developed by Chinese startup Zhipu AI, was 15th.
Editor: Martin Kadiev