DeepSeek has officially announced the launch and open-sourcing of a preview version of its latest model series, DeepSeek-V4. The new models are designed to handle ultra-long contexts of up to one million words and come in two variants, deepseek-v4-pro and deepseek-v4-flash, which differ in functional scope. Owing to constraints on high-end computing resources, the service throughput of deepseek-v4-pro is currently capped; however, with Huawei's Ascend 950 super nodes expected to see large-scale deployment in the second half of this year, a substantial reduction in costs is anticipated.

DeepSeek-V4 has been deeply optimized for domestic chips, notably Huawei's Ascend series. This marks a pivotal step for China's AI sector away from reliance on CUDA. Ascend chips now form a comprehensive product lineup, and Huawei has announced plans to introduce the Ascend 950PR in the first quarter of 2026.

Securities firms are bullish on the domestic computing power industry chain. Shanxi Securities argues that the synergy between domestic large models and domestic computing power will drive a robust supply-demand cycle, while CITIC Securities forecasts that shipments of domestic computing chips will grow by at least 100% by 2026.
