DeepSeek V4 Lite Unveiled Inconspicuously: A 200-Billion-Parameter Model Nearing the Performance of Leading U.S. Counterparts
2 days ago
Author: Editorial Staff

While DeepSeek V4 missed its anticipated release during the Spring Festival period, DeepSeek quietly introduced a new iteration, DeepSeek V4 Lite, on February 11. With 200 billion parameters, the model emphasizes an ultra-long context window of up to 1 million tokens, and its knowledge base has been refreshed with information through May 2025. Although initial tests suggested that, aside from the expanded context window, its other capabilities were unremarkable, ongoing refinements have markedly improved its performance, demonstrating competitiveness in areas such as code generation and visual restoration.