During the 2025 Spring Festival, Chinese AI firm DeepSeek unveiled its R1 large model. Through algorithmic innovation it broke through the constraints of limited computational power, matching the performance of top-tier models at a fraction of the cost incurred by international tech giants. The breakthrough sent ripples through the global tech community and was dubbed the “DeepSeek Moment” by foreign media. The model not only shifted China’s AI sector from a closed-source posture to open-source leadership but also triggered a global paradigm shift toward open-source models, with Chinese models now outpacing their U.S. counterparts in downloads and influence.
A year later, ByteDance rolled out its video generation model, Seedance 2.0. The model supports multimodal inputs and can automatically generate multi-angle sequential video complete with native audio, turning mere images or text prompts into full-fledged videos in an instant; it was hailed as the “world’s most powerful video generation model.” During its beta testing phase, the model drew a strong response in the capital market, with a notable surge in the A-share media sector. It also sparked debate over technical boundaries, particularly the unauthorized use of personal likenesses to generate lifelike videos. ByteDance swiftly suspended the feature that relied primarily on real-person materials as references and refined it based on user feedback.
