Huawei's Pangu large model series has recently introduced a general-purpose language model with 135 billion parameters, natively trained and optimized for Ascend processors. Models at the 10-billion and 100-billion parameter scale have become common in the current landscape, but most depend heavily on NVIDIA GPUs for training. Domestic research teams, by contrast, often face constraints on computing resources that slow the advancement of large model technology. Huawei's initiative aims to ease this bottleneck and could reshape the field.
