JD Open-Sources xLLM, Its Self-Developed Large Model Inference Engine for Domestic Chips
Author: Editorial Staff

On September 29, JD open-sourced xLLM, a large model inference engine it developed in-house to run on domestic chips. The engine is already deployed across key scenarios in JD's ecosystem, including its AI assistant Jingyan, intelligent customer service, risk control, supply chain services, and advertising. According to JD, xLLM has delivered a more than fivefold improvement in operational efficiency and reduced machine costs by 90%.