On February 3, 2026, Zhipu formally released and open-sourced the GLM-OCR model. At just 0.9 billion parameters, it can be deployed through vLLM, SGLang, or Ollama, which keeps inference latency and compute cost low and makes it well suited to high-concurrency and edge scenarios. On performance, GLM-OCR reports state-of-the-art (SOTA) results across multiple mainstream benchmarks covering formula recognition, table recognition, and information extraction, approaching the level of Gemini-3-Pro.
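As an illustration of the deployment paths mentioned above, here is a minimal sketch of serving the model with vLLM's OpenAI-compatible server and querying it over HTTP. The model identifier, port, and image URL are assumptions for illustration, not details confirmed by this announcement; check the official release for the exact names.

```shell
# Serve the model with vLLM's OpenAI-compatible API server.
# "zai-org/GLM-OCR" is an assumed model identifier, not confirmed by the announcement.
vllm serve zai-org/GLM-OCR --port 8000

# Send an image to the chat completions endpoint for OCR.
# The image URL is a placeholder.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "zai-org/GLM-OCR",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/receipt.png"}},
            {"type": "text", "text": "Extract all text from this image."}
          ]
        }]
      }'
```

SGLang and Ollama expose similar serve-and-query workflows, which is what makes these runtimes convenient for the high-concurrency and edge cases the announcement highlights.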
