On February 4, 2026, OpenAI announced that inference speed for the GPT-5.2 and GPT-5.2-Codex models has improved by roughly 40%. The gain required no changes to the models' architecture or parameter weights: it comes from system-level optimizations to the inference stack, so all API users benefit uniformly while the models' behavior remains unchanged.
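
For readers who want to sanity-check a claim like this against their own workloads, one rough approach is to time end-to-end completion latency through the API before and after such a rollout. The sketch below is a minimal example using the official openai Python client; the prompt, the number of runs, and the assumption that the gpt-5.2 and gpt-5.2-codex model identifiers are reachable through the standard Responses API are illustrative choices, not details taken from the announcement.

```python
# Hypothetical latency check: times a few short requests to gauge
# end-to-end inference speed. Model availability and the prompt are
# assumptions for illustration, not details from the announcement.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def mean_latency(model: str, prompt: str, runs: int = 5) -> float:
    """Return mean wall-clock seconds per completion for `model`."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        client.responses.create(model=model, input=prompt)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    prompt = "Summarize server-side inference optimizations in one sentence."
    for model in ("gpt-5.2", "gpt-5.2-codex"):
        print(f"{model}: {mean_latency(model, prompt):.2f}s mean latency")
```

Wall-clock timings like these are sensitive to prompt length, output length, and network conditions, so comparisons are only meaningful when those are held constant across runs.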
