OpenAI has announced that inference speed for its GPT-5.2 and GPT-5.2-Codex models has increased by approximately 40%, with no changes to the models' architectures or parameter weights. The gains come entirely from system-level optimizations to the inference stack.
