On March 4, 2026, Ant Group, in collaboration with Tsinghua University, announced the stable 1.0 release of AReaL, its open-source reinforcement learning training framework. This release centers on "one-click integration of agents into RL training": agents built on a wide range of agent frameworks can enter reinforcement learning training immediately, without any code changes.
Agent frameworks are developing rapidly, but they face two major obstacles: the high cost of integrating them into training pipelines and the lack of support for continuous improvement through training. As the first large-model RL training system with fully asynchronous decoupling of training and inference, AReaL introduces a Proxy Worker middleware layer that lets any agent join RL training unmodified: developers only need to change the request address their agent sends model calls to.
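To make the "only change the request address" idea concrete, here is a minimal, hypothetical sketch. The URLs, the `make_agent_config` helper, and the model name are all illustrative assumptions, not AReaL's actual API; the point is that the agent's own code stays identical, and only the base URL it targets switches from a plain inference server to the proxy worker that records trajectories for the trainer.

```python
def make_agent_config(base_url: str) -> dict:
    """Build the agent's LLM client config.

    The base_url is assumed to be the only knob that changes
    between normal serving and RL training (hypothetical setup).
    """
    return {
        "base_url": base_url,
        "model": "my-model",  # hypothetical model name
        "timeout_s": 60,
    }

# Normal serving: the agent talks directly to the inference server.
serving_cfg = make_agent_config("http://inference-server:8000/v1")

# RL training: the same agent code, but requests are routed through
# a proxy-worker address (assumed URL), which can log rollouts for training.
training_cfg = make_agent_config("http://proxy-worker:8000/v1")

# Everything except the request address is unchanged.
assert serving_cfg["model"] == training_cfg["model"]
assert serving_cfg["base_url"] != training_cfg["base_url"]
```

In this sketch, swapping the agent from inference to training is a one-line configuration change, which matches the article's claim that no agent code needs to be modified.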
