Proximal policy optimization with adaptive generalized advantage estimate: critic-aware refinements

Publication year: 1405 (Solar Hijri)
Document type: journal article
Language: English

The full text of this article is available as a 14-page PDF.


National scientific document ID: JR_JMMO-14-1_010

Indexing date: 1 Bahman 1404 (Solar Hijri)

Abstract:

Proximal Policy Optimization (PPO) is one of the most widely used methods in reinforcement learning, designed to optimize policy updates while maintaining training stability. However, in complex, high-dimensional environments, maintaining a suitable balance between bias and variance poses a significant challenge. The λ parameter in Generalized Advantage Estimation (GAE) governs this balance by controlling the trade-off between short-term and long-term return estimates. In this study, we propose a method for adaptive adjustment of the λ parameter, where λ is updated dynamically during training instead of remaining fixed. The updates are guided by internal learning signals such as the value function loss and Explained Variance, a statistical measure of how accurately the critic estimates target returns. To further enhance training robustness, we incorporate a Policy Update Delay (PUD) mechanism that mitigates the instability caused by overly frequent policy updates. The main objective of this approach is to reduce dependence on expensive and time-consuming hyperparameter tuning. By leveraging internal indicators of the learning process, the proposed method contributes to the development of more adaptive, stable, and generalizable reinforcement learning algorithms. To assess the effectiveness of the approach, experiments are conducted in four diverse, standard benchmark environments: Ant-v4, HalfCheetah-v4, and Humanoid-v4 from OpenAI Gym, and Quadruped-Walk from the DeepMind Control Suite. The results demonstrate that the proposed method substantially improves the performance and stability of PPO across these environments. Our implementation is publicly available at https://github.com/naempr/PPO-with-adaptive-GAE.
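
The abstract names the adaptation signals (value function loss, Explained Variance) and the PUD mechanism but not the exact update rule; that lives in the linked repository. As a rough, non-authoritative illustration of the idea, the sketch below couples λ to Explained Variance over the critic's predictions. The function names (compute_gae, explained_variance, adapt_lambda) and the thresholds and step size are hypothetical choices for this sketch, not values from the paper.

import numpy as np

def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    # Standard GAE: discounted sum of TD residuals, weighted by (gamma * lam)^k.
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)   # length T + 1 (bootstrap value last)
    dones = np.asarray(dones, dtype=np.float64)
    T = len(rewards)
    advantages = np.zeros(T)
    last_adv = 0.0
    for t in reversed(range(T)):
        nonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        last_adv = delta + gamma * lam * nonterminal * last_adv
        advantages[t] = last_adv
    return advantages

def explained_variance(returns, value_preds):
    # EV = 1 - Var(returns - predictions) / Var(returns); 1.0 means a perfect critic.
    returns = np.asarray(returns, dtype=np.float64)
    value_preds = np.asarray(value_preds, dtype=np.float64)
    return 1.0 - np.var(returns - value_preds) / (np.var(returns) + 1e-8)

def adapt_lambda(lam, ev, ev_low=0.4, ev_high=0.8, step=0.01,
                 lam_min=0.90, lam_max=0.99):
    # Hypothetical schedule: a reliable critic (high EV) lets us lean on
    # bootstrapped values (lower lambda, lower variance); a poor critic
    # (low EV) pushes lambda up toward Monte Carlo returns (lower bias).
    if ev > ev_high:
        lam = max(lam_min, lam - step)
    elif ev < ev_low:
        lam = min(lam_max, lam + step)
    return lam

In a training loop, one would recompute Explained Variance after each rollout from the latest returns and critic predictions, call adapt_lambda before the next GAE pass, and, per the paper's PUD idea, additionally gate how often the policy network itself is updated.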

Authors

Naemeh Mohammadpour

Department of Mechanical Engineering, Amirkabir University of Technology, Tehran, Iran

Meysam Fozi

Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran

Mohammad Mehdi Ebadzadeh

Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran

Ali Azimi

Department of Mechanical Engineering, Amirkabir University of Technology, Tehran, Iran

Ali Kamali Iglie

Department of Mechanical Engineering, Amirkabir University of Technology, Tehran, Iran