JOURNAL ARTICLE

Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning

Shangding Gu, Bilgehan Sel, Yuhao Ding, Lu Wang, Qingwei Lin, Alois Knoll, Ming Jin

Year: 2025
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Vol: 47 (5), Pages: 3322-3331
Publisher: IEEE Computer Society

Abstract

In numerous reinforcement learning (RL) problems involving safety-critical systems, a key challenge lies in balancing multiple objectives while simultaneously satisfying all stringent safety constraints. To tackle this issue, we propose a primal-based framework that orchestrates policy optimization between multi-objective learning and constraint adherence. Our method employs a novel natural-policy-gradient manipulation technique to optimize multiple RL objectives and overcome conflicting gradients between objectives, since a simple weighted-average gradient direction may not benefit every objective when the objectives' gradients are misaligned. When a hard constraint is violated, our algorithm rectifies the policy to minimize the violation. In particular, we establish theoretical convergence and constraint-violation guarantees, and our proposed method outperforms prior state-of-the-art methods on challenging safe multi-objective RL tasks.
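The abstract describes two mechanisms: de-conflicting per-objective gradients (because a plain weighted average can hurt an objective whose gradient is misaligned with the others) and switching to a rectification step when a hard constraint is violated. The sketch below illustrates both ideas with a generic PCGrad-style projection on flat gradient vectors; the function names (`resolve_conflicts`, `policy_update_direction`) are hypothetical, and this is not the paper's exact natural-policy-gradient manipulation, only a minimal illustration of the gradient-conflict and rectification logic.

```python
import numpy as np

def resolve_conflicts(grads):
    """Combine per-objective gradients, projecting out conflicting
    components before averaging (PCGrad-style illustration, not the
    paper's natural-policy-gradient manipulation)."""
    adjusted = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(adjusted):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = g_i @ g_j
            if dot < 0:  # gradients conflict: remove the opposing component
                g_i -= dot / (g_j @ g_j) * g_j
    return np.mean(adjusted, axis=0)

def policy_update_direction(obj_grads, cons_grad, violation):
    """If a hard constraint is violated, descend on the violation alone
    (rectification step); otherwise follow the de-conflicted combination
    of the objective gradients."""
    if violation > 0:
        return -np.asarray(cons_grad, dtype=float)
    return resolve_conflicts(obj_grads)
```

With `g1 = [1, 0]` and `g2 = [-1, 1]`, the plain average `[0, 0.5]` is orthogonal to `g1` (no progress on objective 1), while the de-conflicted direction `[0.25, 0.75]` has a positive inner product with both gradients.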

Keywords:
Reinforcement learning, Artificial intelligence, Computer science, Machine learning, Mathematical optimization, Mathematics

Metrics

Cited By: 8
FWCI (Field-Weighted Citation Impact): 38.56
References: 49
Citation Normalized Percentile: 0.99 (in the top 1%)

Topics

Reinforcement Learning in Robotics
Physical Sciences → Computer Science → Artificial Intelligence