BOOK-CHAPTER

Adversarially Robust Neural Lyapunov Control

Abstract

State-of-the-art learning-based stability control methods for nonlinear robotic systems suffer from the reality gap, which stems from the discrepancy in system dynamics between the training and target (test) environments. To mitigate this gap, we propose an adversarially robust neural Lyapunov control (ARNLC) method to improve the robustness and generalization of Lyapunov theory-based stability control. Specifically, inspired by adversarial learning, we introduce an adversary to simulate the dynamics discrepancy; the adversary is trained through deep reinforcement learning to generate worst-case perturbations during the controller's training. By alternately updating the controller to minimize the perturbed Lyapunov risk and the adversary to drive the controller away from its objective, the learned control policy enjoys a theoretical guarantee of stability. Empirical evaluations on five stability control tasks with uniform and worst-case perturbations demonstrate that ARNLC not only accelerates convergence to asymptotic stability, but also generalizes better across the entire perturbation space.
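The alternating scheme described above can be illustrated on a toy problem. The sketch below is an illustrative assumption, not the paper's algorithm: a scalar linear system with a linear feedback gain stands in for the neural controller, a quadratic candidate stands in for the learned Lyapunov function, and a coarse grid search over a bounded perturbation stands in for the RL adversary. The empirical Lyapunov risk penalizes sampled states where the Lyapunov derivative fails to be negative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D system: x_dot = a*x + u + delta, with controller u = -k*x and
# an unknown perturbation |delta| <= eps modeling the reality gap.
# Candidate Lyapunov function V(x) = c*x^2, c > 0 fixed.
# (The names a, eps, c, k and this schedule are illustrative assumptions.)
a, eps, c = 1.0, 0.3, 1.0

def lyap_risk(k, delta, xs):
    """Empirical Lyapunov risk: mean hinge penalty on V_dot(x) >= 0."""
    vdot = 2.0 * c * xs * ((a - k) * xs + delta)  # V_dot = 2cx * x_dot
    return np.mean(np.maximum(0.0, vdot))

k = 0.0                        # controller gain (the "policy")
xs = rng.uniform(-1, 1, 256)   # sampled states from the region of interest
for _ in range(200):
    # Adversary step: worst-case perturbation in [-eps, eps] by grid
    # search (a stand-in for the deep-RL adversary in the paper).
    cands = np.linspace(-eps, eps, 21)
    delta = max(cands, key=lambda d: lyap_risk(k, d, xs))
    # Controller step: descend the perturbed Lyapunov risk
    # (central finite difference in place of backpropagation).
    g = (lyap_risk(k + 1e-4, delta, xs) - lyap_risk(k - 1e-4, delta, xs)) / 2e-4
    k -= 0.5 * g
```

After training, the gain exceeds the open-loop instability (k > a), so the closed loop remains stable under every admissible perturbation, which mirrors the claim that minimizing the worst-case-perturbed risk yields robustness over the whole perturbation space.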

Keywords:
Lyapunov function; Control theory; Nonlinear system; Artificial intelligence; Computer science; Mathematics; Physics

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.55

Topics

Fault Detection and Control Systems
Physical Sciences → Engineering → Control and Systems Engineering

Related Documents

JOURNAL ARTICLE

Appendix of Adversarially Robust Neural Lyapunov Control

Wei, Li; Jiang, Yuankun; Li, Chenglin; Dai, Wenrui; Zou, Junni; Xiong, Hongkai

Journal: Zenodo (CERN European Organization for Nuclear Research), Year: 2024
JOURNAL ARTICLE

Adversarially Robust Neural Architectures

Minjing Dong; Yanxi Li; Yunhe Wang; Chang Xu

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, Year: 2025, Vol: 47 (5), Pages: 4183-4197
JOURNAL ARTICLE

AdvRush: Searching for Adversarially Robust Neural Architectures

Jisoo Mok; Byunggook Na; Hyeokjun Choe; Sungroh Yoon

Journal: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Year: 2021, Pages: 12302-12312
© 2026 ScienceGate Book Chapters — All rights reserved.