Abstract

Control Co-Design (CCD) integrates the design of the physical system and its controller to achieve optimal performance for autonomous systems, which must operate independently and complete tasks even in uncertain environments. Traditional CCD methods, however, often fail to adapt to unforeseen variations in real-world conditions. This study addresses that challenge by introducing Digital Twins (DTs), virtual representations of systems that evolve alongside their real-world counterparts; recent advances in sensing technologies and artificial intelligence make it possible to collect data in real time and adaptively re-optimize the system. This paper presents a CCD framework for DT-enabled systems that uses Deep Reinforcement Learning (DRL) to simultaneously optimize the physical design and the control policy. By updating the DRL policies and DTs in real time, the system continuously learns from, and dynamically adapts to, uncertain environments while maintaining rapid real-time control. The proposed framework employs a multi-generation learning paradigm in which physical data collected from previous generations are used to refine the DT models, improve uncertainty quantification through quantile regression, and better inform design decisions. The effectiveness of the approach is demonstrated on an active suspension system, where the DT learns environmental variations (road conditions and driving speeds) from physical data gathered after real-world deployment. Results show that the method significantly improves dynamic performance and robustness, yielding smoother and more stable control trajectories. Future work will extend the framework to other engineering applications.
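As one illustration of the quantile-regression idea the abstract invokes for uncertainty quantification, the minimal Python sketch below (not the paper's implementation; all names here are illustrative) shows the core mechanism: minimizing the pinball loss over a constant prediction recovers an empirical quantile of the data.

```python
import random

def pinball_loss(y_true, y_pred, q):
    # Pinball (quantile) loss: its minimizer over y_pred is the
    # q-th quantile of y_true, which is why quantile regression
    # trains models against this loss instead of squared error.
    total = 0.0
    for y in y_true:
        diff = y - y_pred
        total += max(q * diff, (q - 1.0) * diff)
    return total / len(y_true)

# Illustrative data: noisy scalar observations (e.g., a measured response).
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Grid-search a constant prediction that minimizes the pinball loss
# for q = 0.9; the minimizer should sit near the empirical 90% quantile.
q = 0.9
candidates = [-3.0 + 0.01 * i for i in range(601)]
best = min(candidates, key=lambda c: pinball_loss(samples, c, q))
```

In the framework described above, the same loss would be attached to a learned model (rather than a constant), so that the DT can report calibrated quantile bounds on its predictions instead of a single point estimate.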
Jasmin Y. Lim, Dimitrios Pylorof, Humberto E. Garcia, Karthik Duraisamy