Wei Xiao, Tsun-Hsuan Wang, Ramin Hasani, Makram Chahine, Alexander Amini, Xiao Li, Daniela Rus
Many safety-critical applications of neural networks, such as robotic control, require safety guarantees. This article introduces a method for ensuring the safety of learned control models using differentiable control barrier functions (dCBFs). dCBFs are end-to-end trainable and guarantee safety. They improve over classical control barrier functions (CBFs), which are usually overly conservative. Our dCBF solution relaxes the CBF definitions in two ways: 1) by making them environment-dependent and 2) by embedding them into differentiable quadratic programs. We call these novel safety layers a BarrierNet. They can be used in conjunction with any neural network-based controller and are trained by gradient descent. With BarrierNet, the safety constraints of a neural controller become adaptable to changing environments. We evaluate BarrierNet on a series of problems: 1) robot traffic merging; 2) robot navigation in 2-D and 3-D spaces; and 3) end-to-end vision-based autonomous driving in a sim-to-real environment and in physical experiments, and we demonstrate its effectiveness compared with state-of-the-art approaches.
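To make the idea of a QP-based safety layer concrete, the following is a minimal sketch (not the authors' code) of a classical CBF quadratic-program filter for a single-integrator robot avoiding a circular obstacle. With one linear constraint, the QP min ||u − u_ref||² s.t. ḣ(x, u) + α·h(x) ≥ 0 reduces to a half-space projection with a closed form; the obstacle geometry, the barrier h, and the class-K gain α here are illustrative assumptions. BarrierNet goes further by making parameters such as α outputs of a trainable network and embedding the full QP as a differentiable layer.

```python
# Hypothetical CBF-QP safety filter sketch (illustrative, not BarrierNet itself).
# Dynamics: single integrator x' = u. Barrier: h(x) = ||x - c||^2 - r^2 > 0
# outside a circular obstacle of center c and radius r. The CBF condition
#   dh/dt + alpha * h = 2(x - c) . u + alpha * h >= 0
# is a single linear constraint a.u + b >= 0, so the QP solution is the
# minimal-norm projection of u_ref onto that half-space.
import numpy as np

def cbf_filter(x, u_ref, c, r, alpha=1.0):
    """Return the control closest to u_ref that satisfies the CBF condition."""
    h = np.dot(x - c, x - c) - r**2      # barrier value (positive = safe)
    a = 2.0 * (x - c)                    # gradient of h; dh/dt = a . u
    b = alpha * h
    violation = a @ u_ref + b
    if violation >= 0.0:
        return u_ref                     # reference control is already safe
    return u_ref - (violation / (a @ a)) * a   # minimal correction onto boundary

x = np.array([1.0, 0.0])                 # robot position
c = np.array([0.0, 0.0])                 # obstacle center
u_unsafe = np.array([-1.0, 0.0])         # reference control drives into obstacle
u_safe = cbf_filter(x, u_unsafe, c, r=0.5, alpha=1.0)
# By construction, u_safe satisfies 2(x - c).u + alpha*h >= 0.
```

Because the active-constraint solution is piecewise affine in u_ref and α, gradients can flow through it, which is the property that lets such layers be trained end to end.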