Neural networks are often highly redundant and can thus be effectively compressed to a fraction of their initial size using model pruning techniques without harming overall prediction accuracy. At the same time, pruned networks need to maintain robustness against attacks such as adversarial examples. Recent research on combining these objectives has shown significant advances using uniform compression strategies, that is, all parameters are compressed equally according to a preset compression ratio. In this paper, we show that employing non-uniform compression strategies improves both clean-data accuracy and adversarial robustness under high overall compression, in particular when using channel pruning. We leverage reinforcement learning to find an optimal trade-off and demonstrate that the resulting compression strategy can be used as a plug-in replacement for the uniform compression ratios of existing state-of-the-art approaches.
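As a minimal sketch of the idea (not the authors' implementation), the snippet below contrasts a uniform channel-pruning strategy with a non-uniform one in PyTorch. Channels are ranked by L1 norm per layer; the per-layer keep ratios in `nonuniform_ratios` are hypothetical placeholders standing in for the ratios that a reinforcement-learning agent would select.

```python
# Sketch only: uniform vs. non-uniform channel pruning by L1-norm importance.
import torch
import torch.nn as nn


def channel_mask(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Boolean mask keeping the output channels with the largest L1 norm."""
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # score per output channel
    n_keep = max(1, int(round(keep_ratio * conv.out_channels)))
    keep = torch.zeros(conv.out_channels, dtype=torch.bool)
    keep[importance.topk(n_keep).indices] = True
    return keep


def prune_channels(model: nn.Module, ratios: dict) -> None:
    """Zero out pruned output channels in place; `ratios` maps layer name -> keep ratio."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and name in ratios:
            mask = channel_mask(module, ratios[name])
            module.weight.data[~mask] = 0.0
            if module.bias is not None:
                module.bias.data[~mask] = 0.0


model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

# Uniform strategy: every layer keeps the same fraction of channels.
uniform_ratios = {"0": 0.5, "2": 0.5}

# Non-uniform strategy: per-layer keep ratios (hand-picked here for illustration;
# the paper instead searches for such ratios with reinforcement learning).
nonuniform_ratios = {"0": 0.7, "2": 0.4}

prune_channels(model, nonuniform_ratios)
```

Both strategies reach a comparable overall compression ratio; the non-uniform variant simply redistributes the pruning budget across layers, which is the degree of freedom the paper optimizes.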