Neural Architecture Search (NAS) has attracted growing interest. To reduce the search cost, recent work has explored weight sharing across models and made major progress in One-Shot NAS. However, it has been observed that an architecture with higher one-shot (supernet) accuracy does not necessarily perform better when trained stand-alone. To address this issue, in this paper, we propose Progressive Automatic Design of search space, named PAD-NAS. Unlike previous approaches, where the same operation search space is shared by all the layers in the supernet, we formulate a progressive search strategy based on operation pruning and build a layer-wise operation search space. In this way, PAD-NAS can automatically design the operations for each layer. During the search, we also take hardware platform constraints into consideration for efficient neural network deployment. Extensive experiments on ImageNet show that our method achieves state-of-the-art performance.

Take-aways:
- Uses neural architecture search to find architectures with lower latency and higher accuracy.
- Formulates a progressive search strategy that builds a layer-wise operation search space through hierarchical operation pruning, mitigating the weight-coupling issue in One-Shot NAS.
- Compares the effects of different parameters on memory size, latency, and accuracy.
Xin Xia, Xuefeng Xiao, Xing Wang, Min Zheng
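To make the core idea concrete, below is a minimal sketch of a one-shot supernet in which every layer starts from the same candidate-operation pool and is then pruned independently, leaving a layer-wise search space. All names here (MixedLayer, Supernet, score_op, progressive_prune) and the random scoring stub are hypothetical illustrations, not the paper's actual implementation; the stub stands in for whatever evaluation criterion the method would use to rank operations.

```python
# Sketch: layer-wise operation pruning for a one-shot supernet.
# All class/function names are hypothetical, not from the PAD-NAS paper.
import random
import torch
import torch.nn as nn

def conv_op(k):
    return lambda c: nn.Conv2d(c, c, k, padding=k // 2)

CANDIDATES = {  # global candidate pool shared by every layer at the start
    "conv3x3": conv_op(3),
    "conv5x5": conv_op(5),
    "conv7x7": conv_op(7),
    "identity": lambda c: nn.Identity(),
}

class MixedLayer(nn.Module):
    """One supernet layer holding its own (prunable) set of candidate ops."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleDict({name: f(channels) for name, f in CANDIDATES.items()})

    def forward(self, x, choice):
        return self.ops[choice](x)  # single-path (one-shot) execution

    def prune(self, name):
        del self.ops[name]  # shrink this layer's search space

class Supernet(nn.Module):
    def __init__(self, num_layers=4, channels=16):
        super().__init__()
        self.layers = nn.ModuleList(MixedLayer(channels) for _ in range(num_layers))

    def forward(self, x, path):
        for layer, choice in zip(self.layers, path):
            x = layer(x, choice)
        return x

    def sample_path(self):
        return [random.choice(list(layer.ops)) for layer in self.layers]

def score_op(net, layer_idx, op_name):
    """Proxy score for one op in one layer. Placeholder: in practice this
    would evaluate sampled sub-networks that route layer_idx through
    op_name (e.g., one-shot validation accuracy under a latency budget)."""
    return random.random()  # hypothetical stand-in for real evaluation

def progressive_prune(net, rounds=2):
    """Each round, drop the worst-scoring op from every layer that still
    has more than one candidate, yielding a layer-wise search space."""
    for _ in range(rounds):
        for i, layer in enumerate(net.layers):
            if len(layer.ops) <= 1:
                continue
            worst = min(layer.ops, key=lambda name: score_op(net, i, name))
            layer.prune(worst)

net = Supernet()
out = net(torch.randn(1, 16, 8, 8), net.sample_path())  # one-shot forward
progressive_prune(net)
print([list(layer.ops) for layer in net.layers])  # surviving ops per layer
```

In a real implementation, the score would come from evaluating sampled sub-networks on validation data, and candidates violating the hardware latency constraint could be filtered out before scoring, reflecting the deployment-aware search described in the abstract.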