In the previous chapter, we explored Multi-trial Neural Architecture Search, a very promising approach. The reader might wonder why Multi-trial NAS is called that. Are there non-multi-trial NAS approaches, and is it really possible to find the optimal neural network architecture without trying candidates one by one? It seems natural that the only way to find the optimal solution is to try different elements of the search space, but it turns out that this is not entirely true. There is an approach that finds the best architecture by training a single Supernet, and it is called One-shot Neural Architecture Search. As the name implies, this approach involves only one try, or "shot". Of course, this shot takes much longer than training a single neural network, but it nevertheless saves a lot of time overall. In this chapter, we will study what One-shot NAS is and how to design architectures for this approach. We will examine two popular One-shot algorithms: Efficient Neural Architecture Search via Parameter Sharing (ENAS) and Differentiable Architecture Search (DARTS). Of course, we will apply these algorithms to solve practical problems.
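To make the Supernet idea concrete before diving into the algorithms, here is a minimal sketch of a single supernet "edge" in the DARTS style: instead of committing to one operation, the edge computes a softmax-weighted mixture of all candidate operations, so one training run covers every architecture at once. The candidate operations and class names here are hypothetical simplifications; real searches mix actual layers such as convolutions, pooling, and skip connections.

```python
import math

# Candidate operations an edge can pick from; in a real search these
# would be layers such as conv, pooling, or skip-connect (hypothetical here).
CANDIDATE_OPS = {
    "identity": lambda x: x,
    "double":   lambda x: [2.0 * v for v in x],
    "relu":     lambda x: [max(v, 0.0) for v in x],
}

def softmax(a):
    m = max(a)
    e = [math.exp(v - m) for v in a]
    s = sum(e)
    return [v / s for v in e]

class MixedOp:
    """One supernet edge: a softmax-weighted sum of all candidate ops."""
    def __init__(self):
        # Architecture parameters; the search optimises these during
        # the single supernet training run.
        self.alpha = [0.0] * len(CANDIDATE_OPS)

    def forward(self, x):
        w = softmax(self.alpha)
        outs = [op(x) for op in CANDIDATE_OPS.values()]
        # Weighted element-wise sum over all candidate outputs.
        return [sum(wi * o[i] for wi, o in zip(w, outs)) for i in range(len(x))]

    def derive(self):
        # After the one "shot", keep only the strongest candidate on this edge.
        names = list(CANDIDATE_OPS)
        return names[max(range(len(self.alpha)), key=self.alpha.__getitem__)]

edge = MixedOp()
print(edge.forward([-1.0, 2.0]))   # equal mixture at initialisation
edge.alpha = [0.0, 5.0, 0.0]       # suppose training came to favour "double"
print(edge.derive())               # -> "double"
```

At initialisation all candidates contribute equally; as the architecture parameters `alpha` are trained, the mixture concentrates on the best operation, which `derive` then extracts as the final architecture.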