Most existing human pose estimation approaches focus on improving model accuracy while setting aside the significant problem of efficiency, which limits their practicality. It is therefore worthwhile to explore how to retain high precision in a smaller model. This paper investigates a knowledge distillation strategy that trains a small network under the guidance of a large network while aiming to preserve high performance. Experiments on the COCO dataset demonstrate the effectiveness of the proposed approach, which improves network accuracy without increasing network complexity.
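The abstract does not specify the distillation objective. A common formulation in heatmap-based pose distillation blends supervision from the ground-truth heatmaps with supervision from the teacher's predicted heatmaps; the sketch below illustrates that idea only. The function name `pose_kd_loss` and the blending weight `alpha` are illustrative assumptions, not the paper's stated method.

```python
def pose_kd_loss(student_hm, teacher_hm, gt_hm, alpha=0.5):
    """Illustrative distillation loss for heatmap regression.

    student_hm, teacher_hm, gt_hm: flattened heatmaps (lists of floats).
    alpha: assumed weight balancing ground-truth vs. teacher supervision.
    """
    def mse(a, b):
        # mean squared error between two flattened heatmaps
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    # student is pulled toward both the ground truth and the
    # (softer, already-learned) teacher predictions
    return alpha * mse(student_hm, gt_hm) + (1 - alpha) * mse(student_hm, teacher_hm)
```

In this formulation the teacher's heatmaps act as soft targets that carry more information than the binary ground truth alone, which is what allows the small student network to approach the large network's accuracy.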
Guangyao Zhou, Xiluo Teng, Kang-Hyun Jo
Wei Herng Yap, Rui Cao, Sim Kuan Goh