Unsupervised Point Cloud Pre-Training via Occlusion Completion
Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matt J. Kusner
We describe a simple pre-training approach for point clouds. It works in three steps: 1. Mask all points occluded in a camera view; 2. Learn an encoder-decoder model to reconstruct the occluded points; 3. Use the encoder weights as initialisation for downstream point cloud tasks. We find that even when we pre-train on a single dataset (ModelNet40), this method improves accuracy across different datasets and encoders, on a wide range of downstream tasks. Specifically, we show that our method outperforms previous pre-training methods in object classification, and both part-based and semantic segmentation tasks. We study the pre-trained features and find that they lead to wide downstream minima, have high transformation invariance, and have activations that are highly correlated with part labels. Code and data are available at: https://github.com/hansen7/OcCo
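To make the three steps concrete, below is a minimal PyTorch sketch (not the paper's implementation; the coarse z-buffer occlusion test, the PointNet-style encoder, the Chamfer loss, and all sizes are illustrative assumptions): occlude a cloud from a random viewpoint, train an encoder-decoder to complete it, then save the encoder weights for downstream initialisation.

```python
import torch
import torch.nn as nn

def occlude(points: torch.Tensor, view: torch.Tensor, grid: int = 32) -> torch.Tensor:
    """Step 1: keep only the points visible under camera rotation `view`,
    approximated with a coarse z-buffer (nearest point per image cell)."""
    cam = points @ view.T                         # (N, 3) in camera coordinates
    xy = cam[:, :2]
    lo, hi = xy.min(0).values, xy.max(0).values
    cell = ((xy - lo) / (hi - lo + 1e-8) * (grid - 1)).long()
    key = cell[:, 0] * grid + cell[:, 1]          # flattened image-cell index
    visible = torch.zeros(len(points), dtype=torch.bool)
    for k in key.unique():                        # nearest point in each cell survives
        idx = (key == k).nonzero(as_tuple=True)[0]
        visible[idx[cam[idx, 2].argmin()]] = True
    return points[visible]

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a (M, 3) and b (N, 3)."""
    d = torch.cdist(a, b)
    return d.min(1).values.mean() + d.min(0).values.mean()

# Step 2: an encoder-decoder learns to complete the occluded cloud.
encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 256))
decoder = nn.Linear(256, 1024 * 3)                # decodes a fixed-size cloud
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

full = torch.rand(2048, 3)                        # stand-in for a ModelNet40 shape
view = torch.linalg.qr(torch.randn(3, 3)).Q       # random camera rotation
partial = occlude(full, view)

opt.zero_grad()
feature = encoder(partial).max(dim=0).values      # global max-pooled shape feature
recon = decoder(feature).view(1024, 3)
loss = chamfer(recon, full)                       # reconstruct the full cloud
loss.backward()
opt.step()

# Step 3: the pre-trained encoder weights initialise downstream models.
torch.save(encoder.state_dict(), "occo_encoder.pt")
```

In practice one would loop this over a full dataset and many random viewpoints; the single forward/backward pass above is just to show how the three steps connect.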