Yachao Zhang, Yanyun Qu, Yuan Xie, Zonghao Li, Shanshan Zheng, Cuihua Li
Large-scale point cloud semantic segmentation has wide applications. Most current research focuses on fully supervised learning, which demands expensive and tedious manual point-wise annotation. Weakly supervised learning is an alternative that avoids this exhausting annotation. However, for large-scale point clouds with few labeled points, the network struggles to extract discriminative features for unlabeled points, and the regularization of the topology between labeled and unlabeled points is usually ignored, resulting in incorrect segmentation results. To address this problem, we propose a perturbed self-distillation (PSD) framework. Specifically, inspired by self-supervised learning, we construct a perturbed branch and enforce predictive consistency between the perturbed and original branches. In this way, the graph topology of the whole point cloud can be effectively established by the introduced auxiliary supervision, so that information propagates between the labeled and unlabeled points. Besides point-level supervision, we present a well-integrated context-aware module to explicitly regularize the affinity correlation of labeled points, further refining the graph topology of the point cloud. Experimental results on three large-scale datasets show a large gain (3.0% on average) over recent weakly supervised methods and results comparable to some fully supervised methods.
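The abstract's core idea, combining a supervised loss on the few labeled points with a prediction-consistency term between an original and a perturbed branch, can be sketched as a loss function. This is a minimal illustration, not the paper's actual implementation: the function name `psd_style_loss`, the use of a squared-error consistency term, and the weighting parameter `lam` are all assumptions for illustration.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def psd_style_loss(logits_orig, logits_pert, labels, labeled_mask, lam=1.0):
    """Hypothetical sketch of a perturbed self-distillation objective:
    cross-entropy on the few labeled points, plus a consistency term
    (here, squared error between the two branches' softmax outputs)
    applied to every point, labeled or not.

    logits_orig, logits_pert: (N, C) class logits from the original and
    perturbed branches; labels: (N,) int class ids; labeled_mask: (N,) bool.
    """
    p_orig = softmax(logits_orig)
    p_pert = softmax(logits_pert)
    # Supervised term: cross-entropy computed only on the labeled points.
    idx = np.where(labeled_mask)[0]
    ce = -np.mean(np.log(p_orig[idx, labels[idx]] + 1e-12))
    # Consistency term: the auxiliary supervision that couples labeled
    # and unlabeled points by forcing the two branches to agree.
    consistency = np.mean((p_orig - p_pert) ** 2)
    return ce + lam * consistency
```

With identical branch outputs the consistency term vanishes and only the supervised cross-entropy on labeled points remains; perturbing one branch's predictions increases the loss, which is what drives the unlabeled points to contribute a training signal.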