Xu Wang, Yi Jin, Hui Yu, Yigang Cen, Yidong Li
Task-oriented sampling predicts the importance of each point in a point cloud so that the sampled subset better serves downstream tasks, and it has attracted increasing attention in computer vision and visualization in recent years. However, existing methods fail to exploit global and local saliency cues jointly, which leaves their performance suboptimal. To tackle this challenge, we propose a 3D point cloud sampling method inspired by the human visual perception mechanism, which extracts important point subsets from critical regions and thereby adapts well to downstream tasks. The proposed Visual Perception-inspired 3D Point Cloud Sampling (VPI-3DPS) method mimics the human visual system's dynamic attention-shifting strategy by combining coarse-grained, attention-driven sampling with fine-grained detail preservation, allowing it to adaptively capture both global context and local detail in point cloud data. By leveraging Gated Recurrent Units (GRUs) for long-range dependency modeling and Graph Convolutional Networks (GCNs) for local structure, VPI-3DPS obtains an integrated representation of regional correlation and detail awareness. Extensive experiments show that VPI-3DPS outperforms existing methods: compared with the best-performing baselines, it achieves an average gain of 1.29% in classification accuracy, an average reduction of 13.20% in registration MRE, and an average decrease of 4.29% in Chamfer Distance for reconstruction.
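As a rough illustration of the fused local/global representation described above, the sketch below (PyTorch, not the authors' implementation) scores per-point saliency by combining an EdgeConv-style kNN aggregation, standing in for the GCN branch, with a bidirectional GRU run over the (arbitrarily ordered) point sequence for long-range context. The layer widths, neighborhood size k, GRU ordering, and hard top-m selection head are all illustrative assumptions.

```python
# Minimal sketch of saliency-driven point cloud sampling (illustrative only).
# Assumptions: EdgeConv-style kNN pooling stands in for the GCN branch, a
# bidirectional GRU over the point sequence stands in for long-range context,
# and a hard top-m head selects the sampled subset.

import torch
import torch.nn as nn


def knn_graph_features(x, k=16):
    """Max-pool edge features over a kNN graph.

    x: (B, N, C) point features. Returns (B, N, 2C) locally aggregated features.
    """
    dist = torch.cdist(x, x)                            # (B, N, N) pairwise distances
    idx = dist.topk(k, largest=False).indices           # (B, N, k); includes self
    B, N, C = x.shape
    batch = torch.arange(B, device=x.device).view(B, 1, 1)
    neighbors = x[batch, idx]                           # (B, N, k, C)
    # Edge features: neighbor offsets concatenated with the center point
    edges = torch.cat([neighbors - x.unsqueeze(2),
                       x.unsqueeze(2).expand_as(neighbors)], dim=-1)
    return edges.max(dim=2).values                      # (B, N, 2C)


class SaliencySampler(nn.Module):
    def __init__(self, in_dim=3, hidden=64, k=16):
        super().__init__()
        self.k = k
        self.local_mlp = nn.Sequential(nn.Linear(2 * in_dim, hidden), nn.ReLU())
        # GRU over the point sequence models long-range context
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(hidden * 3, 1)           # fuse local + global features

    def forward(self, pts, m):
        """pts: (B, N, 3) input points; m: number of points to sample."""
        local = self.local_mlp(knn_graph_features(pts, self.k))  # (B, N, H)
        global_feat, _ = self.gru(pts)                           # (B, N, 2H)
        fused = torch.cat([local, global_feat], dim=-1)
        scores = self.score(fused).squeeze(-1)                   # (B, N) saliency
        idx = scores.topk(m, dim=-1).indices                     # top-m salient points
        batch = torch.arange(pts.size(0), device=pts.device).unsqueeze(1)
        return pts[batch, idx], scores


if __name__ == "__main__":
    sampler = SaliencySampler()
    cloud = torch.randn(2, 1024, 3)
    sampled, scores = sampler(cloud, m=256)
    print(sampled.shape)  # torch.Size([2, 256, 3])
```

Note that the hard top-m selection at the end is non-differentiable; a trainable pipeline would typically substitute a soft relaxation, and it is kept here only for brevity.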