JOURNAL ARTICLE

ESC-Net: Alleviating Triple Sparsity on 3D LiDAR Point Clouds for Extreme Sparse Scene Completion

Pei An, Di Zhu, Siwen Quan, Junfeng Ding, Jie Ma, You Yang, Qiong Liu

Year: 2024  Journal: IEEE Transactions on Multimedia  Vol: 26  Pages: 6799-6810  Publisher: Institute of Electrical and Electronics Engineers

Abstract

3D scene completion (SC) has made substantial progress over the last three years. In mobile robot systems, SC should support downstream tasks (e.g., mapping or perception) rather than merely predicting completed scenes. However, because low-cost few-beam LiDAR is widely used on mobile robots, the gap between SC and downstream tasks remains large. The bottleneck to generating high-quality completion results lies in the triple sparsity of the input, the ground-truth (GT) occupancy, and the GT foreground. To address this triple sparsity, we present an extreme sparse scene completion network (ESC-Net). First, input sparsity hides most of the scene's spatial information; a feature completion (FC) decoder is designed to mine spatial features through feature-level completion. Second, GT occupancy sparsity hinders representation learning of real scenes with continuous surfaces; a multi-view multi-task attention (MMA) loss is presented to recover high-quality object boundaries by correcting the occupancy and semantic labels of regions in 3D and bird's-eye-view (BEV) spaces. Third, GT foreground sparsity, i.e., the imbalance between foreground and background GT labels, causes inaccurate local 3D object completion; a combination network (ESC-Net-D) is presented to recover the 3D structural details of both foreground and background. Experiments on the KITTI and SemanticPOSS datasets show that ESC-Net outperforms current methods not only on the completion task but also on downstream tasks (i.e., 3D registration and 3D object detection). We therefore believe that ESC-Net will benefit the mobile robot community. The source code will be released soon.

Keywords:
Computer science, Artificial intelligence, Feature extraction, Bottleneck, Computer vision, Point cloud, LiDAR, Object detection, Sparse approximation, Pattern recognition, Remote sensing

Metrics

Cited by: 10
FWCI (Field-Weighted Citation Impact): 5.30
References: 42
Citation Normalized Percentile: 0.92 (in top 10%)

Topics

Advanced Vision and Imaging
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
3D Shape Modeling and Analysis
Physical Sciences →  Engineering →  Computational Mechanics
Robotics and Sensor-Based Localization
Physical Sciences →  Engineering →  Aerospace Engineering

Related Documents

CONFERENCE PAPER

Semantic Segmentation-assisted Scene Completion for LiDAR Point Clouds

Xuemeng Yang, Hao Zou, Xin Kong, Tianxin Huang, Yong Liu, Wanlong Li, Feng Wen, Hongbo Zhang

Conference: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  Year: 2021  Pages: 3555-3562
JOURNAL ARTICLE

SLSM-Net: Sparse LiDAR Point Clouds Supervised Stereo Matching

Ze Zong, Cheng Wu, Jie Xie, Jin Zhang

Journal: IEEE Transactions on Multimedia  Year: 2025  Pages: 1-13
JOURNAL ARTICLE

RPS-Net: Indoor Scene Point Cloud Completion using RBF-Point Sparse Convolution

Tao Wang, Jing Wu, Ze Ji, Yu-Kun Lai

Repository: Spectrum Research Repository (Concordia University)  Year: 2023
CONFERENCE PAPER

Self Attention Guided Depth Completion using RGB and Sparse LiDAR Point Clouds

Siddharth Srivastava, Gaurav Sharma

Conference: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  Year: 2021  Vol: 205  Pages: 2643-2650
JOURNAL ARTICLE

Voxel- and Bird’s-Eye-View-Based Semantic Scene Completion for LiDAR Point Clouds

Li Liang, Naveed Akhtar, Jordan Vice, Ajmal Mian

Journal: Remote Sensing  Year: 2024  Vol: 16 (13)  Article: 2266