Jianghong Zhao, Mingming Cao, Jia Yang
Abstract. Accurate building contour extraction is critical for urban modeling but remains challenging due to the limitations of single-source point clouds. LiDAR data suffer from sparsity and sensitivity to surface reflectance, while photogrammetric point clouds exhibit noise under occlusion and lighting variations. To overcome these constraints, we propose an end-to-end framework combining multimodal 3D fusion and deep geometric co-optimization. First, LiDAR and photogrammetric point clouds are fused through ICP registration, avoiding 2D-3D misalignment. Building points are then segmented using PointNet++. During projection, a novel Z-axis threshold eliminates rooftop interference by constraining the projected footprint to structural wall points. Initial contours extracted via Alpha-shapes undergo adaptive regularization: 1) Douglas-Peucker simplification, and 2) angle-constrained vector optimization that rectifies non-orthogonal corners. Validated on Ming and Qing heritage structures, our method achieves a 3.7% area error (vs. 17.8% for CloudCompare) and a 2.6% perimeter error. This represents the first unified pipeline combining 3D-3D data fusion with deep learning and geometric regularization, offering a promising approach to automated building modeling in complex urban and heritage environments.
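Two of the geometric steps described in the abstract can be sketched in minimal form: the Z-axis threshold that keeps only wall points before projecting to the XY plane, and Douglas-Peucker simplification of the resulting contour. This is an illustrative sketch, not the paper's implementation; the height fractions, the tolerance `eps`, and the function names are assumptions introduced here.

```python
import numpy as np

def wall_points(points, z_min_frac=0.1, z_max_frac=0.9):
    """Keep points whose height falls between two Z thresholds,
    discarding ground returns and rooftop points, then project to XY.
    The fraction values are illustrative assumptions."""
    z = points[:, 2]
    lo = z.min() + z_min_frac * (z.max() - z.min())
    hi = z.min() + z_max_frac * (z.max() - z.min())
    return points[(z >= lo) & (z <= hi), :2]

def douglas_peucker(pts, eps):
    """Recursively drop contour vertices lying closer than eps
    to the chord joining the segment endpoints."""
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1]) or 1e-12
    # Perpendicular distance of every vertex to the chord (2D cross product).
    rel = pts - start
    d = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / norm
    i = int(np.argmax(d))
    if d[i] > eps:
        # Farthest vertex is significant: split there and recurse.
        left = douglas_peucker(pts[:i + 1], eps)
        right = douglas_peucker(pts[i:], eps)
        return np.vstack([left[:-1], right])
    # All interior vertices within tolerance: keep only the endpoints.
    return np.vstack([start, end])
```

For example, simplifying the polyline `[[0,0],[1,0.01],[2,0],[2,1],[2,2]]` with `eps=0.1` collapses the two near-collinear interior vertices and returns the three corner points `[[0,0],[2,0],[2,2]]`.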