Yujia Liu, Anton Obukhov, Jan Dirk Wegner, Konrad Schindler
We present a learning-based approach to reconstructing buildings as 3D polygonal meshes from airborne LiDAR point clouds. What makes 3D building reconstruction from airborne LiDAR difficult is the large diversity of building designs, especially roof shapes, the low and varying point density across the scene, and the often incomplete coverage of building facades due to occlusions by vegetation or the sensor's viewing angle. To cope with the diversity of shapes and inhomogeneous and incomplete object coverage, we introduce a generative model that directly predicts 3D polygonal meshes from input point clouds. Our autoregressive model, called Point2Building, iteratively builds up the mesh by generating sequences of vertices and faces. This approach enables our model to adapt flexibly to diverse geometries and building structures. Unlike many existing methods that rely heavily on pre-processing steps like exhaustive plane detection, our model learns directly from the point cloud data, thereby reducing error propagation and increasing the fidelity of the reconstruction. We experimentally validate our method on a collection of airborne LiDAR data from Zurich, Berlin, and Tallinn. Our method shows good generalization to diverse urban styles.
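The iterative vertex-then-face generation described above can be illustrated with a minimal decoding loop. This is a hedged sketch, not the paper's implementation: the token conventions (`STOP_COORD`, `STOP_FACE`, `STOP_MESH`) and the callback interface are hypothetical, and the learned transformer predicting each next token is replaced by canned token streams so the example runs standalone.

```python
# Hypothetical two-stage autoregressive mesh decoding, in the spirit of
# the approach described in the abstract: first emit a flat stream of
# quantized vertex coordinates, then emit faces as variable-length lists
# of vertex indices. Token names and interfaces are illustrative only.

STOP_COORD = -1  # hypothetical token ending the vertex stream
STOP_FACE = -1   # hypothetical token ending one face's index list
STOP_MESH = -2   # hypothetical token ending the whole face stream

def decode_mesh(next_vertex_token, next_face_token, max_steps=1000):
    """Greedily decode a polygonal mesh from two token predictors.

    `next_vertex_token(coords)` and `next_face_token(faces, face)` stand
    in for a learned autoregressive model conditioned on the point cloud
    and on everything generated so far.
    """
    # Stage 1: vertices as a flat run of quantized x, y, z coordinates.
    coords = []
    for _ in range(max_steps):
        tok = next_vertex_token(coords)
        if tok == STOP_COORD:
            break
        coords.append(tok)
    assert len(coords) % 3 == 0, "coordinate stream must form whole vertices"
    vertices = [tuple(coords[i:i + 3]) for i in range(0, len(coords), 3)]

    # Stage 2: faces as index lists referencing the generated vertices.
    faces, face = [], []
    for _ in range(max_steps):
        tok = next_face_token(faces, face)
        if tok == STOP_MESH:
            break
        if tok == STOP_FACE:
            if face:
                faces.append(face)
            face = []
        else:
            assert 0 <= tok < len(vertices), "face index must reference a vertex"
            face.append(tok)
    return vertices, faces

# Canned token streams standing in for model predictions: one triangle.
vert_stream = iter([0, 0, 0, 1, 0, 0, 0, 1, 0, STOP_COORD])
face_stream = iter([0, 1, 2, STOP_FACE, STOP_MESH])
verts, faces = decode_mesh(lambda ctx: next(vert_stream),
                           lambda fs, f: next(face_stream))
```

Because each face is a variable-length index list closed by its own stop token, the same loop accommodates triangles, quads, and larger polygons, which matches the flexibility toward diverse roof geometries that the abstract emphasizes.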