Borna Bešić, Nikhil Gosala, Daniele Cattaneo, Abhinav Valada
Scene understanding is a pivotal task for autonomous vehicles to safely navigate in the environment. Recent advances in deep learning enable accurate semantic reconstruction of the surroundings from LiDAR data. However, these models suffer from a large domain gap when deployed on vehicles equipped with different LiDAR setups, which drastically decreases their performance. Fine-tuning the model for every new setup is infeasible due to the expensive and cumbersome process of recording and manually labeling new data. Unsupervised Domain Adaptation (UDA) techniques are thus essential to bridge this domain gap and retain the performance of models on new sensor setups without the need for additional data labeling. In this paper, we propose AdaptLPS, a novel UDA approach for LiDAR panoptic segmentation that leverages task-specific knowledge and accounts for variation in the number of scan lines, mounting position, intensity distribution, and environmental conditions. We tackle the UDA task by employing two complementary domain adaptation strategies: data-based and model-based. While data-based adaptations reduce the domain gap by processing the raw LiDAR scans to resemble the scans in the target domain, model-based techniques guide the network in extracting features that are representative of both domains. Extensive evaluations on three pairs of real-world autonomous driving datasets demonstrate that AdaptLPS outperforms existing UDA approaches by up to 6.41 pp in terms of the PQ score.
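To make the data-based adaptation concrete, the sketch below illustrates one plausible operation of this kind: uniformly subsampling the scan lines of a source cloud so it mimics a target sensor with fewer beams. This is a minimal sketch under stated assumptions; the function name resample_scan_lines, the beam counts, and the vertical field-of-view bounds are illustrative choices, not the exact procedure from AdaptLPS.

```python
# Illustrative data-based adaptation step: re-sample the scan lines of a
# source LiDAR cloud to match a target sensor with fewer beams.
# Beam counts, field of view, and the subsampling rule are assumptions.
import numpy as np

def resample_scan_lines(points: np.ndarray,
                        src_beams: int = 64,
                        dst_beams: int = 32,
                        fov_up: float = 2.0,
                        fov_down: float = -24.8) -> np.ndarray:
    """Keep only points whose estimated scan line survives a uniform
    source-to-target beam subsampling.

    points: (N, 4) array of x, y, z, intensity in the sensor frame.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Elevation angle of every point, in degrees.
    elev = np.degrees(np.arctan2(z, np.sqrt(x ** 2 + y ** 2)))
    # Map the elevation onto an integer ring index in [0, src_beams).
    ring = (elev - fov_down) / (fov_up - fov_down) * (src_beams - 1)
    ring = np.clip(np.round(ring).astype(int), 0, src_beams - 1)
    # Keep every k-th ring so roughly dst_beams rings remain.
    keep_every = src_beams // dst_beams
    return points[ring % keep_every == 0]

# Usage: emulate a 32-beam target sensor from a 64-beam source scan.
cloud = np.random.rand(100_000, 4).astype(np.float32)  # placeholder scan
adapted = resample_scan_lines(cloud)
```

Re-sampling of this sort lets a model trained on the adapted source scans see the sparser vertical resolution of the target sensor during training, which is one way the raw-scan processing described above can reduce the domain gap.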