Till Beemelmanns, Yuchen Tao, Bastian Lampe, Lennart Reiher, Raphael van Kempen, Timo Woopen, Lutz Eckstein
Storing and transmitting LiDAR point cloud data is essential for many AV applications, such as training data collection, remote control, cloud services or SLAM. However, due to the sparsity and unordered structure of the data, it is difficult to compress point cloud data to a low volume. Transforming the raw point cloud data into a dense 2D matrix structure is a promising way for applying compression algorithms. We propose a new lossless and calibrated 3D-to-2D transformation which allows compression algorithms to efficiently exploit spatial correlations within the 2D representation. To compress the structured representation, we use common image compression methods and also a self-supervised deep compression approach using a recurrent neural network. We also rearrange the LiDAR's intensity measurements to a dense 2D representation and propose a new metric to evaluate the compression performance of the intensity. Compared to approaches that are based on generic octree point cloud compression or based on raw point cloud data compression, our approach achieves the best quantitative and visual performance. Source code and dataset are available at https://github.com/ika-rwth-aachen/Point-Cloud-Compression.
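The abstract's core idea is projecting an unordered 3D point cloud onto a dense 2D matrix so that image codecs can exploit spatial correlations. Below is a minimal sketch of a generic spherical range-image projection, not the paper's calibrated lossless transformation; the image size and the vertical field-of-view limits (`fov_up`, `fov_down`) are illustrative assumptions, and collisions are resolved by simply keeping the last return per pixel.

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=1024,
                              fov_up=np.deg2rad(3.0),
                              fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) LiDAR point cloud onto a dense (h, w) range image
    via spherical coordinates. Pixels with no return stay at 0."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # map angles to [0, 1) image coordinates
    u = 0.5 * (1.0 - yaw / np.pi)               # column fraction
    v = (fov_up - pitch) / (fov_up - fov_down)  # row fraction

    col = np.clip((u * w).astype(np.int32), 0, w - 1)
    row = np.clip((v * h).astype(np.int32), 0, h - 1)

    image = np.zeros((h, w), dtype=np.float32)
    image[row, col] = r                          # last return wins per pixel
    return image
```

Unlike this rounded projection, the paper's transformation is lossless and calibrated to the specific sensor, so the original points can be recovered exactly; the same rearrangement can also be applied to the intensity channel, as the abstract describes.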