JOURNAL ARTICLE

Human Interpretable Radar Through Deep Generative Models

Abstract

Imaging radars are a powerful new sensor aimed at the autonomous vehicle market. With high angular resolution and dynamic range, such radars can discern between adjacent objects, even when they are stationary. The output of the radar is a complex point cloud (PC) in which it is difficult for humans to recognize objects. We investigate the use of machine learning to transform the radar PC into an interpretable format that can be understood intuitively. We employ two different generative models to transform radar PCs into synthetic LiDAR PCs and camera images of the scene. Our results demonstrate the imaging radar's ability to recognize objects such as pedestrians, parked vehicles, trees and road edges. We show how these tools can be used to analyse and evaluate the radar PC.
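The core idea in the abstract, learning a mapping from radar point clouds to a more interpretable LiDAR-like representation, can be illustrated with a toy sketch. This is not the paper's model: the per-point feature layout (x, y, z, Doppler, RCS in; x, y, z, intensity out) and the tiny NumPy MLP are assumptions chosen only to show the input/output shape of such a translator.

```python
import numpy as np

class TinyTranslator:
    """Toy per-point MLP standing in for a deep generative model that
    maps radar returns (x, y, z, doppler, rcs) to LiDAR-like points
    (x, y, z, intensity). Hypothetical architecture for illustration."""

    def __init__(self, in_dim=5, hidden=32, out_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, pc):
        # One hidden layer with ReLU; a real model would be trained
        # against paired radar/LiDAR scans of the same scene.
        h = np.maximum(pc @ self.W1 + self.b1, 0.0)
        return h @ self.W2 + self.b2

# A synthetic "radar frame" of 128 returns with 5 features each.
rng = np.random.default_rng(1)
radar_pc = rng.normal(size=(128, 5))

model = TinyTranslator()
lidar_like = model(radar_pc)
print(lidar_like.shape)  # one LiDAR-like point per radar return
```

In the paper's setting the translator would be a trained generative model (and a second one producing camera-style images), but the interface is the same: a radar point cloud in, a human-interpretable representation out.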

Keywords:
Radar, Radar imaging, Computer science, Artificial intelligence, Computer vision, Lidar, Radar engineering details, Point cloud, 3D radar, Remote sensing, Deep learning, Geography, Telecommunications

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 0.14
References: 17
Citation Normalized Percentile: 0.43

Citation History: (chart not reproduced)

Topics

Generative Adversarial Networks and Image Synthesis
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Image Processing and 3D Reconstruction
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition