JOURNAL ARTICLE

3DPPE: 3D Point Positional Encoding for Transformer-based Multi-Camera 3D Object Detection

Abstract

Transformer-based methods have swept the benchmarks for 2D and 3D detection from images. Because tokenization before the attention mechanism discards spatial information, positional encoding becomes critical for these methods. Recent work has found that encodings based on samples along the 3D viewing rays can significantly improve multi-camera 3D object detection. We hypothesize that 3D point locations carry more information than rays. We therefore introduce 3D point positional encoding, 3DPPE, into the 3D detection Transformer decoder. Although 3D measurements are unavailable at inference time in monocular 3D object detection, 3DPPE uses predicted depth to approximate the true point positions. Our hybrid-depth module combines direct and categorical depth estimates to produce a refined depth for each pixel. Despite the approximation, 3DPPE achieves 46.0 mAP and 51.4 NDS on the competitive nuScenes dataset, significantly outperforming encodings based on ray samples. The code is available at https://github.com/drilistbox/3DPPE.
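The pipeline sketched in the abstract, back-projecting each pixel with its predicted depth into a 3D point and then encoding that point, can be illustrated as follows. This is a minimal NumPy sketch under assumed conventions (a standard pinhole intrinsic matrix `K`, interleaved sin/cos frequencies, and a simple average for the hybrid-depth fusion); it is not the authors' implementation, whose learned fusion and encoding details live in the linked repository.

```python
import numpy as np

def pixel_to_point(u, v, depth, K):
    """Back-project pixel (u, v) with a predicted depth into a 3D camera-frame point."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def sinusoidal_pe(point, num_freqs=4):
    """Encode each 3D coordinate with sin/cos at geometrically spaced frequencies."""
    freqs = 2.0 ** np.arange(num_freqs)      # frequencies 1, 2, 4, 8 (illustrative)
    angles = np.outer(point, freqs).ravel()  # shape (3 * num_freqs,)
    return np.concatenate([np.sin(angles), np.cos(angles)])

def hybrid_depth(direct_depth, bin_logits, bin_centers):
    """Fuse a directly regressed depth with a categorical (binned) depth estimate.
    The paper learns this combination; a plain average is used here for illustration."""
    probs = np.exp(bin_logits - bin_logits.max())
    probs /= probs.sum()
    categorical = probs @ bin_centers  # expected depth under the bin distribution
    return 0.5 * (direct_depth + categorical)

# Hypothetical intrinsics and predictions for one pixel.
K = np.array([[1000.0, 0.0, 800.0],
              [0.0, 1000.0, 450.0],
              [0.0, 0.0, 1.0]])
d = hybrid_depth(18.0, np.zeros(3), np.array([10.0, 20.0, 30.0]))
pe = sinusoidal_pe(pixel_to_point(600.0, 300.0, d, K))
print(pe.shape)  # (24,)
```

The resulting per-pixel vector is what a 3DPPE-style decoder would add to the image tokens in place of a ray-based encoding.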

Keywords:
Computer science, artificial intelligence, computer vision, object detection, encoding, Transformer, pixel, inference, pattern recognition

Metrics

Cited by: 24
FWCI (Field-Weighted Citation Impact): 4.37
References: 43
Citation Normalized Percentile: 0.94


Topics

Advanced Neural Network Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Robotics and Sensor-Based Localization
Physical Sciences → Engineering → Aerospace Engineering
3D Surveying and Cultural Heritage
Physical Sciences → Earth and Planetary Sciences → Geology