3D object detection models that exploit both LiDAR and camera sensor features are top performers in large-scale autonomous driving benchmarks. A transformer is a popular network architecture used for this task, in which so-called object queries act as candidate objects. Initializing these object queries based on current sensor inputs leads to state-of-the-art performance. However, existing methods rely heavily on LiDAR data and do not fully exploit image features; moreover, they introduce significant latency.
To overcome these limitations, we propose EfficientQ3M, an efficient, modular, and multimodal solution for object query initialization in transformer-based 3D object detection models. Taking both the LiDAR and camera modalities as input, we apply efficient grid sampling and a lightweight detection head to predict a set of initial object query locations and corresponding query feature vectors. The proposed initialization method is combined with a "modality-balanced" transformer decoder in which the queries can access all sensor modalities throughout the decoder.
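To make the initialization idea concrete, the sketch below illustrates one plausible realization: per-modality BEV feature maps are fused, a lightweight head scores every grid cell, and the top-scoring cells become initial query positions while the fused features at those cells become initial query embeddings. The module name, layer sizes, and top-k selection are illustrative assumptions, not the paper's exact grid-sampling and detection-head design.

```python
import torch
import torch.nn as nn


class QueryInitSketch(nn.Module):
    """Hypothetical sketch of multimodal object query initialization.

    Fuses LiDAR and camera BEV feature maps, scores every grid cell with a
    lightweight head, and returns the top-k cells as initial query positions
    together with the fused features at those cells as query embeddings.
    """

    def __init__(self, lidar_dim=256, cam_dim=256, query_dim=256, num_queries=200):
        super().__init__()
        self.num_queries = num_queries
        # Lightweight fusion + scoring head (assumed architecture).
        self.fuse = nn.Conv2d(lidar_dim + cam_dim, query_dim, kernel_size=1)
        self.score = nn.Sequential(
            nn.Conv2d(query_dim, query_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(query_dim, 1, kernel_size=1),  # objectness per BEV cell
        )

    def forward(self, lidar_bev, cam_bev):
        # lidar_bev, cam_bev: (B, C, H, W) BEV feature maps on the same grid.
        fused = self.fuse(torch.cat([lidar_bev, cam_bev], dim=1))  # (B, D, H, W)
        heat = self.score(fused).flatten(2).squeeze(1)             # (B, H*W)

        # Pick the k highest-scoring grid cells as candidate objects.
        _, idx = heat.topk(self.num_queries, dim=1)                # (B, K)

        # Gather fused features at those cells as initial query embeddings.
        feats = fused.flatten(2).transpose(1, 2)                   # (B, H*W, D)
        query_feats = torch.gather(
            feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        )                                                          # (B, K, D)

        # Convert flat indices back to (x, y) grid coordinates.
        W = lidar_bev.size(-1)
        query_pos = torch.stack([idx % W, idx // W], dim=-1).float()  # (B, K, 2)
        return query_pos, query_feats
```

In this reading, the queries are grounded in the current sensor inputs at negligible cost (a few convolutions and a top-k), which is consistent with the efficiency and modularity claims: any subset of modalities can be fed to the fusion step.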
We achieve state-of-the-art performance for both LiDAR-camera and LiDAR-only sensor setups on the competitive nuScenes benchmark while being up to 15 times more efficient than the closest related method. The proposed initialization can be applied with any combination of sensor modalities as input, demonstrating its modularity.