4DRC-OCC: Robust Semantic Occupancy Prediction Through Fusion of 4D Radar and Camera

Abstract

Autonomous driving systems require robust and reliable perception across diverse environmental conditions, yet current approaches to 3D semantic occupancy prediction struggle in adverse weather and lighting. In this paper, we present the first study on fusing 4D radar and camera for 3D semantic occupancy prediction. This fusion offers significant advantages for robust and accurate perception: 4D radar provides reliable range, velocity, and angle information even in challenging conditions, complementing the rich semantic and texture details captured by cameras. We further demonstrate that incorporating depth cues with camera image pixels supports the lifting of 2D images into 3D, improving the accuracy of scene reconstruction. In addition, we introduce a fully automatically labeled dataset specifically designed for training semantic occupancy models, and show that it substantially reduces the need for costly manual annotation. Our results highlight the robustness of 4D radar across a wide range of challenging scenarios, showcasing its potential to advance perception for autonomous vehicles.
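The abstract's idea of using depth cues to lift 2D image pixels into 3D can be illustrated with standard pinhole-camera unprojection followed by voxelization. The sketch below is not the paper's method; it is a minimal illustration assuming known camera intrinsics `K` and a dense per-pixel depth map, with hypothetical helper names `lift_pixels_to_3d` and `voxelize`:

```python
import numpy as np

def lift_pixels_to_3d(depth, K):
    """Unproject a dense depth map to 3D points in the camera frame.

    depth: (H, W) array of per-pixel depth in metres.
    K:     (3, 3) pinhole intrinsics matrix.
    Returns an (H*W, 3) array via X = depth * K^{-1} [u, v, 1]^T.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, one row per pixel (row-major order).
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T        # back-projected viewing rays
    return rays * depth.reshape(-1, 1)     # scale each ray by its depth

def voxelize(points, voxel_size=0.4):
    """Quantise 3D points into the unique integer voxel cells they occupy."""
    return np.unique(np.floor(points / voxel_size).astype(int), axis=0)
```

In an occupancy pipeline, the resulting voxel indices would seed the 3D feature volume onto which image (and radar) features are scattered; more accurate depth directly tightens where those features land.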

Files

4DRC-OCC_Robust_Semantic_Occup... (pdf)
License info not available

File under embargo until 12-12-2026