Reducing the Sim-to-Real Gap: Lidar-Based 3D Static Environment Reconstruction
Abstract
This paper presents a method for reconstructing a lidar-based 3D static environment from a driving scenario, which is then used to aid in creating a digital twin of that scenario. Designed with limited computational resources in mind, the resulting 3D static background reconstruction method is lightweight yet on par with comparable works, making it a viable alternative to them. It uses ground truth labels and open-source, out-of-the-box building blocks to create a pipeline that filters the lidar frames, performs registration between those frames, and meshes the resulting point cloud into a 3D mesh of the static background. We performed experiments in the Siemens Prescan ADAS simulator, reenacting the traffic scenario in combination with our 3D static background, and measured the inference domain gap between the synthetic and original data using the Bidirectional Chamfer Distance and an object detector trained on real data. With Prescan as the simulator, AD models can be evaluated in closed loop. This work opens up the possibility of applying sensor domain shifts to existing data, or of creating reality-based traffic scenarios from existing traffic scenarios and their corresponding 3D static backgrounds, for edge cases that are absent or underrepresented in real-world data.
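The Bidirectional Chamfer Distance used above to quantify the domain gap can be sketched as follows. This is a minimal NumPy illustration of the standard definition (mean nearest-neighbor distance from each cloud to the other, summed in both directions), not the paper's actual evaluation code; the function name and the brute-force distance matrix are our own choices for clarity.

```python
import numpy as np

def bidirectional_chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Bidirectional Chamfer Distance between two point clouds.

    a: (N, 3) array of points, b: (M, 3) array of points.
    Returns the mean nearest-neighbor distance a->b plus b->a.
    Brute-force O(N*M) for illustration; real pipelines use a KD-tree.
    """
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # For each point in a, distance to its nearest point in b, and vice versa
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Identical clouds yield a distance of zero, while a growing value indicates an increasing geometric discrepancy between the reconstructed (synthetic) and original lidar data.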