3D Gaussian Splatting (3DGS) is a method for representing 3D scenes, but it is prone to overfitting when trained with limited viewpoint diversity, often producing artifacts such as floating Gaussians at incorrect depths. This paper addresses the issue by introducing 3D Gaussian Splatting with Depth, which incorporates depth supervision from RGB-D cameras into the training process. By using depth data to guide the placement of Gaussians, the proposed method aims to reduce such artifacts. Through quantitative and qualitative analysis, the paper demonstrates that depth-supervised Gaussian splatting mitigates overfitting artifacts, particularly in outdoor scenes with only moderate camera viewpoint diversity. The depth-supervised model reduces the depth loss by a factor of three without substantially increasing the loss on regular views.
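The depth supervision described above amounts to adding a depth term to the training loss alongside the usual photometric loss. The sketch below illustrates the general idea; the function name, the L1 formulation, the validity mask, and the weight `lambda_depth` are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def depth_supervised_loss(rendered_rgb, gt_rgb,
                          rendered_depth, gt_depth,
                          lambda_depth=0.1):
    """Illustrative combined loss: photometric term plus a weighted
    depth term from an RGB-D sensor. Names and weighting are assumed,
    not taken from the paper."""
    # L1 photometric loss on rendered colors (3DGS typically also adds
    # a D-SSIM term; omitted here for brevity)
    rgb_loss = np.abs(rendered_rgb - gt_rgb).mean()
    # L1 depth loss against sensor depth, masked to pixels where the
    # sensor returned a valid (positive) reading
    valid = gt_depth > 0
    depth_loss = np.abs(rendered_depth[valid] - gt_depth[valid]).mean()
    return rgb_loss + lambda_depth * depth_loss
```

With a small depth weight like this, gradients from the depth term push floating Gaussians toward the sensor-measured surface without dominating the color reconstruction objective.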