Identifying Labeling Errors Without Access to Ground Truth
Exploring Ensemble Methods for Error Detection and Rectification
Abstract
Object detection relies heavily on accurate annotations, which are costly to obtain yet crucial for model performance, and annotation errors can severely degrade the reliability of trained detectors. To address this challenge, we introduce EnsembAudit (EA), a framework that autonomously identifies and rectifies common labeling errors, reducing manual annotation effort. EA leverages ensemble techniques: Threshold Voting for error identification and Non-Maximum Suppression for error rectification.
We evaluate EA across a range of noise levels and labeling-error types. Our experiments show that EA is most effective on datasets with significant noise, reducing labeling errors by approximately 20%, while its benefit diminishes on datasets with minimal noise. These results highlight EA's potential to improve annotation quality and, in turn, the robustness of object detection applications.
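As background, the two generic building blocks named in the abstract can be sketched as follows. This is an illustrative sketch of the general techniques, not EnsembAudit's actual implementation: the function names, box format `(x1, y1, x2, y2)`, and thresholds are assumptions for the example.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def threshold_vote(label_box, ensemble_boxes, iou_thresh=0.5, vote_thresh=0.5):
    """Threshold Voting sketch: flag a ground-truth box as a suspected
    labeling error when fewer than vote_thresh of the ensemble members
    predict a box that overlaps it by at least iou_thresh."""
    votes = sum(
        any(iou(label_box, b) >= iou_thresh for b in member_boxes)
        for member_boxes in ensemble_boxes
    )
    return votes / len(ensemble_boxes) < vote_thresh

def nms(boxes, scores, iou_thresh=0.5):
    """Standard non-maximum suppression: keep the highest-scoring boxes
    and drop lower-scoring boxes that overlap a kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

For example, a label matched by two of three ensemble members at IoU >= 0.5 survives voting (2/3 >= 0.5), while a label matched by none is flagged; NMS then collapses overlapping ensemble proposals into a single corrected box per object.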