Performance Comparison of Synthetic Face Databases with the Xception Model
Abstract
The rise of AI-generated images presents significant challenges in distinguishing real visuals from fake ones. Such fabricated content can spread false information about individuals or create false identities for fraud. This study evaluates the effectiveness of the Xception model in detecting AI-generated faces, a crucial task for mitigating the misuse of facial manipulation technology. By analyzing several datasets, including iFakeFaceDB, the Diverse Fake Face Dataset (DFFD), CelebA, and CASIA-WebFace, we assess the model's performance in real-world scenarios. Our findings highlight the strengths of the Xception model in recognizing real and synthetic images: it achieves an accuracy of 97.11% on DFFD, which contains both real and synthetic images, and accuracies between 96.87% and 98.07% on CASIA-WebFace and CelebA. However, its accuracy drops to 73.15% on a different synthetic face dataset, iFakeFaceDB. This work examines the Xception model's capabilities and underscores the need for comprehensive detection methods to safeguard against the potential harms of synthetic media.
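As a rough illustration of the evaluation setup summarized above, the sketch below builds a binary real-vs-synthetic face classifier from an ImageNet-pretrained Xception backbone. The choice of TensorFlow/Keras, the 299x299 input size, the "real/"/"fake/" directory layout, and the fine-tuning hyperparameters are assumptions made for this example; the paper's exact training configuration is not given here.

```python
# Minimal sketch of an Xception-based real-vs-fake face detector.
# Assumptions: TensorFlow/Keras, 299x299 RGB inputs, and image folders
# split into "real/" and "fake/" subdirectories (hypothetical paths).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

IMG_SIZE = (299, 299)  # Xception's native input resolution

def build_detector():
    # ImageNet-pretrained Xception backbone without the 1000-class head.
    base = Xception(weights="imagenet", include_top=False,
                    input_shape=IMG_SIZE + (3,), pooling="avg")
    base.trainable = True  # fine-tune the whole backbone (assumption)

    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # scale to [-1, 1]
    x = base(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # P(synthetic)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical dataset directories with real/ and fake/ subfolders,
    # e.g. a DFFD-style split; paths are placeholders.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

    model = build_detector()
    model.fit(train_ds, validation_data=val_ds, epochs=5)
    print(model.evaluate(val_ds))  # [loss, accuracy] on the held-out split
```

Evaluating a model trained this way on each dataset separately (DFFD, CelebA, CASIA-WebFace, iFakeFaceDB) would yield per-dataset accuracies comparable in form to those reported in the abstract.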