Applying Fine-Tuning Methods to FTTransformer in Anti-Money Laundering Applications

Abstract

This research investigates the effectiveness of combining the Feature Tokenizer Transformer (FTTransformer) [6] with graph neural networks (GNNs) for anti-money laundering (AML) applications. We explore several fine-tuning techniques, including LoRA [9] and vanilla fine-tuning, on our baseline FTT architecture. Using the IBM AML dataset [1], we compare the performance of different models and fine-tuning approaches. Our results indicate that the FTT alone does not outperform GNNs and that careful configuration is required when working with multi-modal datasets. This work contributes to the development of more efficient and accurate methods for detecting financial fraud patterns.
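To make the LoRA approach concrete, the sketch below shows how a low-rank adapter can be attached to a linear projection inside a transformer-based model such as the FTT. This is an illustrative PyTorch example under assumed hyperparameters (rank r and scaling alpha), not the paper's implementation; the LoRALinear name is hypothetical.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen linear layer with a trainable low-rank update:
    # y = W x + (alpha / r) * B(A x), following the LoRA formulation [9].
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        # A starts with small Gaussian noise and B with zeros, so the
        # adapter initially contributes nothing (delta W = 0).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

In practice one would wrap, for example, the query and value projections of each FTT attention block with such adapters and train only the adapter parameters, which is what makes LoRA substantially cheaper than vanilla fine-tuning of all weights.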
