Natural Language Processing and Tabular Datasets in Federated Continual Learning

A usability study of FCL in domains beyond image classification

Abstract

Federated Continual Learning (FCL) is an emerging field with strong roots in image classification. However, limited research has been done on its potential for Natural Language Processing and tabular datasets. With recent developments in AI around language models and the widespread use of mobile devices, it becomes relevant to consider FCL's capabilities in dynamic environments. Our
paper discusses and evaluates the applicability of FCL methods in the domains of Natural Language Processing and tabular data, with image processing as a baseline. We use Long Short-Term Memory (LSTM) models, DNNs, and LeNet-5 as models for sentiment analysis, tabular classification, and image classification, respectively. Through our experiments, we evaluate the average accuracy and backward transfer of EWC, GEM, their federated variants, and the state-of-the-art FCL method FedWeIT. With these methods, image classification reached over 17% higher average accuracy than the sentiment analysis and tabular classification tasks, and achieved a 99.5% average increase in knowledge transfer between tasks. Furthermore, we observe that non-federated continual learning methods on average reach higher accuracies than both their federated counterparts and the state-of-the-art methods.
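The two evaluation metrics named in the abstract, average accuracy and backward transfer, have standard definitions in the continual-learning literature (Lopez-Paz and Ranzato's GEM paper). The sketch below shows those standard formulas; the accuracy matrix `R` is illustrative, not data from this paper.

```python
# Standard continual-learning metrics over an accuracy matrix R,
# where R[i][j] is test accuracy on task j after training task i.

def average_accuracy(R):
    """Mean accuracy over all tasks after training on the final task."""
    final_row = R[-1]
    return sum(final_row) / len(final_row)

def backward_transfer(R):
    """BWT = mean change in earlier-task accuracy after training later tasks.

    Negative values indicate catastrophic forgetting; positive values
    indicate that later tasks improved performance on earlier ones.
    """
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

# Illustrative 3-task accuracy matrix (hypothetical numbers).
R = [
    [0.90, 0.10, 0.10],
    [0.80, 0.85, 0.20],
    [0.75, 0.80, 0.88],
]
print(average_accuracy(R))   # mean of the final row
print(backward_transfer(R))  # negative here: earlier tasks degraded
```

Under these definitions, the "knowledge transfer between tasks" compared in the abstract corresponds to backward transfer: a method that mitigates forgetting (e.g. EWC's parameter regularization or GEM's gradient projection) pushes this value toward zero or above.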