Recent scandals such as the Dutch Toeslagenaffaire have shown the importance of fairness monitoring for machine learning models. If care is not taken, automated decision-making models can unfairly favour some groups of people and discriminate against others. The consequences can be devastating for the people involved. It has been recognised that this problem requires proper research. However, most existing research studies the problem in a static context, while almost all real-life applications are dynamic processes. Datasets grow continuously, and an automated decision process can itself influence the newer entries in the dataset. When these prediction tasks are viewed in a dynamic context, a new question arises: is the older data just as relevant as the new entries? This question can be addressed with fading algorithms, which use different methods to prioritise new data and forget old data. This paper investigates the effect of these fading algorithms on the fairness of a model. The methods studied are an abrupt fading algorithm, a gradual weight-fading algorithm, and a gradual data-volume-fading algorithm. The results show the importance of considering the data in a dynamic context: a significant improvement in equality of opportunity is observed, at the cost of the efficiency of the model.
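To make the three fading strategies concrete, the sketch below shows one possible way to realise them as sample-weighting functions that could be passed to a model's training routine (e.g. a scikit-learn estimator's sample_weight argument). The function names, the sliding-window cutoff, the exponential decay form, and the age-based subsampling schedule are illustrative assumptions, not the exact formulations used in this paper.

import numpy as np

def abrupt_fading(timestamps, window):
    """Abrupt fading: only samples newer than `window` time units
    contribute; older samples are dropped entirely (weight 0)."""
    age = timestamps.max() - timestamps
    return (age < window).astype(float)

def gradual_weight_fading(timestamps, decay):
    """Gradual fading of weight: all samples are kept, but their
    influence decays exponentially with age."""
    age = timestamps.max() - timestamps
    return np.exp(-decay * age)

def gradual_data_fading(timestamps, min_keep, seed=0):
    """Gradual fading of the amount of data: older samples are
    randomly subsampled, so the share of old data shrinks with age."""
    age = timestamps.max() - timestamps
    keep_prob = np.clip(1.0 - (1.0 - min_keep) * age / (age.max() + 1e-9), 0.0, 1.0)
    rng = np.random.default_rng(seed)
    return (rng.random(len(timestamps)) < keep_prob).astype(float)

Under this reading, the abrupt variant forgets old data all at once, the weight-fading variant forgets it smoothly, and the data-fading variant forgets it by shrinking how much of it is used.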