Adaptive Activation Functions for Deep Learning-based Power Flow Analysis


Abstract

This paper investigates the impact of adaptive activation functions on deep learning-based power flow analysis. Specifically, it compares four adaptive activation functions with state-of-the-art activation functions, i.e., ReLU, LeakyReLU, Sigmoid, and Tanh, in terms of loss function error, convergence speed, and learning process stability, using a real-world distribution network dataset. Results indicate that the proposed adaptive activation functions improve learning capability over the state-of-the-art activation functions. Notably, the adaptive ReLU activation shows the greatest improvement in the learning process, converging up to twice as fast as standard ReLU. The adaptive Sigmoid and Tanh activation functions achieve the lowest loss function error, outperforming ReLU and LeakyReLU by up to two orders of magnitude. Furthermore, the proposed adaptive activation functions lead to smoother and more stable learning processes, especially during early training, improving convergence. The practical implications of this study include the potential application of these adaptive activation functions in distribution network modeling, forecasting, and control, leading to more accurate and reliable power system operation.
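The abstract does not spell out how the adaptive activation functions are parameterized. Below is a minimal PyTorch sketch, assuming the common learnable-slope form relu(a·x), where the slope a is trained jointly with the network weights; the class name AdaptiveReLU, the initialization, and the layer sizes are illustrative choices, not the paper's implementation:

```python
import torch
import torch.nn as nn


class AdaptiveReLU(nn.Module):
    """ReLU with a trainable slope, applied as relu(a * x).

    Assumed formulation: a single learnable scalar `a` scales the
    pre-activation and is updated by the optimizer together with the
    network weights, so each layer can adapt its activation shape.
    """

    def __init__(self, init_slope: float = 1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(init_slope))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.a * x)


# Illustrative power-flow regression model (hypothetical dimensions):
# maps 32 input features to, e.g., voltage magnitude and angle.
model = nn.Sequential(
    nn.Linear(32, 64),
    AdaptiveReLU(),
    nn.Linear(64, 64),
    AdaptiveReLU(),
    nn.Linear(64, 2),
)
```

The same pattern extends to the adaptive Sigmoid and Tanh variants by replacing torch.relu with torch.sigmoid or torch.tanh.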
