Solving the incompressible Navier-Stokes equations is computationally expensive, with the pressure Poisson equation being the most time-consuming step. This equation is typically solved with iterative linear solvers, which rely on an initial guess. This creates an opportunity to use machine learning to improve that initial guess, such that fewer iterations are needed and computation time is saved.
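The idea of warm-starting an iterative solver with a learned prediction can be illustrated with a minimal sketch. This is not the thesis code: the toy 1D Poisson matrix, the hand-written conjugate gradient routine, and the placeholder "prediction" (the exact solution plus noise) are all assumptions made purely for illustration.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=1000):
    """Plain CG; returns the solution and the number of iterations used."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, k + 1
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, max_iter

# Toy 1D Poisson system standing in for the pressure equation.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)

x_cold, it_cold = conjugate_gradient(A, b, x0=np.zeros(n))
# A GNN prediction would replace this placeholder "warm start".
p_pred = x_cold + 0.05 * np.random.default_rng(1).standard_normal(n)
x_warm, it_warm = conjugate_gradient(A, b, x0=p_pred)
print(f"iterations: cold start {it_cold}, ML warm start {it_warm}")
```

The closer the initial guess lies to the true pressure field, the smaller the initial residual and the fewer iterations the solver needs, which is exactly the saving the model aims for.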
A novel graph neural network is designed that employs a custom convolution algorithm, message-passing scheme, and pooling algorithm to maximize its performance. First, a convolution algorithm is proposed that uses interpolation to make a discrete (3x3) CNN kernel continuous. Then, instead of evaluating the kernel function directly, the kernel weight is obtained by integrating the function over specified bounds, which accounts for the geometrically inhomogeneous distribution of the source nodes. This integral is embedded as a weighted sum over a vector of learnable parameters, computed through a dot product with the edge attribute vectors. In addition to the convolution operation, a message-passing scheme is designed that is compatible with the data format of the finite volume method while allowing information to travel far across the mesh. To conclude the model design, a custom pooling algorithm is introduced that is equivalent to average pooling in CNNs.
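The core of the convolution idea, a per-edge kernel weight formed as a dot product between a learnable parameter vector and the edge attribute vector, can be sketched as follows. The class name, tensor shapes, and mean aggregation are assumptions for illustration, not the thesis implementation.

```python
import torch
import torch.nn as nn

class IntegratedKernelConv(nn.Module):
    """Hedged sketch: edge attributes encode the integration coefficients of a
    continuous kernel; the per-edge weight is their dot product with learnable
    parameters."""

    def __init__(self, in_channels, out_channels, attr_dim):
        super().__init__()
        # One learnable coefficient vector per (output, input) channel pair.
        self.theta = nn.Parameter(torch.randn(out_channels, in_channels, attr_dim) * 0.1)

    def forward(self, x, edge_index, edge_attr):
        # x: [num_nodes, in_channels]; edge_index: [2, num_edges];
        # edge_attr: [num_edges, attr_dim]
        src, dst = edge_index
        # Per-edge kernel weights via dot product with the edge attributes.
        w = torch.einsum("ea,oia->eoi", edge_attr, self.theta)   # [E, O, I]
        # Messages: apply the edge-specific weight to the source features.
        msg = torch.einsum("eoi,ei->eo", w, x[src])              # [E, O]
        # Mean-aggregate the messages at the destination nodes.
        out = torch.zeros(x.size(0), w.size(1), device=x.device)
        out.index_add_(0, dst, msg)
        deg = torch.zeros(x.size(0), device=x.device).index_add_(
            0, dst, torch.ones_like(dst, dtype=x.dtype))
        return out / deg.clamp(min=1).unsqueeze(-1)

# Tiny usage example on a 3-node graph.
conv = IntegratedKernelConv(in_channels=2, out_channels=4, attr_dim=5)
x = torch.randn(3, 2)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
edge_attr = torch.randn(3, 5)
print(conv(x, edge_index, edge_attr).shape)  # torch.Size([3, 4])
```

Because the geometric information lives entirely in the precomputed edge attributes, the learnable part stays a fixed-size parameter vector regardless of how irregular the mesh is.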
A normalization procedure is established that ensures consistent magnitudes of the model's inputs and outputs. Notably, the ground-truth output pressure is normalized by its standard deviation, which is unknown at inference time. To estimate this normalization factor, a correction model is introduced that uses the same convolution algorithm but employs an architecture inspired by classification CNNs.
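A minimal sketch of this normalization step, assuming the target is simply divided by its standard deviation during training and rescaled by a predicted scale at inference; `scale_model` stands in for the correction model and is a placeholder name.

```python
import torch

def normalize_target(p):
    """Scale the ground-truth pressure field by its own standard deviation
    (used during training, when the ground truth is available)."""
    sigma = p.std()
    return p / sigma, sigma

def denormalize_prediction(p_hat_normalized, features, scale_model):
    """Recover a physical-scale prediction at inference time, where the true
    standard deviation is unknown and must be estimated."""
    sigma_hat = scale_model(features)  # correction model estimates the scale
    return p_hat_normalized * sigma_hat
```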
The model's performance is evaluated based on the reduction in the number of iterations required to reach convergence. This is done for both the Preconditioned Conjugate Gradient (PCG) solver and the Geometric Agglomerated Algebraic Multigrid (GAMG) solver.
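For clarity, the metric can be written as the relative saving in iterations; the counts below are illustrative placeholders, not results from the thesis.

```python
def iteration_reduction(n_baseline: int, n_ml: int) -> float:
    """Fraction of solver iterations saved relative to the default initial guess."""
    return (n_baseline - n_ml) / n_baseline

print(iteration_reduction(100, 60))  # 0.4, i.e. a 40% reduction
```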
Across the various tests conducted, the number of iterations needed to reach convergence is reduced by approximately 40%, with the PCG solver performing slightly better than the GAMG solver. The PCG solver, however, yields less consistent results: it performs very well on samples that closely resemble the training data, with reductions of up to 60%, but its performance drops significantly on data that does not closely resemble the training data, sometimes even increasing the number of iterations. The GAMG solver demonstrates consistent performance, with almost no difference between the training and evaluation data.
In terms of generalization, the model demonstrates promising results, achieving similar performance across datasets with varying levels of complexity. This is notable because the root mean square error differs significantly across datasets and individual samples, suggesting that there is no direct relationship between the reduction in the number of iterations and the accuracy of the prediction. Furthermore, the model performs well on unseen meshes, showing that it can handle the unstructured nature of the meshes used throughout this research. This demonstrates that the model can be trained on a diverse dataset and then applied to unseen cases.