Abstract
The numerical solution of differential equations using machine learning-based approaches has gained significant popularity. Neural network-based discretization has emerged as a powerful tool for solving differential equations by parameterizing a set of functions. Various approaches, such as the deep Ritz method and physics-informed neural networks, have been developed for numerical solutions. Training algorithms, including gradient descent and greedy algorithms, have been proposed to solve the resulting optimization problems. In this paper, we focus on the variational formulation of the problem and propose a Gauss–Newton method for computing the numerical solution. We provide a comprehensive analysis of the superlinear convergence properties of this method, along with a discussion on semi-regular zeros of the vanishing gradient. Numerical examples are presented to demonstrate the efficiency of the proposed Gauss–Newton method.
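To illustrate the type of iteration the abstract refers to, here is a minimal, hedged sketch of a generic Gauss–Newton update for a nonlinear least-squares problem, min over theta of ||r(theta)||^2. The toy model `f(x; a, b) = a * tanh(b * x)` and the synthetic data are illustrative assumptions only and do not come from the paper, which applies the method to neural-network discretizations of variational PDE problems.

```python
import numpy as np

# Hedged sketch: a generic Gauss-Newton iteration for a nonlinear
# least-squares problem min_theta 0.5 * ||r(theta)||^2. The model and
# data below are illustrative assumptions, not taken from the paper.

def gauss_newton(residual, jacobian, theta, n_iter=20):
    """At each step solve the linearized problem J d = r in the
    least-squares sense and update theta <- theta - d."""
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # Least-squares solve (equivalent to applying pinv(J) to r)
        d, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta - d
    return theta

# Toy parameterized model: f(x; a, b) = a * tanh(b * x),
# fit to synthetic data generated with (a, b) = (2, 3).
x = np.linspace(-1.0, 1.0, 50)
y = 2.0 * np.tanh(3.0 * x)

def residual(theta):
    a, b = theta
    return a * np.tanh(b * x) - y

def jacobian(theta):
    a, b = theta
    t = np.tanh(b * x)
    # Columns: dr/da and dr/db
    return np.column_stack([t, a * (1.0 - t**2) * x])

theta = gauss_newton(residual, jacobian, np.array([1.5, 2.5]))
print(theta)  # converges toward (2.0, 3.0)
```

Because this is a zero-residual problem, the Gauss–Newton iteration converges rapidly from a nearby starting point, which mirrors the superlinear convergence the paper establishes near semi-regular zeros of the vanishing gradient.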
Recommended Citation
W. Hao et al., "Gauss Newton Method for Solving Variational Problems of PDEs with Neural Network Discretizations," Journal of Scientific Computing, vol. 100, no. 1, article no. 17, Springer, Jul 2024.
The definitive version is available at https://doi.org/10.1007/s10915-024-02535-z
Department(s)
Mathematics and Statistics
Keywords and Phrases
Convergence analysis; Gauss–Newton method; Neural network discretization; Partial differential equations; Variational form
International Standard Serial Number (ISSN)
1573-7691; 0885-7474
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2024 Springer. All rights reserved.
Publication Date
01 Jul 2024
Comments
Purdue University, Grant 1R35GM146894