CMX Student/Postdoc Seminar
Machine Learning for Numerical PDEs: Fast Rate, Scaling Law and Minimax Optimality
Despite the empirical success of machine learning (ML) models for solving high-dimensional partial differential equations (PDEs), the following question remains poorly understood: for a given PDE and a given data-driven approximation architecture, how large a sample and how complex a model are needed to reach a prescribed level of accuracy? In this talk, we discuss the statistical limits of some ML-based methods for solving elliptic PDEs from random samples, including the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). First, we show how to establish information-theoretic lower bounds for both methods via Fano's inequality. Second, we prove upper bounds for DRM and PINNs using a fast-rate generalization bound. We find that the local Rademacher complexity of the gradient term is difficult to bound, which makes the current version of DRM sub-optimal. Motivated by this finding, we propose a modified DRM that samples additional data points for the gradient term. We also prove that PINNs and the modified DRM achieve minimax-optimal rates over Sobolev spaces. Finally, we present results of computational experiments that agree with the convergence rates predicted by our theory.
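To make the "sample more points for the gradient term" idea concrete, here is a minimal, hypothetical sketch of a Monte Carlo Deep Ritz energy in which the gradient term and the lower-order terms are estimated from independent sample sets of different sizes. The trial function, source term, and sample sizes below are illustrative assumptions, not the speaker's actual construction or experiments.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of the Deep Ritz energy
#   E(u) = ∫ ( 1/2 |∇u|^2 + 1/2 u^2 - f u ) dx   over the unit cube,
# where the gradient term is sampled on a LARGER point set (n_grad > n),
# mirroring the modified DRM described in the abstract.

rng = np.random.default_rng(0)
d = 2  # spatial dimension (assumed for illustration)

def u(x):
    # Illustrative trial function u(x) = sum_i sin(pi x_i).
    return np.sin(np.pi * x).sum(axis=1)

def grad_u(x):
    # Its analytic gradient: (pi cos(pi x_1), ..., pi cos(pi x_d)).
    return np.pi * np.cos(np.pi * x)

def f(x):
    # Source term chosen so that -Δu + u = f for the u above.
    return (np.pi**2 + 1) * u(x)

def ritz_energy(n, n_grad):
    """Monte Carlo Deep Ritz energy with independent sample sizes
    for the gradient term (n_grad points) and lower-order terms (n points)."""
    x_grad = rng.uniform(size=(n_grad, d))  # extra samples for |∇u|^2
    x_low = rng.uniform(size=(n, d))        # fewer samples suffice here
    grad_term = 0.5 * (grad_u(x_grad) ** 2).sum(axis=1).mean()
    low_term = (0.5 * u(x_low) ** 2 - f(x_low) * u(x_low)).mean()
    return grad_term + low_term

# Oversampling the gradient term reduces the variance of the hardest-to-
# control component of the empirical loss.
print(ritz_energy(n=1_000, n_grad=10_000))
```

In an actual DRM implementation, `u` would be a neural network and `grad_u` its automatic derivative; the point of the sketch is only the two independent sample sizes in `ritz_energy`.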