PINNs
Resources
- Raissi, M., P. Perdikaris, and G. E. Karniadakis. 2019. “Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.” Journal of Computational Physics 378 (February):686–707. https://doi.org/10.1016/j.jcp.2018.10.045.
- Panahi, Milad, Giovanni Michele Porta, Monica Riva, and Alberto Guadagnini. 2024. “Modelling Parametric Uncertainty in PDEs Models via Physics-Informed Neural Networks.” arXiv. https://doi.org/10.48550/arXiv.2408.04690.
- Cho, Woojin, Kookjin Lee, Donsub Rim, and Noseong Park. n.d. “Hypernetwork-Based Meta-Learning for Low-Rank Physics-Informed Neural Networks.”
- Yang, Liu, Xuhui Meng, and George Em Karniadakis. 2021. “B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data.” Journal of Computational Physics 425 (January):109913. https://doi.org/10.1016/j.jcp.2020.109913.
- Kiyani, Elham, Khemraj Shukla, Jorge F. Urbán, Jérôme Darbon, and George Em Karniadakis. 2025. “Which Optimizer Works Best for Physics-Informed Neural Networks and Kolmogorov-Arnold Networks?” arXiv. https://doi.org/10.48550/arXiv.2501.16371.
- Wang, Sifan, Ananyae Kumar Bhartari, Bowen Li, and Paris Perdikaris. 2025. “Gradient Alignment in Physics-Informed Neural Networks: A Second-Order Optimization Perspective.” arXiv. https://doi.org/10.48550/arXiv.2502.00604.
Related
Main Idea
PINNs can be viewed as highly nonlinear, global Collocation Methods for forward solutions of, or Inverse Problems in, Partial Differential Equations. The number of collocation points is generally unrelated to the number of unknowns being solved for (the parameters of the neural network), and the solution is obtained by loss minimization rather than by solving a system of equations.
Unlike some collocation methods, which rely on recursive or analytic derivatives of basis functions, PINNs use Automatic Differentiation to construct the required derivatives.
Vanilla PINNs suffer from inexact satisfaction of BCs, ICs, and the Governing PDE. Some of these can be combatted, in simple cases, through clever architecture changes (see the sections below).
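As a minimal sketch of this setup (a 1D heat equation $u_t = u_{xx}$ chosen purely for illustration; architecture, sampling, and training length are placeholders), the PDE residual is built with Automatic Differentiation at randomly drawn collocation points and minimized as a penalty loss:
```python
import torch
import torch.nn as nn

# Illustrative PINN for u_t = u_xx on (x, t) in [0, 1]^2.
# The collocation count is independent of the parameter count, and the "solve"
# is a loss minimization rather than a linear/nonlinear system solve.
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(x, t):
    """Residual u_t - u_xx, with derivatives taken by autograd."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = torch.rand(256, 1)  # collocation points in space
    t = torch.rand(256, 1)  # collocation points in time
    loss = pde_residual(x, t).pow(2).mean()
    # Vanilla PINNs would add BC/IC penalty terms here; they are only
    # satisfied approximately unless enforced by construction (see below).
    opt.zero_grad()
    loss.backward()
    opt.step()
```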
Training
Recent work suggests that second-order / quasi-Newton Optimization, or Multi-Objective Optimization treatments of the loss terms, are essential for good performance, outweighing many other factors such as architecture and collocation point sampling. This should generally be the first thing to try when performance is poor.
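A common concrete recipe, sketched below under the assumption that `net` and `pde_residual` come from the Main Idea sketch, is a first-order warm-up followed by full-batch L-BFGS; this is only one quasi-Newton option among those compared in the papers above.
```python
import torch

# Full-batch quasi-Newton refinement with L-BFGS (after e.g. an Adam warm-up).
# `net` and `pde_residual` are assumed from the sketch in Main Idea.
x = torch.rand(2048, 1)
t = torch.rand(2048, 1)

lbfgs = torch.optim.LBFGS(
    net.parameters(),
    max_iter=500,
    tolerance_grad=1e-9,
    line_search_fn="strong_wolfe",
)

def closure():
    lbfgs.zero_grad()
    loss = pde_residual(x, t).pow(2).mean()
    loss.backward()
    return loss

lbfgs.step(closure)
```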
Exact Periodic BCs
If our domain is $x \in [0, L]$ with periodic boundary conditions, we can enforce periodicity exactly by feeding the network the embedding $$v(x) = \big(\cos(\omega x),\ \sin(\omega x)\big),$$ where $\omega = \frac{2\pi}{L}$, instead of $x$ itself; any network applied to $v(x)$ is automatically $L$-periodic.
We can add higher frequencies to the embedding, $v(x) = \big(\cos(\omega x), \sin(\omega x), \dots, \cos(K\omega x), \sin(K\omega x)\big)$, to make higher-frequency periodic content easier to represent.
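A minimal sketch of such an embedding layer (the class name and the choices of $L$ and $K$ are illustrative):
```python
import torch
import torch.nn as nn

class PeriodicEmbedding(nn.Module):
    """Maps x in [0, L] to sin/cos features so any downstream network is exactly L-periodic."""
    def __init__(self, L: float, K: int = 1):
        super().__init__()
        self.L, self.K = L, K

    def forward(self, x):  # x: (batch, 1)
        k = torch.arange(1, self.K + 1, dtype=x.dtype, device=x.device)
        phase = 2 * torch.pi * k * x / self.L  # (batch, K)
        return torch.cat([torch.cos(phase), torch.sin(phase)], dim=1)

# Usage: feed (embedding(x), t) into the MLP; no periodic-BC loss term is needed.
embed = PeriodicEmbedding(L=2 * torch.pi, K=3)
```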
Exact Initial Conditions
We want the initial condition $u_\theta(x, 0) = u_0(x)$ to hold exactly, rather than only approximately through an IC penalty term.
Construct an ansatz that blends the initial condition with the network output, e.g. $$\hat{u}_\theta(x,t) = \phi(t)\, u_0(x) + \big(1 - \phi(t)\big)\, u_\theta(x,t),$$ where $$\phi(t) = \exp\left ( {-\frac{\lambda t}{T}} \right ) \cdot \left ( 1 - \frac{t}{T} \right ),$$
now with $\phi(0) = 1$, the initial condition $\hat{u}_\theta(x, 0) = u_0(x)$ is satisfied exactly and the IC loss term can be dropped.
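A minimal sketch of the resulting hard-constrained ansatz, assuming the blended construction above, the `net` from the Main Idea sketch, and illustrative choices of $u_0$, $\lambda$, and $T$:
```python
import torch

# û(x, t) = φ(t) u0(x) + (1 - φ(t)) u_θ(x, t); φ(0) = 1 gives the IC exactly.
# lam, T, and u0 are illustrative placeholders.
lam, T = 5.0, 1.0
u0 = lambda x: torch.sin(torch.pi * x)

def phi(t):
    return torch.exp(-lam * t / T) * (1 - t / T)

def u_hat(x, t):
    u_net = net(torch.cat([x, t], dim=1))  # `net` as in the Main Idea sketch
    return phi(t) * u0(x) + (1 - phi(t)) * u_net

# The PDE residual is then built from u_hat instead of the raw network output,
# and no IC penalty term is required.
```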
Uncertainty
Suppose the PDE depends on parameters $\mu$ (coefficients, source terms, etc.) that are uncertain or vary across problem instances, and we want solutions over a range of $\mu$. Options:
- Naively build a PINNs solution for each new $\mu$, throwing away previous results.
- Take $u_\theta(x, t; \mu)$, i.e. make $\mu$ an additional network input, and train this now with samples over $\mu$. This implies a sort of continuity or smoothness of the solution over $\mu$, which may or may not be the case. There is one training over all $\mu$.
- Do some sneaky hypernetwork tricks. I.e. have $\theta(\mu)$ as a guess from some hypernetwork, then detach and use this as the initial guess for regular training. This has been done by having the weights take a low-rank form, e.g. $W^{(l)} = U^{(l)}\,\mathrm{diag}\big(s^{(l)}\big)\,V^{(l)\top}$, and only training $s^{(l)}$ during this 2nd phase. Share all $U^{(l)}$ and $V^{(l)}$ across $\mu$, but still learn them (with an orthogonality constraint); see “Hypernetwork-Based Meta-Learning for Low-Rank Physics-Informed Neural Networks” and the sketch below. Learning $U^{(l)}$ and $V^{(l)}$ here can be interpreted as Meta-Learning.
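A minimal sketch of such a low-rank layer in the spirit of that paper (shapes, initialization, and the exact orthogonality penalty are illustrative, not the paper's precise formulation):
```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer with W = U diag(s) V^T; U, V shared across μ, s adapted per task."""
    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank**0.5)  # shared / meta-learned
        self.V = nn.Parameter(torch.randn(d_in, rank) / rank**0.5)   # shared / meta-learned
        self.s = nn.Parameter(torch.ones(rank))                      # per-task (phase 2)
        self.b = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        W = self.U @ torch.diag(self.s) @ self.V.T
        return x @ W.T + self.b

    def orthogonality_penalty(self):
        # Encourage U^T U ≈ I and V^T V ≈ I while the shared factors are being learned.
        eye = torch.eye(self.s.numel(), device=self.U.device)
        return ((self.U.T @ self.U - eye) ** 2).sum() + ((self.V.T @ self.V - eye) ** 2).sum()

# Phase 2 per-μ fine-tuning: freeze the shared factors and train only s, e.g.
# layer.U.requires_grad_(False); layer.V.requires_grad_(False)
```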
Bayesian PINNs
To facilitate UQ and have a probabilistic version of the solution $u_\theta$, place a prior $p(\theta)$ on the network parameters and infer a posterior over $\theta$ given noisy data, e.g. via HMC or variational inference (B-PINNs).
As part of this, we must construct a likelihood function, typically by assuming independent Gaussian noise on the available measurements (of the solution $u$, the source term $f$, and/or boundary values).
For inverse problems, we can just group the unknown PDE parameters with $\theta$ and infer them jointly.
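For example, writing the PDE as $\mathcal{F}[u] = f$ and assuming independent Gaussian noise with variances $\sigma_u^2$ and $\sigma_f^2$ on $N_u$ measurements $u_i$ of the solution and $N_f$ measurements $f_j$ of the source term (notation illustrative; boundary terms enter analogously), the posterior takes the form
$$p(\theta \mid \mathcal{D}) \;\propto\; p(\theta)\, \prod_{i=1}^{N_u} \exp\!\left(-\frac{\big(u_\theta(x_i) - u_i\big)^2}{2\sigma_u^2}\right) \prod_{j=1}^{N_f} \exp\!\left(-\frac{\big(\mathcal{F}[u_\theta](x_j) - f_j\big)^2}{2\sigma_f^2}\right),$$
which is then sampled with HMC or approximated variationally, as in Yang, Meng, and Karniadakis (2021).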
- Include other PINNs variants: training, architecture, and sampling
- Discuss problems in training
- Highlight scaling to higher dimensions