✨ TL;DR
This paper provides a complete, step-by-step manual derivation of how Physics-Informed Neural Networks (PINNs) are trained, including forward propagation, loss computation, and backpropagation with explicit numerical examples. It bridges the gap between automatic differentiation libraries and the underlying mathematical operations, making PINN training transparent and verifiable.
Existing tutorials and guides on Physics-Informed Neural Networks typically rely on automatic differentiation libraries to handle the training process, treating the underlying mathematical operations as a black box. This creates a pedagogical gap: practitioners use PINNs without understanding the complete algebraic mechanics of how gradients flow through both the network parameters and the physics-based loss terms. The lack of explicit, worked-through examples makes it difficult for learners to verify their understanding or debug implementations, particularly when dealing with the product-rule complications that arise when computing gradients of ODE residuals through hidden layers.
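To see where the product rule enters, consider a toy network far smaller than the paper's (a single hidden neuron, not the paper's 1-3-3-1 architecture): the ODE residual contains the temporal derivative of the network output, and differentiating that derivative with respect to a hidden-layer weight produces two terms.

```latex
% One-hidden-neuron illustration (not from the paper): N(t) = v\,\sigma(wt + b)
N(t) = v\,\sigma(wt + b),
\qquad
\frac{\partial N}{\partial t} = v\,\sigma'(wt + b)\,w
% Differentiating dN/dt w.r.t. the hidden weight w requires the product rule,
% because w appears both inside \sigma' and as an outer factor:
\frac{\partial}{\partial w}\!\left(\frac{\partial N}{\partial t}\right)
  = v\,\sigma''(wt + b)\,t\,w \;+\; v\,\sigma'(wt + b)
```

The second term has no counterpart in ordinary backpropagation of a data-fitting loss, which is exactly the subtlety the residual-gradient derivation must handle.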
The paper uses a concrete first-order initial value problem with a known analytical solution as a running example throughout. It employs a small 1-3-3-1 multilayer perceptron (one input, two hidden layers with three neurons each, one output) with 22 trainable parameters to demonstrate every calculation with explicit numerical values. The authors manually derive forward propagation for both the network output and its temporal derivative, construct a composite loss function from the ODE residual and initial condition, perform complete backpropagation including the product-rule terms in hidden layers, and execute gradient descent updates. From these concrete examples, they generalize to recursive formulas (sensitivity propagation relations) that apply to networks of arbitrary depth, and connect these to automatic differentiation frameworks. A companion Jupyter/PyTorch notebook validates all hand-derived calculations against machine-computed gradients.
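The training loop described above can be sketched in PyTorch. This is an illustrative reconstruction, not the paper's notebook: the specific IVP $y'(t) = -y(t)$, $y(0) = 1$ (solution $e^{-t}$), the collocation points, the learning rate, and the step count are all assumptions; only the 1-3-3-1 architecture with 22 parameters and the residual-plus-initial-condition loss follow the paper's description.

```python
# Minimal PINN sketch for the illustrative IVP y'(t) = -y(t), y(0) = 1.
# Architecture follows the paper (1-3-3-1 MLP, 22 parameters); the ODE,
# activation, and optimizer settings are assumptions for this example.
import torch

torch.manual_seed(0)

# 1-3-3-1 MLP: (1*3 + 3) + (3*3 + 3) + (3*1 + 1) = 22 trainable parameters
net = torch.nn.Sequential(
    torch.nn.Linear(1, 3), torch.nn.Tanh(),
    torch.nn.Linear(3, 3), torch.nn.Tanh(),
    torch.nn.Linear(3, 1),
)

# Collocation points on [0, 1]; requires_grad so dy/dt can be formed
t = torch.linspace(0.0, 1.0, 11).reshape(-1, 1).requires_grad_(True)

def loss_fn():
    y = net(t)
    # Temporal derivative of the network output -- the quantity the paper
    # derives by hand before comparing against autograd
    dy_dt = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    residual = dy_dt + y                 # ODE residual: y' + y = 0
    ic = net(torch.zeros(1, 1)) - 1.0    # initial-condition mismatch at t = 0
    return (residual ** 2).mean() + (ic ** 2).mean()

init_loss = loss_fn().item()

# Plain gradient descent, mirroring the paper's manual update steps
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn()
    loss.backward()
    opt.step()

final_loss = loss_fn().item()
```

The `create_graph=True` flag is what lets the residual (itself built from a gradient) be differentiated again during `loss.backward()`, which is the autodiff counterpart of the paper's hand-derived second-pass backpropagation.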