✨ TL;DR
This paper uses Deep Operator Networks (DeepONets) to learn a surrogate for the Riccati differential equation solution operator in finite-horizon LQR problems, enabling fast approximate optimal control without repeated numerical integration. The approach includes theoretical guarantees on stability and performance, and demonstrates significant computational speedups while maintaining high accuracy.
Solving finite-horizon Linear Quadratic Regulator (LQR) problems requires repeatedly solving differential Riccati equations (nonlinear, matrix-valued differential equations) for each new system instance or parameter configuration. This repeated numerical integration is computationally expensive and becomes a bottleneck in parametric and real-time optimal control applications, where many system configurations must be evaluated or control must be computed rapidly. The challenge is particularly acute for time-varying systems, where the Riccati equation must be solved over the entire time horizon for each query.
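To make the cost concrete, here is a minimal sketch of the classical workflow the paper aims to bypass: integrating the differential Riccati equation backward in time for one system instance. The matrices `A`, `B`, `Q`, `R`, `Qf` and the horizon `T` below are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2-state system (illustrative values only).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost
Qf = np.eye(2)         # terminal cost, P(T) = Qf
T = 5.0
n = A.shape[0]

def riccati_rhs(t, p_flat):
    # Differential Riccati equation:
    #   -dP/dt = A^T P + P A - P B R^{-1} B^T P + Q
    P = p_flat.reshape(n, n)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Integrate backward from t = T to t = 0 with terminal condition P(T) = Qf.
sol = solve_ivp(riccati_rhs, (T, 0.0), Qf.ravel(), rtol=1e-8, atol=1e-10)
P0 = sol.y[:, -1].reshape(n, n)

# Time-varying optimal feedback gain at t = 0: K(0) = R^{-1} B^T P(0).
K0 = np.linalg.solve(R, B.T @ P0)
```

Every new `(A, B, Q, R)` configuration repeats this entire backward integration, which is exactly the per-query cost the operator surrogate is meant to remove.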
The authors propose learning an approximation of the solution operator that maps time-dependent system parameters to the entire Riccati solution trajectory using Deep Operator Networks (DeepONets). The computational burden is shifted to a one-time offline learning stage where the operator surrogate is trained, after which online evaluation becomes fast function evaluation rather than differential equation solving. They design specialized DeepONet architectures for matrix-valued, time-dependent problems and introduce a progressive learning strategy to handle scalability as system dimension increases. The framework constructs an operator approximation that can generalize across a wide class of system configurations.
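The branch/trunk structure of a DeepONet for this task can be sketched as follows. This is a generic illustration, not the authors' architecture: the sensor count, layer widths, the choice to predict the vectorized upper triangle of the symmetric matrix P(t), and the random (untrained) weights are all assumptions made for the example. The branch network encodes a sampled parameter trajectory; the trunk network encodes the query time; their inner product gives the prediction, so an online query is a single cheap forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Random MLP weights (illustrative; a real surrogate would be trained)."""
    return [(rng.standard_normal((m, k)) * np.sqrt(2.0 / m), np.zeros(k))
            for m, k in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Assumed sizes: state dim 2, so symmetric P(t) has 3 unique entries.
n_state = 2
n_out = n_state * (n_state + 1) // 2  # vech(P(t))
m_sensors = 20                        # samples of the parameter function
p_latent = 16                         # shared branch/trunk feature width

branch = mlp_init([m_sensors, 64, p_latent * n_out])  # encodes parameters
trunk = mlp_init([1, 64, p_latent * n_out])           # encodes query time t

def deeponet_forward(param_samples, t):
    """Predict vech(P(t)) via per-entry inner products of branch and trunk features."""
    b = mlp_apply(branch, param_samples).reshape(n_out, p_latent)
    tr = mlp_apply(trunk, np.array([t])).reshape(n_out, p_latent)
    return np.sum(b * tr, axis=1)

# Online query: one sampled parameter trajectory, one time point.
param_samples = np.sin(np.linspace(0.0, 1.0, m_sensors))
vech_P = deeponet_forward(param_samples, 0.3)
```

The key contrast with the classical pipeline is that evaluating `deeponet_forward` at any `t` costs a fixed number of matrix-vector products, independent of the time-horizon discretization the numerical integrator would need.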