AutoXiv

Physics-Informed Neural Networks.

Research on neural network architectures that incorporate physical laws, differential equations, and domain knowledge into their training objectives for scientific modeling and control problems.

11 papers

Papers.

260421.0048
Physics-Informed Neural Networks for Biological $2\mathrm{D}{+}t$ Reaction-Diffusion Systems
Lavery · Cochrane · Olesen +3
This paper extends biologically-informed neural networks (BINNs) from 1D to 2D spatial domains for learning reaction-diffusion equations from data, combining neural network training with symbolic regression to discover closed-form equations. The method is demonstrated on real lung cancer cell microscopy data, successfully recovering interpretable 2D+time reaction-diffusion models.
Formal Sciences
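The paper's BINN architecture and its symbolic-regression stage are not reproduced here. As a minimal sketch of the core ingredient, the following builds a PINN residual for a generic 2D+t reaction-diffusion law $u_t = D(u_{xx} + u_{yy}) + f(u)$, with a small MLP standing in for the learned reaction term; all network sizes and names are illustrative assumptions.

```python
# Minimal 2D+t reaction-diffusion PINN residual (illustrative, not the
# paper's BINN): u(x, y, t) is a scalar field, D a learnable diffusion
# coefficient, and f_net a stand-in for the learned reaction term.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
f_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
log_D = nn.Parameter(torch.zeros(()))  # log-parameterized so D > 0

def pde_residual(xyt):
    xyt = xyt.requires_grad_(True)
    u = u_net(xyt)
    grads = torch.autograd.grad(u.sum(), xyt, create_graph=True)[0]
    u_x, u_y, u_t = grads[:, 0:1], grads[:, 1:2], grads[:, 2:3]
    u_xx = torch.autograd.grad(u_x.sum(), xyt, create_graph=True)[0][:, 0:1]
    u_yy = torch.autograd.grad(u_y.sum(), xyt, create_graph=True)[0][:, 1:2]
    # residual of u_t = D * (u_xx + u_yy) + f(u)
    return u_t - log_D.exp() * (u_xx + u_yy) - f_net(u)

xyt = torch.rand(256, 3)                    # collocation points (x, y, t)
loss = pde_residual(xyt).pow(2).mean()      # plus a data-fit term in practice
```

In the paper's pipeline, the trained reaction term would then be handed to symbolic regression to recover a closed-form expression.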
260421.0053
Learning the Riccati solution operator for time-varying LQR via Deep Operator Networks
Chen · Biccari · Wang
This paper uses Deep Operator Networks (DeepONets) to learn a surrogate for the Riccati differential equation solution operator in finite-horizon LQR problems, enabling fast approximate optimal control without repeated numerical integration. The approach includes theoretical guarantees on stability and performance, and demonstrates significant computational speedups while maintaining high accuracy.
Formal Sciences
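As background, a rough sketch of the DeepONet pattern the surrogate uses: a branch net encodes a sampled input function, a trunk net encodes the query time, and their inner product approximates the operator output (here one entry of the Riccati solution $P(t)$). Sensor count, layer widths, and the scalar output are assumptions, not the paper's configuration.

```python
# Generic DeepONet sketch (illustrative sizes): branch encodes m samples
# of the input function, trunk encodes the query time t, and the dot
# product of their latent vectors is the surrogate's output.
import torch
import torch.nn as nn

m, p = 100, 64   # number of input-function sensors, latent basis size
branch = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, p))
trunk = nn.Sequential(nn.Linear(1, 128), nn.ReLU(), nn.Linear(128, p))

def deeponet(u_samples, t):
    # u_samples: (batch, m) sampled input function; t: (batch, 1) query time
    return (branch(u_samples) * trunk(t)).sum(dim=-1, keepdim=True)

u = torch.randn(8, m)
t = torch.rand(8, 1)
P_entry = deeponet(u, t)  # trained against numerically integrated Riccati solutions
```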
260421.0057
Safe Control using Learned Safety Filters and Adaptive Conformal Inference
Huriot · Tabbara · Sibai
This paper introduces Adaptive Conformal Filtering (ACoFi), which combines learned safety filters with adaptive conformal inference to provide soft safety guarantees for control systems. The method dynamically adjusts switching criteria between nominal and safe policies based on prediction uncertainty, achieving better safety performance than fixed-threshold approaches.
Formal Sciences
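The learned filter itself is not sketched here, but the adaptive conformal inference recursion such filters build on is standard: the working miscoverage level is nudged after every step according to whether the latest score was covered. A toy version with placeholder scores:

```python
# Adaptive conformal inference update (standard recursion; the scores and
# the tie to a safety filter are placeholders, not the paper's system).
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 0.1, 0.05             # target miscoverage, ACI step size
alpha_t = alpha
history = [1.0]                      # seed calibration score
for s in np.abs(rng.standard_normal(1000)):   # stand-in safety scores
    q = np.quantile(history, min(max(1 - alpha_t, 0.0), 1.0))
    err = float(s > q)               # 1 when the new score is not covered
    alpha_t += gamma * (alpha - err) # tighten after misses, relax otherwise
    history.append(s)
    # a filter would switch from the nominal to the safe policy whenever
    # the predicted safety score for the nominal action exceeds q
```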
260421.0058
Physics-Informed Neural Networks: A Didactic Derivation of the Complete Training Cycle
Tahimi
This paper provides a complete, step-by-step manual derivation of how Physics-Informed Neural Networks (PINNs) are trained, including forward propagation, loss computation, and backpropagation with explicit numerical examples. It bridges the gap between automatic differentiation libraries and the underlying mathematical operations, making PINN training transparent and verifiable.
Formal Sciences
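For readers who want the autodiff counterpart of the paper's hand-worked cycle, here is one complete PINN training step, forward pass, physics plus boundary loss, backpropagation, and update, on a toy ODE $u'(x) = -u(x)$, $u(0) = 1$ (the ODE and network size are illustrative choices, not the paper's example).

```python
# One full PINN training step on u'(x) = -u(x), u(0) = 1 (toy setup).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

x = torch.linspace(0, 1, 16).reshape(-1, 1).requires_grad_(True)
u = net(x)                                           # forward pass
u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
residual = u_x + u                                   # physics: u' + u = 0
bc = net(torch.zeros(1, 1)) - 1.0                    # boundary: u(0) = 1
loss = residual.pow(2).mean() + bc.pow(2).mean()     # composite loss
opt.zero_grad()
loss.backward()                                      # backpropagation
opt.step()                                           # parameter update
```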
260421.0065
Learning Invariant Modality Representation for Robust Multimodal Learning from a Causal Inference Perspective
Mai · Han
This paper proposes CmIR, a causal inference framework that separates multimodal data into stable causal features and spurious environment-specific features to improve robustness in affective computing. The method achieves state-of-the-art performance, especially on out-of-distribution and noisy data.
Formal Sciences
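A toy sketch of the causal/spurious split idea, invented for illustration rather than taken from CmIR: an encoder's output is divided into a causal part used for prediction and an environment part, with a penalty keeping the causal features stable across environments.

```python
# Illustrative invariant-representation sketch (not CmIR's actual losses):
# predict from the "causal" half of the embedding and penalize its drift
# between environments, letting the other half absorb spurious variation.
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(32, 16)     # first 8 dims treated as causal, rest spurious
head = nn.Linear(8, 2)      # classifier reads causal features only

def loss_fn(x_env_a, x_env_b, y):
    zc_a, zc_b = enc(x_env_a)[:, :8], enc(x_env_b)[:, :8]
    task = F.cross_entropy(head(zc_a), y) + F.cross_entropy(head(zc_b), y)
    invariance = (zc_a.mean(0) - zc_b.mean(0)).pow(2).sum()
    return task + invariance

x_a, x_b = torch.randn(64, 32), torch.randn(64, 32)   # two "environments"
y = torch.randint(0, 2, (64,))
loss = loss_fn(x_a, x_b, y)
```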
260421.0066
Random Matrix Theory of Early-Stopped Gradient Flow: A Transient BBP Scenario
Coeurdoux · Ferré · Bouchaud
This paper develops a random matrix theory model that explains why neural networks exhibit a transient learning window where signal is detectable before overfitting occurs. The key mechanism is that anisotropy in input data creates fast and slow learning directions, causing a learnable eigenvalue to temporarily separate from noise before being reabsorbed.
Formal Sciences
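For context, the classical static BBP result that the transient scenario perturbs: a rank-one spike of strength $\theta$ in a sample covariance matrix with aspect ratio $q = d/n$ produces an outlier eigenvalue only above a threshold (the paper's time-dependent version under gradient flow is not reproduced here).

```latex
% Classical BBP transition for a rank-one spiked sample covariance matrix.
\[
\lambda_{\mathrm{out}} =
\begin{cases}
(1+\theta)\left(1 + \dfrac{q}{\theta}\right), & \theta > \sqrt{q},\\[4pt]
(1+\sqrt{q})^{2}, & \theta \le \sqrt{q}.
\end{cases}
\]
```

Below the threshold the top eigenvalue sticks to the Marchenko-Pastur bulk edge $(1+\sqrt{q})^2$; the paper's mechanism is an eigenvalue that crosses this edge during training and is later reabsorbed.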
260421.0069
Scalable Physics-Informed Neural Differential Equations and Data-Driven Algorithms for HVAC Systems
Zhai · Qiao · Mansour +1
This paper develops a hybrid simulation framework for large HVAC systems that combines physics-informed neural networks with traditional differential-algebraic equation solvers, achieving severalfold speedups over high-fidelity simulations while keeping errors low. The approach scales to systems with 32+ compressor-condenser pairs by learning component-level dynamics and enforcing system-level physical constraints.
Formal Sciences
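A rough sketch of the hybrid structure described above: a learned surrogate replaces each expensive component model, while a system-level physical constraint is enforced on the assembled state. The component dynamics, the toy conservation constraint, and all sizes are illustrative assumptions.

```python
# Hybrid component-surrogate sketch: a shared neural model advances each
# compressor-condenser pair's state; a toy system-level balance (column 0
# conserved) is then enforced by projection. Not the paper's formulation.
import torch
import torch.nn as nn

n_pairs = 32
component = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))

def step(states, dt=1.0):
    # states: (n_pairs, 4) per-component state; surrogate predicts derivatives
    new = states + dt * component(states)        # explicit Euler update
    # project back onto the conservation manifold for the "energy" column
    correction = (new[:, 0].sum() - states[:, 0].sum()) / n_pairs
    return torch.cat([new[:, :1] - correction, new[:, 1:]], dim=1)

states = torch.randn(n_pairs, 4)
states = step(states)
```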
260421.0073
Randomly Initialized Networks Can Learn from Peer-to-Peer Consensus
Rodríguez-Betancourt · Casasola-Murillo
This paper shows that randomly initialized neural networks can learn useful representations through simple peer-to-peer consensus (self-distillation) alone, without projectors, predictors, or pretext tasks. The findings suggest that self-distillation itself is a key mechanism driving learning in self-supervised methods, independent of other architectural components.
Formal Sciences
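A minimal sketch of the consensus mechanism: two randomly initialized encoders each regress toward the other's detached output on differently augmented views, with no projector, predictor, or pretext task. Architecture and augmentations are placeholders, not the paper's setup.

```python
# Peer-to-peer consensus (symmetric self-distillation) between two
# randomly initialized encoders; detach() keeps each peer from chasing
# its own gradient through the other's target.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

net_a, net_b = encoder(), encoder()
opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)

x = torch.randn(128, 32)
view1 = x + 0.1 * torch.randn_like(x)   # two noisy "augmentations"
view2 = x + 0.1 * torch.randn_like(x)

za, zb = net_a(view1), net_b(view2)
loss = (za - zb.detach()).pow(2).mean() + (zb - za.detach()).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```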
260421.0076
Parkinson's Disease Detection via Self-Supervised Dual-Channel Cross-Attention on Bilateral Wrist-Worn IMU Signals
Zannat
This paper presents a self-supervised deep learning method using bilateral wrist-worn IMU sensors to detect Parkinson's disease, achieving over 93% accuracy for distinguishing PD from healthy controls and demonstrating effective transfer learning with only 20% labeled data. The model is lightweight enough to run in real-time on a Raspberry Pi, making it practical for clinical deployment.
Formal Sciences
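A minimal sketch of the dual-channel cross-attention pattern named in the title: features from each wrist attend to the other wrist's sequence before fusion. The use of nn.MultiheadAttention, the dimensions, and the pooling are assumptions for illustration.

```python
# Bilateral cross-attention sketch: left-wrist features query the right
# wrist and vice versa; pooled outputs are concatenated for a classifier.
import torch
import torch.nn as nn

d, T = 64, 100                       # feature dim, window length
attn_lr = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
attn_rl = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

left = torch.randn(8, T, d)          # encoded left-wrist IMU window
right = torch.randn(8, T, d)         # encoded right-wrist IMU window

l2r, _ = attn_lr(query=left, key=right, value=right)
r2l, _ = attn_rl(query=right, key=left, value=left)
fused = torch.cat([l2r.mean(1), r2l.mean(1)], dim=-1)  # (8, 2*d) -> classifier head
```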
260421.0084
Dissipative Latent Residual Physics-Informed Neural Networks for Modeling and Identification of Electromechanical Systems
Long · Solak · Ajoudani
DiLaR-PINN is a physics-informed neural network that learns unmodeled dissipative effects in electromechanical systems by constraining residual terms to be energy-dissipating rather than energy-injecting. The method combines first-principles models with data-driven components that operate on latent states and guarantee physically consistent energy behavior.
Formal Sciences
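A minimal sketch of the dissipativity constraint: if the learned residual force has the form $f_{\mathrm{res}} = -d(z)\,\dot{q}$ with $d(z) \ge 0$, its power $\dot{q}\,f_{\mathrm{res}}$ is nonpositive by construction, so the correction can only remove energy. The latent state and network shapes are illustrative assumptions, not the paper's parameterization.

```python
# Dissipative-by-construction residual: softplus keeps the damping
# coefficient nonnegative, so the residual never injects energy.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_net = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))

def residual_force(z, qdot):
    d = F.softplus(d_net(z))         # d(z) >= 0
    return -d * qdot

z = torch.randn(16, 8)               # latent state
qdot = torch.randn(16, 1)            # generalized velocity
power = (qdot * residual_force(z, qdot)).sum(dim=-1)
assert torch.all(power <= 1e-6)      # energy only ever leaves the system
```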
260421.0086
Incremental learning for audio classification with Hebbian Deep Neural Networks
Casciotti · Santis · Antonietti +1
This paper applies Hebbian learning principles to audio classification in a continual learning setting, introducing a kernel plasticity approach that selectively updates network weights, balancing the acquisition of new sounds against retention of previously learned ones. On the ESC-50 dataset, the method achieves 76.3% accuracy across five incremental learning steps, significantly outperforming a baseline approach.
Formal Sciences
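A toy sketch of a gated Hebbian update in the spirit of the summary above: weights change in proportion to joint pre/post activity, and a per-weight plasticity gate freezes the weights that carried the last task. The gating rule is invented for illustration and is not the paper's kernel plasticity method.

```python
# Gated Hebbian update: delta_W = eta * gate * outer(post, pre); after a
# task, the most-updated weights are frozen to protect learned sounds.
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((16, 32))   # post x pre weights
plastic = np.ones_like(W)                  # 1 = free to learn, 0 = frozen
eta = 0.01

x = rng.standard_normal(32)                # presynaptic activity
y = W @ x                                  # postsynaptic activity
h = np.outer(y, x)                         # Hebbian coincidence term
W += eta * plastic * h                     # update only unfrozen weights
plastic[np.abs(h) > np.quantile(np.abs(h), 0.9)] = 0.0   # freeze top 10%
```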