✨ TL;DR
This paper applies Hebbian learning principles to audio classification in a continual learning setting, introducing a kernel plasticity approach that selectively updates network weights to balance learning new sounds against retaining knowledge of previously learned ones. On the ESC-50 dataset, the method achieves 76.3% accuracy across five incremental learning steps, significantly outperforming a baseline approach.
Deep neural networks typically suffer from catastrophic forgetting when learning new tasks sequentially—they lose performance on previously learned tasks when trained on new data. This is particularly problematic for continual or lifelong learning scenarios where systems need to incrementally acquire new knowledge without forgetting old information. While humans naturally perform lifelong learning, standard deep learning approaches struggle with this capability, especially in domains like audio classification where models need to continuously adapt to new sound categories.
The authors propose a Hebbian learning-based approach with kernel plasticity for incremental audio classification. The method selectively modulates network kernels (weights) during incremental learning steps: some kernels are actively updated to learn new information while others are protected to retain previous knowledge. This selective plasticity mechanism is inspired by biological learning processes in the brain, where synaptic connections strengthen or weaken based on correlated neural activity. The approach is evaluated on the ESC-50 environmental sound classification dataset, divided into five incremental learning steps to simulate continual learning scenarios.
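The selective plasticity idea can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's implementation): kernels are scored by their Hebbian co-activity on old-task data, the most relevant ones are frozen, and a simple Hebbian update (post-synaptic activity times input) is applied only to the remaining plastic kernels. The scoring rule, the `keep_frac` threshold, and the dense toy layer standing in for convolutional kernels are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": 8 kernels over a 16-dim input (stand-ins for conv kernels).
n_kernels, in_dim = 8, 16
W = rng.normal(size=(n_kernels, in_dim))

def kernel_relevance(W, x_old):
    """Score each kernel by its mean (ReLU) response on old-task data:
    strong, consistent responses suggest the kernel encodes prior knowledge.
    (Hypothetical scoring rule, for illustration only.)"""
    post = np.maximum(x_old @ W.T, 0.0)       # (batch, n_kernels) responses
    return post.mean(axis=0)                  # one relevance score per kernel

def plastic_update(W, x_new, relevance, keep_frac=0.5, lr=0.01):
    """Freeze the top keep_frac most relevant kernels; apply a Hebbian
    update (dW_k ∝ post_k * pre) only to the plastic ones."""
    n_keep = int(len(relevance) * keep_frac)
    frozen = np.argsort(relevance)[-n_keep:]  # indices of protected kernels
    post = np.maximum(x_new @ W.T, 0.0)
    dW = lr * (post.T @ x_new) / len(x_new)   # Hebbian: post * pre, batch-averaged
    dW[frozen] = 0.0                          # protected kernels stay fixed
    return W + dW, frozen

x_old = rng.normal(size=(64, in_dim))         # data from earlier learning steps
x_new = rng.normal(size=(64, in_dim))         # data from the new step

rel = kernel_relevance(W, x_old)
W_new, frozen = plastic_update(W, x_new, rel)

# Protected kernels are unchanged; plastic kernels absorb the new task.
assert np.allclose(W_new[frozen], W[frozen])
```

In a real network the same gating would be applied per convolutional filter across the five ESC-50 incremental steps, with the relevance scores recomputed after each step so that kernels recruited for new classes become protected in turn.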