
Equilibrium-Based Learning Dynamics in Spiking Architectures
Neuromorphic computing platforms rely on computational primitives in which the "artificial" neurons and synapses mimic their biological counterparts with a much greater degree of bio-fidelity than standard artificial neural networks. Termed "Spiking Neural Networks" (SNNs), these models represent a significant shift from standard deep learning frameworks, since they process information temporally through binary spike events. While the power-consumption benefits of neuromorphic computing platforms are apparent, owing to sparse, event-driven computation, scaling such computing schemes to large-scale machine learning tasks has been challenging.
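To make the spiking abstraction concrete, below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of bio-inspired primitive such platforms build on. The model form and all parameter values here are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

def lif_neuron(input_current, v_th=1.0, tau=20.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron (illustrative parameters).

    The membrane potential leaks toward rest while integrating input;
    crossing the threshold emits a binary spike and resets the potential,
    so the output is a sparse, event-driven spike train rather than a
    continuous activation.
    """
    decay = np.exp(-dt / tau)               # per-step leak factor
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = decay * v + i_t                 # leaky temporal integration
        if v >= v_th:                       # threshold crossing
            spikes[t] = 1.0                 # binary spike event
            v = 0.0                         # hard reset
    return spikes

# A noisy constant drive yields a sparse, fairly regular spike train.
rng = np.random.default_rng(0)
out = lif_neuron(0.2 + 0.05 * rng.standard_normal(200))
print(f"{int(out.sum())} spikes over 200 timesteps")
```

The binary, event-driven output is what makes hardware implementations sparse and power-efficient, and also what makes gradient-based training nontrivial.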
The talk will delve into recent algorithmic explorations in my group aimed at generating SNNs with deep architectures and achieving state-of-the-art results on complex machine learning tasks such as natural language processing. We will explore methodologies that treat spiking architectures as continuously evolving dynamical systems, revealing intriguing parallels with learning dynamics in the brain. We will discuss methods such as Equilibrium Propagation and Implicit Differentiation, among others, that address key challenges in training spiking architectures, highlighting the need for bio-plausible local learning and improved model scalability.
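As a flavor of this dynamical-systems view, here is a hedged, minimal sketch of Equilibrium Propagation in its original non-spiking form (Scellier and Bengio, 2017): the network relaxes to a fixed point of its dynamics, a weakly clamped phase nudges the output toward the target, and weights are updated from purely local contrasts between the two phases. The network size, activation, and hyperparameters are illustrative, and the continuous-valued units stand in for the spiking dynamics discussed in the talk.

```python
import numpy as np

rho = lambda s: np.clip(s, 0.0, 1.0)    # hard-sigmoid activation

def relax(x, h, y, w1, w2, beta=0.0, target=None, steps=60, eps=0.2):
    """Settle the hidden/output states to a fixed point of the dynamics.

    With beta > 0, the output is weakly nudged toward the target:
    the 'clamped' phase of Equilibrium Propagation.
    """
    for _ in range(steps):
        dh = -h + rho(x @ w1 + rho(y) @ w2.T)    # drive from both neighbors
        dy = -y + rho(rho(h) @ w2)
        if beta > 0.0:
            dy += beta * (target - y)            # weak output clamping
        h, y = h + eps * dh, y + eps * dy
    return h, y

def eqprop_step(x, target, w1, w2, beta=0.5, lr=0.1):
    """One EqProp update: contrast local correlations at the two fixed points.

    The update is Hebbian-style and local -- no errors are backpropagated.
    """
    h = np.zeros(w1.shape[1]); y = np.zeros(w2.shape[1])
    h0, y0 = relax(x, h, y, w1, w2)                              # free phase
    hb, yb = relax(x, h0, y0, w1, w2, beta=beta, target=target)  # nudged phase
    w1 += (lr / beta) * (np.outer(x, rho(hb)) - np.outer(x, rho(h0)))
    w2 += (lr / beta) * (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0)))
    return w1, w2

# Toy usage: learn a fixed input-target association.
rng = np.random.default_rng(0)
w1 = 0.1 * rng.standard_normal((4, 8))
w2 = 0.1 * rng.standard_normal((8, 2))
x, t = np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(200):
    w1, w2 = eqprop_step(x, t, w1, w2)
y_free = relax(x, np.zeros(8), np.zeros(2), w1, w2)[1]
print("output after training:", np.round(y_free, 2))
```

Because each weight update depends only on the activities of the two neurons a synapse connects, measured at the free and nudged equilibria, the rule is local in the bio-plausible sense emphasized above.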
The methodologies discussed enable spiking architectures to move beyond simple vision tasks to complex sequence learning problems and large language model (LLM) architectures. Beyond spiking neural network algorithms, the talk will extend to a broader spectrum of bio-plausible algorithms, including dynamic networks and probabilistic AI, to enable robust and accurate ML systems capable of lifelong learning.