Friedemann Zenke: Functional spiking neural network models - From end-to-end optimization to plausible learning rules
| When | Apr 27, 2021, from 05:15 PM to 06:00 PM |
|---|---|
| Where | Zoom Meeting. The Meeting ID and password will be sent with the e-mail invitation; you can also contact Fiona Siegfried for them. |
| Contact Name | Fiona Siegfried |
Abstract
Our brains use spiking neural networks to process information, but how synaptic connectivity translates into complex information-processing capabilities remains poorly understood theoretically. A common remedy for artificial neural networks is to rely on end-to-end optimization to find the required connectivity. However, this approach fails in spiking network models because the spiking nonlinearity is not differentiable.
In my talk, I will introduce the notion of surrogate gradients, which sidesteps this problem, and illustrate their effectiveness on several complex tasks. When high spiking activity is penalized during optimization, networks exhibit sparse activity reminiscent of neurobiology, while their computational capabilities remain unaffected down to a critical sparsity limit. Further, we will see that learning is robust to the choice of surrogate and self-correcting for imperfections in the underlying computational substrate. Finally, I will show how practical approximations relate surrogate gradients to biologically plausible three-factor learning rules with little impact on their effectiveness.
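To make the core idea concrete: a spike is a step function, so its true derivative is zero almost everywhere and gradient descent stalls. A surrogate gradient keeps the non-differentiable spike in the forward pass but substitutes a smooth function for its derivative in the backward pass. The following is a minimal illustrative sketch, not code from the talk; the fast-sigmoid surrogate shape and the parameters `theta` and `beta` are assumptions chosen for clarity.

```python
import numpy as np

def spike(u, theta=1.0):
    # Forward pass: the true, non-differentiable spiking nonlinearity
    # (Heaviside step at threshold theta).
    return float(u >= theta)

def surrogate_grad(u, theta=1.0, beta=10.0):
    # Backward pass: a smooth stand-in for the step's derivative
    # (fast-sigmoid shape; beta controls sharpness -- illustrative choice).
    return 1.0 / (1.0 + beta * abs(u - theta)) ** 2

# Toy gradient descent on a single weight: drive a silent neuron to spike.
w = 0.5        # synaptic weight (hypothetical starting value)
x = 1.5        # constant input current
target = 1.0   # desired output: one spike
lr = 0.1
for _ in range(20):
    u = w * x                  # membrane potential (no dynamics, for clarity)
    s = spike(u)               # forward uses the real step function
    # Backward replaces dS/du (zero almost everywhere) with the surrogate,
    # so the error signal can actually move the weight.
    grad_w = (s - target) * surrogate_grad(u) * x
    w -= lr * grad_w
```

With the true derivative, `grad_w` would be zero on every step and `w` would never change; the surrogate lets the subthreshold neuron climb toward threshold until it spikes.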
About the speaker
Friedemann Zenke
Hosted by
Stefan Rotter