Gradient Descent Optimization of Embodied SNNs

Introducing a Scalable, Biologically Representative Closed-Loop Model for Motor Control Simulation


Abstract

This study introduces an approach to optimizing large-scale embodied spiking neural networks (SNNs) for simulating the brain in a closed-loop environment, which is crucial for validating theoretical neuroscience hypotheses about the brain-body relationship. Accurately modeling this relationship at scale allows for the simulation of neural plasticity, temporal dynamics, and spike timing. Traditional parameter-tuning methods are impractical for complex SNNs due to their non-differentiable nature and computational cost. To address these issues, we apply gradient descent optimization with forward propagation through time, enhanced by surrogate gradient techniques, enabling efficient and scalable SNN tuning. We demonstrate this approach with a proof-of-concept system comprising a three-layer leaky-integrate-and-fire neural network with recurrent connections, integrated with a 2D musculoskeletal model using Hill-type muscle representations. All components are fully differentiable, allowing gradients to be calculated through the entire system. The results show that the weights are updated and the performance of the embodied SNN improves as it learns to stabilize the arm's angle at zero degrees. Together with the improved motor-control behavior, these results indicate that the optimization approach can handle the non-linearities of the muscle model. Spike activity shows representative firing frequencies during the training process. The system operates within constant memory constraints and has a well-structured, easily adjustable software architecture that enables scalability. These findings support gradient descent optimization with forward propagation through time as a viable and scalable approach for embodied SNNs in motor-control simulations, paving the way for more extensive applications such as closed-loop cerebellum modeling.
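To make the core technique concrete, the sketch below shows a minimal leaky-integrate-and-fire (LIF) update with a fast-sigmoid surrogate gradient, the standard trick for differentiating through the non-differentiable spike threshold. This is an illustrative NumPy sketch under assumed parameter values (`tau`, `v_th`, `beta` are hypothetical defaults), not the system described in the study.

```python
import numpy as np

def lif_step(v, i_in, tau=10.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky-integrate-and-fire neuron.

    Membrane potential v decays toward 0 and integrates input
    current i_in; a spike is emitted when v crosses v_th, after
    which v is reset to 0.
    """
    v = v + (dt / tau) * (-v + i_in)       # leaky integration
    spike = float(v >= v_th)                # Heaviside: non-differentiable
    v = v * (1.0 - spike)                   # reset on spike
    return v, spike

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Fast-sigmoid surrogate for d(spike)/d(v).

    Replaces the zero-almost-everywhere derivative of the Heaviside
    with a smooth bump peaked at the threshold, so gradient descent
    can propagate error through spiking layers.
    """
    return 1.0 / (1.0 + beta * np.abs(v - v_th)) ** 2

# Constant suprathreshold input drives the neuron to fire repeatedly.
v, spikes = 0.0, []
for _ in range(50):
    v, s = lif_step(v, i_in=2.0)
    spikes.append(s)
```

In a full forward-propagation-through-time setup, `surrogate_grad` would be registered as the custom backward pass of the threshold operation, and parameter updates would be applied online at each simulation step rather than after unrolling the whole trajectory.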