The field of Computing has been a significant catalyst for innovation across various segments of our lives. Computational neuroscience keeps demanding increased performance to implement powerful simulators able to closely approximate brain behavior using complex mathematical models. This has resulted in various High-Performance Computing systems able to accelerate such simulation workloads. One of the challenges is porting these applications to massively parallel accelerators, which requires significant time and effort for design and debugging. The primary task of this thesis is to optimize an existing hardware library for neural simulation. This library uses one of the most widely used biophysically meaningful neuron models, the Hodgkin-Huxley model. The optimizations are performed while following a design methodology for accelerating applications on Maxeler’s Data-Flow Engines (DFEs). A DFE is an FPGA-based accelerator incorporating a top-of-the-line reconfigurable device surrounded by high-bandwidth, large-capacity on-card memory. This work focused on the fully extended model, which had room for performance improvements. The result is an optimized model that takes advantage of the FPGA’s capabilities and achieves up to a 2.66x speedup over the previous implementation. The key to this speedup is the use of fixed-point arithmetic, which provides a 2x speedup compared to the optimized floating-point version. Additionally, the model is implemented as multiple kernels so that it can be scaled across multiple DFEs to achieve even greater performance.
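To make the fixed-point claim concrete, the following is a minimal illustrative sketch in C, not taken from the thesis or the Maxeler toolchain. It assumes a hypothetical Q16.16 format (16 integer bits, 16 fractional bits) and shows one explicit-Euler update of a Hodgkin-Huxley gating variable carried out entirely in integer arithmetic, which is the kind of operation that typically maps to cheaper logic on an FPGA than floating point.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: the thesis' actual fixed-point formats and
 * DFE kernel structure are not shown here. A Q16.16 format is assumed
 * purely for demonstration. */
typedef int32_t q16_16;
#define FRAC_BITS 16
#define TO_Q(x)   ((q16_16)((x) * (1 << FRAC_BITS)))
#define TO_D(x)   ((double)(x) / (1 << FRAC_BITS))

/* Fixed-point multiply: widen to 64 bits, then shift back down. */
static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * (int64_t)b) >> FRAC_BITS);
}

/* One explicit-Euler step of a Hodgkin-Huxley gating variable:
 * m += dt * (alpha * (1 - m) - beta * m), all in fixed point. */
static q16_16 gate_step(q16_16 m, q16_16 alpha, q16_16 beta, q16_16 dt) {
    q16_16 dm = q_mul(alpha, TO_Q(1.0) - m) - q_mul(beta, m);
    return m + q_mul(dt, dm);
}

int main(void) {
    q16_16 m     = TO_Q(0.05);  /* initial gate activation (example value) */
    q16_16 alpha = TO_Q(0.22);  /* example opening rate                    */
    q16_16 beta  = TO_Q(4.0);   /* example closing rate                    */
    q16_16 dt    = TO_Q(0.01);  /* example integration step                */
    for (int i = 0; i < 5; i++) {
        m = gate_step(m, alpha, beta, dt);
        printf("step %d: m = %f\n", i + 1, TO_D(m));
    }
    return 0;
}
```

In a dataflow implementation, updates of this form would be unrolled into a deep pipeline, so replacing floating-point operators with narrow integer ones frees FPGA resources and shortens the critical path, which is consistent with the 2x speedup reported above.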