AI on Low-Cost Hardware

FPGA subgroup

Abstract

Over the past decades, much progress has been made in the field of AI, and many algorithms now exist that reach very high accuracies. Unfortunately, many of these algorithms are resource-intensive, which puts them out of reach of low-cost devices.
The aim of this thesis is to explore algorithms and neural network techniques suitable for implementation on FPGAs. While FPGAs provide almost complete control over all aspects of design, allowing for the development of high-performance systems, they have not gained widespread popularity in neural network development due to their limited accessibility compared to computers and microcontrollers.
In this thesis, an inference-only 8-bit quantized neural network is designed, implemented, and deployed on the Digilent ZedBoard, and its performance is compared to that of similar networks on other devices. The thesis then focuses on two learning algorithms: Forward-Forward learning and Hebbian learning. It is shown how Forward-Forward can be viewed as a way of applying Hebbian learning rules, and a simplified algorithm is proposed for use in a quantized system and implemented on an FPGA.
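To give a sense of what 8-bit quantization involves, the sketch below shows one common scheme, symmetric per-tensor quantization with a single scale factor. This is an illustrative assumption; the abstract does not specify which quantization scheme the thesis uses, and the function names are invented for the example.

```python
def quantize_int8(values):
    """Symmetric per-tensor 8-bit quantization (one common scheme,
    assumed for illustration): map floats to [-127, 127] with a
    single scale factor. Assumes at least one non-zero value."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [qi * scale for qi in q]

# Inference-only use: quantize the weights once, then compute
# with small integers on the device.
weights = [0.5, -1.2, 0.03, 0.9]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
```

The round-trip error of this scheme is bounded by the scale factor, which is what makes 8-bit inference viable when the network tolerates small perturbations of its weights.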
Although the network's accuracy is modest, reaching only 90.5% on the MNIST dataset and 74.2% on Fashion MNIST, the results are promising enough to provide grounds for further research and show that even heavily simplified versions of the Forward-Forward algorithm are capable of learning.
Moreover, the work demonstrates that the Forward-Forward algorithm is suitable for FPGA implementation.
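The connection between Forward-Forward and Hebbian learning mentioned above can be sketched as follows: each layer maximizes a local "goodness" (the sum of squared activations) on positive data and minimizes it on negative data, and the resulting weight update is an outer product of post- and pre-synaptic activity, i.e. a Hebbian-style rule. The sketch below is a floating-point toy, not the thesis's quantized FPGA implementation; the layer size, learning rate, and function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy layer: 784 inputs, 16 units (sizes are illustrative).
W = rng.normal(scale=0.1, size=(16, 784))
lr = 0.03  # learning rate, chosen arbitrarily for the example

def goodness(x):
    """Forward-Forward 'goodness': sum of squared ReLU activations."""
    h = np.maximum(W @ x, 0.0)
    return h, float(np.sum(h * h))

def ff_update(x, positive):
    """Local, layer-wise update: raise goodness on positive data,
    lower it on negative data. The gradient of sum(h^2) w.r.t. W is
    2 * h * x^T (the ReLU mask is absorbed because h = 0 where the
    pre-activation is negative), so the step is a Hebbian outer
    product of post- and pre-synaptic activity."""
    global W
    h, g = goodness(x)
    sign = 1.0 if positive else -1.0
    W += sign * lr * np.outer(h, x)
    return g
```

Because the update is purely local to each layer, no backward pass through the network is needed, which is one reason the algorithm maps well onto FPGA hardware.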
Both implementations show that the processing speed on the FPGA far exceeds that of comparable network implementations on other devices.