A Survey on Distributed Tiny Machine Learning
Exploring Techniques, Applications, Challenges, and Future Directions in Distributed Tiny Machine Learning
Abstract
The explosive growth in data collection driven by the proliferation of interconnected devices necessitates novel approaches to data processing. Traditional centralised data processing methods are increasingly inadequate due to the sheer volume of data generated. Distributed Tiny Machine Learning (DTL) offers a compelling solution by distributing machine learning tasks across multiple edge devices and processing data locally, thus minimising the need for data transmission to central servers. This approach is particularly beneficial in scenarios with limited network bandwidth and stringent privacy requirements, as it enhances data security and eases compliance with privacy regulations. The advent of 6G networks, with their promise of unprecedented speed, capacity, and reliability, can further amplify the power of DTL. By providing higher bandwidth and lower latency, 6G enables more efficient data processing and communication among edge devices, thereby improving the overall performance and scalability of DTL systems. This integration supports real-time decision-making for applications such as autonomous vehicles, smart cities, and healthcare monitoring.
This paper presents a comprehensive survey of the state of the art in DTL, categorising the scientific literature, mapping the ecosystem and tooling, and addressing performance, efficiency, and scalability challenges on ultra-low-power devices within a 6G context. Additionally, it implements and benchmarks two DTL algorithms, providing practical insights into their effectiveness and operational viability.