Design Patterns for Detecting and Mitigating Bias in Edge AI
Abstract
From smartphones to speakers and watches, Edge AI is deployed on billions of devices to process large volumes of personal data efficiently, privately, and in real time. While Edge AI applications are promising, many recent incidents of bias in AI systems caution that Edge AI, too, may systematically discriminate against groups of people based on their gender, race, age, accent, nationality, and other personal attributes. This risk is heightened because the physical restrictions of Edge AI, together with the complexity of its heterogeneous and decentralised operating environment, pose trade-offs when deploying AI to the edge.
This thesis is motivated by the societal demand for trustworthy AI, by the propensity of AI systems to be biased, and consequently by the need to detect and mitigate bias in diverse Edge AI applications. To address this need, this thesis develops design patterns for detecting and mitigating bias in the development of Edge AI systems. The design patterns present a generalisable approach to capturing established practices for detecting and mitigating bias in machine learning. They make this knowledge readily accessible to researchers and practitioners who develop Edge AI but have limited prior experience with detecting and mitigating bias.