SPATIAL

Practical AI Trustworthiness with Human Oversight

Abstract

We demonstrate SPATIAL, a proof-of-concept system that augments modern applications with capabilities to analyze the trustworthiness properties of AI models. Practical analysis of these properties is key to guaranteeing the safety of users, and of society at large, when interacting with AI-driven applications. SPATIAL implements AI dashboards that introduce human-in-the-loop capabilities into the construction of AI models, allowing different stakeholders to obtain quantifiable insights that characterize the decision-making process of the AI. Stakeholders can then use this information to understand issues that influence the performance of AI models, so that human operators can resolve them. Through rigorous benchmarks and experiments in a real-world industrial application, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining the systems that implement AI. Our work paves the way towards augmenting modern applications with trustworthy AI mechanisms and human oversight approaches.

Files

SPATIAL_Practical_AI_Trustwort... (pdf, 0.724 MB)
File under embargo until 22-02-2025