Can Ontologies help making Machine Learning Systems Accountable?

Semantic technologies have an as-yet untapped potential to fill gaps and address open challenges on the way towards trustworthy AI systems. This interesting state-of-the-art article aims to pave the way for future research in this direction (AI-PROFICIENT.EU).

Extended Abstract

Even though Artificial Intelligence (AI) technologies are rather mature nowadays, according to McKinsey their adoption, deployment and application are not as widespread as might be expected. This could be attributed to many barriers, including cultural ones, but above all to the lack of trust of potential users in such AI systems. The different factors that affect users’ trust in AI systems have been studied in the literature. Some of these factors relate to the so-called Explainable Artificial Intelligence (XAI), which refers to the “techniques that enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”. However, the explainability of AI systems is necessary but far from sufficient for understanding them and holding them accountable. Therefore, in order to develop trustworthy AI systems, they should be not only explainable but also accountable. Accountability can be defined as the ability to determine whether a decision was made in accordance with procedural and substantive standards, and to hold someone responsible if those standards are not met. This means that, with an accountable AI system, the causes that led to a given decision can be discovered, even if its underlying model’s details are not fully known or must be kept secret.
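As a purely illustrative sketch, not taken from the original article, the following Python snippet shows one way an ontology could support this kind of accountability: it uses the rdflib library and the W3C PROV-O provenance ontology to record which input data and which model version produced a given decision, so those causes remain traceable even if the model itself is a black box. All resource names (the "ex:" namespace, the decision, the model identifier) are hypothetical placeholders.

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import XSD

    # PROV is the W3C provenance ontology; EX is a hypothetical
    # placeholder namespace used only for this sketch.
    PROV = Namespace("http://www.w3.org/ns/prov#")
    EX = Namespace("http://example.org/accountability#")

    g = Graph()
    g.bind("prov", PROV)
    g.bind("ex", EX)

    # The decision is modelled as a prov:Entity generated by a
    # prov:Activity (the prediction run), which used the input data
    # and was associated with a specific model version.
    decision = EX.decision_42
    prediction_run = EX.prediction_run_42
    input_data = EX.applicant_record_42
    model = EX.credit_model_v3

    g.add((decision, RDF.type, PROV.Entity))
    g.add((prediction_run, RDF.type, PROV.Activity))
    g.add((input_data, RDF.type, PROV.Entity))
    g.add((model, RDF.type, PROV.Agent))

    g.add((decision, PROV.wasGeneratedBy, prediction_run))
    g.add((prediction_run, PROV.used, input_data))
    g.add((prediction_run, PROV.wasAssociatedWith, model))
    g.add((prediction_run, PROV.endedAtTime,
           Literal("2021-05-17T10:32:00", datatype=XSD.dateTime)))

    # The resulting graph can later be queried (e.g. with SPARQL) to
    # answer accountability questions such as "which data and which
    # model version produced this decision?"
    print(g.serialize(format="turtle"))

Because the metadata lives in a standard ontology rather than inside the model, such a record can in principle be inspected and queried by auditors without exposing the model's internals.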

by Iker Esnaola-Gonzalez

This article has been extracted from AI-PROFICIENT.EU