Explaining the Decisions of Artificial Intelligence Models in Manufacturing
01 July 2021
This article is published with the kind permission of STAR-EU, an ICT-38 Project Cluster partner.
The Fourth Industrial Revolution (Industry 4.0) has resulted in the automation of many manufacturing processes. Artificial Intelligence (AI) models deliver impressive performance in a variety of industrial use cases, such as predictive quality management, effective human-robot collaboration and agile production.
However, such high accuracy often comes at the cost of low interpretability. Interpretability, or explainability, refers to the ability to explain and express an AI model's behaviour in an intuitive manner. In real-world applications, high-performing AI solutions typically rely on models with huge numbers of parameters and non-linear transformations, which makes their internal workings extremely complex. As a result, AI models tend to operate as "black boxes" whose inner processes offer little clarity, especially to non-IT experts and other stakeholders, thus generating an issue of trust. The field of Explainable Artificial Intelligence (XAI) has been touted as a way to enhance the transparency of Machine Learning (ML) models and support human understanding of their decisions.
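To make the idea of explaining a black-box model concrete, the sketch below applies permutation feature importance, one of the simplest model-agnostic XAI techniques, to a classifier. This is an illustrative choice rather than a method prescribed by the article, and the dataset and sensor-style feature names are hypothetical stand-ins for a predictive-quality use case.

    # Minimal sketch: probing a "black-box" classifier with permutation
    # feature importance (scikit-learn). Data and feature names are
    # hypothetical stand-ins for manufacturing sensor readings.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a predictive-quality dataset:
    # each row is a production cycle, the label marks a defective part.
    X, y = make_classification(n_samples=1000, n_features=5,
                               n_informative=3, random_state=0)
    feature_names = ["spindle_speed", "feed_rate", "coolant_temp",
                     "vibration_rms", "tool_wear"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy:
    # the larger the drop, the more the model relies on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, result.importances_mean,
                        result.importances_std), key=lambda t: -t[1])
    for name, mean, std in ranked:
        print(f"{name:15s} {mean:.3f} +/- {std:.3f}")

The printed ranking is a simple, global explanation that a non-IT stakeholder can read: it states which inputs the model actually relies on, without exposing its internal parameters.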