We have partnered with the Know-Center to explore AI transparency and explainability.
In recent years, AI-based systems have been applied to a broad range of tasks. Many of these tasks rely on AI-made decisions that are highly sensitive in nature and pose potential risks to the well-being of individuals or social groups. Examples include approving loans, managing hiring processes and making health-related decisions. For AI to be used appropriately in these and other sensitive situations, AI-made decisions must be understandable and reasonable to human beings.
Transparency can be defined as the understandability of a specific AI system – how well we know what happens in which part of the system. It can act as a mechanism that facilitates accountability (Lepri et al. 2018). Explainability is a closely related concept (Lepri et al. 2018; Larsson and Heintz, 2020) and refers to retrospectively providing information on the logic, process, factors or reasoning on which the AI system’s actions are based. Explainable AI (XAI) can be achieved in various ways, for example, by adapting existing AI systems or by developing AI systems that are explainable by design. These techniques are commonly referred to as “XAI methods”.
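To make the idea of a system that is “explainable by design” concrete, here is a minimal sketch: a linear scoring model whose decision can be decomposed, exactly, into per-feature contributions. The loan-approval setting, feature names and weights are hypothetical and chosen purely for illustration, not taken from the white paper.

```python
def explain_linear_decision(weights, features, bias=0.0):
    """Return the score and each feature's contribution (weight * value).

    Because the model is linear, the score is exactly the bias plus the
    sum of the contributions, so the explanation is faithful by design.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical weights and applicant data (illustration only).
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, contributions = explain_linear_decision(weights, applicant, bias=-1.0)
decision = "approve" if score > 0 else "reject"
# The contributions show *why*: income and tenure push the score up,
# debt pushes it down, and the net score determines the decision.
```

For complex models such as deep networks, this kind of exact decomposition is generally not available, which is why post-hoc XAI methods approximate it instead.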
According to Meske et al. (2022), transparency and explainability in AI pertain to five stakeholder groups:
- AI regulators, who need explanations to test and certify the system
- AI managers, who need explanations to supervise and control the algorithm and its usage, and to ensure the algorithm’s compliance
- AI developers, who use explanations to improve the algorithm’s performance as well as for debugging and verification. This allows them to pursue a structured engineering approach based on cause analysis instead of trial and error
- AI users, who are interested in understanding and comparing the reasoning of the algorithm with their own ways of thinking, to assess validity and reliability
- Individuals affected by AI decisions, who are interested in explainability to evaluate the fairness of a given AI-based decision
Motivated by the importance of the explainability of AI systems for many sensitive real-world tasks, we have co-written a white paper that:
- Provides a high-level overview of the taxonomy of XAI methods
- Reviews existing XAI methods
- Thoroughly discusses possible challenges and future directions
Download it here.
About SGS
We are SGS – the world’s leading testing, inspection and certification company. We are recognized as the global benchmark for sustainability, quality and integrity. Our 99,600 employees operate a network of 2,600 offices and laboratories around the world.