Transparency and Explainability in AI

Quality Insights Volume 19 – July 18, 2024

We have partnered with the Know-Center to explore AI transparency and explainability.

In recent years, AI-based systems have been applied to a broad range of tasks. Many of these tasks involve AI-made decisions of a highly sensitive nature that pose potential risks to the well-being of individuals or social groups, such as approving loans, managing hiring processes and making health-related decisions. For AI to be used appropriately in these and other sensitive situations, AI-made decisions must be understandable and reasonable to human beings.

Transparency can be defined as the understandability of a specific AI system – how well we know what happens in which part of the system. It can serve as a mechanism that facilitates accountability (Lepri et al., 2018). Explainability is a closely related concept (Lepri et al., 2018; Larsson and Heintz, 2020) and refers to providing, after the fact, information on the logic, process, factors or reasoning on which the AI system’s actions are based. Explainable AI (XAI) can be achieved by various means, for example, by adapting existing AI systems or by developing AI systems that are explainable by design. Collectively, these approaches are referred to as “XAI methods”.
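To make the idea of a post-hoc XAI method concrete, the sketch below applies permutation feature importance to an otherwise opaque classifier: each feature is shuffled in turn, and the resulting drop in held-out accuracy indicates how much the model relies on it. This is an illustrative example only and is not taken from the white paper; the dataset, model and library choices (scikit-learn) are our own assumptions.

  # Illustrative sketch of a post-hoc XAI method (not from the white paper):
  # permutation feature importance applied to an opaque classifier.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance
  from sklearn.model_selection import train_test_split

  # Train a "black-box" model on a public dataset (chosen here purely for illustration).
  X, y = load_breast_cancer(return_X_y=True, as_frame=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

  # Explain the trained model after the fact: shuffling an informative feature
  # should noticeably degrade accuracy on the held-out test set.
  result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
  ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
  for name, score in ranked[:5]:
      print(f"{name}: mean accuracy drop {score:.3f}")

Systems that are explainable by design – for example, decision trees or linear models with interpretable coefficients – would make such post-hoc analysis largely unnecessary, because the reasoning can be read directly from the model itself.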

According to Meske et al. (2022), transparency and explainability in AI pertain to five stakeholder groups:

  1. AI regulators, who need explanations to test and certify the system
  2. AI managers, who need explanations to supervise and control the algorithm and its usage, and to ensure the algorithm’s compliance
  3. AI developers, who use explanations to improve the algorithm’s performance as well as for debugging and verification. This allows them to pursue a structured engineering approach based on cause analysis instead of trial and error
  4. AI users, who are interested in understanding and comparing the reasoning of the algorithm with their own ways of thinking, to assess validity and reliability
  5. Individuals affected by AI decisions, who are interested in explainability to evaluate the fairness of a given AI-based decision

Motivated by the importance of the explainability of AI systems for many sensitive real-world tasks, we have co-written a white paper that:

  • Provides a high-level overview of the taxonomy of XAI methods
  • Reviews existing XAI methods
  • Thoroughly discusses possible challenges and future directions

Download it here.

About SGS

We are SGS – the world’s leading testing, inspection and certification company. We are recognized as the global benchmark for sustainability, quality and integrity. Our 99,600 employees operate a network of 2,600 offices and laboratories around the world.
