Human agency and oversight are essential to trustworthy AI, ensuring alignment with ethical norms and safeguarding fundamental rights. Human agency preserves the autonomy of individuals using or affected by AI, fostering informed and unmanipulated interactions, and securing the right to rectify AI-driven decisions. Human oversight, by contrast, entails a supervisory role over AI: monitoring and guiding its learning and actions to prevent harm to health, security, safety, or rights, especially in the high-risk AI scenarios emphasized in the EU AI Act. Our experts explore these concepts and how different levels of human interaction with AI are defined to maintain control and ethical integrity.
Watch our experts Willy Fabritius (Global Head of Strategy & Business Development Information Security, SGS), Dr. Simone Kopeinik (Computer Scientist and Co-Head of Fair AI, Know-Center) and Angela Fessl (Scientific Manager, Know-Center, and Senior Researcher, Graz University of Technology) as they discuss human agency and oversight in AI.
About our “Trustworthy AI: current areas of research and challenges” series
The need for trustworthy Artificial Intelligence (AI) systems is recognized by many organizations, from governments to industry and academia. As AI systems become more widely used by organizations and individuals alike, establishing trust in them is essential. To that end, numerous white papers, proposals and standards have been published, with more in development, to educate organizations on the need for and uses of AI systems. Join us for our series as our experts discuss a variety of topics related to building trust in and understanding of AI systems.
About SGS
We are SGS – the world’s leading testing, inspection and certification company. We are recognized as the global benchmark for sustainability, quality and integrity. Our 99,600 employees operate a network of 2,600 offices and laboratories around the world.