AI systems are increasingly entrusted with decisions that profoundly affect individual and societal well-being. For stakeholders to harness AI responsibly in these sensitive areas, the decisions such systems make must be understandable and justifiable to humans, which makes explainability and transparency essential to trustworthiness. Transparency allows us to comprehend the inner workings of an AI system, facilitating accountability, while explainability sheds light on the logic and reasoning behind its decisions. This white paper examines the significance of explainability, reviews existing approaches, and discusses open challenges and future directions.