This report examines explainable artificial intelligence (XAI) in education, highlighting its role in fostering human oversight and shared responsibility. XAI aims to provide meaningful explanations of AI systems’ decisions, which are crucial for building trust and ensuring ethical implementation in educational contexts. The document explores the technical foundations of XAI, legal frameworks including the AI Act and the GDPR, stakeholder perspectives, and the competencies educators need. Through practical use cases such as AI-powered tutoring systems and lesson plan generators, the report demonstrates how explainability supports transparency, accountability, and human agency. Creating educational AI systems that are transparent, fair, and pedagogically sound requires collaboration among developers, educators, learners, and policymakers. The report concludes with recommendations for implementing XAI in ways that align with educational values while complying with European regulations.