As artificial intelligence (AI) becomes deeply embedded in education, concerns about fairness, bias, and transparency are rising. AI-driven tools, from adaptive learning systems to automated grading platforms, are shaping students' educational journeys and future opportunities. Because the European Union AI Act designates these systems as 'high-risk', it is crucial to ensure they meet stringent fairness and trustworthiness standards.
While AI in education (AIED) offers benefits like personalised learning and predictive analytics, it also raises ethical and regulatory challenges. Algorithms can reinforce biases, misinterpret student potential, or limit opportunities based on opaque decision-making. Without proper oversight, these systems risk exacerbating educational inequalities rather than mitigating them.
Why AI audits matter
As AI rapidly integrates into education worldwide, regulatory gaps persist, particularly in evidencing that AI systems meet minimum appropriate standards and in identifying where attention is needed to protect children's data, safeguard their wellbeing and ensure quality education.
AIED holds immense promise by offering greater inclusivity, timely assessments, and alternative learning provisions. Adaptive technologies adjust study materials based on student progress, while automated grading systems approximate human scoring. AI-driven student support tools help identify at-risk learners, but some systems modify learning paths without notifying students, potentially limiting their control over their own education.
However, AIED remains technologically immature and often overhyped, with serious privacy, security, and fairness risks. Even well-established EdTech tools have been linked to data breaches and cyber risks, while AI-driven decisions may reinforce bias and widen inequalities, harms that are far harder to assess, detect and prevent.
Moreover, a recent UNESCO survey found that while AI use in classrooms is growing, fewer than 10% of educational institutions have formal guidelines for its ethical deployment. This governance gap leaves students and educators exposed to growing risks and accountability challenges as increasingly capable digital systems shape their daily classroom practice.
Introducing the TAI-SDF Framework
Our work, based on the EU-HORIZON project TRUSTEE, aims to bridge this governance and assessment gap by providing a comprehensive, practical framework to assess and improve the fairness of AI-driven educational technologies. It expands on EDDS's framework for auditing and risk-assessing EdTech, since many products now embed increasingly capable algorithmic components.

The Trustworthy AI Support Design Framework (TAI-SDF) is a structured assessment tool that integrates legal, ethical and technical dimensions. This framework ensures that AI systems used in education are:
Legally compliant (aligned with the EU AI Act and data protection regulations)
Ethically sound (focused on fairness, transparency, and accountability)
Technically robust (evaluating security, reliability, and data privacy)
TAI-SDF builds on the Assessment List for Trustworthy Artificial Intelligence (ALTAI) guidelines and the EU AI Act’s high-risk classification (Annex III). However, it goes further by using an AI-driven assistant to automate trustworthiness evaluations, making developer documentation analysis more comprehensive and efficient.
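To make the scoring used in the case studies below concrete, here is a minimal sketch in Python of how 0-10 dimension scores could be mapped to the red/yellow/green ratings reported later. The band thresholds, the `DimensionScore` class and the helper functions are illustrative assumptions of ours, not the published TAI-SDF rubric or the assistant's actual implementation.

```python
# Minimal sketch (not the published TAI-SDF rubric): map 0-10 dimension scores
# to traffic-light bands. The cut-offs below are illustrative assumptions chosen
# to be consistent with the scores and colours reported in the case studies.
from dataclasses import dataclass

BANDS = [(0, 4, "Red"), (5, 6, "Yellow"), (7, 10, "Green")]  # assumed cut-offs

@dataclass
class DimensionScore:
    name: str   # e.g. "AI fairness", "Explainability"
    score: int  # 0-10, assigned from evidence in developer documentation

def band(score: int) -> str:
    """Map a 0-10 score to a traffic-light band."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError("score must be between 0 and 10")

def report(dimensions: list[DimensionScore]) -> None:
    """Print a one-line rating per assessed dimension."""
    for d in dimensions:
        print(f"{d.name} (Score: {d.score}/10 - {band(d.score)})")

# Example: the Century Tech scores reported in case study 1
report([
    DimensionScore("AI robustness", 2),
    DimensionScore("Explainability", 2),
    DimensionScore("Security and privacy", 3),
    DimensionScore("AI fairness", 3),
    DimensionScore("Legal compliance", 5),
])
```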
Case studies: real-world AI evaluations
We apply the TAI-SDF framework to two widely used AI-powered educational platforms: Thrively, an AI tool that assesses students' socio-emotional skills and learning potential, and Century Tech, an adaptive learning system that personalises educational content. TAI-SDF seeks evidence on questions such as:
Does the system compute fairness metrics when evaluating the AI solution?
Does the AI solution explain why it made a particular decision or returned a specific result?
Is the AI solution accessible to persons with disabilities?
Do its developers consider the diversity and representativeness of end-users and/or data subjects?
Did the developers test for specific target groups or problematic use cases?
For both platforms, the fairness assessment revealed gaps in bias mitigation and transparency. The sketch below illustrates the kind of fairness evidence the first question refers to.
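This is a minimal sketch on synthetic data; neither platform publishes the group-level outcomes needed to compute such figures, so the groups, outcomes and the demographic parity difference computed here are purely illustrative.

```python
# Illustrative fairness check on synthetic data: compare the rate of a
# favourable model outcome (e.g. being flagged as 'on track') across two
# student groups. A large gap is a signal that a bias audit is needed.

def selection_rate(outcomes: list[int]) -> float:
    """Share of students who received the favourable outcome."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in favourable-outcome rates between two student groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Synthetic example: 1 = favourable outcome, 0 = not.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # hypothetical student group A
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # hypothetical student group B

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")  # 0.38 on this synthetic data
```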
Case study 1: Century Tech
Century Tech is a widely used adaptive learning platform that personalises educational content using AI-driven models. However, when assessed under TAI-SDF, the platform falls short in several critical areas, particularly those related to bias mitigation, explainability, and security robustness.

AI Robustness (Score: 2/10 - Red)
Utilises knowledge tracing and reinforcement learning models (a generic knowledge-tracing sketch follows this subsection).
Gaps include
No reference to error-handling, edge-case scenarios, or resilience testing.
The lack of human-in-the-loop mechanisms means that AI errors may go unchecked.
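Century Tech's models are proprietary and its documentation does not describe them in detail, so the following is a generic Bayesian Knowledge Tracing update, included only to illustrate the class of technique involved and the kind of edge cases (for example, degenerate guess/slip parameters) that resilience testing and human oversight would need to cover. It is not Century Tech's implementation.

```python
# Generic Bayesian Knowledge Tracing (BKT) update, not Century Tech's code.
# p_known is the current estimate that a student has mastered a skill;
# guess, slip and learn are standard BKT parameters (values here are arbitrary).

def bkt_update(p_known: float, correct: bool,
               guess: float = 0.2, slip: float = 0.1, learn: float = 0.15) -> float:
    """Return the updated mastery probability after one observed answer."""
    if correct:
        numerator = p_known * (1 - slip)
        denominator = p_known * (1 - slip) + (1 - p_known) * guess
    else:
        numerator = p_known * slip
        denominator = p_known * slip + (1 - p_known) * (1 - guess)
    # Edge case a robustness review would probe: a zero denominator can arise
    # from degenerate parameters (e.g. guess = 0 and p_known = 0 on a correct answer).
    posterior = numerator / denominator if denominator > 0 else p_known
    return posterior + (1 - posterior) * learn  # chance of learning after practice

p = 0.3
for answer in (True, True, False, True):
    p = bkt_update(p, answer)
print(f"Estimated mastery after four answers: {p:.2f}")
```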
Explainability (Score: 2/10 - Red)
Some level of transparency in teacher dashboards.
Gaps include
Lacks explainability features—users cannot see why certain content is recommended.
AI recommendations could mislead struggling students if not properly understood.
Security and privacy (Score: 3/10 - Red)
Implements encryption and access controls.
Gaps include
No mention of adversarial robustness testing to protect against system gaming or hacking attempts.
Cross-border data transfers take place, but the documentation lacks clarity on additional security measures.
AI fairness (Score: 3/10 - Red)
Uses adaptive models for individualised learning.
Gaps include
No evidence of bias audits, which could result in disadvantaging certain student demographics.
No clear strategy to ensure AI models work fairly across diverse learning needs.
Legal compliance (Score: 5/10 - Yellow)
Century Tech aligns with the GDPR, implements consent mechanisms, and distinguishes between data controller and data processor roles.
Gaps include
Does not explicitly outline Data Protection Impact Assessments (DPIAs), which are crucial for high-risk AI systems.
Key takeaways and recommendations
Century Tech’s AI model lacks essential safeguards required for high-risk AI systems in education.
Critical weaknesses include bias mitigation, explainability, and robustness against adversarial attacks.
To align with Trustworthy AI principles, Century Tech should:
Conduct regular bias audits to ensure fairness across demographics.
Implement AI explainability features so students and teachers understand recommendations (a sketch of one lightweight approach follows this list).
Introduce security testing against potential AI manipulation or hacking.
Enable human oversight mechanisms to reduce risks of over-reliance on AI.
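As an illustration of the explainability recommendation above, the sketch below shows one lightweight pattern: attaching per-feature contributions to a recommendation produced by a simple linear scorer, so a teacher can see what drove it. The features, weights and scoring logic are invented for illustration; Century Tech's recommendation logic is not public, and production systems would typically rely on a dedicated attribution method.

```python
# Hypothetical explainability sketch: report each feature's contribution to a
# simple linear relevance score so users can see why a topic was recommended.
# Feature names, weights and the student profile are invented for illustration.

WEIGHTS = {
    "recent_error_rate": 1.5,     # struggling recently -> revisit the topic
    "days_since_practised": 0.4,  # topic not seen for a while
    "prerequisite_mastery": 0.8,  # ready to progress
}

def explain_recommendation(student_features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the relevance score and each feature's signed contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in student_features.items()}
    return sum(contributions.values()), contributions

score, reasons = explain_recommendation({
    "recent_error_rate": 0.6,
    "days_since_practised": 5.0,
    "prerequisite_mastery": 0.9,
})
print(f"Recommendation score: {score:.2f}")
for feature, contribution in sorted(reasons.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```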
Case study 2: Thrively
AI-driven educational tools like Thrively aim to personalise learning and help students discover their strengths. However, when assessed under TAI-SDF, Thrively falls significantly short in key areas of fairness, security, and explainability.

Legal compliance (Score: 3/10 - Red)
Thrively collects and processes student data but lacks transparency regarding data protection compliance.
Gaps include
Mentions data collection for personalisation but does not clearly state compliance with privacy laws such as the GDPR or COPPA.
No evidence of Data Protection Impact Assessments (DPIAs) or detailed user control over data deletion requests.
AI Fairness (Score: 2/10 - Red)
Uses AI to assess student strengths and provide recommendations.
Gaps include
No mention of bias mitigation strategies or fairness audits.
No consideration of diversity in training data, leading to risks of algorithmic bias affecting different student groups.
Security and Privacy (Score: 2/10 - Red)
Data is collected and used for personalisation.
Gaps include
No information on encryption, pseudonymisation, or anonymisation techniques.
Lack of transparency on third-party data sharing raises privacy concerns.
Explainability (Score: 1/10 - Red)
Thrively does not explain AI decision-making or how recommendations are generated.
Gaps include
No visibility into why students receive specific assessments or learning pathways.
Lack of explainability mechanisms makes it difficult for teachers and students to trust AI-driven decisions.
AI Robustness (Score: 1/10 - Red)
Utilises AI-driven assessments.
Gaps include
No evidence of security testing to protect against adversarial manipulation.
No mechanisms for flagging bias, discrimination, or errors in the AI system.
No defined protocols for handling AI failures or incorrect recommendations.
Key takeaways and recommendations
Thrively's AI model lacks essential safeguards to meet Trustworthy AI standards. Its major weaknesses lie in AI fairness, transparency and security protections.
To align with best practices, Thrively should:
Introduce fairness audits to prevent algorithmic bias.
Provide clear AI explainability features so users understand AI-driven decisions.
Strengthen privacy protections, ensuring encryption, pseudonymisation or anonymisation, and third-party data security (a minimal pseudonymisation sketch follows this list).
Implement security testing to protect against AI manipulation and data breaches.
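To make the privacy recommendation concrete, here is a minimal pseudonymisation sketch using keyed hashing from Python's standard library. It shows one well-known technique rather than Thrively's practice, and the hard-coded key, field names and record are illustrative assumptions; real deployments need separate key management, key rotation and a broader privacy programme.

```python
# Minimal pseudonymisation sketch: replace student identifiers with keyed hashes
# before analytics processing, so records can still be linked without exposing
# identity. The key is hard-coded only for illustration; store it separately
# (e.g. in a secrets manager) and rotate it in any real deployment.
import hashlib
import hmac

PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"  # illustrative

def pseudonymise(student_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a student identifier."""
    return hmac.new(PSEUDONYMISATION_KEY, student_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record; field names are invented for illustration.
record = {"student_id": "s-102934", "strength_profile": "creative"}
safe_record = {**record, "student_id": pseudonymise(record["student_id"])}
print(safe_record)
```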
The urgency of trustworthiness in AIED
AI-driven educational tools like Century Tech and Thrively showcase the potential of personalised learning, yet both assessments under the TAI-SDF framework highlight critical gaps in fairness, transparency and security. As adaptive learning technologies fall under the high-risk category of the EU AI Act, ensuring they meet legal, ethical and technical standards is not optional—it is a necessity.
For AIED to be truly transformative, developers must prioritise bias mitigation, explainability and robust privacy protections. Without these safeguards, AI risks widening educational inequalities rather than solving them. As AI adoption accelerates in schools, governments, institutions and developers must act now to ensure these technologies are not only effective but also safe, fair and accountable.
This research paper, currently under review, features the work of the Trust & Privacy Preserving Computing Platform For Cross-Border Federation Of Data (TRUSTEE), with Etoile Partners (EDDS) as a consortium partner.
Contact us for an assessment and support enquiry.