Editorial

The ‘delay, deny, defend’ strategy: a lesson for EdTech and AI in education

Updated: Dec 16

When it comes to insurance companies, the phrase ‘delay, deny, defend’ has become infamous as a strategy to minimise claims and avoid financial responsibility. While it’s often seen as a questionable business practice, the same approach can be observed in the ongoing debates about EdTech and AI in education.


As calls for a ‘digital future’ intensify and discussions on the role of AI and technology in education dominate media and academic circles, we can’t help but notice a similar pattern emerging: delay critical discussions, deny evidence or concerns about potential harm, and defend the push for tech-driven education as the only path forward.


Delay: The call for 'more evidence'

In both the insurance industry and the world of education technology, 'more evidence' is often the response when critical issues are raised. Insurance companies delay payouts by requiring additional documentation or evidence, sometimes leading to prolonged claims processes.


In a similar vein, in discussions about incorporating EdTech and AI into educational settings, policymakers, advocates, and those aiming to connect academia with business (one needn’t look further than teacher EdTech ambassador programmes to see how counterproductive this can be) frequently call for ‘more evidence’ before tackling the ethical, social, and practical issues associated with these technologies, or even the question of their desirability.

While gathering evidence is essential, the constant call for more research can become a convenient excuse to delay action: controlling expenditure and adopting only those tools that are actually desirable for a child’s learning. EdTech advocates often argue that technology can level up education, enhance learning, and promote the Sustainable Development Goals (SDGs), yet academic studies and media reports either question the effectiveness of many educational technologies or perpetuate the narrative that we need ‘more scientific evidence’ of what really works. This delays any definitive decisions; meanwhile, commercial apps and platforms rapidly infiltrate classrooms with minimal oversight.


Deny: Pushing a 'digital future' without enough scrutiny

Insurance companies deny claims on the premise that certain incidents were not covered or were misreported. In the EdTech world, we often see a similar denial of the potential downsides and risks of the rush toward digital learning environments. The push for AI and digital tools is often framed as an inevitability, especially in light of narratives about STEM education and the need for future-proof skills. Proponents argue that AI is essential for enriching children’s education, but these arguments often ignore the complexities around data privacy, algorithmic bias, and unequal access to technology, or even a question as personal and contextual as when access and use are desirable at all.

For instance, using an AI-simulated world to teach a child about ecology and the environment might be an excellent, innovative way to explore the subject through something visual, interactive, and immersive. However, is it really desirable to (1) present this topic at this point in a child’s development, and (2) use a technology that contributes to, and will likely exacerbate, the very environmental conditions being studied? No, there isn’t an easy answer.

The enthusiastic push for AI and technology-driven education, packaged as crucial for economic competitiveness and socioeconomic mobility, sometimes glosses over the potential harms that could disproportionately affect vulnerable student populations. Commercial interests and profit-driven motives often shape these narratives, while the unintended consequences of technology (exacerbating inequalities, perpetuating bias, impoverishing the quality of education through over-standardisation, dehumanising it, overloading students cognitively and emotionally, eroding their cultural, language, and family heritage and values, or creating technological dependencies) are downplayed or ignored outright.


Defend: 'Digital is the only path forward'

Finally, insurance companies defend their position by emphasising that their approach is standard practice or simply a matter of policy. In the case of EdTech, we see a similar defence of the status quo, where ‘digital futures for children’ are portrayed as the main viable solution to educational disparities and global challenges. The media, a growing number of academics, policymakers, and global organisations collaborating with influential industry corporations argue that the digitisation of education is critical for addressing issues like education inequity, the digital divide, and preparing students for an increasingly technology-driven world.



Yet this defence overlooks the importance of a more thoughtful and balanced approach. Education is about more than simply providing access to technology or relying on apps and platforms; it’s about fostering environments where children can grow academically, socially, and emotionally. It’s a deeply social experience, a bridge between childhood and adulthood, a place where children should be supported to understand the world and the adults who have shaped it. Education involves learning to recognise ongoing injustices and developing the skills to negotiate, live with, and prevent real-world risks, and to adapt to the complexities of the world children will inherit and come to love. The push for EdTech often sidesteps critical questions about how technology can be used in ways that are meaningful and aligned with children’s development, rather than simply rushing to embed digital tools in every classroom.


A more thoughtful path forward

The pattern of delay, deny, defend in the EdTech and AI debate is not an effective way forward. While digital tools and AI have enormous potential, their integration into education demands caution, critical evaluation, and a commitment to honestly identifying what is actually being proposed: choosing not to do what makes no sense, and having clarity and conscience about such decisions. That requires responsibility.


Digitisation has suddenly become one-size-fits-all: ‘digital futures’ for children. Instead of pushing that vision, we need to create strategies that prioritise children’s well-being and what is desirable, and, to a great extent, identify and (re)learn to trust educators to judge what is desirable for a child at any point in their education. Technologies, the how of something being done, shouldn’t dominate the learning experience or the decisions about what belongs in the classroom or in children’s lives.


Comparing education, and the debate around its digitisation, to insurance companies may be an unfortunate analogy, but education must learn from the mistakes of industries like insurance, where profit and convenience sometimes outweigh accountability and transparency. The future of education shouldn’t be confined to a digital-first mindset. To truly improve education, we need to move beyond oversimplified views of technology’s potential and sometimes simply allow educators to teach, parents to parent, and peers to engage: allowing meaningful learning experiences in any context, regardless of the tools at hand.
