
Why we should be skeptical of the hasty global push to test 15-year-olds’ AI literacy in 2029


13 October 2025




Kathryn Conrad / Datafication / Licensed under CC-BY 4.0

By J-C Couture, University of Alberta; Michele Martini, University of Naples Federico II, and Susan Lee Robertson, University of Cambridge

If 2022 was the year OpenAI knocked our world off course with the launch of ChatGPT, 2025 will be remembered for the frenzied embrace of AI as the solution to everything. And, yes, this includes teaching and schoolwork.

In today’s breakneck AI innovation race, the Organization for Economic Co-operation and Development (OECD), along with the European Commission, has called for the development of unified AI literacy strategies in kindergarten to Grade 12 education.

They have done this through an AI Literacy Framework developed with Code.org, and a range of experts in computational thinking, neuroscience, AI, educational technology and innovation — and with “valuable insights” from the “TeachAI community.”

The “TeachAI community” refers to a larger umbrella project providing web resources targeting teachers, education leaders and “solution providers.” Its advisory committee includes companies such as Meta, OpenAI, Amazon and Microsoft, alongside other for-profit ed-tech providers, international organizations, government educational agencies and not-for-profit groups.

The rush to establish global standards for AI literacy has been further energized by a recent OECD program announcement.

The Programme for International Student Assessment (PISA) — which tests 15-year-old students of member nations in literacy, numeracy and science every three years — is introducing a media and AI literacy assessment in 2029. This is related to what it calls an “innovation domain” of learning.

There have been consultations about the AI literacy framework, but it’s misguided to think that educators and the public at large could comment on it in an informed way when AI has only recently become widely accessible to the public.

The OECD’s hasty push for PISA 2029 threatens to obscure essential questions about the political economy that is enabling the marketing and popularization of AI, including relationships between business markets and states.

Marketing, popularizing AI

Essential questions include: Who stands to benefit most and profit from proliferating AI in education? And what are the implications for young people when national governments and international organizations appear to be actively promoting the interests of private tech companies?

We agree with a growing community of researchers who regard calls for AI literacy as being based on ill-defined and preliminary concepts. For example, the draft framework speaks about four areas of AI literacy competency: engaging with AI, creating with AI, managing AI and designing AI.

As we try to grasp the meaning of terms such as “AI skills” and “AI knowledge,” the educational landscape becomes both vague and confounding. Educators are all too familiar with the legacy, often related to commercialization, of attaching various modifiers to notions of literacy — digital literacy, financial literacy, the list goes on.

‘The future’

By framing AI as a distinct, readily measurable capability, the OECD has signalled that it can impose its own understanding onto AI. School communities around the globe are left with the task of simply accepting and implementing this presumed all-embracing vision of the future, amid profound and alarming existential and practical questions.

Efforts to frame AI literacy as a vehicle to prepare young people for “the future” are a recurring theme of influential global policy bodies like the OECD.

Elsewhere, research has shown how these policy shifts over the past three decades follow a familiar pattern — the OECD functions as an influential policy entity that establishes its own definitions of student progress through standards and benchmarks for assessing the quality of education programs around the globe. In doing so, it imposes a single understanding on what are diverse systems with distinct cultures.

As digital education expert Ben Williamson points out, this burst of “infrastructuring AI literacy” not only involves “building, maintaining and enacting a testing and measurement system” but will also “make AI literacy into a central concern and objective of schooling systems.”

In doing so, it will sideline other important subjects, gear up schools and learners to become uncritical users of AI and turn schools into a testing ground for AI developments.

Lack of discussion around teachers

We also have other concerns.

In our preliminary research, yet to be published, we analyzed the AI Literacy Framework document and found a significant lack of discussion regarding the role of teachers. The document directly mentions teachers only 10 times and schools nine times. By comparison, AI is mentioned 442 times, while learners and students are referenced approximately 126 times.
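For readers curious how such figures are produced, the tally amounts to a simple word-frequency pass over the framework text. Below is a minimal sketch in Python, assuming a plain-text copy of the document saved locally; the file name, term list and matching rules are illustrative assumptions, not the exact procedure used in our analysis.

    import re
    from collections import Counter

    # Illustrative term list (singular stems; the regex below also matches plurals).
    TERMS = ["teacher", "school", "AI", "learner", "student"]

    # Hypothetical file name for a plain-text copy of the AI Literacy Framework.
    with open("ai_literacy_framework.txt", encoding="utf-8") as f:
        text = f.read()

    counts = Counter()
    for term in TERMS:
        # Whole-word, case-insensitive match; the optional "s" catches plurals.
        pattern = rf"\b{re.escape(term)}s?\b"
        counts[term] = len(re.findall(pattern, text, flags=re.IGNORECASE))

    for term, n in counts.most_common():
        print(f"{term}: {n}")

A count like this is only a rough proxy for emphasis, but it makes the imbalance between mentions of AI and mentions of teachers easy to check independently.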

This suggests to us that teachers and formal schooling seem to have been removed from any major role in these frameworks. When they are mentioned, they appear more as a prop to AI than as a critical mediator.

Educators and national education systems are facing a one-size-fits-all solution to a wider societal issue that attempts to defuse, depoliticize and naturalize what ought to be urgent, engaged conversations by teachers and the education profession about AI, education, learning, sustainability and the future.

Current classroom realities

As political theorist Langdon Winner reminded us more than 40 years ago, technologies have politics that rotate around both problems and opportunities. These politics ignore some realities and amplify others.

Well-intended promoters of AI literacy in schools in Canada call for professional development and resources to support the adoption of AI. Yet these aspirations and hopes for positive change need to be contextualized by the current realities Canadian teachers face:

  • 63 per cent of educators report their ministries of education are “not supportive at all;”

  • Nearly 80 per cent of educators report struggling to cope;

  • 95 per cent of educators are concerned that staff shortages are negatively impacting students.

Proceed slowly, with care

Ours is not a call for educators to be Luddites and reject technology. Rather, it’s a call to the profession and the public to collectively question the rush to AI and the current framings of AI literacy as an inevitable policy trajectory and preferred future for education.

Both the limited time frame of the next few months to respond to the AI Literacy Framework — following its May 2025 release — and the pre-emptive decision by the OECD to proceed with its PISA assessment in 2029 signal a race to a finish line.

As with the recent return to school and the annual reminders about the need for caution in school speed zones, we need to avoid distractions — and proceed slowly, with care.

J-C Couture, Adjunct faculty and Associate Lecturer, Department of Secondary Education, University of Alberta; Michele Martini, Lecturer in Sociology of Digital Education, University of Naples Federico II, and Susan Lee Robertson, Chair in Sociology of Education, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.
