AIhub.org
 

We asked teachers about their experiences with AI in the classroom — here’s what they said


05 December 2025




Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays, and a brightly coloured datawave heatmap at the top. Kathryn Conrad / Datafication / Licensed by CC-BY 4.0

By Nadia Delanoy, University of Calgary

Since ChatGPT and other large language models burst into public consciousness, school boards have been drafting policies, universities have been hosting symposiums and tech companies have been relentlessly promoting their latest AI-powered learning tools.

In the race to modernize education, artificial intelligence (AI) has become the new darling of policy innovation. While AI promises efficiency and personalization, it also introduces complexity, ethical dilemmas and new demands.

Teachers, who are at the heart of learning along with students, are watching this transformation with growing unease. For example, according to the Alberta Teachers’ Association, 80 to 90 per cent of educators surveyed expressed concern about AI’s potential negative effects on education.

To understand comprehensive policy needs, we must first understand classrooms — and teachers’ current realities.

As a researcher with expertise in technology-enhanced teaching and learning at the intersections of assessment, leadership and policy, I interviewed teachers from across Canada, together with Erik Sveinson, a Bachelor of Education student. We asked them about their experiences with generative AI (GenAI) in the classroom.

Their stories help contextualize the reality of AI in K-12 classrooms, and offer insights into harnessing AI’s potential without harming education as a human-centred endeavour.

AI policy and teaching wisdom

This qualitative study involved 10 teachers (grades 5 to 12) from Alberta, Saskatchewan, Ontario and British Columbia.

We recruited participants through professional learning networks, teacher associations and district contacts, seeking perspectives from a range of grade levels, subjects and geographic locations.

We thematically coded interview data, and then cross-referenced this with insights from a review of existing research about GenAI use in K-12 classrooms. We highlighted convergences or tensions between theories about assessment, teaching approaches in technology-enhanced environments, student learning and educator practices.

Across interviews, teachers described a widening gap between policy expectations and the emotional realities of classroom practice.

What we heard

The following themes emerged from our interviews:

1. The assessment crisis: Longstanding tools of assessment, such as the essay or the take-home project, have suddenly become vulnerable. Teachers are spending countless hours questioning the authenticity of student work.

All teachers interviewed said they struggled with their current assessment practices and with how students may be using GenAI in their work. Maintaining confidence in the reliability of assessments has been challenging. The majority of teachers said they felt they needed to consider the possibility of students cheating more than ever, given advancing GenAI technology.

2. Equity dilemmas: Teachers see firsthand which students have unlimited access to the latest AI tools at home and which do not.

3. Opportunities and challenges: Teachers perceive both opportunities and challenges with AI. Great teaching is about fostering critical thinking and human connection. Ninety per cent of teachers interviewed faced complex challenges relating to equity and to how best to support critical thinking in the classroom while building foundational knowledge. In particular, middle and high school teachers in core subject areas said students were using GenAI tools in their own time outside class, without ethical guidance.

‘One more thing piled on’

One teacher from central Alberta said:

“AI is definitely helpful for my workflow, but right now it feels like one more thing piled onto an already impossible workload. The policy says, ‘embrace innovation,’ but where’s the guidance and support?”

Classrooms are dynamic ecosystems shaped by emotion, relationships and unpredictability. Teachers manage trauma, neurodiversity, language barriers and social inequities while delivering curriculum and meeting student achievement expectations.

Teachers say there’s little recognition of the cognitive load they already carry, or of the time it takes to vet, adapt and ethically deploy AI tools. They say AI policies often treat educators as passive implementers of technology, rather than as active agents of learning.

A high school teacher from eastern Canada shared:

“AI doesn’t understand the emotional labour of teaching. It can’t see the trauma behind a student’s meltdown. As much as I appreciate professional learning, when it is all about what tools to use, it misses the mark.”

This perspective highlights a broader finding: teachers are not resisting AI per se; they are resisting implementation that disregards their emotional expertise and contextual judgment. They want professional learning initiatives that honour the human and relational dimensions of their work.

Burnout, professional erosion

This disconnect is not just theoretical; it’s emotional. Teachers are reporting burnout, anxiety and a sense of professional erosion. A 2024 study found that 76.9 per cent of Canadian educators felt emotionally exhausted, and nearly half had considered leaving the profession. The introduction of AI, without proper training or support, is compounding that stress.

There’s also a growing fear, reported by the Alberta Teachers’ Association, that without proper implementation and support for teachers new to the profession, AI could deskill teaching.

A teacher in Vancouver shared:

“I am a veteran teacher and understand the fundamentals of teaching. For beginner teachers, when algorithms write report cards or generate lesson plans, what happens to teacher autonomy and the art of teaching?”

Turning teaching into a checklist?

Overall, the interview responses suggest what’s missing from AI policy is a fundamental understanding of teaching as a human-centred profession. As policymakers rush to integrate AI into digitized classrooms, they’re missing a critical truth: technology cannot fix what it may not understand.

Without clear guardrails and professional learning grounded in teacher and student-informed needs, AI risks becoming a tool of surveillance and standardization, rather than empowerment.

This tension between innovation and de-professionalization emerged across many teacher responses. Educators expressed optimism about AI’s potential to reduce workload, but also deep unease about how it could erode their professional judgment and relational roles with students.

A northern Ontario teacher said:

“There is hope with new technology, but I worry that AI will turn teaching into a checklist. We’re not technicians, we’re mentors, guides and sometimes lifelines.”

Teachers fear that without educator-led frameworks, AI could shift schooling from a human-centred practice to a compliance-driven one.

Responsible AI policy

If we want to harness AI’s potential without harming education as a human-centred endeavour with students and teachers at the core, we must rethink approaches to AI innovation in education. That starts with listening to teachers.

First, teachers must be involved in the design, testing and evaluation of AI tools. Second, policies must prioritize ethics, transparency and equity. That includes regulating how student data is used, ensuring teachers can identify algorithmic bias and its ethical implications, and protecting teacher discretion.

Third, we need to slow down. The pace of AI innovation is dizzying, but education isn’t a startup. It’s a public good. Policies must be evidence-based and grounded in the lived experiences of those who teach.

Nadia Delanoy, Assistant Professor, Leadership, Policy, and Governance and Learning Sciences, Werklund School of Education, University of Calgary

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.






