 

5 questions schools and universities should ask before they purchase AI tech products


01 May 2024




By George Veletsianos, University of Minnesota

Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform education. The most recent? Technologies and apps that include or are powered by generative artificial intelligence, also known as GenAI.

These technologies are sold on the potential they hold for education. For example, Khan Academy’s founder opened his 2023 TED Talk by arguing that “we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

As optimistic as these visions of the future may be, the realities of educational technology over the past few decades have not lived up to their promises. Rigorous investigations of technology after technology – from mechanical machines to computers, from mobile devices to massive open online courses, or MOOCs – have identified the ongoing failures of technology to transform education.

Yet educational technology evangelists forget, remain unaware or simply do not care. Or they may be overly optimistic that the next new technology will be different from those that came before.

When vendors and startups pitch their AI-powered products to schools and universities, educators, administrators, parents, taxpayers and others ought to be asking questions guided by past lessons before making purchasing decisions.

As a longtime researcher who examines new technology in education, I believe the following five questions should be answered before school officials purchase any technology, app or platform that relies on AI.

1. Which educational problem does the product solve?

One of the most important questions that educators ought to be asking is whether the technology makes a real difference in the lives of learners and teachers. Is the technology a solution to a specific problem or is it a solution in search of a problem?

To make this concrete, consider the following: Imagine procuring a product that uses GenAI to answer course-related questions. Is this product solving an identified need, or is it being introduced to the environment simply because it can now provide this function? To answer such questions, schools and universities ought to conduct needs analyses, which can help them identify their most pressing concerns.

2. Is there evidence that a product works?

Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.

Unfortunately, there is no central source of product information and evaluation, which means the onus of assessing products falls on the consumer. My recommendation is to follow advice that predates GenAI: Ask vendors to provide independent, third-party studies of their products, but use multiple means of assessing a product’s effectiveness, including reports from peers and primary evidence.

Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.

3. Did educators and students help develop the product?

Oftentimes, there is a “divide between what entrepreneurs build and educators need.” This leads to products divorced from the realities of teaching and learning.

For example, one shortcoming of the One Laptop Per Child program – an ambitious effort to put small, cheap but sturdy laptops in the hands of children from low-income families – was that the laptops were designed for idealized younger versions of the developers themselves rather than for the children who would actually use them.

Some researchers have recognized this divide and have developed initiatives in which entrepreneurs and educators work together to improve educational technology products.

Questions to ask vendors might be: In what ways were educators and learners included? How did their input influence the final product? What were their major concerns and how were those concerns addressed? Were they representative of the various groups of students who might use these tools, including in terms of age, gender, race, ethnicity and socioeconomic background?

4. What educational beliefs shape this product?

Educational technology is rarely neutral. It is designed by people, and people have beliefs, experiences, ideologies and biases that shape the technologies they develop.

It is important for educational technology products to support the kinds of learning environments that educators aspire to create for their students. Questions to ask include: What pedagogical principles guide this product? What particular kinds of learning does it support or discourage? Do not settle for generalities – ask for specifics, such as the theory of learning or cognition that informs the product’s design.

5. Does the product level the playing field?

Finally, people ought to ask how a product addresses educational inequities. Is this technology going to help reduce the learning gaps between different groups of learners? Or is it one that aids some learners – often those who are already successful or privileged – but not others? Is it adopting an asset-based or a deficit-based approach to addressing inequities?

Educational technology vendors and startups may not have answers to all of these questions. But the questions should still be asked and considered. The answers could lead to improved products.

George Veletsianos, Professor of learning technologies, University of Minnesota

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.