AIhub.org
 

5 questions schools and universities should ask before they purchase AI tech products


01 May 2024




By George Veletsianos, University of Minnesota

Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform education. The most recent? Technologies and apps that include or are powered by generative artificial intelligence, also known as GenAI.

These technologies are sold on the potential they hold for education. For example, Khan Academy’s founder opened his 2023 TED Talk by arguing that “we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

As optimistic as these visions of the future may be, the realities of educational technology over the past few decades have not lived up to their promises. Rigorous investigations of technology after technology – from mechanical machines to computers, from mobile devices to massive open online courses, or MOOCs – have identified the ongoing failures of technology to transform education.

Yet educational technology evangelists forget these lessons, remain unaware of them or simply do not care. Or they may be overly optimistic that the next new technology will be different from what came before.

When vendors and startups pitch their AI-powered products to schools and universities, educators, administrators, parents, taxpayers and others ought to be asking questions guided by past lessons before making purchasing decisions.

As a longtime researcher who examines new technology in education, here are five questions I believe should be answered before school officials purchase any technology, app or platform that relies on AI.

1. Which educational problem does the product solve?

One of the most important questions that educators ought to be asking is whether the technology makes a real difference in the lives of learners and teachers. Is the technology a solution to a specific problem or is it a solution in search of a problem?

To make this concrete, consider the following: Imagine procuring a product that uses GenAI to answer course-related questions. Is this product solving an identified need, or is it being introduced to the environment simply because it can now provide this function? To answer such questions, schools and universities ought to conduct needs analyses, which can help them identify their most pressing concerns.

2. Is there evidence that a product works?

Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence emerges. Others suggest relying on whether the product’s design is grounded in foundational research.

Unfortunately, no central source for product information and evaluation exists, which means the onus of assessing products falls on the consumer. My recommendation follows pre-GenAI practice: Ask vendors to provide independent, third-party studies of their products, but use multiple means of assessing a product’s effectiveness, including reports from peers and primary evidence.

Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.

3. Did educators and students help develop the product?

Oftentimes, there is a “divide between what entrepreneurs build and educators need.” This leads to products divorced from the realities of teaching and learning.

For example, one shortcoming of One Laptop Per Child – an ambitious initiative that sought to put small, cheap but sturdy laptops in the hands of children from families of lesser means – is that the laptops were designed for idealized younger versions of the developers themselves, rather than for the children who were actually using them.

Some researchers have recognized this divide and have developed initiatives in which entrepreneurs and educators work together to improve educational technology products.

Questions to ask vendors might be: In what ways were educators and learners included? How did their input influence the final product? What were their major concerns and how were those concerns addressed? Were they representative of the various groups of students who might use these tools, including in terms of age, gender, race, ethnicity and socioeconomic background?

4. What educational beliefs shape this product?

Educational technology is rarely neutral. It is designed by people, and people have beliefs, experiences, ideologies and biases that shape the technologies they develop.

It is important for educational technology products to support the kinds of learning environments that educators aspire to create for their students. Questions to ask include: What pedagogical principles guide this product? What particular kinds of learning does it support or discourage? Do not settle for generalities, such as a vague appeal to a theory of learning or cognition.

5. Does the product level the playing field?

Finally, people ought to ask how a product addresses educational inequities. Is this technology going to help reduce the learning gaps between different groups of learners? Or is it one that aids some learners – often those who are already successful or privileged – but not others? Is it adopting an asset-based or a deficit-based approach to addressing inequities?

Educational technology vendors and startups may not have answers to all of these questions. But the questions should still be asked and considered. Answers could lead to improved products.

George Veletsianos, Professor of learning technologies, University of Minnesota

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.






©2024 - Association for the Understanding of Artificial Intelligence