AIhub.org

5 questions schools and universities should ask before they purchase AI tech products

01 May 2024




By George Veletsianos, University of Minnesota

Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform education. The most recent? Technologies and apps that include or are powered by generative artificial intelligence, also known as GenAI.

These technologies are sold on the potential they hold for education. For example, Khan Academy’s founder opened his 2023 Ted Talk by arguing that “we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

As optimistic as these visions of the future may be, the realities of educational technology over the past few decades have not lived up to their promises. Rigorous investigations of technology after technology – from mechanical machines to computers, from mobile devices to massive open online courses, or MOOCs – have identified the ongoing failures of technology to transform education.

Yet educational technology evangelists forget these lessons, remain unaware of them or simply do not care. Or they may be overly optimistic that the next new technology will be different from those that came before.

When vendors and startups pitch their AI-powered products to schools and universities, educators, administrators, parents, taxpayers and others ought to be asking questions guided by past lessons before making purchasing decisions.

As a longtime researcher who examines new technology in education, here are five questions I believe should be answered before school officials purchase any technology, app or platform that relies on AI.

1. Which educational problem does the product solve?

One of the most important questions that educators ought to be asking is whether the technology makes a real difference in the lives of learners and teachers. Is the technology a solution to a specific problem or is it a solution in search of a problem?

To make this concrete, consider the following: Imagine procuring a product that uses GenAI to answer course-related questions. Is this product solving an identified need, or is it being introduced to the environment simply because it can now provide this function? To answer such questions, schools and universities ought to conduct needs analyses, which can help them identify their most pressing concerns.

2. Is there evidence that a product works?

Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.

Unfortunately, a central source for product information and evaluation does not exist, which means that the onus of assessing products falls on the consumer. My recommendation follows pre-GenAI advice: Ask vendors to provide independent, third-party studies of their products, but use multiple means of assessing a product's effectiveness, including reports from peers and primary evidence.

Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.

3. Did educators and students help develop the product?

Oftentimes, there is a “divide between what entrepreneurs build and educators need.” This leads to products divorced from the realities of teaching and learning.

For example, one shortcoming of the One Laptop Per Child program – an ambitious program that sought to put small, cheap but sturdy laptops in the hands of children from families of lesser means – is that the laptops were designed for idealized younger versions of the developers themselves, not so much the children who were actually using them.

Some researchers have recognized this divide and have developed initiatives in which entrepreneurs and educators work together to improve educational technology products.

Questions to ask vendors might be: In what ways were educators and learners included? How did their input influence the final product? What were their major concerns and how were those concerns addressed? Were they representative of the various groups of students who might use these tools, including in terms of age, gender, race, ethnicity and socioeconomic background?

4. What educational beliefs shape this product?

Educational technology is rarely neutral. It is designed by people, and people have beliefs, experiences, ideologies and biases that shape the technologies they develop.

It is important for educational technology products to support the kinds of learning environments that educators aspire to create for their students. Questions to ask include: What pedagogical principles guide this product? What particular kinds of learning does it support or discourage? Do not settle for generalities, such as a vague appeal to a theory of learning or cognition.

5. Does the product level the playing field?

Finally, people ought to ask how a product addresses educational inequities. Is this technology going to help reduce the learning gaps between different groups of learners? Or is it one that aids some learners – often those who are already successful or privileged – but not others? Is it adopting an asset-based or a deficit-based approach to addressing inequities?

Educational technology vendors and startups may not have answers to all of these questions. But they should still be asked and considered. Answers could lead to improved products.

George Veletsianos, Professor of learning technologies, University of Minnesota

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.



