AIhub.org
 

5 questions schools and universities should ask before they purchase AI tech products

01 May 2024

By George Veletsianos, University of Minnesota

Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform education. The most recent? Technologies and apps that include or are powered by generative artificial intelligence, also known as GenAI.

These technologies are sold on the potential they hold for education. For example, Khan Academy’s founder opened his 2023 Ted Talk by arguing that “we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

As optimistic as these visions of the future may be, the realities of educational technology over the past few decades have not lived up to their promises. Rigorous investigations of technology after technology – from mechanical teaching machines to computers, from mobile devices to massive open online courses, or MOOCs – have identified the ongoing failures of technology to transform education.

Yet educational technology evangelists forget these lessons, remain unaware of them or simply do not care. Or they may be overly optimistic that the next new technology will be different from the ones that came before.

When vendors and startups pitch their AI-powered products to schools and universities, educators, administrators, parents, taxpayers and others ought to be asking questions guided by past lessons before making purchasing decisions.

As a longtime researcher who examines new technology in education, I believe the following five questions should be answered before school officials purchase any technology, app or platform that relies on AI.

1. Which educational problem does the product solve?

One of the most important questions that educators ought to be asking is whether the technology makes a real difference in the lives of learners and teachers. Is the technology a solution to a specific problem or is it a solution in search of a problem?

To make this concrete, consider the following: Imagine procuring a product that uses GenAI to answer course-related questions. Is this product solving an identified need, or is it being introduced to the environment simply because it can now provide this function? To answer such questions, schools and universities ought to conduct needs analyses, which can help them identify their most pressing concerns.

2. Is there evidence that a product works?

Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.

Unfortunately, a central source for product information and evaluation does not exist, which means the onus of assessing products falls on the consumer. My recommendation echoes advice that predates GenAI: ask vendors to provide independent, third-party studies of their products, but use multiple means of assessing a product's effectiveness, including reports from peers and primary evidence.

Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.

3. Did educators and students help develop the product?

Oftentimes, there is a “divide between what entrepreneurs build and educators need.” This leads to products divorced from the realities of teaching and learning.

For example, one shortcoming of the One Laptop Per Child program – an ambitious program that sought to put small, cheap but sturdy laptops in the hands of children from families of lesser means – is that the laptops were designed for idealized younger versions of the developers themselves, not so much the children who were actually using them.

Some researchers have recognized this divide and have developed initiatives in which entrepreneurs and educators work together to improve educational technology products.

Questions to ask vendors might be: In what ways were educators and learners included? How did their input influence the final product? What were their major concerns and how were those concerns addressed? Were they representative of the various groups of students who might use these tools, including in terms of age, gender, race, ethnicity and socioeconomic background?

4. What educational beliefs shape this product?

Educational technology is rarely neutral. It is designed by people, and people have beliefs, experiences, ideologies and biases that shape the technologies they develop.

It is important for educational technology products to support the kinds of learning environments that educators aspire to create for their students. Questions to ask include: What pedagogical principles guide this product? What particular kinds of learning does it support or discourage? Do not settle for generalities, such as a vague appeal to a theory of learning or cognition.

5. Does the product level the playing field?

Finally, people ought to ask how a product addresses educational inequities. Is this technology going to help reduce the learning gaps between different groups of learners? Or is it one that aids some learners – often those who are already successful or privileged – but not others? Is it adopting an asset-based or a deficit-based approach to addressing inequities?

Educational technology vendors and startups may not have answers to all of these questions. But they should still be asked and considered. Answers could lead to improved products.

George Veletsianos, Professor of learning technologies, University of Minnesota

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.