AIhub.org
 

AI in health care challenges us to define what better, people-centred care looks like


24 April 2023




[Image: cartoon of a doctor standing next to a large mobile phone]
By Catherine Burns

From faster and more accurate disease diagnosis to models for using health-care resources more efficiently, AI promises a new frontier of effective and efficient health care. Done right, AI may allow for more people-centred care, freeing clinicians to spend more time with people, doing the work they enjoy most. But to achieve these aspirations, foundational work must occur both in how we operate today and in defining what health care should look like in the future.

AI technologies are only as reliable as the data that drives them. Unlocking the power of AI requires us to become better at sharing health data among primary care providers, specialists, hospitals, research universities, health companies and patients, so that reliable and accurate models can be developed. Without this data, AI technologies may make mistakes, generate inappropriate solutions and encourage misplaced trust in their answers.

Our health data will also need to be of better quality. Issues with noisy sensors, incomplete documentation and incompatible data types must be solved. Health data will have to travel with individuals across their health journeys, through multiple providers, to avoid solutions that are limited in time and context. In some cases, AI solutions are being developed from clinical trial data, and clinical trial data sets are well known to exclude participants of certain ages or demographics, or those with multiple morbidities.

Our community and small hospitals can be part of the solution, and they need a louder voice in the health-care conversation. More Canadians visit community hospitals than academic hospitals, so their data and experience must be part of the solution. Our small hospitals provide many services to remote and often underserved communities. For this reason, the voices of those working in our remote communities, who are overworked and under-resourced, are critically important right now. AI must be designed with the goal of promoting greater access and equity in health care: it must support equity, be broadly inclusive and partner with our communities.

We need to understand what successful health care means. Without a shared picture of what a high-performance health-care system looks like, technologies will not be developed in ways that lead to effective solutions. We must define the right metrics to get the right results. Do we want to reduce the cost of surgery, or do we want to reduce the likelihood of follow-up surgery years later? Those goals may have different solutions.

Similarly, do we believe strongly in growing towards a coordinated and shared health-care vision? If we do, and I hope we do, AI must be people-centred and designed through an interprofessional lens. That means we must learn from and teach each other more about practices of care, outcomes, technology, decision-making and quality of life.

AI learns from our data, so we must provide the proper foundation. Our next generation of AI designers will design their technologies for the problems we tell them are important. We need to define what those problems are and what success would mean.

Catherine Burns

Catherine Burns is the Chair in Human Factors in Health Care Systems and leads the University of Waterloo’s health initiatives. She is a professor in the Faculty of Engineering and an expert in human-centred approaches to the design and implementation of advanced health-care technologies.



tags: University of Waterloo
















©2024 - Association for the Understanding of Artificial Intelligence
