AI in health care challenges us to define what better, people-centred care looks like


24 April 2023



By Catherine Burns

From faster and more accurate disease diagnosis to models for using health care resources more efficiently, AI promises a new frontier of effective and efficient health care. If it’s done right, AI may allow for more people-centred care and for clinicians to spend more time with people, doing the work they enjoy most. But to achieve these aspirations, foundational work must occur in how we operate today and in defining what health care looks like in the future.

AI technologies are only as reliable as the data that drives them. Unlocking the power of AI requires us to become better at sharing health data among primary care providers, specialists, hospitals, research universities, health companies and patients so that reliable and accurate models can be developed. Without this data, AI technologies may make mistakes, generate inappropriate solutions and encourage misplaced trust in their answers.

Our health data will also need to be of better quality. Issues with noisy sensors, incomplete documentation and differing data types must be solved. Health data will have to travel across individual health journeys and through multiple providers to avoid solutions that are limited in time and context. In some cases, AI solutions are being developed from clinical trial data, and clinical trial data sets are well known to exclude participants of certain ages or demographics, or those with multiple morbidities.

Our community and small hospitals can be a solution to this, and they need a louder voice in the health care conversation. More Canadians visit community hospitals than academic hospitals, so their data and experience must be part of the solution. Our small hospitals provide many services to our remote and often underserved communities. For this reason, the voices of those working in our remote communities are critically important at this time, when they are overworked and under-resourced. AI must be designed with the goal of promoting greater access and equity in health care. This means AI must support equity, be broadly inclusive and be built in partnership with our communities.

We need to understand what it means to have successful health care. Without understanding what a high-performance health-care system looks like, technologies will not be developed that align with effective solutions. We must define the right metrics to get the right results. Do we want to reduce the cost of surgery? Or do we want to reduce the likelihood of follow-up surgery years later? Those goals may have different solutions.

Similarly, do we believe strongly in growing towards a coordinated and shared health care vision? If we do, and I hope we do, AI must be people-centred and designed through an interprofessional lens. This means we must learn from and teach each other more about practices of care, outcomes, technology, decision-making and quality of life.

AI learns from our data, so we must provide the proper foundation. Our next generation of AI designers will design their technologies for the problems we tell them are important. We need to define what those problems are and what success would mean.

Catherine Burns

Catherine Burns is the Chair in Human Factors in Health Care Systems and leads the University of Waterloo’s health initiatives. She is a professor in the Faculty of Engineering and an expert in human-centred approaches to the design and implementation of advanced health-care technologies.


