Computer vision app allows easier monitoring of glucose levels


06 January 2021



Reading diabetes monitor. Credit: James Charles

A computer vision technology developed by University of Cambridge engineers has now been integrated into a free mobile phone app for regular monitoring of glucose levels in people with diabetes.

The app uses computer vision techniques to read and record the glucose level, time and date displayed on a typical glucose meter via the camera on a mobile phone. The technology, which doesn’t require an internet or Bluetooth connection, works for any type of glucose meter, in any orientation and in a variety of light levels. It also reduces waste by eliminating the need to replace high-quality non-Bluetooth meters, making it a cost-effective solution.
Working with UK glucose testing company GlucoRx, the Cambridge researchers have developed the technology into a free mobile phone app, called GlucoRx Vision. To use the app, users simply take a picture of their glucose meter and the results are automatically read and recorded, allowing much easier monitoring of blood glucose levels.

In addition to the glucose meters which people with diabetes use on a daily basis, many other types of digital meters are used in the medical and industrial sectors. However, many of these meters still do not have wireless connectivity, so connecting them to phone tracking apps often requires manual input.

“These meters work perfectly well, so we don’t want them sent to landfill just because they don’t have wireless connectivity,” said Dr James Charles from Cambridge’s Department of Engineering. “We wanted to find a way to retrofit them in an inexpensive and environmentally-friendly way using a mobile phone app.”

In addition to his interest in solving the challenge from an engineering point of view, Charles also had a personal interest in the problem. He has type 1 diabetes and needs to take as many as ten glucose readings per day. Each reading is then manually entered into a tracking app to help determine how much insulin he needs to regulate his blood glucose levels.

“From a purely selfish point of view, this was something I really wanted to develop,” he said.

“We wanted something that was efficient, quick and easy to use,” said Professor Roberto Cipolla, also from the Department of Engineering. “Diabetes can affect eyesight or even lead to blindness, so we needed the app to be easy to use for those with reduced vision.”

The computer vision technology behind the GlucoRx app is made up of two steps. First, the screen of the glucose meter is detected. The researchers used a single training image and augmented it with random backgrounds, particularly backgrounds with people. This helps ensure the system is robust when the user’s face is reflected in the meter’s screen.
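To make the idea concrete, the Python sketch below shows one way this kind of augmentation could be implemented: a single screen template is pasted onto randomly chosen background photographs at random scales, rotations and positions, and the paste location doubles as the ground-truth label. This is a minimal illustration only; the file layout, parameter ranges and PIL-based approach are assumptions for the sketch, not details taken from the paper.

# Illustrative sketch only (not the authors' code): synthesise detector
# training images by pasting one screen template onto random backgrounds.
import random
from pathlib import Path
from PIL import Image

def synthesise_sample(template_path, background_dir, out_size=(480, 480)):
    """Paste the single screen template onto a random background; the paste
    position and size serve as the ground-truth screen location."""
    template = Image.open(template_path).convert("RGBA")
    backgrounds = list(Path(background_dir).glob("*.jpg"))
    bg = Image.open(random.choice(backgrounds)).convert("RGBA").resize(out_size)

    # Random scale and rotation roughly mimic different viewpoints and distances.
    target = int(min(out_size) * random.uniform(0.3, 0.6))
    ratio = target / max(template.size)
    tmpl = template.resize((int(template.width * ratio), int(template.height * ratio)))
    tmpl = tmpl.rotate(random.uniform(-30, 30), expand=True)

    # Random placement, kept fully inside the frame.
    x = random.randint(0, out_size[0] - tmpl.width)
    y = random.randint(0, out_size[1] - tmpl.height)
    bg.paste(tmpl, (x, y), tmpl)  # alpha channel keeps only the screen pixels
    return bg.convert("RGB"), (x, y, tmpl.width, tmpl.height)

Many thousands of such composites, each paired with its known screen location, could then be used to train the detector without any manually annotated photographs.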

Second, a neural network called LeDigit detects each digit on the screen and reads it. The network is trained with computer-generated synthetic data, avoiding the need for labour-intensive labelling of data which is commonly needed to train a neural network.

“Since the font on these meters is digital, it’s easy to train the neural network to recognise lots of different inputs and synthesise the data,” said Charles. “This makes it highly efficient to run on a mobile phone.”
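Because the digits come from a fixed digital font, labelled training crops can be rendered rather than photographed and hand-labelled. The sketch below illustrates that general idea; it is not the LeDigit training code, and the font file name, sizes and offsets are assumptions made for illustration.

# Hypothetical illustration (not the LeDigit training code): render labelled
# digit crops from a seven-segment style font instead of hand-labelling photos.
import random
from PIL import Image, ImageDraw, ImageFont

def synth_digit(font_path="DSEG7Classic.ttf", size=64):  # font file is an assumption
    """Return (image, label) for one randomly rendered digit."""
    label = random.randint(0, 9)
    # Light grey background with a dark digit mimics a typical LCD screen.
    img = Image.new("L", (size, size), color=random.randint(160, 255))
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, size=random.randint(36, 52))
    # Small random offsets mimic imperfect cropping after screen detection.
    draw.text((random.randint(4, 14), random.randint(2, 10)),
              str(label), fill=random.randint(0, 80), font=font)
    return img, label

# A synthetic training set is then just many such samples, labels included:
# samples = [synth_digit() for _ in range(50_000)]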

“It doesn’t matter which orientation the meter is in – we tested it in all types of orientations, viewpoints and light levels,” said Cipolla. “The app will vibrate when it’s read the information, so you get a clear signal when you’ve done it correctly. The system is accurate across a range of different types of meters, with read accuracies close to 100%.”

In addition to blood glucose monitors, the researchers also tested their system on other types of digital meters, such as blood pressure monitors and kitchen and bathroom scales. They recently presented their results at the 31st British Machine Vision Conference.

As for Charles, who has been using the app to track his glucose levels, he said it “makes the whole process easier. I’ve now forgotten what it was like to enter the values in manually, but I do know I wouldn’t want to go back to it. There are a few areas in the system which could still be made even better, but all in all I’m very happy with the outcome.”

Read the paper in full:

Real-time screen reading: reducing domain shift for one-shot learning
James Charles, Stefano Bucciarelli and Roberto Cipolla
Presented at the 31st British Machine Vision Conference (BMVC).

Watch the short video put together by the authors to accompany their paper.





University of Cambridge






 











