Computer vision app allows easier monitoring of glucose levels


06 January 2021



Reading diabetes monitor. Credit: James Charles

A computer vision technology developed by University of Cambridge engineers has now been integrated into a free mobile phone app for regular monitoring of glucose levels in people with diabetes.

The app uses computer vision techniques to read and record the glucose levels, time and date displayed on a typical glucose meter via the camera on a mobile phone. The technology, which doesn’t require an internet or Bluetooth connection, works for any type of glucose meter, in any orientation and in a variety of light levels. It also reduces waste by eliminating the need to replace high-quality non-Bluetooth meters, making it a cost-effective solution.
Working with UK glucose testing company GlucoRx, the Cambridge researchers have developed the technology into a free mobile phone app, called GlucoRx Vision. To use the app, users simply take a picture of their glucose meter and the results are automatically read and recorded, allowing much easier monitoring of blood glucose levels.

In addition to the glucose meters which people with diabetes use on a daily basis, many other types of digital meters are used in the medical and industrial sectors. However, many of these meters still do not have wireless connectivity, so connecting them to phone tracking apps often requires manual input.

“These meters work perfectly well, so we don’t want them sent to landfill just because they don’t have wireless connectivity,” said Dr James Charles from Cambridge’s Department of Engineering. “We wanted to find a way to retrofit them in an inexpensive and environmentally-friendly way using a mobile phone app.”

In addition to his interest in solving the challenge from an engineering point of view, Charles also had a personal interest in the problem. He has type 1 diabetes and needs to take as many as ten glucose readings per day. Each reading is then manually entered into a tracking app to help determine how much insulin he needs to regulate his blood glucose levels.

“From a purely selfish point of view, this was something I really wanted to develop,” he said.

“We wanted something that was efficient, quick and easy to use,” said Professor Roberto Cipolla, also from the Department of Engineering. “Diabetes can affect eyesight or even lead to blindness, so we needed the app to be easy to use for those with reduced vision.”

The computer vision technology behind the GlucoRx app is made up of two steps. First, the screen of the glucose meter is detected. The researchers used a single training image and augmented it with random backgrounds, particularly backgrounds with people. This helps ensure the system is robust when the user’s face is reflected in the phone’s screen.
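The article doesn't give implementation details for this augmentation step, but the idea — compositing a single template image onto many random backgrounds to get labelled detection data for free — can be sketched roughly as follows. All function names and parameters here are hypothetical, and a toy noise background stands in for the real photos (including photos of people) the researchers describe:

```python
import numpy as np

rng = np.random.default_rng(0)

def composite_on_random_background(screen, bg_size=(128, 128)):
    """Paste a single 'screen' template at a random position onto a
    background, simulating a cluttered real-world scene.
    Illustrative sketch only: here the background is random noise,
    whereas the real pipeline uses photographs as backgrounds."""
    bg_h, bg_w = bg_size
    background = rng.integers(0, 256, size=(bg_h, bg_w), dtype=np.uint8)
    h, w = screen.shape
    top = rng.integers(0, bg_h - h + 1)
    left = rng.integers(0, bg_w - w + 1)
    background[top:top + h, left:left + w] = screen
    # The ground-truth bounding box comes for free -- no manual labelling.
    return background, (top, left, h, w)

# One-shot setting: a single grayscale template of a meter screen.
template = np.full((32, 48), 200, dtype=np.uint8)
image, box = composite_on_random_background(template)
```

Because each composite records exactly where the template was placed, every generated image is automatically annotated with its screen location, which is what makes training a detector from a single source image feasible.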

Second, a neural network called LeDigit detects each digit on the screen and reads it. The network is trained with computer-generated synthetic data, avoiding the need for labour-intensive labelling of data which is commonly needed to train a neural network.

“Since the font on these meters is digital, it’s easy to train the neural network to recognise lots of different inputs and synthesise the data,” said Charles. “This makes it highly efficient to run on a mobile phone.”
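To see why a digital font makes synthetic training data so cheap, note that a seven-segment digit is fully determined by which of its seven segments are lit, so labelled examples can be rendered programmatically rather than photographed and annotated. The sketch below (not the authors' actual renderer; all names and dimensions are hypothetical) generates one labelled bitmap per digit:

```python
import numpy as np

# Which of the 7 segments (a-g) are lit for each digit 0-9.
SEGMENTS = {
    0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
    5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
}

def render_digit(d, h=16, w=10, t=2):
    """Render digit d as a seven-segment bitmap (1 = lit pixel)."""
    img = np.zeros((h, w), dtype=np.uint8)
    on = SEGMENTS[d]
    if "a" in on: img[0:t, :] = 1                            # top bar
    if "g" in on: img[h//2 - t//2:h//2 - t//2 + t, :] = 1    # middle bar
    if "d" in on: img[h - t:h, :] = 1                        # bottom bar
    if "f" in on: img[:h//2, 0:t] = 1                        # top-left
    if "b" in on: img[:h//2, w - t:w] = 1                    # top-right
    if "e" in on: img[h//2:, 0:t] = 1                        # bottom-left
    if "c" in on: img[h//2:, w - t:w] = 1                    # bottom-right
    return img

# Labelled training data is generated, not hand-annotated.
dataset = [(render_digit(d), d) for d in range(10)]
```

In practice one would perturb these renders (blur, contrast, slight warps) to cover real meter screens, but the key point stands: the label is known by construction, so no manual annotation is needed.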

“It doesn’t matter which orientation the meter is in – we tested it in all types of orientations, viewpoints and light levels,” said Cipolla. “The app will vibrate when it’s read the information, so you get a clear signal when you’ve done it correctly. The system is accurate across a range of different types of meters, with read accuracies close to 100%.”

In addition to blood glucose monitors, the researchers also tested their system on other types of digital meters, such as blood pressure monitors and kitchen and bathroom scales. The researchers also recently presented their results at the 31st British Machine Vision Conference.

As for Charles, who has been using the app to track his glucose levels, he said it “makes the whole process easier. I’ve now forgotten what it was like to enter the values in manually, but I do know I wouldn’t want to go back to it. There are a few areas in the system which could still be made even better, but all in all I’m very happy with the outcome.”

Read the paper in full

Real-time screen reading: reducing domain shift for one-shot learning
James Charles, Stefano Bucciarelli and Roberto Cipolla
Paper presented at the 31st British Machine Vision Conference.

Watch this short video put together by the authors to accompany their paper



University of Cambridge

©2025.05 - Association for the Understanding of Artificial Intelligence


 











