Computer vision app allows easier monitoring of glucose levels


06 January 2021



Reading diabetes monitor. Credit: James Charles

A computer vision technology developed by University of Cambridge engineers has now been integrated into a free mobile phone app for regular monitoring of glucose levels in people with diabetes.

The app uses computer vision techniques to read and record the glucose levels, time and date displayed on a typical glucose test via the camera on a mobile phone. The technology, which doesn’t require an internet or Bluetooth connection, works for any type of glucose meter, in any orientation and in a variety of light levels. It also reduces waste by eliminating the need to replace high-quality non-Bluetooth meters, making it a cost-effective solution.
Working with UK glucose testing company GlucoRx, the Cambridge researchers have developed the technology into a free mobile phone app, called GlucoRx Vision. To use the app, users simply take a picture of their glucose meter and the results are automatically read and recorded, allowing much easier monitoring of blood glucose levels.

In addition to the glucose meters which people with diabetes use on a daily basis, many other types of digital meters are used in the medical and industrial sectors. However, many of these meters still do not have wireless connectivity, so connecting them to phone tracking apps often requires manual input.

“These meters work perfectly well, so we don’t want them sent to landfill just because they don’t have wireless connectivity,” said Dr James Charles from Cambridge’s Department of Engineering. “We wanted to find a way to retrofit them in an inexpensive and environmentally-friendly way using a mobile phone app.”

In addition to his interest in solving the challenge from an engineering point of view, Charles also had a personal interest in the problem. He has type 1 diabetes and needs to take as many as ten glucose readings per day. Each reading is then manually entered into a tracking app to help determine how much insulin he needs to regulate his blood glucose levels.

“From a purely selfish point of view, this was something I really wanted to develop,” he said.

“We wanted something that was efficient, quick and easy to use,” said Professor Roberto Cipolla, also from the Department of Engineering. “Diabetes can affect eyesight or even lead to blindness, so we needed the app to be easy to use for those with reduced vision.”

The computer vision technology behind the GlucoRx app is made up of two steps. First, the screen of the glucose meter is detected. The researchers used a single training image and augmented it with random backgrounds, particularly backgrounds with people. This helps ensure the system is robust when the user’s face is reflected in the phone’s screen.
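The paper's exact augmentation pipeline isn't reproduced here, but the idea of turning a single training image into many varied ones can be sketched as follows. All names and sizes below (the `augment_template` helper, the toy 20×40 "screen" and 64×64 backgrounds) are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_template(template: np.ndarray, backgrounds: list,
                     n_samples: int = 4) -> list:
    """Paste the single screen template onto random backgrounds at a
    random position, producing varied detector training images."""
    samples = []
    for _ in range(n_samples):
        bg = backgrounds[rng.integers(len(backgrounds))].copy()
        h, w = template.shape[:2]
        # choose a random top-left corner where the template still fits
        y = rng.integers(0, bg.shape[0] - h + 1)
        x = rng.integers(0, bg.shape[1] - w + 1)
        bg[y:y + h, x:x + w] = template  # composite the screen over the background
        samples.append(bg)
    return samples

# toy data: a bright 20x40 "screen" and dark random 64x64 "backgrounds"
template = np.full((20, 40, 3), 230, dtype=np.uint8)
backgrounds = [rng.integers(0, 128, (64, 64, 3), dtype=np.uint8) for _ in range(3)]
out = augment_template(template, backgrounds)
```

In practice the backgrounds would be real photographs (including ones with people, per the reflection issue above), and the paste step would include random scaling, rotation and lighting changes rather than a plain copy.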

Second, a neural network called LeDigit detects each digit on the screen and reads it. The network is trained with computer-generated synthetic data, avoiding the need for labour-intensive labelling of data which is commonly needed to train a neural network.
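Because meter displays use a fixed digital font, labelled training data can be generated programmatically instead of annotated by hand. The sketch below renders seven-segment-style digits and pairs each image with its label automatically; the canvas size, segment layout and `render_digit` helper are illustrative assumptions, not details from the paper:

```python
import numpy as np

H, W = 16, 10  # canvas size per digit

# which segments (a=top, b=top-right, c=bottom-right, d=bottom,
# e=bottom-left, f=top-left, g=middle) are lit for each digit
SEGMENTS = {
    "0": "abcdef", "1": "bc", "2": "abdeg", "3": "abcdg", "4": "bcfg",
    "5": "acdfg", "6": "acdefg", "7": "abc", "8": "abcdefg", "9": "abcdfg",
}

def render_digit(d: str) -> np.ndarray:
    """Draw one seven-segment digit on an H x W uint8 canvas."""
    img = np.zeros((H, W), dtype=np.uint8)
    m = H // 2
    for seg in SEGMENTS[d]:
        if seg == "a": img[0, 1:W - 1] = 255          # top bar
        if seg == "g": img[m, 1:W - 1] = 255          # middle bar
        if seg == "d": img[H - 1, 1:W - 1] = 255      # bottom bar
        if seg == "f": img[1:m, 0] = 255              # top-left column
        if seg == "b": img[1:m, W - 1] = 255          # top-right column
        if seg == "e": img[m + 1:H - 1, 0] = 255      # bottom-left column
        if seg == "c": img[m + 1:H - 1, W - 1] = 255  # bottom-right column
    return img

# a labelled synthetic dataset, with no hand annotation required
dataset = [(render_digit(str(i)), i) for i in range(10)]
```

A real pipeline would additionally vary glyph thickness, blur, contrast and perspective so the trained network transfers to camera images of actual meters.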

“Since the font on these meters is digital, it’s easy to train the neural network to recognise lots of different inputs and synthesise the data,” said Charles. “This makes it highly efficient to run on a mobile phone.”

“It doesn’t matter which orientation the meter is in – we tested it in all types of orientations, viewpoints and light levels,” said Cipolla. “The app will vibrate when it’s read the information, so you get a clear signal when you’ve done it correctly. The system is accurate across a range of different types of meters, with read accuracies close to 100%.”

In addition to blood glucose monitors, the researchers tested their system on other types of digital meters, such as blood pressure monitors and kitchen and bathroom scales. They recently presented their results at the 31st British Machine Vision Conference.

As for Charles, who has been using the app to track his glucose levels, he said it “makes the whole process easier. I’ve now forgotten what it was like to enter the values in manually, but I do know I wouldn’t want to go back to it. There are a few areas in the system which could still be made even better, but all in all I’m very happy with the outcome.”

Read the paper in full

Real-time screen reading: reducing domain shift for one-shot learning
James Charles, Stefano Bucciarelli and Roberto Cipolla
Paper presented at the British Machine Vision Conference.
