Computer vision app allows easier monitoring of glucose levels

06 January 2021



Reading diabetes monitor. Credit: James Charles

A computer vision technology developed by University of Cambridge engineers has now been integrated into a free mobile phone app for regular monitoring of glucose levels in people with diabetes.

The app uses computer vision techniques to read and record the glucose levels, time and date displayed on a typical glucose meter via the camera on a mobile phone. The technology, which doesn’t require an internet or Bluetooth connection, works for any type of glucose meter, in any orientation and in a variety of light levels. It also reduces waste by eliminating the need to replace high-quality non-Bluetooth meters, making it a cost-effective solution.

Working with UK glucose testing company GlucoRx, the Cambridge researchers have developed the technology into a free mobile phone app, called GlucoRx Vision. To use the app, users simply take a picture of their glucose meter and the results are automatically read and recorded, allowing much easier monitoring of blood glucose levels.

In addition to the glucose meters which people with diabetes use on a daily basis, many other types of digital meters are used in the medical and industrial sectors. However, many of these meters still lack wireless connectivity, so getting their readings into phone tracking apps requires manual entry.

“These meters work perfectly well, so we don’t want them sent to landfill just because they don’t have wireless connectivity,” said Dr James Charles from Cambridge’s Department of Engineering. “We wanted to find a way to retrofit them in an inexpensive and environmentally-friendly way using a mobile phone app.”

In addition to his interest in solving the challenge from an engineering point of view, Charles also had a personal interest in the problem. He has type 1 diabetes and needs to take as many as ten glucose readings per day. Each reading is then manually entered into a tracking app to help determine how much insulin he needs to regulate his blood glucose levels.

“From a purely selfish point of view, this was something I really wanted to develop,” he said.

“We wanted something that was efficient, quick and easy to use,” said Professor Roberto Cipolla, also from the Department of Engineering. “Diabetes can affect eyesight or even lead to blindness, so we needed the app to be easy to use for those with reduced vision.”

The computer vision technology behind the GlucoRx app is made up of two steps. First, the screen of the glucose meter is detected. The researchers used a single training image and augmented it with random backgrounds, particularly backgrounds with people. This helps ensure the system is robust when the user’s face is reflected in the phone’s screen.
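
As a rough illustration of that augmentation step, the Python/Pillow sketch below composites a single template photo of the meter onto randomly chosen background images at random scales, rotations and positions. It is a minimal sketch under stated assumptions, not the authors' actual pipeline; the file names and parameter ranges are invented for illustration.

    import random
    from pathlib import Path
    from PIL import Image

    def make_training_sample(template_path, background_dir):
        # Composite the single meter template onto a random background
        # photo; the paste box doubles as the bounding-box label for
        # training the screen detector.
        template = Image.open(template_path).convert("RGBA")
        backgrounds = list(Path(background_dir).glob("*.jpg"))
        bg = Image.open(random.choice(backgrounds)).convert("RGBA")

        # Random scale and rotation so the detector sees many viewpoints.
        scale = random.uniform(0.3, 0.8)
        fg = template.resize((int(template.width * scale),
                              int(template.height * scale)))
        fg = fg.rotate(random.uniform(-30, 30), expand=True)

        # Paste at a random position; the RGBA alpha acts as the mask.
        x = random.randint(0, max(0, bg.width - fg.width))
        y = random.randint(0, max(0, bg.height - fg.height))
        bg.paste(fg, (x, y), fg)
        return bg.convert("RGB"), (x, y, x + fg.width, y + fg.height)

Each call returns a composited image plus the bounding box of the pasted screen, which can serve directly as a detector training label.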

Second, a neural network called LeDigit detects each digit on the screen and reads it. The network is trained with computer-generated synthetic data, avoiding the labour-intensive manual labelling that training a neural network usually requires.
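
The paper gives the details of LeDigit itself; purely to make this second step concrete, here is a toy PyTorch sketch of a small convolutional digit classifier. The name TinyDigitNet, the layer sizes and the 32x32 input are all assumptions for illustration and should not be read as the LeDigit architecture.

    import torch
    import torch.nn as nn

    class TinyDigitNet(nn.Module):
        # Classify one cropped 32x32 greyscale digit patch into 0-9.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, 10)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    net = TinyDigitNet()
    logits = net(torch.randn(1, 1, 32, 32))  # one fake digit crop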

“Since the font on these meters is digital, it’s easy to train the neural network to recognise lots of different inputs and synthesise the data,” said Charles. “This makes it highly efficient to run on a mobile phone.”
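
Because the digits come from a fixed digital font, labelled training images can be rendered rather than photographed and annotated. The sketch below is an assumed stand-in for the paper's actual generator, not taken from it: it draws noisy seven-segment digits with Pillow, so each image arrives paired with its ground-truth label.

    import random
    from PIL import Image, ImageDraw, ImageFilter

    # Standard seven-segment encoding: which segments light up per digit.
    SEGMENTS = {
        0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
        5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
    }
    # Segment rectangles (x0, y0, x1, y1) on a 40x70 canvas.
    COORDS = {
        "a": (8, 4, 32, 10), "b": (30, 8, 36, 34), "c": (30, 36, 36, 62),
        "d": (8, 60, 32, 66), "e": (4, 36, 10, 62), "f": (4, 8, 10, 34),
        "g": (8, 32, 32, 38),
    }

    def render_digit(digit):
        # Dark segments on a light LCD-like background, then mild
        # rotation and blur to mimic a real photo of a meter screen.
        img = Image.new("L", (40, 70), color=random.randint(140, 200))
        draw = ImageDraw.Draw(img)
        for seg in SEGMENTS[digit]:
            draw.rectangle(COORDS[seg], fill=random.randint(0, 40))
        img = img.rotate(random.uniform(-5, 5), fillcolor=180)
        return img.filter(ImageFilter.GaussianBlur(random.uniform(0, 1)))

    # Labelled training pairs come for free: (image, digit).
    dataset = [(render_digit(d), d) for d in range(10) for _ in range(100)]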

“It doesn’t matter which orientation the meter is in – we tested it in all types of orientations, viewpoints and light levels,” said Cipolla. “The app will vibrate when it’s read the information, so you get a clear signal when you’ve done it correctly. The system is accurate across a range of different types of meters, with read accuracies close to 100%.”

In addition to blood glucose monitors, the researchers also tested their system on other types of digital meters, such as blood pressure monitors and kitchen and bathroom scales. They recently presented their results at the 31st British Machine Vision Conference.

As for Charles, who has been using the app to track his glucose levels, he said it “makes the whole process easier. I’ve now forgotten what it was like to enter the values in manually, but I do know I wouldn’t want to go back to it. There are a few areas in the system which could still be made even better, but all in all I’m very happy with the outcome.”

Read the paper in full

Real-time screen reading: reducing domain shift for one-shot learning
James Charles, Stefano Bucciarelli and Roberto Cipolla
Presented at the 31st British Machine Vision Conference (BMVC 2020).

Watch this short video put together by the authors to accompany their paper





University of Cambridge



