
Q&A: research into sound-collecting app to aid respiratory disease diagnosis

18 May 2020



Professor Cecilia Mascolo at the University of Cambridge, UK, hopes her coronavirus sounds app could provide the data needed to build a quick and cheap Covid-19 screening test in the future. Image credit – Salvatore Scellato

By Richard Gray

A recording of a cough, the noise of a person’s breathing or even the sound of their voice could be used to help diagnose patients with Covid-19 in the future, according to Professor Cecilia Mascolo, co-director of the Centre for Mobile, Wearable Systems and Augmented Intelligence at the University of Cambridge, UK.

Prof. Mascolo has developed a sound-collecting app to help train machine learning algorithms to detect the tell-tale sounds of coronavirus infection. She hopes the app, created as part of a project called EAR, might eventually lead to new ways of diagnosing respiratory diseases and help in the global fight against coronavirus.

Why are noises important when it comes to disease?

The human body makes noises all of the time. Our heart, lungs and digestive system all produce sounds that can tell us a lot, and doctors have long used them to help diagnose disease. Most people are familiar with the stethoscope around a doctor’s neck for listening to a patient’s heart and lungs. But this technique – auscultation – has almost completely disappeared from cardiology practice, as it has been replaced by echo-imaging done by machine.

How can machines help?

The technique of listening to the body is actually very difficult for humans to acquire without a lot of training, but machines are much better at it. Artificial intelligence technologies like machine learning can identify features or patterns in a sound that the human ear cannot. They can also “listen” to sounds that are beyond human hearing – microphones can pick up a lot of noise that our ears cannot. Ultrasound, for example, is already used a lot for diagnostics, but unlike an ultrasound scan, which relies on sound waves bouncing back to a receiver, we (on the EAR project) are simply listening to the sounds the body itself produces.
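To make the idea concrete, here is a minimal sketch of the kind of audio feature extraction such systems typically rely on, assuming the open-source librosa library. This is an illustration of the general technique, not the EAR project’s actual code:

```python
# Illustrative sketch: turn a recording into mel-frequency cepstral
# coefficients (MFCCs), compact spectral features that expose patterns
# a human listener could not reliably pick out by ear.
import librosa
import numpy as np

def extract_features(path: str) -> np.ndarray:
    """Summarise an audio file as a fixed-length feature vector."""
    signal, sr = librosa.load(path, sr=16_000)  # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    # Average over time so clips of different lengths compare directly.
    return mfcc.mean(axis=1)
```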

We are not the first people to try to automate listening to the body in this way. But the main problem is that there are not many large data sets available to train machine learning algorithms to do this effectively, so that is the kind of data our project is trying to collect.

Is this what led you to build the Covid-19 Sounds app?

Our project actually started in October, before the coronavirus outbreak. The first thing we were trying to do was look at cardiovascular sounds, but when the coronavirus started spreading, we decided to build an application that would gather data about it instead. We are hoping to use machine learning to identify certain characteristics that could be used to diagnose someone with a Covid-19 infection.

What information are you collecting?

We have a website (which went live at the start of April) and an Android app (launched a few weeks later) that people can download. We then ask them some basic medical questions, along with whether they have been tested and diagnosed with Covid-19. We also ask them if they have any symptoms. They then record themselves breathing, speaking and coughing.
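As a rough illustration of what one submission might contain — the field names here are assumptions for the sake of example, not the app’s real schema:

```python
# Hypothetical structure for one submission; field names are illustrative
# assumptions, not the Covid-19 Sounds app's real schema.
from dataclasses import dataclass, field

@dataclass
class SoundSample:
    participant_id: str                    # pseudonymous ID, never a name
    symptoms: list[str] = field(default_factory=list)
    tested_positive: bool | None = None    # None = not tested
    breathing_wav: str = ""                # paths to the three recordings
    speech_wav: str = ""
    cough_wav: str = ""
```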

What does Covid-19 sound like?

It is still too early for us to have a definite answer as we are just starting to collect the data, but some research papers indicate that the cough that comes with Covid-19 has some specific features – it has been described as a dry cough (with some specific distinguishing features that allow it to be identified). From speaking to doctors who are treating people (Covid-19 patients) in hospitals, we hear there may be some changes to their voice, their patterns of breathing or the way they catch their breath as they talk, as though they are exhausted. We are looking at all of these things by asking participants to record themselves breathing and reading sentences out loud.

Can your app help with the global response to coronavirus?

The machine learning (algorithm) will analyse the recordings we collect to see if it can spot anything different in the voice, cough and breathing of people who have coronavirus. If we do find something, then that could be used to create a diagnostic tool.
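One plausible shape for that analysis, reusing the extract_features() helper sketched earlier with a scikit-learn classifier. The file paths and labels below are placeholders, and this is a sketch of the general approach rather than the project’s published method:

```python
# Illustrative classifier sketch; a real study would need thousands of
# labelled recordings and careful validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

cough_files = ["cough_001.wav", "cough_002.wav"]  # hypothetical paths
labels = [1, 0]  # 1 = reported positive Covid-19 test, 0 = otherwise

X = np.stack([extract_features(f) for f in cough_files])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)  # with real data, cross-validation would be essential
```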

How might that work?

For cardiovascular disease we tested an iPhone held against the chest in different positions and found the microphone was sensitive enough to pick up the sound of the valves as the heart pumps blood (and so could be enough for respiratory diseases too). If the machine learning algorithm learns to distinguish Covid-19 patients from their cough, for example, it might also be possible for someone to record a cough or their voice on their phone. Then the algorithm can say whether the cough is like those we have classified as coming from people with the disease. There might need to be a second line of diagnostic test after that to confirm, but it could be a cheap and quick (screening) test.
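Continuing the sketch above, that screening step might look something like the following; the threshold and wording are assumptions, and the output would only ever route someone towards a confirmatory test:

```python
# The model's score only flags someone for a follow-up test; it is not
# a diagnosis on its own.
def screen(recording_path: str, clf, threshold: float = 0.5) -> str:
    features = extract_features(recording_path).reshape(1, -1)
    p_covid = clf.predict_proba(features)[0, 1]  # probability of a match
    if p_covid >= threshold:
        return "flagged: refer for a confirmatory Covid-19 test"
    return "not flagged by this screen"
```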


How many people have contributed so far?

We had 3,000 people in the first three days from the website alone, before the Android app went live, and that number is growing. But to be useful we are going to need tens of thousands of people taking part. So far, fewer than 50 people have said they have tested positive for coronavirus. We need more positives, so we are trying to get the app into hospitals to reach positive patients at triage sites.

What challenges have you faced?

We have had to be very clear with people that this is not a diagnostic tool at the moment. We are not giving them a result from the recordings they submit; it is just data collection, so we can analyse it and hopefully build something later. We are also having to be very careful with the data we collect, as it includes a lot of personal information and recordings of people’s voices. While we want to make the data public at some point in the future, we will have to ensure it is anonymised.
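One common anonymisation step — a general technique, not necessarily the project’s own pipeline — is to replace any direct identifier with a salted one-way hash before the data is shared:

```python
# General-purpose pseudonymisation sketch: identifiers are hashed with a
# secret salt so the public data set cannot be linked back to a person.
import hashlib

def pseudonymise(identifier: str, salt: bytes) -> str:
    """Map an identifier to a stable, irreversible pseudonym."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()[:16]

# Example: pseudonymise("user@example.com", b"project-secret-salt")
```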

The other major hurdle has been getting the apps published. Google restricted who could publish apps to do with Covid-19 to avoid misinformation, but we argued ours is helping in the global fight (and came from a reputable source), so they reviewed it and allowed it to go live. The app will also allow us to follow up by asking users every three days for updates, so we can see how their condition progresses. We are also hoping to develop an iOS app soon, but we were able to build the Android one more quickly.

Is there a danger your analysis will come too late for Covid-19?

It is definitely going to take some time, but there are some countries that are quite a bit behind. There is also talk of the virus coming in waves. We don’t know how successful lockdown will be and when it will end, so we are hoping our research will be useful for these later stages.

Will this research be useful after Covid-19?

There is a chance the data we are collecting now could also identify diagnostic sounds for other conditions, such as asthma. We don’t know yet. Our big vision, though, is for machine learning algorithms linked to wearable devices and smartphones to automate the diagnosis of disease through sound. Most of us might have a doctor listen to our body’s sounds periodically, but what happens if you have something that can listen to you continuously? It could be a new form of diagnostics. We just have to listen to our bodies more.

This post, “Q&A: The sound of your voice could help fight Covid-19”, was originally published on Horizon: the EU Research & Innovation magazine | European Commission.




Horizon brings you the latest news and features about thought-provoking science and innovative research projects funded by the EU.



