AIhub.org
 

Context-aware sentence retrieval method reduces ‘communication gap’ for nonverbal people


23 June 2020




Researchers have used artificial intelligence to reduce the ‘communication gap’ for nonverbal people with motor disabilities who rely on computers to converse with others.

The team, from the University of Cambridge and the University of Dundee, developed a new context-aware method that reduces this communication gap by eliminating between 50% and 96% of the keystrokes the person has to type to communicate.

“This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future”
– Per Ola Kristensson

The system is specifically tailored for nonverbal people and uses a range of context ‘clues’ – such as the user’s location, the time of day or the identity of the user’s speaking partner – to assist in suggesting sentences that are the most relevant for the user.

Nonverbal people with motor disabilities often use a computer with speech output to communicate with others. However, even without a physical disability that affects the typing process, these communication aids are too slow and error-prone for meaningful conversation: typical typing rates are between five and 20 words per minute, while a typical speaking rate is in the range of 100 to 140 words per minute.

“This difference in communication rates is referred to as the communication gap,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, the study’s lead author. “The gap is typically between 80 and 135 words per minute and affects the quality of everyday interactions for people who rely on computers to communicate.”

The method developed by Kristensson and his colleagues uses artificial intelligence to allow a user to quickly retrieve sentences they have typed in the past. Prior research has shown that people who rely on speech synthesis, just like everyone else, tend to reuse many of the same phrases and sentences in everyday conversation. However, retrieving these phrases and sentences is a time-consuming process for users of existing speech synthesis technologies, further slowing down the flow of conversation.

In the new system, as the person is typing, the system uses information retrieval algorithms to automatically retrieve the most relevant previously typed sentences, based on the text entered so far and the context of the conversation. Context includes information such as the location, the time of day, and the identity of the speaking partner, who is recognised automatically by a computer vision algorithm trained to detect human faces from a front-mounted camera.
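The retrieval step can be illustrated with a minimal sketch. The sentence history, context tags, and overlap scoring below are illustrative assumptions, not the authors' implementation: stored sentences are ranked by how many terms they share with the typed text plus the currently active context tags.

```python
from collections import Counter

# Hypothetical history: previously typed sentences, each stored with the
# context tags that were active when it was spoken.
history = [
    ("could I have a coffee please", {"cafe", "morning"}),
    ("the meeting starts at ten", {"office", "morning"}),
    ("could you open the window", {"home", "afternoon"}),
]

def retrieve(typed_prefix, context_tags, top_k=2):
    """Rank stored sentences by term overlap with the typed words plus context tags."""
    query = Counter(typed_prefix.lower().split()) + Counter(context_tags)
    scored = []
    for sentence, tags in history:
        doc = Counter(sentence.split()) + Counter(tags)
        score = sum((query & doc).values())  # size of the multiset intersection
        scored.append((score, sentence))
    scored.sort(key=lambda s: -s[0])
    return [sentence for score, sentence in scored[:top_k] if score > 0]

# Typing just "could" in a cafe in the morning already ranks the coffee
# request first, because the context tags contribute to the match.
print(retrieve("could", {"cafe", "morning"}))
```

A production system would use weighted ranking (e.g. TF-IDF) rather than raw overlap, but the principle is the same: context tags act as extra query terms.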

The system was developed using design engineering methods typically used for jet engines or medical devices. The researchers first identified the critical functions of the system, such as the word auto-complete function and the sentence retrieval function. After these functions had been identified, the researchers simulated a nonverbal person typing a large set of sentences from a sentence set representative of the type of text a nonverbal person would like to communicate.

This analysis allowed the researchers to understand the best method for retrieving sentences and the impact of a range of parameters on performance, such as the accuracy of word auto-complete and the number of context tags used. For example, the analysis revealed that only two reasonably accurate context tags are required to provide the majority of the gain. Word auto-complete makes a positive contribution but is not essential for realising most of the gain. The sentences are retrieved using information retrieval algorithms, similar to those used in web search: context tags are appended to the words the user types to form a query.
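The kind of keystroke-savings analysis described above can be sketched as a simple simulation. The sentence set and the selection model here (one extra keystroke to accept a suggestion) are hypothetical, not the paper's protocol: type the target one character at a time and count keystrokes until it becomes the unique retrievable match.

```python
def keystrokes_needed(target, candidates):
    """Characters typed until `target` is the unique prefix match in `candidates`."""
    for n in range(1, len(target) + 1):
        prefix = target[:n]
        matches = [c for c in candidates if c.startswith(prefix)]
        if matches == [target]:
            return n + 1  # +1 keystroke to accept the suggestion
    return len(target)  # never uniquely retrieved: type the whole sentence

# Hypothetical sentence set standing in for a user's stored phrases.
sentences = ["could I have a coffee please",
             "could you open the window",
             "the meeting starts at ten"]

for s in sentences:
    saved = 1 - keystrokes_needed(s, sentences) / len(s)
    print(f"{saved:.0%} of keystrokes saved for: {s}")
```

Averaging such savings over a representative sentence set, while varying parameters like auto-complete accuracy and the number of context tags, is how a simulation of this kind can estimate the 50% to 96% keystroke reduction reported in the study.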

The study is the first to integrate context-aware information retrieval with speech-generating devices for people with motor disabilities, demonstrating how context-sensitive artificial intelligence can improve their lives.

“This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future,” said Kristensson. “We’ve shown it’s possible to reduce the opportunity cost of not doing innovative research with AI-infused user interfaces that challenge traditional user interface design mantra and processes.”

The research paper was published at CHI (Conference on Human Factors in Computing Systems) 2020.

The research was funded by the Engineering and Physical Sciences Research Council.

Read the research article in full:

Kristensson, P.O., Lilley, J., Black, R. and Waller, A. A design engineering approach for quantitatively exploring context-aware sentence retrieval for nonspeaking individuals with motor disabilities.
Proceedings of the 38th ACM Conference on Human Factors in Computing Systems (CHI 2020).

This article originally appeared on the Cambridge University website and is reproduced here under a CC BY 4.0 license. Image credit: Volodymyr Hryshchenko.











©2025.05 - Association for the Understanding of Artificial Intelligence


 











