AIhub.org
 

Ethics in voice technologies


20 July 2021




By Álvaro Moreton and Ariadna Jaramillo

Ethical concerns related to the use of voice technologies

The ethical use of voice technologies, such as speech and voice recognition, is becoming more important every day. Devices such as smart speakers, smartphones and smartwatches collect massive amounts of data from users thanks to the wide range of activities they support (e.g., asking questions, setting reminders, checking bank accounts, accessing calendars, etc.). This data, as you might imagine, is often personal or private in nature. Companies offering services through these devices now have to ensure not only legal but also ethical processing of users’ data.

The above issue is not the only ethical concern. The average user does not have a full understanding of how voice technologies work (e.g., what data they record, how their behaviour is audited, to what extent their responses are explainable, etc.), and some users tend to anthropomorphise voice assistants and reveal more details about their lives than necessary, without being aware of the consequences this may have.

We briefly describe below some of the main scenarios related to the use of voice technologies that could carry ethical implications.

  • Data ownership: In 2015, US authorities requested access to all recordings from an Amazon Echo device to investigate the murder of a citizen called Victor Collins. The question is: who is the actual owner of the data recorded by the device?
  • Societal biases: Machine learning models like those in smart speakers and similar devices can absorb societal biases from their training data. For instance, the study “Biased bots: Human prejudices sneak into AI systems” showed that, in typical training data used for machine learning, African American names often appear alongside unpleasant words (e.g., “hatred”, “poverty”, “ugly”), while European American names are often paired with words such as “love”, “lucky” or “happy”.
  • Anthropomorphisation of voice technologies: As mentioned above, the intention of tech companies to provide voice assistants with human-like personalities leads users to anthropomorphise them and, thus, share an excessive amount of sensitive information. This phenomenon, which ascribes human attributes to machines, increases the risk of eroding human self-determination and leads users to overestimate the system’s capabilities.
  • Niche in sensitive fields: AI-based technologies like voice and speech recognition have found a niche in fields rich in personal data. The best example is probably personal health, where information such as a patient’s history of medical diagnoses, diseases and interventions, medication prescriptions, test results, behavioural patterns, and sexual life, to mention a few, is recorded on a daily basis for a variety of purposes. For instance, some ICT tools for medical practitioners support decision support systems that improve the individual capacity of medical professionals. The main issue with this type of system, at least in the medical field, is that decision-making becomes a spatially distributed process, where multiple actors (e.g., medical specialists, nurses, pharmacologists, etc.) converge and thus gain access to the data.
  • Absence of human intervention: Algorithms are sometimes used for delicate tasks such as determining how much an individual should pay for insurance or filtering candidates applying for a position. Although these tasks may be performed more efficiently with the help of AI, they still require strict human supervision. Unfortunately, some service providers skip this requirement. For instance, several speech recognition solutions on the market claim to successfully identify fraudulent call centre conversations and even criminals pretending to be customers. Without enough human intervention to verify these claims, such systems could end up wrongly labelling legitimate users as criminals or scammers.
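The kind of skew described in the societal-biases point above can be made concrete with a toy measurement: count how often each name co-occurs in the same sentence with pleasant versus unpleasant words. The mini-corpus, names, and word lists below are all invented for illustration; this is not the method used in the study cited above, just a minimal sketch of the idea.

```python
# Toy illustration (all data invented) of measuring association bias in
# training text: for a given name, count co-occurrences in the same
# sentence with pleasant vs. unpleasant words.

PLEASANT = {"love", "lucky", "happy"}
UNPLEASANT = {"hatred", "poverty", "ugly"}

def association_score(corpus, name):
    """Return (pleasant_count, unpleasant_count) for `name` over `corpus`."""
    pleasant = unpleasant = 0
    for sentence in corpus:
        words = set(sentence.lower().split())
        if name.lower() in words:
            pleasant += len(words & PLEASANT)
            unpleasant += len(words & UNPLEASANT)
    return pleasant, unpleasant

# Invented mini-corpus exhibiting the kind of skew the study describes.
corpus = [
    "Emily was happy and lucky today",
    "Everyone showed love to Emily",
    "The report linked Jamal to poverty",
    "They spread hatred about Jamal",
]

print(association_score(corpus, "Emily"))  # (3, 0)
print(association_score(corpus, "Jamal"))  # (0, 2)
```

A model trained on such text inherits the imbalance: the statistics of the corpus, not any explicit rule, encode the prejudice.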

Initiatives to regulate ethics in voice technologies

Fortunately, initiatives aimed at regulating ethics in AI-based technologies (e.g., speech and voice recognition) exist at the European level. This is the case of the European Commission’s Ethics Guidelines for Trustworthy AI. According to these guidelines, for AI to be considered trustworthy, it must comply with the following principles:

  • Human action and supervision: AI should empower humans and enable them to make informed decisions while ensuring proper oversight mechanisms. This goal can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical security and robustness: AI systems should be resilient and secure to ensure the prevention or minimisation of unintentional harm.
  • Data and privacy management: Privacy, data protection, and adequate data management should be ensured, considering the quality and integrity of the data.
  • Transparency: AI systems and their decision-making processes should be explainable to the users concerned rather than opaque black boxes, keeping in mind the systems’ capabilities and limitations.
  • Diversity and non-discrimination: AI systems should be accessible to everyone and avoid biases of any kind.
  • Social and environmental wellbeing: AI systems should benefit all human beings, be environmentally friendly, and consider their social impact.
  • Accountability: The responsibility and accountability of AI systems and their outcomes should be ensured through adequate mechanisms.
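One way to read the human action and supervision principle, applied to the fraud-detection scenario described earlier, is that a model score should trigger review rather than action. The function name and threshold below are invented for illustration; this is a minimal human-in-the-loop sketch, not any vendor’s actual pipeline.

```python
# Hypothetical human-in-the-loop routing for a call-centre fraud detector:
# the model's score alone never blocks a caller; suspicious calls are sent
# to a human reviewer who confirms before any action is taken.

REVIEW_THRESHOLD = 0.5  # invented value; in practice tuned and audited

def route_call(fraud_score):
    """Route a call given a model's fraud score in [0, 1]."""
    if fraud_score > REVIEW_THRESHOLD:
        return "human_review"  # a person decides, not the algorithm
    return "allow"

print(route_call(0.9))  # human_review
print(route_call(0.1))  # allow
```

The design point is that the system’s most consequential outcome, labelling someone a criminal or scammer, is never reachable without a human in the loop.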

In the case of voice technologies, the principles above can be fulfilled, for instance, by designing the technologies behind voice assistants (Speech-to-Text, Natural Language Understanding, etc.) to preserve user privacy, to support interaction in several languages and dialects (as COMPRISE does), and to avoid making decisions that may affect the user.
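One privacy-preserving pattern for voice pipelines is to redact obvious personal data from a transcript locally, before any text leaves the device. The sketch below is invented for illustration (it is not COMPRISE’s actual pipeline, and the regex patterns are simplistic placeholders), but it shows the privacy-by-design idea in miniature.

```python
import re

# Toy on-device redaction step for a voice assistant: replace obvious
# personal identifiers in the transcript with placeholder labels before
# the text is sent to any cloud service. Patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"),
}

def redact(transcript):
    """Return the transcript with matched personal data replaced by labels."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"<{label}>", transcript)
    return transcript

print(redact("Call me on 555-123-4567 or mail jane@example.com"))
# Call me on <PHONE> or mail <EMAIL>
```

Real systems need far more robust entity detection than regexes, but the architectural choice, redacting before transmission rather than after, is what the privacy principle asks for.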

Within this context, other European and national initiatives aim to regulate and study the ethical use of various technologies. These include the European Observatory on Society and Artificial Intelligence, a project created under the H2020 programme that offers tools to help people better understand the impact AI technologies have across the EU, and the French National Pilot Committee for Digital Ethics (CNPEN), which issued a consultation on “Ethical issues of conversational agents” in 2020.

Conclusions

There is great uncertainty as to what should be considered an ethical use of technologies, although some points are clear. No technology should have negative consequences for its users, whether in terms of their privacy, the security of their data, or how they perceive themselves as individuals.

To a large extent, the task of achieving ethical use of voice technologies rests with the developers in charge of designing, training and integrating the models that enable voice devices to function. Of course, it is up to multiple bodies (e.g., the European Commission) to work on standards, guidelines, and good practices that set the minimum requirements developers and other stakeholders must meet to be considered as making ethical use of voice technologies.




COMPRISE is a European-funded Horizon 2020 project looking into the next generation of voice interaction services.



