AIhub.org
 

Ethics in voice technologies

20 July 2021




By Álvaro Moreton and Ariadna Jaramillo

Ethical concerns related to the use of voice technologies

The ethical use of voice technologies, such as speech and voice recognition, is becoming more important every day. Devices such as smart speakers, smartphones and smartwatches collect massive amounts of data from users thanks to the wide range of activities they support (e.g., asking questions, setting reminders, checking bank accounts, accessing calendars, etc.). This data, as you might imagine, is often personal or private by nature. Companies offering services through these gadgets must now ensure not only legal but also ethical processing of users’ data.

This is not the only ethical concern. The average user does not fully understand how voice technologies work (e.g., what data they record, how their behaviour is audited, to what extent their responses are explainable, etc.), and some users tend to anthropomorphise voice assistants and reveal more details about their lives than necessary, unaware of the consequences this may pose.

We briefly describe below some of the main scenarios related to the use of voice technologies that could carry ethical implications.

  • Data ownership: In 2015, the American authorities requested access to all recordings from an Amazon Echo device to investigate the murder of a citizen named Victor Collins. The question is: who actually owns the data recorded by the device?
  • Societal biases: Machine learning models like those powering smart speakers and similar devices can learn societal biases from their training data. For instance, the study “Biased bots: Human prejudices sneak into AI systems” showed that in typical machine learning training data, African American names often appear alongside unpleasant words (e.g., “hatred”, “poverty”, “ugly”), while European American names are often paired with words such as “love”, “lucky” or “happy”.
  • Anthropomorphisation of voice technologies: As mentioned above, tech companies’ intention to give voice assistants human-like personalities leads users to anthropomorphise them and, thus, share an excessive amount of sensitive information. This phenomenon, which ascribes human attributes to machines, increases the risk of eroding human self-determination and leads users to overestimate the system’s capabilities.
  • Niche in sensitive fields: AI-based technologies like voice and speech recognition have found a niche in fields rich in personal data. The best example is probably personal health, where information such as a patient’s history of medical diagnoses, diseases and interventions, medication prescriptions, test results, behavioural patterns, and sexual life, to mention a few, is recorded on a daily basis for a variety of purposes. For instance, some ICT tools for medical practitioners support decision support systems that improve the individual capacity of medical professionals. The main issue with this type of system, at least in the medical field, is that decision-making becomes a spatially distributed process in which multiple actors (e.g., medical specialists, nurses, pharmacologists, etc.) converge and thus gain access to the data.
  • Absence of human intervention: Algorithms are sometimes used for delicate tasks such as determining how much an individual should pay for insurance or filtering candidates applying for a position. Although these tasks may be performed more efficiently with the help of AI, they still require strict human supervision. Unfortunately, some service providers skip this requirement. For instance, several speech recognition solutions on the market claim to successfully identify fraudulent call centre conversations and even criminals pretending to be customers. Without enough human intervention to verify that flagged callers are indeed criminals or scammers, these systems could end up wrongly labelling legitimate users as such.
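The kind of human supervision described in the last point can be sketched as a simple review gate: the model’s fraud score never produces a final label on its own, and any flagged call is escalated to a human reviewer. The function name, score scale and threshold below are hypothetical, purely for illustration; they do not correspond to any real product’s API.

```python
# Hypothetical human-in-the-loop gate for an automated fraud-flagging
# speech system. A positive flag is never final: it is routed to a
# human reviewer, and only clearly benign calls are closed automatically.

def triage_call(call_id: str, fraud_score: float, review_threshold: float = 0.5) -> dict:
    """Decide how to handle an automated fraud score for one call.

    fraud_score is assumed to lie in [0, 1]; review_threshold is an
    illustrative cut-off chosen for this sketch.
    """
    if fraud_score >= review_threshold:
        # Escalate: a human agent must confirm before any label is applied.
        return {"call_id": call_id, "action": "human_review", "score": fraud_score}
    # Low-risk call: close automatically, no fraud label is ever assigned.
    return {"call_id": call_id, "action": "auto_clear", "score": fraud_score}

decisions = [triage_call(cid, score) for cid, score in [("c1", 0.92), ("c2", 0.10)]]
```

The design choice here is that the automated system can only *clear* a caller on its own; labelling someone as fraudulent always requires a human decision, which is exactly the safeguard the scenario above says is often skipped.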

Initiatives to regulate ethics in voice technologies

Fortunately, initiatives aimed at regulating ethics in AI-based technologies (e.g., speech and voice recognition) exist at the European level. This is the case of the European Commission’s Ethics Guidelines for Trustworthy AI. According to these guidelines, for AI to be considered trustworthy, it must comply with the following principles:

  • Human action and supervision: AI should empower humans and enable them to make informed decisions while ensuring proper oversight mechanisms. This goal can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical security and robustness: AI systems should be resilient and secure to ensure the prevention or minimisation of unintentional harm.
  • Data and privacy management: Privacy, data protection, and adequate data management should be ensured, considering the quality and integrity of the data.
  • Transparency: AI systems and their decision-making mechanisms should be explainable (rather than black boxes) to the users concerned, who should be made aware of the systems’ capabilities and limitations.
  • Diversity and non-discrimination: AI systems should be accessible to everyone and avoid biases of any kind.
  • Social and environmental wellbeing: AI systems should benefit all human beings, be environmentally friendly, and consider their social impact.
  • Accountability: The responsibility and accountability of AI systems and their outcomes should be ensured through adequate mechanisms.

In the case of voice technologies, the principles above can be fulfilled, for instance, by designing the technologies behind voice assistants (Speech-to-Text, Natural Language Understanding, etc.) to preserve user privacy, to enable interactions in several languages and dialects (as is the case of COMPRISE), and to avoid making decisions that may affect the user.

Within this context, other European and national initiatives aim to regulate and study the ethics of using various technologies. These include the European Observatory on Society and Artificial Intelligence, a project created under the H2020 programme that offers tools to help people better understand the impact AI technologies have across the EU, and the French National Pilot Committee for Digital Ethics (CNPEN), which issued a consultation on “Ethical issues of conversational agents” in 2020.

Conclusions

There is great uncertainty as to what should be considered an ethical use of technologies, although some points are clear. No technology should produce negative consequences for its users, whether in terms of their privacy, the security of their data, or how they perceive themselves as individuals.

To a large extent, the task of achieving ethical use of voice technologies rests with the developers in charge of designing, training and integrating the models that enable voice devices to function. Of course, it is up to multiple bodies (e.g., the European Commission) to work on standards, guidelines, and good practices that set the minimum requirements developers and other stakeholders must meet to be considered as making ethical use of voice technologies.




COMPRISE is a European-funded Horizon 2020 project looking into the next generation of voice interaction services.




©2021 - Association for the Understanding of Artificial Intelligence