ACM statement on facial recognition technology


01 July 2020




The Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) released a statement on 30 June calling for “an immediate suspension of the current and future private and governmental use of FR [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights.”

In the document, the ACM writes:

The Committee concludes that, when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society.

Such bias and its effects are scientifically and socially unacceptable.

The USTPC finds that, at present, facial recognition technology is not sufficiently mature and reliable to be used fairly and safely. Systems have been adopted by governments and industry before the necessary regulation and guiding principles have been put in place.

Therefore, the USTPC calls for the urgent development of standards and regulation, and provides a list of guiding principles in the document. These cover the areas of accuracy, transparency, governance, risk management and accountability. Its recommendations include:

  • Before a facial recognition system is used to make or support decisions that can seriously and adversely affect the human and legal rights of individuals, the magnitude and effects of such a system’s initial and dynamic biases and inaccuracies must be fully understood.
  • When error rates are reported, they must be disaggregated by sex, race, and other context-dependent demographic features, as appropriate (a minimal sketch of such disaggregation is included after this list).
  • A facial recognition system should be activated only after some form of meaningful advance public notice of the intention to deploy it is provided and, once activated, ongoing public notice that it is in use should be provided at the point of use or online, as practicable and contextually appropriate. These notices should contain a description of the training data and details about the algorithm.
  • No facial recognition system should be deployed prior to establishing appropriate policies governing its use and the management of data collected by the system.
  • No facial recognition system should be made available or deployed unless its relevant material risks to vulnerable populations, or to society as a whole, can be sufficiently eliminated or remediated.
  • When harm results from the use of such systems, the organization, institution, or agency responsible for their deployment must be fully accountable under law for all resulting external risks and harms.
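
To illustrate the disaggregation recommendation above, here is a minimal sketch in Python (using pandas, with entirely hypothetical column names such as "group" and "match_correct") of how an error rate could be reported per demographic group rather than as a single aggregate figure. It is an illustration of the principle only, not code from the ACM statement.

import pandas as pd

# Hypothetical evaluation results: one row per face-matching attempt.
# "group" and "match_correct" are illustrative names, not part of the ACM statement.
results = pd.DataFrame({
    "group":         ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "match_correct": [True, True, False, True, False, False, True, True, True],
})

# A single overall error rate can mask large differences between groups.
overall_error = 1.0 - results["match_correct"].mean()

# Disaggregated error rates: one figure per demographic group, as recommended.
per_group_error = (
    results.groupby("group")["match_correct"]
    .apply(lambda s: 1.0 - s.mean())
    .rename("error_rate")
)

print(f"Overall error rate: {overall_error:.2f}")
print(per_group_error)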

You can see the full list of recommendations and read the ACM USTPC statement in full here.




AIhub is dedicated to free high-quality information about AI.