AIhub.org
ACM statement on facial recognition technology


01 July 2020




The Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) released a statement on 30 June calling for “an immediate suspension of the current and future private and governmental use of FR [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights.”

In the statement, the USTPC writes:

The Committee concludes that, when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society.

Such bias and its effects are scientifically and socially unacceptable.

The USTPC finds that, at present, facial recognition technology is not sufficiently mature or reliable to be used fairly and safely. Governments and industry have adopted these systems before the necessary regulation and guiding principles were put in place.

The USTPC therefore calls for the urgent development of standards and regulation, and provides a list of guiding principles in the document. These cover accuracy, transparency, governance, risk management and accountability. The recommendations include:

  • Before a facial recognition system is used to make or support decisions that can seriously adversely affect the human and legal rights of individuals, the magnitude and effects of such system’s initial and dynamic biases and inaccuracies must be fully understood.
  • When error rates are reported, they must be disaggregated by sex, race, and other context-dependent demographic features, as appropriate.
  • A facial recognition system should be activated only after some form of meaningful advance public notice of the intention to deploy it is provided and, once activated, ongoing public notice that it is in use should be provided at the point of use or online, as practicable and contextually appropriate. These notices should contain a description of the training data and details about the algorithm.
  • No facial recognition system should be deployed prior to establishing appropriate policies governing its use and the management of data collected by the system.
  • No facial recognition system should be made available or deployed unless its relevant material risks to vulnerable populations, or to society as a whole, can be sufficiently eliminated or remediated.
  • When harm results from the use of such systems, the organization, institution, or agency responsible for its deployment must be fully accountable under law for all resulting external risks and harms.

You can see the full list of recommendations and read the ACM USTPC statement in full here.




AIhub is dedicated to free high-quality information about AI.

©2026.02 - Association for the Understanding of Artificial Intelligence