ACM statement on facial recognition technology


01 July 2020




The Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) released a statement on 30 June calling for “an immediate suspension of the current and future private and governmental use of FR [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights.”

In the document, the USTPC writes:

The Committee concludes that, when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society.

Such bias and its effects are scientifically and socially unacceptable.

The USTPC finds that, at present, facial recognition technology is not sufficiently mature and reliable to be used fairly and safely. Systems have been adopted by governments and industry before the necessary regulation and guiding principles have been put in place.

Therefore, the USTPC calls for the urgent development of standards and regulation, and provides a list of guiding principles in the document. These cover the areas of accuracy, transparency, governance, risk management and accountability. The recommendations include:

  • Before a facial recognition system is used to make or support decisions that can seriously adversely affect the human and legal rights of individuals, the magnitude and effects of such a system’s initial and dynamic biases and inaccuracies must be fully understood.
  • When error rates are reported, they must be disaggregated by sex, race, and other context-dependent demographic features, as appropriate (see the sketch after this list for an illustration).
  • A facial recognition system should be activated only after some form of meaningful advance public notice of the intention to deploy it has been given. Once activated, ongoing public notice that the system is in use should be provided at the point of use or online, as practicable and contextually appropriate. These notices should include a description of the training data and details of the algorithm.
  • No facial recognition system should be deployed prior to establishing appropriate policies governing its use and the management of data collected by the system.
  • No facial recognition system should be made available or deployed unless its relevant material risks to vulnerable populations, or to society as a whole, can be sufficiently eliminated or remediated.
  • When harm results from the use of such a system, the organization, institution, or agency responsible for its deployment must be fully accountable under law for all resulting external risks and harms.
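
To make the disaggregated-reporting recommendation more concrete, here is a minimal sketch in Python (using pandas, with entirely hypothetical data, column names, and groups) of what reporting error rates per demographic group, rather than a single pooled figure, might look like. It is an illustration only, not part of the ACM statement.

```python
import pandas as pd

# Hypothetical verification-trial results: one row per face comparison.
# "group" is a demographic label, "is_match" is the ground truth, and
# "predicted_match" is the system's decision. All names are illustrative.
trials = pd.DataFrame({
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
    "is_match":        [True, True, False, False, True, True, False, False],
    "predicted_match": [True, False, False, True, True, True, False, False],
})

def disaggregated_rates(g: pd.DataFrame) -> pd.Series:
    genuine = g[g["is_match"]]        # pairs that truly match
    impostor = g[~g["is_match"]]      # pairs that truly do not match
    return pd.Series({
        "false_non_match_rate": (~genuine["predicted_match"]).mean(),
        "false_match_rate": impostor["predicted_match"].mean(),
        "n_trials": len(g),
    })

# Report error rates separately for each demographic group rather than
# a single aggregate accuracy figure.
print(trials.groupby("group").apply(disaggregated_rates))
```

In a real evaluation these rates would come from a properly designed benchmark with many more trials per group; the point of the sketch is simply that a single aggregate error rate can hide large differences between groups.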

You can see the full list of recommendations and read the ACM USTPC statement in full here.



