The White House’s “AI Bill of Rights” outlines five principles to make artificial intelligence safer, more transparent and less discriminatory


03 November 2022

Image: a photographic rendering of a young Black man standing in front of a cloudy blue sky, seen through a refractive glass grid and overlaid with a diagram of a neural network. Credit: Alan Warburton / © BBC / Better Images of AI / Quantified Human / Licenced by CC-BY 4.0

By Christopher Dancy, Penn State

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this arena, leading to decisions and situations that have drawn criticism.

Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products used by organizations like the Los Angeles Police Department, where they have been shown to bolster existing racially biased policies.

There are some government recommendations and guidance regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The blueprint spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of United States residents.

As a computer scientist who studies the ways people interact with AI systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even though it has some holes and is not enforceable.

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks companies to develop AI systems that do not treat people differently based on their race, sex or other protected class status. It suggests companies employ tools such as equity assessments to gauge how an AI system may affect members of exploited and marginalized communities.
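
To make the idea of an equity assessment concrete, here is a minimal sketch of one narrow check an auditor might run: comparing a system’s approval rates across groups. The data, column names and interpretation are hypothetical, and a real equity assessment would be far broader than any single metric.

```python
# A minimal sketch of one simple equity check: comparing per-group
# approval rates against the overall rate in a log of a system's
# decisions. The data and column names below are hypothetical.
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame,
                      group_col: str = "group",
                      outcome_col: str = "approved") -> pd.Series:
    """Return each group's approval rate minus the overall approval rate."""
    overall = decisions[outcome_col].mean()
    by_group = decisions.groupby(group_col)[outcome_col].mean()
    return by_group - overall

# Hypothetical audit data: one row per decision made by the system.
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gaps = approval_rate_gap(audit)
print(gaps)  # a gap well below zero for one group is a flag for review
```

A single number like this cannot establish or rule out discrimination; in practice it would be one input among many, alongside qualitative review and input from the affected communities the first principle calls for.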

These first two principles address big issues of bias and fairness found in AI development and use.

Privacy, transparency and control

The final three principles outline ways to give people more control when interacting with AI systems.

The third principle is on data privacy. It seeks to ensure that people have more say about how their data is used and are protected from abusive data practices. This section aims to address situations where, for example, companies use deceptive design to manipulate users into giving away their data. The blueprint calls for practices like collecting a person’s data only with their consent, and asking for that consent in a way that is understandable to that person.
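
As a rough illustration of consent-first data handling, here is a minimal sketch in which data is stored only after a plain-language request is accepted. The notice text and function names are hypothetical, not drawn from the blueprint.

```python
# A minimal sketch of consent-gated data collection in the spirit of the
# data privacy principle: nothing is stored unless the person explicitly
# agrees to a plain-language request. All names here are hypothetical.
PLAIN_LANGUAGE_NOTICE = (
    "We would like to save your email address to send you appointment "
    "reminders. We will not use it for anything else. Is that OK? (yes/no) "
)

def collect_email(store: dict, user_id: str) -> None:
    answer = input(PLAIN_LANGUAGE_NOTICE).strip().lower()
    if answer != "yes":
        return  # no consent, so nothing is collected or stored
    store[user_id] = input("Email address: ").strip()
```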

The next principle focuses on “notice and explanation.” It highlights the importance of transparency – people should know how an AI system is being used, as well as the ways in which an AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment – systems that most people don’t realize are in use, even when they themselves are under investigation.

Under the AI Bill of Rights, the people in this example who are affected by these AI systems should be notified that an AI was involved and should have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.
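
To illustrate what “notice and explanation” could look like in practice, here is a minimal sketch that pairs a notice with the factors that most influenced a simple linear model’s score. The feature names and weights are hypothetical, and real systems would need far richer explanations than this.

```python
# A minimal sketch of a "notice and explanation" output for a linear
# model: a plain statement that an automated system was used, plus the
# features that most influenced this particular decision. The feature
# names and model weights here are hypothetical.
import numpy as np

FEATURES = ["income", "debt_ratio", "years_at_address"]  # hypothetical
WEIGHTS = np.array([0.8, -1.5, 0.3])                     # hypothetical model

def explain_decision(x: np.ndarray) -> str:
    contributions = WEIGHTS * x
    order = np.argsort(-np.abs(contributions))  # most influential first
    lines = ["Notice: an automated system contributed to this decision.",
             "Main factors, in order of influence:"]
    for i in order:
        direction = "raised" if contributions[i] > 0 else "lowered"
        lines.append(f"  - {FEATURES[i]} {direction} the score "
                     f"(contribution {contributions[i]:+.2f})")
    return "\n".join(lines)

print(explain_decision(np.array([0.6, 0.9, 0.2])))
```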

The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and fallback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to consider their application and would have the option of opting out of that AI use in favor of an actual person.
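
As a sketch of how such an opt-out might be wired into a decision pipeline, the hypothetical routing function below sends a case to human review whenever the applicant opts out, or when the model is not confident. The names and threshold are assumptions, not anything specified in the blueprint.

```python
# A minimal sketch of a human-alternative fallback in the spirit of the
# fifth principle: opting out (or low model confidence) routes the case
# to a person. The names and threshold here are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    opted_out_of_ai: bool

def route(app: Application, model_confidence: float,
          threshold: float = 0.9) -> str:
    if app.opted_out_of_ai or model_confidence < threshold:
        return "human_review"
    return "automated_decision"  # still subject to notice and explanation

print(route(Application("a-123", opted_out_of_ai=True), model_confidence=0.95))
# -> human_review
```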

Smart guidelines, no enforceability

The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised over the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable.

It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will continue to push for self-regulation.

One other issue that I see with the AI Bill of Rights is that it fails to directly call out systems of oppression – like racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, its lack of focus on systems of oppression is a notable hole and a known issue within AI development.

Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.

Christopher Dancy, Associate Professor of Industrial & Manufacturing Engineering and Computer Science & Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.

 
