The White House’s “AI Bill of Rights” outlines five principles to make artificial intelligence safer, more transparent and less discriminatory

03 November 2022




Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / Licenced by CC-BY 4.0

By Christopher Dancy, Penn State

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this arena, an arrangement that has produced decisions and situations drawing widespread criticism.

Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products that are used by organizations like the Los Angeles Police Department where they have been shown to bolster existing racially biased policies.

There are some government recommendations and guidance regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The blueprint spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of United States residents.

As a computer scientist who studies the ways people interact with AI systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even though it has some holes and is not enforceable.

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks for companies to develop AI systems that do not treat people differently based on their race, sex or other protected class status. It suggests companies employ tools such as equity assessments that can help assess how an AI system may impact members of exploited and marginalized communities.

These first two principles address big issues of bias and fairness found in AI development and use.

Privacy, transparency and control

The final three principles outline ways to give people more control when interacting with AI systems.

The third principle is on data privacy. It seeks to ensure that people have more say about how their data is used and are protected from abusive data practices. This section aims to address situations where, for example, companies use deceptive design to manipulate users into giving away their data. The blueprint calls for practices like not taking a person’s data unless they consent to it and asking in a way that is understandable to that person.

The next principle focuses on “notice and explanation.” It highlights the importance of transparency – people should know how an AI system is being used as well as the ways in which an AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment, systems that most people don’t realize are being used, even when they are being investigated.

Under the AI Bill of Rights, people affected by AI systems – like those in the New York example – should be notified that an AI was involved and should have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and feedback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to consider their application and would have the option of opting out of that AI use in favor of an actual person.

Smart guidelines, no enforceability

The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised over the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable.

It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will continue to push for self-regulation.

One other issue that I see within the AI Bill of Rights is that it fails to directly call out systems of oppression – like racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable hole and a known issue within AI development.

Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.

Christopher Dancy, Associate Professor of Industrial & Manufacturing Engineering and Computer Science & Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.





