Executive order on safe, secure, and trustworthy artificial intelligence


by Lucy Smith
30 October 2023



President Biden today issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence”. A fact sheet from the White House states that the order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

Among the AI directives in the Executive Order are the following actions:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
  • Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques.
  • Evaluate how agencies collect and use commercially available information.
  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination.
  • Ensure fairness throughout the criminal justice system.
  • Advance the responsible use of AI in healthcare.
  • Shape AI’s potential to transform education.
  • Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers.
  • Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.
  • Expand bilateral, multilateral, and multi-stakeholder engagements to collaborate on AI.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges.
  • Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.

Find out more



Lucy Smith is Senior Managing Editor for AIhub.



