AI Policy Matters – US national AI strategy


30 January 2020




By Larry Medsker

AI Policy Matters is a regular column in AI Matters featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan, an update of the report by the Select Committee on Artificial Intelligence of the National Science & Technology Council, was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was signed on 11 February 2019. The Computing Community Consortium (CCC) recently released its AI Roadmap, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI: How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:
“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation . . . to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.”

Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

  • Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.
  • Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.
  • Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.
  • Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”

Work Transitions

AI and other automation technologies have great promise for benefitting society and enhancing productivity, but appropriate policies by companies and governments are needed to help manage workforce transitions and make them as smooth as possible. The McKinsey Global Institute report AI, automation, and the future of work: Ten things to solve for states that “There is work for everyone today and there will be work for everyone tomorrow, even in a future with automation. Yet that work will be different, requiring new skills, and a far greater adaptability of the workforce than we have seen. Training and retraining both mid-career workers and new generations for the coming challenges will be an imperative. Government, private-sector leaders, and innovators all need to work together to better coordinate public and private initiatives, including creating the right incentives to invest more in human capital. The future with automation and AI will be challenging, but a much richer one if we harness the technologies with aplomb and mitigate the negative effects.” They list likely actionable and scalable solutions in several key areas:

  1. Ensuring robust economic and productivity growth
  2. Fostering business dynamism
  3. Evolving education systems and learning for a changed workplace
  4. Investing in human capital
  5. Improving labor-market dynamism
  6. Redesigning work
  7. Rethinking incomes
  8. Rethinking transition support and safety nets for workers affected
  9. Investing in drivers of demand for work
  10. Embracing AI and automation safely

In redesigning work and rethinking incomes, we have the chance for bold ideas that improve the lives of workers and give them more interesting jobs that could provide meaning, purpose, and dignity. Some of the categories of new jobs that could replace old jobs are:

  1. Making, designing, and coding in AI, data science, and engineering occupations
  2. Working in new types of non-AI jobs that are enhanced by AI, making unpleasant old jobs more palatable or providing new jobs that are more interesting; the gig economy and crowdsourcing are examples that could provide creative employment options
  3. Providing living wages for people to do things they love; for example, in the arts through dramatic funding increases for NEA and NEH programs. Grants to individual artists and musicians, professional and amateur musical organizations, and informal arts education initiatives could enrich communities while providing income for millions of people. Policies to implement this idea could be one piece of the future-of-work puzzle and would be far preferable for the economy and society to allowing large-scale unemployment due to loss of work from automation.

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order reestablishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.”

For more information, see this Inside Higher Ed article. Please join our discussions at the SIGAI Policy Blog.




Larry Medsker is a Research Professor at the University of Vermont and a Research Affiliate at George Mason University.

AI Matters is the blog and newsletter of the ACM Special Interest Group on Artificial Intelligence.
