AIhub.org
 

AI Policy Matters – US national AI strategy

30 January 2020




By Larry Medsker

AI Policy Matters is a regular column in AI Matters featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan, an update of the report by the Select Committee on Artificial Intelligence of the National Science & Technology Council, was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was issued on February 11, 2019. The Computing Community Consortium (CCC) recently released the AI Roadmap, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI, How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:
“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation . . . to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.”

Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

  • Sustain and fund government AI research and development, which can help advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.
  • Create new employment opportunities and protect people’s welfare, given that AI has the potential to automate certain work activities.
  • Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.
  • Remove barriers and create a legal and policy environment that supports AI, so that the responsible development and use of AI is not inadvertently derailed.”

Work Transitions

AI and other automation technologies have great promise for benefitting society and enhancing productivity, but appropriate policies by companies and governments are needed to help manage workforce transitions and make them as smooth as possible. The McKinsey Global Institute report AI, automation, and the future of work: Ten things to solve for states that “There is work for everyone today and there will be work for everyone tomorrow, even in a future with automation. Yet that work will be different, requiring new skills, and a far greater adaptability of the workforce than we have seen. Training and retraining both mid-career workers and new generations for the coming challenges will be an imperative. Government, private-sector leaders, and innovators all need to work together to better coordinate public and private initiatives, including creating the right incentives to invest more in human capital. The future with automation and AI will be challenging, but a much richer one if we harness the technologies with aplomb and mitigate the negative effects.” They list likely actionable and scalable solutions in several key areas:

  1. Ensuring robust economic and productivity growth
  2. Fostering business dynamism
  3. Evolving education systems and learning for a changed workplace
  4. Investing in human capital
  5. Improving labor-market dynamism
  6. Redesigning work
  7. Rethinking incomes
  8. Rethinking transition support and safety nets for workers affected
  9. Investing in drivers of demand for work
  10. Embracing AI and automation safely

In redesigning work and rethinking incomes, we have the chance for bold ideas that improve the lives of workers and give them more interesting jobs that could provide meaning, purpose, and dignity. Some of the categories of new jobs that could replace old jobs are:

  1. Making, designing, and coding in AI, data science, and engineering occupations
  2. Working in new types of non-AI jobs that are enhanced by AI, which can make unpleasant old jobs more palatable or create new, more interesting ones; the gig economy and crowdsourcing are examples that could provide creative employment options
  3. Providing living wages for people to do things they love; for example, in the arts through dramatic funding increases for NEA and NEH programs. Grants to individual artists and musicians, professional and amateur musical organizations, and informal arts education initiatives could enrich communities while providing income for millions of people. Policies to implement this idea could be one piece of the future-of-work puzzle and would be far preferable for the economy and society to allowing large-scale unemployment due to loss of work from automation.

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order reestablishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.”

For more information, see this Inside Higher Ed article. Please join our discussions at the SIGAI Policy Blog.




Larry Medsker is Research Professor of Physics at The George Washington University.

AI Matters is the blog and newsletter of the ACM Special Interest Group on Artificial Intelligence.



