
AI Policy Matters – US national AI strategy

30 January 2020




By Larry Medsker

AI Policy Matters is a regular column in AI Matters featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan, an update of the report by the Select Committee on Artificial Intelligence of the National Science and Technology Council, was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was issued on February 11, 2019. The Computing Community Consortium (CCC) recently released the AI Roadmap, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI, How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:
“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation . . . to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.”

Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

  • Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.
  • Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.
  • Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.
  • Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”

Work Transitions

AI and other automation technologies have great promise for benefiting society and enhancing productivity, but appropriate policies by companies and governments are needed to help manage workforce transitions and make them as smooth as possible. The McKinsey Global Institute report “AI, automation, and the future of work: Ten things to solve for” states that “There is work for everyone today and there will be work for everyone tomorrow, even in a future with automation. Yet that work will be different, requiring new skills, and a far greater adaptability of the workforce than we have seen. Training and retraining both mid-career workers and new generations for the coming challenges will be an imperative. Government, private-sector leaders, and innovators all need to work together to better coordinate public and private initiatives, including creating the right incentives to invest more in human capital. The future with automation and AI will be challenging, but a much richer one if we harness the technologies with aplomb and mitigate the negative effects.” They list likely actionable and scalable solutions in several key areas:

  1. Ensuring robust economic and productivity growth
  2. Fostering business dynamism
  3. Evolving education systems and learning for a changed workplace
  4. Investing in human capital
  5. Improving labor-market dynamism
  6. Redesigning work
  7. Rethinking incomes
  8. Rethinking transition support and safety nets for workers affected
  9. Investing in drivers of demand for work
  10. Embracing AI and automation safely

In redesigning work and rethinking incomes, we have the chance for bold ideas that improve the lives of workers and give them more interesting jobs that could provide meaning, purpose, and dignity. Some of the categories of new jobs that could replace old jobs are:

  1. Making, designing, and coding in AI, data science, and engineering occupations
  2. Working in new types of non-AI jobs that are enhanced by AI, making unpleasant old jobs more palatable or providing new jobs that are more interesting; the gig economy and crowdsourcing ideas are examples that could provide creative employment options
  3. Providing living wages for people to do things they love; for example, in the arts through dramatic funding increases for NEA and NEH programs. Grants to individual artists and musicians, professional and amateur musical organizations, and informal arts education initiatives could enrich communities while providing income for millions of people. Policies to implement this idea could be one piece of the future-of-work puzzle and would be far preferable for the economy and society to allowing large-scale unemployment due to the loss of work from automation.

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order reestablishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.”

For more information, see this Inside Higher Education article. Please join our discussions at the SIGAI Policy Blog.




Larry Medsker is Research Professor of Physics at The George Washington University.

AI Matters is the blog and newsletter of the ACM Special Interest Group on Artificial Intelligence.



