Face recognition (FR) research has made great progress in recent years and has been prominent in the news. In public policy, many are calling for a reversal of the trajectory of FR systems and products. In the hands of people of good will, using products designed for safety and systems trained with appropriate data, FR benefits society and individuals. The Verge reports, for example, on the use in China of pandas' unique facial markings to identify individual animals. FR research also includes work to mitigate negative outcomes, such as the Adobe and UC Berkeley work on Detecting Facial Manipulations in Adobe Photoshop, which automatically detects facial images that have been manipulated by splicing, cloning, and removing objects.
Intentional and unintentional application of systems that are not designed and trained for ethical use is a threat to society. Screening for terrorists could be beneficial, but FR lie- and fraud-detection systems sometimes do not work properly. The safety of FR is currently an important issue for policymakers, although regulations could have negative consequences for AI researchers. As with many contemporary issues, conflicts arise from conflicting policies in different countries. Recent and pending legislation is attempting to restrict FR use and could inhibit FR research; for example,
• San Francisco, CA; Somerville, MA; and Oakland, CA are the first three cities to limit the use of FR to identify people.
• In “Facial recognition may be banned from public housing thanks to proposed law,” CNET reports that a bill will be introduced to address the issue that “landlords across the country continue to install smart home technology and tenants worry about unchecked surveillance.”
• A call for a more comprehensive ban on FR has been launched by the digital rights group Fight for the Future, seeking a complete Federal ban on government use of facial recognition surveillance.
Beyond legislation against FR research and banning certain products, work is in progress to enable safe and ethical use of FR. A more general example that could be applied to FR is the MITRE work The Ethical Framework for the Use of Consumer-Generated Data in Health Care, which “establishes ethical values, principles, and guidelines.”
With AI in the news so much over the past year, public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. The popular media, and even the technical media, do contain misinformation and misplaced fears, but plenty of legitimate issues exist even if their relative importance is sometimes misunderstood. Policymakers, researchers, and developers need to be in dialog about the true needs for, and potential dangers of, regulation. From our policy perspective, the significant risks from AI systems include misuse and faulty, unsafe designs that can create bias, non-transparency of use, and loss of privacy. Some AI systems are known to discriminate against minorities, both unintentionally and intentionally.
An important discussion we should be having is whether governments, international organizations, and big corporations, which have already released dozens of non-binding guidelines for the responsible development and use of AI, are the best entities for writing and enforcing regulations. Non-binding principles will not hold companies developing and applying AI products accountable. An important point in this regard is to hold companies responsible for the product design process itself, not just for testing products after they are in use.
Introducing new government regulations is a long process subject to pressure from lobbyists, and the current US administration is generally inclined against regulation anyway. We should discuss alternatives such as clearinghouses and consumer groups that endorse AI products designed for safety and ethical use. If well publicized, the endorsements of respected non-partisan groups, including professional societies, might be more effective and timely than government regulations.
The European Union has released its Ethics Guidelines for Trustworthy AI, and a second document with recommendations on how to boost investment in Europe’s AI industry is to be published. In May 2019, the Organization for Economic Cooperation and Development (OECD) issued its first set of international OECD Principles on Artificial Intelligence, which have been embraced by the United States and leading AI companies.
The AI Race
China, the European Union, and the United States have been in the news about strategic plans and policies on the future of AI. The U.S. National Artificial Intelligence Research and Development Strategic Plan was released in June 2019 as an update of the report by the Select Committee on Artificial Intelligence of the National Science and Technology Council. The Computing Community Consortium (CCC) recently released the AI Roadmap Website. Now, the Center for Data Innovation has issued a report comparing the current standings of China, the European Union, and the United States. Here is a summary of their policy recommendations: “Many nations are racing to achieve a global innovation advantage in artificial intelligence (AI) because they understand that AI is a foundational technology that can boost competitiveness, increase productivity, protect national security, and help solve societal challenges. This report compares China, the European Union, and the United States in terms of their relative standing in the AI economy by examining six categories of metrics: talent, research, development, adoption, data, and hardware. It finds that despite the bold AI initiatives in China, the United States still leads in absolute terms. China comes in second, and the European Union lags further behind. This order could change in coming years as China appears to be making more rapid progress than either the United States or the European Union. Nonetheless, when controlling for the size of the labor force in the three regions, the current U.S. lead becomes even larger, while China drops to third place, behind the European Union. This report also offers a range of policy recommendations to help each nation or region improve its AI capabilities.”
US and G20 AI Policy
Ministers from the Group of 20 (G20) major economies conducted meetings on trade and the digital economy. They produced guiding principles for the use of artificial intelligence, based on principles adopted earlier by the 36-member OECD and an additional six countries. The G20 guidelines call for users and developers of AI to be fair and accountable, to have transparent decision-making processes, and to respect the rule of law and values including privacy, equality, diversity, and internationally recognized labor rights. The principles also urge governments to ensure a fair transition for workers through training programs and access to new job opportunities.
Bipartisan Legislators On Deepfake Videos
Senators introduced legislation intended to lessen the threat posed by “deepfake” videos, which use AI technologies to manipulate original videos and produce misleading information. Under this legislation, the Department of Homeland Security would be required to conduct an annual study of deepfakes and related content and to assess the AI technologies used to create them. This could lead to changes in existing regulations or to new regulations impacting the use of AI.
Hearing on Societal and Ethical Impacts
The House Science, Space and Technology Committee held a hearing on June 26th about the societal and ethical implications of artificial intelligence, now available on video. The National Artificial Intelligence Research and Development Strategic Plan, released in June, is an update of the report by the Select Committee on Artificial Intelligence of the National Science and Technology Council.
On February 11, 2019, the President signed Executive Order 13859: Maintaining American Leadership in Artificial Intelligence. According to Michael Kratsios, Deputy Assistant to the President for Technology Policy, this order “launched the American AI Initiative, which is a concerted effort to promote and protect AI technology and innovation in the United States. The Initiative implements a whole-of-government strategy in collaboration and engagement with the private sector, academia, the public, and like-minded international partners. Among other actions, key directives in the Initiative call for Federal agencies to prioritize AI research and development investments, enhance access to high-quality cyber infrastructure and data, ensure that the Nation leads in the development of technical standards for AI, and provide education and training opportunities to prepare the American workforce for the new era of AI.”
The first seven strategies continue from the 2016 Plan, reflecting the reaffirmation of the importance of these strategies by multiple respondents from the public and government, with no calls to remove any of the strategies. The eighth strategy is new and focuses on the increasing importance of effective partnerships between the Federal Government and academia, industry, other non-Federal entities, and international allies to generate technological breakthroughs in AI and to rapidly transition those breakthroughs into capabilities.
Strategy 8: Expand Public–Private Partnerships to Accelerate Advances in AI is new in the June 2019 plan and reflects the growing importance of public–private partnerships in enabling AI research and accelerating advances in AI. A goal is to promote opportunities for sustained investment in AI research and development, and for transitioning that research into practical capabilities, in collaboration with academia, industry, international partners, and other non-Federal entities.
Points continuing from the first seven strategies, reflected in the February Executive Order, include
• support for the development of instructional materials and teacher professional development in computer science at all levels, with emphasis at the K–12 levels,
• consideration of AI as a priority area within existing Federal fellowship and service programs,
• development of AI techniques for human augmentation,
• emphasis on achieving trust: AI system designers need to create accurate, reliable systems with informative, user-friendly interfaces.
The National Science and Technology Council (NSTC) is functioning again. NSTC is the principal means by which the Executive Branch coordinates science and technology policy across the diverse entities that make up the Federal research and development enterprise. A primary objective of the NSTC is to ensure that science and technology policy decisions and programs are consistent with the President’s stated goals. The NSTC prepares research and development strategies that are coordinated across Federal agencies aimed at accomplishing multiple national goals. The work of the NSTC is organized under committees that oversee subcommittees and working groups focused on different aspects of science and technology. More information is available.
The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office of the President with advice on the scientific, engineering, and technological aspects of the economy, national security, homeland security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of Management and Budget with an annual review and analysis of Federal research and development budgets, and serves as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government. More information is available.
Groups that advise and assist the NSTC on AI include
• The Select Committee on Artificial Intelligence addresses Federal AI research and development activities, including those related to autonomous systems, biometric identification, computer vision, human–computer interaction, machine learning, natural language processing, and robotics. The committee supports policy on technical, national AI workforce issues.
• The Subcommittee on Machine Learning and Artificial Intelligence monitors the state of the art in machine learning (ML) and artificial intelligence within the Federal Government, in the private sector, and internationally.
• The Artificial Intelligence Research and Development Interagency Working Group coordinates Federal research and development in AI and supports and coordinates activities tasked by the Select Committee on AI and the NSTC Subcommittee on Machine Learning and Artificial Intelligence.