Artificial Intelligence: Monitoring Regulatory Developments
As artificial intelligence (AI) continues to rapidly evolve, policymakers are increasingly focused on its regulation to ensure the protection of individuals and the broader public interest. Following the publication of Dechert’s April 2023 OnPoint discussing legal and regulatory issues for financial institutions,1 the Biden-Harris Administration announced a series of new actions on May 4, 2023, aimed at promoting responsible American innovation in AI and protecting the public’s rights and safety.2
The Administration stated that it has taken considerable steps to promote responsible innovation, citing as examples the Blueprint for an AI Bill of Rights (Blueprint)3 and related executive actions, the AI Risk Management Framework and the roadmap released by the National AI Research Resource Task Force. The Administration is also working to address the national security concerns raised by AI, especially in critical areas like cybersecurity and biosecurity, by enlisting experts from the national security community.
The Administration outlined five key principles (Principles) in the Blueprint, which was announced in October 2022:
- You should be protected from unsafe or ineffective systems.
- You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
- You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.4
These five principles have clear and significant implications for AI use by financial institutions. The principles would seem to apply whether the AI is consumer-facing, is used on behalf of consumers or relies on consumer data. We will continue to monitor how these principles are incorporated by other regulatory agencies as they refine their approaches to AI technology.
The Administration’s most recent announcement included three initiatives, which are discussed below.
- The National Science Foundation announced $140 million in funding to launch seven new National AI Research Institutes, which will bring the total number of Institutes to 25 across the country and include organizations in almost every state.5 As noted in the Administration’s press release, these Institutes will catalyze collaborative efforts across institutions in critical areas such as climate, public health, education and cybersecurity to pursue transformative AI advances that are ethical and trustworthy.
- The Administration also announced an independent commitment from AI developers, including personnel from leading technology companies, to participate in a public evaluation of AI systems.6 Public evaluators and AI experts will review AI models to determine the extent to which the models align with the Principles and other practices outlined in the Blueprint and AI Risk Management Framework. The tests will be conducted independently of the government and the companies that developed the technology, and feedback will be provided to the relevant companies so that they may take steps to fix any issues discovered.
- The Office of Management and Budget announced that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow to ensure their development, procurement and use of AI systems centers on safeguarding the American people’s rights and safety. The draft guidance is expected to be released for public comment this summer.7
Federal agencies, including the Federal Trade Commission and Consumer Financial Protection Bureau, have issued statements confirming that they will apply existing laws and regulations to new technologies, including AI, with a particular focus on enforcing civil rights, non-discrimination, fair competition, consumer protection and other vitally important legal protections.8 Regulatory officials are also calling for the adoption and enforcement of AI regulations, highlighting the need to mitigate risks.9 Additionally, executives from leading technology companies have recognized the need for AI regulation, pointing to the potential for dissemination of misinformation as a cause for concern.10
While it may seem at times that AI brings new change and disruption to financial institutions and everyday life on a near-constant basis, it is important to remember that regulatory agencies and even some AI developers are monitoring these developments and looking for opportunities to pump the brakes through new regulations and initiatives. Financial institutions looking to use AI for a competitive advantage would be prudent to consider this regulatory environment when designing their AI products, use cases and rollouts. Dechert is here to advise our clients on these challenges and opportunities as they look to use AI technology to enhance and grow their businesses in this ever-developing environment.
Footnotes
1) See Artificial Intelligence: Legal and Regulatory Issues for Financial Institutions, Dechert LLP (Apr. 25, 2023), https://www.dechert.com/knowledge/onpoint/2023/4/artificial-intelligence--legal-and-regulatory-issues-for-financi.html.
2) Press Release, The White House, FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety (May 4, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/ [hereinafter Press Release].
3) See generally WHITE HOUSE OFF. OF SCI. AND TECH. POL’Y, BLUEPRINT FOR AN AI BILL OF RIGHTS (2022).
4) Id.
5) See NSF announces 7 new National Artificial Intelligence Research Institutes, Nat’l Sci. Found. (May 8, 2023), https://www.nsf.gov/news/news_summ.jsp?cntn_id=307446&org=BCS.
6) Press Release, supra note 2.
7) Id.
8) See Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (Apr. 25, 2023), https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems.
9) See Lina Khan, Lina Khan: We Must Regulate A.I. Here’s How, N.Y. Times (May 3, 2023), https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html.
10) See Cristina Criddle & Hannah Murphy, OpenAI chief says new rules are needed to guard against AI risks, Fin. Times (May 16, 2023), https://www.ft.com/content/aa3598f7-1470-45e4-a296-bd26953c176f.