Russell T. Vought
Office of Management and Budget
725 17th St NW
Washington, DC 20503
Dear Mr. Vought:
The American Bankers Association (ABA) appreciates the opportunity to provide the Office of Management and Budget (OMB) with comments on its January 7, 2020 Memorandum on the Guidance for Regulation of Artificial Intelligence Applications (the Memo). We support the principles-based approach outlined in the Memo and OMB’s efforts to ensure America maintains its leadership in artificial intelligence (AI), a technology that will be critical to our competitiveness in the coming years.
In banking, AI promises to add efficiencies that can provide more Americans with access to safe and fair financial products. AI can also help banks extend credit to more borrowers, enhance the customer experience, improve fraud detection, lower the cost of offering services, and more.
As with any new technology, the use of AI also presents certain novel risks that must be managed. Unlike other sectors, banks are already subject to a strong regulatory framework and proactive supervision that ensure AI, like any other new technology, is implemented carefully and does not lead to unintended consequences.
In many ways, AI is a technology like any other. It facilitates certain activities but does little to change the underlying nature of those activities or the services being offered. There are many parallels between the state of AI today and the early days of the internet. Despite early calls to “regulate the internet,” as the technology has matured it has facilitated a set of uses far too diverse to be covered by a single regulation. Instead, regulation has focused on the activities that take place on the internet rather than the internet itself.
Similarly, the principles-based approach outlined in the Memo will help provide a flexible framework for the use of AI that promotes innovation while ensuring that emerging risks are captured. With respect to banking, new regulations are unnecessary. Instead, the banking agencies should consider areas where they can clarify existing regulation to facilitate the use of AI and related technologies and ensure that these principles are applied consistently to all participants in the financial services ecosystem. Our views on areas that would benefit from regulatory clarification are set forth below.
AI is a catch-all term that generally refers to technologies capable of analyzing data and identifying patterns to make a decision and effect an outcome. OMB’s focus on “narrow” or “weak” AI is appropriate given the immediate application of this technology to businesses today. The use of “strong” or “general” AI also raises important questions that will need to be addressed, but the focus on “narrow” AI makes the Memo more applicable to the technologies businesses are deploying today.
This technology has the potential to apply to many areas of banking. The term AI encompasses a broad array of interrelated technologies. The following technologies are the most common in banking today.
Ultimately, AI has applicability to any business line that benefits from better data analytics, and in the future it is likely to play a role in nearly every activity a bank performs. The current state of adoption by banks varies by application and institution. Some applications, like fraud controls, have already seen widespread adoption of AI. In other areas, like lending, banks have been slow to adopt AI due to regulatory uncertainty. The following areas are a few examples of where AI holds promise to improve banking:
Bank systems are constantly under attack from hackers, cybercriminals, and fraudsters of all types, and these attackers have begun using AI techniques to break into networks and gain access to valuable financial and other personal information. This means that banks must constantly upgrade their systems to detect, prevent, and mitigate cyber threats and the breaches that affect the data security and privacy of our customers. AI is a promising tool for that purpose. For instance, AI algorithms can protect consumer accounts by learning how a customer normally behaves and then flagging unusual behavior in real time. This can have a major impact in preventing fraudulent transactions while also reducing the “false positives” that may degrade a customer’s experience with the bank. Another AI tool, natural language processing, can be trained to flag suspicious text in emails that indicates phishing attacks, and anomaly detection can alert the bank to deviations in network IP traffic that resemble known cyber threats. AI is almost certain to play a large role in the future of data privacy protection, fraud prevention, and cybersecurity.
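To illustrate the kind of behavioral baselining described above, the following sketch shows, in highly simplified form, how a transaction that deviates sharply from a customer’s historical spending pattern might be flagged. The threshold and features are hypothetical and do not represent any bank’s actual fraud model:

```python
# Illustrative sketch only: a simplified anomaly detector that flags
# transactions deviating sharply from a customer's historical pattern.
# The z-score threshold and amount-only feature are hypothetical.
from statistics import mean, stdev

def flag_unusual(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount is far outside the customer's norm."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# A customer who typically spends $20-$60 per transaction:
history = [25.0, 40.0, 31.0, 55.0, 22.0, 48.0]
print(flag_unusual(history, 45.0))    # routine purchase: not flagged
print(flag_unusual(history, 2500.0))  # large outlier: flagged for review
```

Production systems would, of course, learn from many behavioral signals (merchant, location, device, timing) rather than amount alone, but the principle of comparing new activity to a learned baseline is the same.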
AI promises to help banks better evaluate creditworthiness and more quickly provide credit to customers at lower cost. This has the potential to lead to credit being available to more creditworthy borrowers on more affordable terms, particularly applicants with minimal or no credit records and low-income applicants. Despite this promise, banks are moving slowly to implement AI in lending to ensure that they do not introduce unfair and prohibited biases into the lending process.
The most immediate application of AI in lending is through robotic process automation. These automated processes can apply traditional underwriting decisions in an automated way, reducing underwriting times and lowering costs. This allows banks to extend financing to more applicants and allows borrowers to receive loan approvals, and in turn funds, more quickly.
Machine learning has seen slower adoption in lending, although it can allow banks to incorporate nontraditional data like cashflow or a company’s daily sales into their credit decisioning engines. This process is sometimes referred to as advanced credit analytics. Advanced credit analytics can reduce delinquency rates and allow banks to extend credit to more qualified borrowers with thin or nonexistent credit files.
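To make the advanced-credit-analytics idea concrete, the sketch below shows one hypothetical way nontraditional inputs such as monthly cashflow and a sales trend could feed a credit-decisioning score. The features, weights, intercept, and cutoff are invented for illustration and do not reflect any bank’s actual model:

```python
# Illustrative sketch only: a hypothetical logistic-style scoring function
# using nontraditional inputs (cashflow, sales trend). All weights and the
# approval cutoff are made up for illustration.
import math

def credit_score(avg_monthly_cashflow, daily_sales_trend, months_on_book):
    # Higher output suggests lower default risk in this toy model.
    z = (0.0008 * avg_monthly_cashflow
         + 1.5 * daily_sales_trend        # e.g., +0.12 = sales growing 12%
         + 0.05 * months_on_book
         - 2.0)                           # hypothetical intercept
    return 1.0 / (1.0 + math.exp(-z))     # probability-like score in (0, 1)

def decision(score, cutoff=0.5):
    return "approve" if score >= cutoff else "refer to manual review"

# A thin-file applicant with steady cashflow and growing sales:
s = credit_score(avg_monthly_cashflow=4200, daily_sales_trend=0.12,
                 months_on_book=18)
print(round(s, 2), decision(s))
```

In practice such models are trained on historical repayment data and must be tested for disparate impact, as discussed later in this letter; the point here is only that cashflow-type features can substitute for a traditional credit file.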
As customer interactions move outside of branches and onto online and mobile platforms, banks can use AI to better connect with customers. They can help customers better manage budgets and make digital tools more accessible. Chatbots, for example, allow people who are not as familiar with technology to interact digitally.
Reg-Tech applications can lower the cost of regulatory compliance, allowing banks to profitably serve more customers. They can do this without the traditional tradeoff between cost and level of regulatory insight.
When banks innovate and implement new technologies, they do so within a strong regulatory environment. This is backed by a culture of compliance and proactive supervision and examination that ensures any risks are identified and remediated before there is consumer harm.
Today, extensive banking regulation already applies to many of the activities that AI promises to support. Furthermore, any risks that AI may pose are well considered and managed by these regulations. In some cases, the regulations were written long before today’s technology was considered and may unintentionally inhibit its application.
While nearly every banking regulation could impact the use of AI, we believe the following are particularly relevant to promoting the benefits of AI while managing any risks.
Machine learning fits squarely within the “model” definition set out in the prudential regulators’ model risk management frameworks. We appreciate that the guidance is principles-based and, accordingly, offers an intrinsic flexibility vis-à-vis the risk to an institution and consumers posed by specific use cases. As recently described by Federal Reserve Governor Lael Brainard, the guidance “highlights the importance of embedding critical analysis throughout the development, implementation, and use of models, which include complex algorithms like AI.” The guidance also underscores “effective challenge” of models by unbiased, qualified individuals separated from model development, implementation, and use (i.e., a “second set of eyes”). The guidance, paired with prudential regulators’ guidance on third party risk management, clarifies expectations for firms when they turn to outside vendors to assist with AI-based tools or services. The guidance emphasizes that regulators’ expectations have to be framed in the context of the relative risk and importance of the specific use-case in question. And the guidance explains how AI tools that may be unexplainable or opaque may, with particular use cases, be used in practice with the appropriate controls.
Our first concern is that policymakers and examiners may, with the best of intentions, seek to create or apply new regulatory expectations that ultimately may chill further progress in this space. For instance, such new expectations may be too specific to particular technologies or methodologies. Or such new expectations may actually create higher expectations for machine learning models than for traditional models, particularly with respect to issues like explainability or transparency. The regulatory framework should not place greater compliance burdens on a model simply because it is labeled “machine learning,” without considering whether the model actually introduces greater risk than a comparable traditional model.
Our second concern is that two separate sets of regulatory expectations may emerge regarding the use of machine learning in financial services. Banks have a long history of compliance with model risk management frameworks set out by prudential regulators. In contrast, non-bank lenders may have little or no familiarity with the model risk management framework. Particularly given the lack of regular examinations of most non-bank lenders, it is not clear that there is a level playing field across banks and non-banks.
Banks fully support fair lending and strive to make credit available to all qualified borrowers. The banking industry devotes substantial resources on an ongoing basis to ensure that credit decisions for all loan applicants are made without regard to race or other bases that are prohibited under the following laws:
Both ECOA and the Fair Housing Act prohibit intentional discrimination against credit applicants. In addition, the Fair Housing Act prohibits a creditor from applying a neutral policy or practice that has a disparate impact on one or more protected groups, if the practice is not necessary to achieve a legitimate business objective or the business objective could be met through less discriminatory means. While ECOA does not expressly recognize a disparate impact theory of liability, regulators routinely assert that disparate impact is cognizable under ECOA. Thus, banks must consider disparate impact risk as they contemplate using AI or alternative data in the credit process.
Our members welcome artificial intelligence as a means to reach more creditworthy borrowers. However, assessing and addressing disparate impact risk is a complicated, lengthy, and expensive process for many banks, particularly community banks, given the complexity of new models and the sheer amount of data that can be manipulated. Examiners expect banks to determine whether the data used could be correlated with or have an adverse impact on minorities or other protected groups. Banks are also expected to develop rigorous evidence to support a business justification for use of the data, such as whether the data are predictive of default risk or fraud, and to demonstrate that no less discriminatory alternative exists. These tasks may be challenging for many banks when massive amounts of data are used, because attributes may be bundled and cannot be readily separated, or because the vendor may refuse to test or validate predictability if certain attributes are removed. Moreover, such testing is beyond many banks' in-house expertise, and reliance on outside consultants is costly.
Wide adoption of AI is further hampered by the lack of clear and unified guidance from regulators that enforce ECOA and the Fair Housing Act. HUD recently proposed changes to its Fair Housing Act rules, including certain defenses a creditor could assert in a disparate impact claim based on the creditor's use of algorithms. ABA has expressed support for HUD's efforts but has encouraged HUD to work with the Consumer Financial Protection Bureau (CFPB) and the prudential federal banking regulators to ensure a consistent approach to disparate impact under ECOA and the Fair Housing Act. Similarly, ABA welcomes a recent policy statement on alternative data and fair lending risks issued jointly by the CFPB and the banking agencies. However, the policy statement contains little specific guidance and its application to the Fair Housing Act is unclear because HUD is not a signatory.
Banks believe strongly in protecting consumers’ sensitive personal and financial information and their privacy. For hundreds of years, customers have relied on banks to protect the privacy of their financial information. Because banks are literally at the center of people’s financial lives, our industry has long been subject to federal and state data protection and privacy laws. For example, Title V of the Gramm-Leach-Bliley Act (GLBA) not only requires banks to protect the security and confidentiality of customer records and information, but it also requires banks to provide consumers with notice of their privacy practices and limits the disclosure of financial and other consumer information to nonaffiliated third parties.
The GLBA also required the federal regulatory agencies to establish standards for safeguarding customer information. These standards require financial institutions to ensure the security and confidentiality of customer information, protect against any anticipated threats to such information, and protect against unauthorized access to or use of customer information that could result in substantial harm or inconvenience to any customer. And, since April 1, 2005, the federal banking agencies have required banks to have in place incident response programs to address security incidents involving unauthorized access to customer information, including notifying customers of possible breaches when appropriate.
As noted in part I (above), AI is already a very promising and useful tool for purposes of protecting consumer data while also reducing the risk of cyberattacks and fraud. In the future, it is likely to be even more helpful to strengthen banks’ efforts in these areas, consistent with regulatory requirements.
The Dodd-Frank Act prohibits banks and other covered entities from engaging in any unfair, deceptive, or abusive act or practices (UDAAP) in connection with providing consumer financial services. In labeling conduct as UDAAP, the regulatory agencies examine whether an act or practice harms the consumer (or consumers more generally) and use a consumer-oriented focus to determine whether the conduct is unfair, deceptive, or abusive from the perspective of the consumer. Thus, banks’ existing adherence to UDAAP principles ensures that consumer well-being is put at the forefront of how banks use AI and other machine learning techniques. Banks, in compliance with UDAAP, already engage in a variety of prophylactic measures to prevent consumer harm, including tracking and analyzing complaint data, managing conduct risk within the institution, and paying close attention to the needs of vulnerable consumers, such as students, the elderly, service members, and those with limited English proficiency. For more examples of how banks manage UDAAP risks, please see ABA’s UDAAP Risk Assessment Matrix.
The use of AI offers a great deal of potential to combat money laundering and terrorist financing. For many years, financial institutions have used increasingly sophisticated software programs to detect anomalies in customer transaction patterns that may indicate fraud. The progression to applying that same technology to detect unusual or inexplicable behavior lends itself naturally to the same type of monitoring that is expected in measures for anti-money laundering (AML) and countering the financing of terrorism (CFT). In fact, the federal banking regulators and FinCEN confirmed that step in their Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing, issued December 3, 2018, where they stated:
“Innovation has the potential to augment aspects of banks’ BSA/AML compliance programs, such as risk identification, transaction monitoring, and suspicious activity reporting. Some banks are becoming increasingly sophisticated in their approaches to identifying suspicious activity, commensurate with their risk profiles, for example, by building or enhancing innovative internal financial intelligence units devoted to identifying complex and strategic illicit finance, vulnerabilities and threats. Some banks are also experimenting with artificial intelligence and digital identity technologies applicable to their BSA/AML compliance programs. These innovations and technologies can strengthen BSA/AML compliance approaches, as well as enhance transaction monitoring systems. The Agencies welcome these types of innovative approaches to further efforts to protect the financial system against illicit financial activity. In addition, these types of innovative approaches can maximize utilization of banks’ BSA/AML compliance resources.”
Current legislation designed to update and modernize the AML/CFT framework also reflects the increasing expectations for applying technological solutions to AML/CFT. On October 22, 2019, the House of Representatives passed H.R. 2513, the Corporate Transparency Act of 2019, by a bipartisan margin. Among other things, the bill would require the Financial Crimes Enforcement Network (FinCEN) to examine technological solutions to streamline the filing of Suspicious Activity Reports (SARs), create an Innovation Lab at FinCEN, and require each of the federal financial regulators to explore new technologies for AML/CFT compliance. It also would require FinCEN to study technology, specifically AI, to determine whether it can be further leveraged to make FinCEN’s data analysis more efficient and effective and whether technology can help FinCEN better disseminate information.
As detailed above, new banking regulations specific to AI are unnecessary and existing regulation provides a strong regulatory structure. However, as the banking regulators consider their response to OMB’s principles, they should look for opportunities to provide clarity that can promote adoption of AI.
We urge the banking agencies, the CFPB, and other agencies that regulate the financial services sector to coordinate as they address these issues. Since banks often have more than one regulator, they are only as innovative as their least innovative regulator. A coordinated approach will give banks clarity about how to implement new technologies and confidence to move forward. Moreover, as non-banks begin offering banking products and services through digital channels, bank regulators and other agencies must coordinate to ensure that these activities are appropriately being monitored, the emerging risks adequately captured, and all applicable legal requirements met.
As outlined below, there are several specific steps that banking agencies could take to promote the responsible adoption of AI and ensure that these principles are applied consistently.
We agree with the OMB’s recommendation that agencies consider granting waivers or exemptions to allow for and encourage pilot programs. The CFPB offers waivers through formal programs for innovative products and services, but other regulators are not always bound by those safe harbors. We believe that proper implementation of pilot programs can help drive innovation that stands to benefit customers without putting those customers at risk.
As technology changes how banking services are delivered, certain innovations will inevitably challenge existing regulatory assumptions and raise questions that will require collaboration among banks, regulators, and technologists. If implemented properly, pilot programs can encourage banks to test new innovations and give regulators insight into new technologies while ensuring that customers remain protected in the event of unforeseen adverse consequences.
We have outlined our views in more detail in several comment letters to the various banking agencies.
As discussed above, confusion about how to satisfy fair lending requirements has limited the adoption of AI in lending. While some regulators have issued guidance on alternative data, the scope is limited and not all regulators joined the effort. Regulators should coordinate and provide consistent, clear guidance on how to test and demonstrate that models comply with ECOA and with the Fair Housing Act’s disparate impact standard, consistent with the Supreme Court’s Inclusive Communities framework.
Data is the fuel that powers AI insights. The unprecedented proliferation and availability of this data has enabled the development of new financial innovations that stand to benefit customers. However, the inherent sensitivity of financial data means it must be handled appropriately.
It is clear that the advent of AI and other technologies has also resulted in new business models that may not be well covered by existing regulatory and supervisory structures. Today, many non-bank businesses connect directly with customers to offer financial services. While these companies are generally subject to the same banking laws, they often lack the proactive oversight that ensures regulation is being applied evenly. The banking agencies should consider how they could ensure that these principles are being applied evenly to banks and nonbanks alike.
In particular, banks and technology companies are currently investing in technologies that give consumers control over how they share their financial data. However, current practices in the data aggregation market may leave consumers exposed and create risks. Legacy processes known as “screen scraping” require users to forfeit their bank username and password, granting technology companies unfettered access to a customer’s most sensitive data. It is critically important that consumers be fully informed about the activities involved, the potential risks, and the security steps that are in place to protect their data.
The CFPB and other banking agencies should allow private sector developments to continue their strong progress, but take steps to ensure that consumer financial data remains protected when it leaves the secure bank environment. The ABA has provided specific recommendations in a statement for the record provided to the Task Force on Financial Technology of the House Financial Services Committee.
Community banks rely on technology infrastructure from companies that provide software systems known as core banking platforms (core providers). Core technology supports everything from accepting deposits to originating loans, all of which tie into operating the core ledger that keeps track of customers’ accounts. For many banks, their core provider is the heart of their IT infrastructure. Without the support of these core providers, it is nearly impossible for community banks to adopt new technologies.
ABA has engaged with the core providers through its banker driven Core Platforms Committee, made up of community and mid-sized banks, in an effort to strengthen relationships between banks and cores. One of the key priorities that this committee has identified is data access. Community banks often struggle to quickly and easily access the data held in their core platforms, severely limiting the ability to apply AI. For community banks to remain competitive, it is critical that the core providers give them the ability to quickly analyze their data and apply new technologies to gain insights from it.
As mentioned above, financial services are being offered by banks and non-banks alike. While generally subject to the same rules, non-banks typically lack the proactive supervision and oversight that characterizes the banking community and which ensures that regulations are applied consistently.
A cornerstone of Title X of the Dodd-Frank Act was the authority given to the CFPB to establish a supervisory program for nonbanks to ensure that federal consumer financial law is “enforced consistently, without regard to the status of a person as a depository institution, in order to promote fair competition.” Experience demonstrates that consumer protection laws and regulations must be enforced in a fair and comparable way if there is to be any hope that the legal and regulatory obligations are observed. ABA believes that establishing accountability across all providers of comparable financial products and services is a fundamental mission of the Bureau, and we have outlined our views in more detail in a recent statement for the record.
The OMB’s work to ensure America maintains its leadership in AI is important, as this technology will be critical to our global competitiveness in coming years. We believe that AI holds great promise to make banking services better, cheaper, and more widely available. These benefits do not come without risks. We believe that the robust bank regulatory structure already captures these risks today. The principles proposed by OMB in its Memo provide a flexible framework that can encourage innovation while mitigating risks. Consistent with the Memo, we urge bank regulators to make appropriate clarifications, such as those outlined in this comment letter, to enable adoption of this important technology and to ensure that these principles are applied consistently.
VP Emerging Technologies