Interest in big data and machine learning has recently been expanding at what seems to be an exponential rate. Reductions in data storage costs have permitted the development of very large databases (big data), and increases in computer processing power and advances in computer algorithms have greatly enhanced our ability to identify patterns in economic data using machine learning (ML) techniques. As a result, the combination of big data and ML is likely to transform the way financial services are produced, delivered, and consumed. This transformation would likely result in faster, better, and cheaper financial products and services. Big data and ML could also disrupt the entire financial landscape, affecting a wide variety of financial services, ranging from the way small-dollar loans are delivered to the way institutional investors allocate funds.
This ongoing transformation is also likely to have important public policy implications. On the one hand, the use of big data and ML could help financial firms become more efficient and effective in meeting regulatory compliance obligations and make it easier for bank supervisors to monitor compliance. On the other hand, big data and ML could be used by financial firms (both traditional banks and those in the shadow banking sector) to avoid financial regulations and to make it more difficult for bank supervisors to monitor regulatory compliance. Given the potential for these developments to have both positive and negative implications for the attainment of public policy goals, regulators will have an important role to play in overseeing the development of banking firms’ ML processes. As part of their efforts to preserve the benefits while limiting the costs of big data and ML, financial regulators have held numerous meetings and conversations on a variety of issues, for example, with consumers, traditional lenders, and innovative lending platforms, to better understand the application of these technologies to retail lending.
Because big data and ML have the potential to change so many parts of banking, they raise a wide variety of policy issues. Many of these public policy concerns relate to consumer protection (fairness, equal opportunities, and privacy), efficiency, and financial stability (Note: The financial stability issues raised by AI and ML are discussed by the Financial Stability Board: Artificial intelligence and machine learning in financial services, market developments and financial stability implications. See also: Some Financial Regulatory Implications of Artificial Intelligence). However, as these bigger-picture concerns are being debated by scholars, policymakers, and market participants, practitioners in financial systems around the globe are moving ahead with real-world applications of big data and ML. This article examines some of the issues in implementing these advanced technologies from a supervisory perspective (Note: There are other deep issues around the use of big data and ML that we are not addressing in this paper, such as ownership of the data, market structure, and financial stability implications of widespread adoption of ML, especially in financial markets where machines may be trading with other machines). We will start by highlighting the ways in which these technologies are being applied and then discuss the supervisory aspects and emerging risks.
Wide Variety of ML Usages
Both existing financial firms and new entrants have been applying big data and ML (BD/ML) in a variety of areas, both to provide better products and services to their customers and to lower their own costs. A complete listing of ML applications is far beyond the scope of this article and, in any case, the list would likely be out of date by the time this article is published, given the dramatic speed with which BD/ML is being applied. However, a partial listing of recent developments helps to highlight the diversity of potential benefits of BD/ML applications.
One BD/ML application that has attracted considerable attention is its use by lenders in making credit decisions in consumer and small-business lending. Along with the incumbent lenders that have long used advanced analytics, a number of newer FinTech lending platforms have entered this market. Another BD/ML application that has generated considerable interest is “robo advisers,” which deploy big data and ML algorithms to deliver investment advice and financial planning services to retail customers. Many incumbent retail investment firms have adopted robo advisers and now rank among the largest providers of these services.
In addition to serving retail customers, big data and ML are starting to play an important role in providing stock recommendations and analysis of incoming information for investors. For example, Goldman Sachs has invested in the ML startup Kensho and used its services to provide automated analysis of breaking news, such as releases from the U.S. Bureau of Labor Statistics, and to compile that information into regular summaries. Wells Fargo has its own process, AIERA (artificially intelligent equity research analyst), to issue buy and sell calls on stocks, though the bank’s official recommendations still come from human analysts.
In the wholesale payments market, Bank of America Merrill Lynch has a process called Intelligent Receivables, which uses ML technology to improve the reconciliation of incoming payments and helps the bank post its receivables faster. Last but not least, JPMorgan’s legal department uses ML algorithms in its COIN (Contract Intelligence) program to extract 150 relevant attributes from commercial loan agreements.
Along with providing customers with better, faster, and cheaper services, BD/ML is being used by financial firms to improve the effectiveness and reduce the costs of regulatory compliance. Such uses of technology fall under the umbrella of “RegTech” and support compliance with various laws, such as the Bank Secrecy Act (BSA) and related anti-money laundering and know-your-customer regulations. Citigroup also uses ML to help meet its stress-testing requirements.
Financial supervisors themselves are also turning to big data and ML to assist in their monitoring responsibilities. For example, the Office of the Comptroller of the Currency and the Federal Reserve use big data in certain aspects of their supervision of large systemically important banks and bank holding companies. The Federal Reserve uses big data in its Comprehensive Capital Analysis and Review (CCAR) stress-testing process. The Federal Reserve System has been collecting monthly data on each individual loan (mortgages, home equity products, and credit cards) originated by CCAR banks. This big data has been used to project CCAR losses for each retail product.
Big data and ML are also being applied by securities market regulators. The Financial Industry Regulatory Authority (FINRA), which tracks potential rule violations of its members, has been using ML to look beyond these set patterns to further understand which situations truly warrant red flags. The U.S. Securities and Exchange Commission is using ML to extract actionable insights from its massive data sets to better enable the agency to regulate market activities for compliance, facilitate automated security registration processes, and assess corporate risk (to identify risk of corporate misconduct from corporate filings and look for unusual patterns in the data to identify possible violations). In Europe, the London Stock Exchange has teamed up with IBM Watson and cybersecurity firm SparkCognition to develop its artificial intelligence (AI)-enhanced market surveillance. (Note: The SEC process is not fully automated as it relies on human expertise and evaluations to train the machine and uses the ML results to inform enforcement and compliance. Scott W. Bauguess notes that the SEC is using ML only to flag activities that might violate existing regulations and not to be a “robocop” that imposes penalties without further investigation by people).
Financial supervisors’ interest in big data and ML goes beyond merely ensuring compliance with existing regulations; they seek to understand how these tools are changing the financial system for better and worse. As an example, one of us (Julapa Jagtiani) teamed up with Cathy Lemieux in 2016 and 2017 to study the growth of FinTech lending platforms and their impact on consumers. The team found that the use of big data and ML could make consumers better off by allowing lenders to identify low-risk borrowers among those with poor credit scores, who could then access credit at lower costs. (Note: See Julapa Jagtiani and Cathy Lemieux (2016), “Small Business Lending After the Financial Crisis: A New Competitive Landscape for Community Banks,” Economic Perspectives, Federal Reserve Bank of Chicago, Number 3; and Jagtiani and Lemieux (2017), “Fintech Lending: Financial Inclusion, Risk Pricing, and Alternative Information,” Federal Reserve Bank of Philadelphia Working Paper No. 17-17).
Given that BD/ML involves some new technologies with potentially transformative implications, one might be tempted to think they necessitate a fundamental rethinking of all aspects of supervision. However, financial firms and their supervisors have long operated in an evolving environment characterized by innovation and change. As a result, supervisory expectations for financial firms have to a large extent been built on solid principles that have stood the test of time. One core element is that financial institutions are expected to understand the risks being taken by their various operations and to manage those risks in a safe and sound manner. This principle has been supplemented with more-detailed guidance addressing the issues raised by institutions’ use of BD/ML technologies. (Note: This is not to deny, however, that BD/ML may raise new issues, for example, the issues surrounding consumers’ control over their own data, including their ability to ask one financial services provider to share that data with another provider).
Data Accuracy and Reliability
Data accuracy and reliability may sound like broad concepts, but given the increasing role of quantitative supervision over the past few decades, they are central to prudential modeling and supervision. Banking institutions have long been required to submit data and file various financial reports with their regulators, such as the quarterly Call Reports and FR Y-9 Reports, the annual CRA Reports and, more recently, monthly loan-level data on the FR Y-14M report. The development of ML has only made the role of big data more critical to the supervisory process, because data is the essential raw input into most types of ML algorithms. Specifically, banking firms are expected to develop rigorous processes around data collection, understand their data and its limitations, appropriately define the variables, and use a representative sample of their data in the ML process. Where historical data is used, as it is in a wide variety of instances, care should be taken to ensure that the historical data (over years or decades) is accurate.
An unavoidable problem in many cases is working with an incomplete data set. The ideal data set has complete information across the full range of situations in which the output from the BD/ML might be applied. However, such complete information is rarely available. Almost all BD/ML applications rely on historical data, and thus the analysis will be constrained if some important conditions are not present in the data set. (Note: An ML technique called reinforcement learning does not necessarily rely on historical data. For example, when applied to learning games such as Go, the machines can generate their own training data by playing against each other). For example, if the historical data does not include observations from an economic downturn, then any analysis based on the data would have limited usefulness in measuring risk under stress. Another limitation of historical data is that the data could have been censored – that is, by design it excludes certain relevant observations. For example, a historical data set on retail lending may exclude categories of consumers, such as higher-risk borrowers who were never approved. Thus, it is necessary to understand the process by which individuals are included in or excluded from the big data set.
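The censoring problem above can be illustrated with a minimal, purely hypothetical sketch. The risk scores, the default-probability formula, and the approval cutoff below are all invented for illustration; the point is only that a default rate estimated from historically approved borrowers understates risk in the full applicant pool.

```python
import random

random.seed(0)

# Hypothetical applicant pool: each applicant has a risk score in [0, 1].
population = [random.random() for _ in range(100_000)]

def default_prob(risk):
    # Assumed relationship between risk score and default probability
    # (invented for illustration only).
    return 0.02 + 0.30 * risk

# Average default rate across the full applicant pool.
full_rate = sum(default_prob(r) for r in population) / len(population)

# Censored historical data: suppose the lender never approved applicants
# with risk scores above 0.6, so those applicants are absent from the data.
censored = [r for r in population if r <= 0.6]
censored_rate = sum(default_prob(r) for r in censored) / len(censored)

# The rate estimated from the censored sample understates population risk.
print(round(censored_rate, 3), "<", round(full_rate, 3))
```

A model calibrated on the censored sample would look well behaved in back-testing yet misprice risk for exactly the borrowers missing from the data.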
Interpretation of the ML Prediction
It has been natural for financial firms to enter new markets and to be creative in adopting new innovations. The consistent expectation from banking supervisors has been that, prior to entering into new activities, banks conduct an appropriate level of due diligence to fully understand the costs, benefits, and risks involved. In addition, banking firms should have layers of control points to ensure appropriate controls and oversight over those applications and technologies. One area of special concern about ML is that its users may not fully understand the limitations of the technology. It would be unreasonable for financial firms to simply assume that their AI/ML processes are immune to problems because they are purported to be “intelligent” or able to “learn.” Two basic problems that experienced users of ML have encountered illustrate some of these ML limitations.
First, it is important to differentiate between correlation and causation. The ML process may identify statistical correlations in the data while saying nothing about the causality of the relationships; correlation does not necessarily imply causation. Causation comes from theory, and statistics merely confirm or refute the theory’s predictions. Understanding causality in the relationship can be achieved, but not without further steps. For example, one could adopt an “experimental and iterative” procedure in which ML results from large samples are tested in the real world in small experiments to determine which ML relationships can be usefully exploited. Second, it is possible that a supervised ML analysis will produce a model that fits the data too well, a problem called overfitting. A model that fits the sample almost perfectly (because it fits all of the characteristics of the data set, including some random quirks) will often perform poorly when applied out of sample. Financial firms that use these ML algorithms should make sure that they fully understand the analytical process of their supervised ML and be able to provide documentation that supports their modeling choices. Any analytical procedures that they have developed or utilized to mitigate these risks should also be well documented.
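Overfitting can be demonstrated with a small self-contained sketch. The data-generating rule, the noise level, and the nearest-neighbor model below are all hypothetical: a model flexible enough to memorize its training sample, including the sample’s random quirks, scores perfectly in sample but worse out of sample than a smoother model.

```python
import random

random.seed(1)

# Hypothetical labeled data: x in [0, 1], true rule is "default if x > 0.5",
# with 20% label noise to mimic random quirks in real data.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.2:
            y = 1 - y  # flip the label with 20% probability
        data.append((x, y))
    return data

train, test = make_data(200), make_data(2000)

def knn_predict(x, k):
    # Majority vote among the k training points nearest to x.
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(y for _, y in neighbors)
    return int(votes * 2 > k)

def accuracy(data, k):
    return sum(knn_predict(x, k) == y for x, y in data) / len(data)

# k = 1 memorizes the training set, noise and all: perfect in sample...
print("k=1  train:", accuracy(train, 1), " test:", accuracy(test, 1))
# ...while a smoother model (k = 25) generalizes better out of sample.
print("k=25 train:", accuracy(train, 25), " test:", accuracy(test, 25))
```

The in-sample accuracy of the flexible model is a poor guide to its out-of-sample performance, which is why documentation of validation on held-out data matters.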
ML Transparency and Documentation
An important part of evaluating the risks associated with a given procedure is to fully understand how that procedure works. Obtaining such an understanding can be a problem with ML, depending on which ML algorithms are being used. The way in which some ML algorithms produce their predictions can be easily understood by someone with a general understanding of multivariate statistics, such as the use of logit models for classification. However, the way that most other, more complex ML algorithms generate their output may not be readily interpretable even by the person who performed the analysis.
Most ML processes used in finance take the form of either supervised or unsupervised learning. In supervised ML, the data is labeled with the correct answer or outcome (the loan did or did not default). In unsupervised ML, the input data is not labeled. The most common use of unsupervised ML is to detect patterns in the data so that similar observations can be clustered together. Which patterns matter most is determined by the algorithm’s rules, but the resulting clusters are not necessarily the only ways of grouping the data. For many uses, such as when banking firms use ML algorithms to project CCAR losses or to make credit decisions, only supervised ML should be used so that the analytical processes and conclusions are fully transparent and can be documented in step-by-step detail. There are, however, cases where the ML process could be allowed to be unsupervised, such as when the algorithms are used to flag potential fraud. In these cases, there would ordinarily be no need for bank supervisors to review and monitor the analysis for possible biases.
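The distinction can be made concrete with a toy sketch. All of the numbers, the threshold, and the single “debt-to-income” feature below are invented for illustration: the supervised rule is fit and audited against known outcomes, while the unsupervised clustering pass merely groups similar observations and attaches no outcome meaning to the groups.

```python
import random

random.seed(2)

# Hypothetical loans: (debt-to-income ratio, defaulted?) pairs.
labeled = [(random.gauss(0.2, 0.05), 0) for _ in range(100)] + \
          [(random.gauss(0.5, 0.05), 1) for _ in range(100)]

# Supervised learning: the outcome label is known, so a transparent
# decision rule can be fit and its accuracy audited directly.
threshold = 0.35  # chosen by inspection here; a logit fit would do this formally
supervised_acc = sum((x > threshold) == y for x, y in labeled) / len(labeled)

# Unsupervised learning: no labels, only patterns. A 1-D k-means pass
# (Lloyd's algorithm) groups similar observations, but the resulting
# clusters carry no outcome meaning on their own.
points = [x for x, _ in labeled]
centers = [min(points), max(points)]
for _ in range(10):
    groups = [[], []]
    for p in points:
        nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        groups[nearest].append(p)
    centers = [sum(g) / len(g) for g in groups]

print(round(supervised_acc, 2), [round(c, 2) for c in centers])
```

The supervised rule can be checked line by line against outcomes; the clusters, by contrast, only become meaningful once a human decides what each group represents.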
Even within the family of supervised ML, some algorithms produce analysis that is not very transparent. For example, deep learning, such as a deep neural network, uses multiple layers of “neurons,” with each layer having potentially hundreds or thousands of cells analyzing the data. This approach can be used for supervised and unsupervised learning. The power of these algorithms has made them one of the most rapidly growing areas of ML. However, while these algorithms can perform extremely well, with strong predictive power, they are often referred to as “black boxes” because of users’ limited ability to understand how the processes derived their predictions.
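To see why even a small network resists coefficient-by-coefficient interpretation, consider this toy forward pass. The architecture and random weights are hypothetical: each prediction is a nested composition of weighted sums passed through nonlinearities, so no single weight has a standalone meaning the way a logit coefficient does.

```python
import math
import random

random.seed(4)

# A tiny, hypothetical feed-forward network: 3 inputs -> 4 hidden "neurons" -> 1 output.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_out = [random.uniform(-1, 1) for _ in range(4)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Each hidden neuron mixes all inputs; the output mixes all hidden neurons.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

score = predict([0.4, 0.1, 0.9])
print(round(score, 3))
```

With 16 weights here, the effect of any one input on the score is already entangled with every hidden neuron; production networks have millions of weights, which is the practical meaning of “black box.”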
This inability to understand how predictions are made can be problematic in a variety of settings. For example, credit decisions based on deep learning could potentially embed biases against a group protected under fair lending laws without the lenders being aware of it. This could happen even if the input data (the set of variables entered into the process) does not include a variable for a protected class (such as race, sex, or age), because other variables that enter the process could serve as a good proxy for membership in a protected class. Banking supervisors could potentially get involved at this early development stage to help guide the ML process at lending institutions – to ensure that the firm’s compliance can be monitored.
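A stylized simulation can show how this happens. The “neighborhood score” feature, its assumed correlation with the protected attribute, and the approval cutoff below are all invented for illustration: a model that never sees the protected attribute can still produce sharply different outcomes across protected classes through a correlated proxy.

```python
import random

random.seed(3)

# Hypothetical applicants: the protected attribute is never shown to the model,
# but a permitted feature ("neighborhood score") is assumed, for illustration,
# to be strongly correlated with it.
applicants = []
for _ in range(10_000):
    protected = random.random() < 0.5
    neighborhood = random.gauss(0.7 if protected else 0.3, 0.1)
    applicants.append((protected, neighborhood))

# A "blind" decision rule that uses only the permitted feature.
approved = [(p, n) for p, n in applicants if n < 0.5]

def approval_rate(group):
    total = sum(1 for p, _ in applicants if p == group)
    ok = sum(1 for p, _ in approved if p == group)
    return ok / total

# Despite never using the protected attribute, approval rates diverge sharply.
print(round(approval_rate(False), 2), round(approval_rate(True), 2))
```

Simply dropping the protected variable from the inputs is therefore not sufficient; fairness has to be assessed on the model's outcomes, not just its feature list.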
Many financial institutions rely on outside vendors to take advantage of the promise of AI. Many key AI technologies are available as open-source software, but identifying and securing training data, as well as meeting the computational and technical demands inherent in AI projects, often necessitates the use of outside specialists. There are concerns that vendors may not meet the standards required of banking institutions. Compliance with vendor risk management guidance is an important question all banks must contend with as they explore the potential of AI through an outside vendor. While “there may be value to examining the vendor risk management guidance” in light of recent technological developments, many of the longstanding guidelines for general outsourcing services transcend technological change (Note: Brainard, Lael. Where Do Banks Fit in the Fintech Stack?).
Supervisors have long emphasized that banks’ risk management programs should be risk-focused, in that they should provide oversight commensurate with the level of risk presented by the outsourcing arrangements. Newly developed AI tools are not likely to change this supervisory expectation. It follows that “[t]he depth and formality of the service provider risk management program” associated with a bank’s use of their AI and ML services “will depend on the criticality, complexity, and number of material business activities being outsourced.” For example, the risk management practices employed in the case of an AI tool that flags potential fraud for further investigation by a human will necessarily differ from those needed to safeguard a tool that is used for account-access identity authentication or for final credit decisions. In other words, there will be no one-size-fits-all compliance solution across various AI tools, and, in most cases, the initial compliance questions should be based on pre-existing frameworks that were intentionally crafted to be technologically neutral. (Note: See Board of Governors of the Federal Reserve System: Division of Banking Supervision & Regulation, Division of Consumer & Community Affairs, Guidance on Managing Outsourcing Risk; Office of the Comptroller of the Currency, Risk Management: “A bank’s use of third parties does not diminish the responsibility of its board of directors and senior management to ensure that the activity is performed in a safe and sound manner and in compliance with applicable laws.”; Federal Deposit Insurance Corporation, Guidance for Managing Third-Party Risk).
Another concern is that if many banks subscribe to the same outside vendors for their AI services, banking firms could be exposed to risks from those same vendors, potentially increasing interconnectedness and affecting the overall stability of the banking system. Further, errors in the AI process at one vendor could result in widespread problems across all of the banks that subscribe to that vendor.
The growing use of big data and ML only adds to the importance of banks’ cybersecurity programs. Specifically, big data and ML may create new points of access vulnerability. As in other areas related to cybersecurity, contingency plans for cyberattacks, information sharing, and monitoring of a bank’s own systems are important elements of a comprehensive cybersecurity program. Moreover, these concerns apply not only to outside vendors that process a bank’s data but also to outside sources that supply data used in the bank’s ML analysis.
Big data and ML, powerful new tools used by financial firms and their supervisors, have started to change the way some financial products and services are produced, delivered, and consumed. These powerful new tools also bring new types of risks. Bank supervisors will expect users of these tools to follow well-developed principles of risk management in financial services. Among these principles are that decisions will be made based on accurate and complete data, that financial firms will understand the limitations as well as the strengths of these technologies, and that firms will follow sound principles in managing risks associated with third-party vendors. These developments may also increase various types of cyber risk, thus increasing the importance of robust cybersecurity programs at financial institutions.
The supervisory process has long adapted to new financial landscapes and will continue to do so while maintaining the same goals: the safety and soundness of the financial system and consumer protection. Ideally, the use of big data and ML will allow financial firms to provide superior products and services to their customers without sacrificing banking safety and soundness or overall financial stability.