Unintended consequences of Fed capital analysis

Everyone who works in financial services has heard something about the Comprehensive Capital Analysis and Review (CCAR). Whether you work on it directly, indirectly, or not at all, you probably have some idea of where your institution stands in the Federal Reserve’s most comprehensive assessment of financial stability. CCAR influences stock prices, affects bonuses and governs operating flexibility.
CCAR was designed to assess the ability of the country’s largest and most complex financial institutions to rapidly absorb substantial unexpected losses, withstand significant declines in revenue and continue to operate as functional credit intermediaries during periods of systemic distress. Like many regulations, however, it has brought with it unintended consequences, the most interesting of which has been to accelerate the adoption of machine intelligence technologies by the largest and most sophisticated financial institutions.
This unintended consequence may actually exacerbate capital concentration. By forcing those institutions to adopt sophisticated technologies to support the regulatory burden, the Fed has started a chain reaction that will, in turn, provide these companies with significant competitive advantages over competing institutions.
No ‘God’ Algorithm
The financial institutions subject to CCAR are, by definition, the most complex in existence. Their operations span dozens of markets, both geographically and by product. Further, the “interconnectedness” of these institutions to one another is exceptionally difficult to measure. The requirement to distill that complexity for regulators has, in recent years, spawned several new mathematical techniques to project revenue, quantify risk and improve model performance.
Those techniques can be categorized loosely into three buckets: enhanced regression models; highly tuned machine learning techniques; and new meta-machine learning approaches. However, there is no one approach that captures the complexity of these institutions – no “god algorithm” that magically resolves the assets and liabilities into a single number for the Federal Reserve to evaluate. Further, each approach has its benefits and drawbacks:
Enhanced regression models. Linear regression is a common analytical technique with wide application and a range of approaches that can be simple or sophisticated. Part of what makes linear regression so popular is that it is relatively straightforward, interpretable, easy to maintain and, as such, widely understood. Because of that interpretability, the Federal Reserve actually prefers final results in this format. The challenge with regression, however, is that its assumptions don’t hold for some types of data: the output may be described by a simple but highly nonlinear function of the inputs, or it may be driven by complicated interactions among the inputs. As a result, linear regression can be applied to only a limited range of problems. It is particularly ill-suited to capturing the range of complex data types that define large, modern financial institutions.
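To make the interaction problem concrete, here is a small illustrative sketch, using synthetic data and scikit-learn rather than any bank’s actual models, in which a plain linear fit misses an output driven by the product of two inputs, while the same estimator with an explicit interaction term recovers it:

```python
# Illustrative only: a linear-terms-only fit misses an interaction between two
# drivers; adding the interaction feature by hand recovers it.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=1000), rng.normal(size=1000)
y = x1 * x2 + 0.1 * rng.normal(size=1000)          # output driven by an interaction

X_linear = np.column_stack([x1, x2])               # inputs only
X_interact = np.column_stack([x1, x2, x1 * x2])    # inputs plus interaction term

print("linear terms only R^2 =", LinearRegression().fit(X_linear, y).score(X_linear, y))
print("with interaction  R^2 =", LinearRegression().fit(X_interact, y).score(X_interact, y))
```

The catch in practice is that the analyst has to know which nonlinearities and interactions to engineer by hand, and that is exactly what becomes intractable at the scale of a CCAR institution.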
Highly tuned machine learning techniques. One of the hottest topics in finance today is machine learning, a class of algorithms designed to learn automatically from experience, where experience means data. Machine learning has many sub-disciplines. Some tackle very specific problems, such as teaching a robot to walk, while others specialize in finding anomalies. The latter has made machine learning popular with quantitative trading desks the world over and has replaced derivatives as the “boogeyman” in the minds of some regulators.
Machine learning models are robust, flexible and pack considerable predictive power. Because they learn from and benefit from large amounts of data, they are well suited to the challenges encountered by large bank holding companies. They are also, however, difficult to interpret and maintain. Because of these attributes, machine learning techniques remain the preserve of data scientists. This would not be a problem were it not for the fact that data scientists are both scarce and correspondingly expensive, which limits the number of individuals who can execute these models effectively.
We know of one institution that spent almost $200 million on traditional machine learning approaches only to fail CCAR. The reason? The complex models were too well adapted to historical data, picking up spurious relationships rather than fundamental phenomena, and thus did not generalize to future data. This is called “overfitting.” Machine learning algorithms require a lot of data to avoid overfitting and are therefore not suitable for modeling some elements of the bank’s business.
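As an illustration of the failure mode, again on synthetic data rather than that institution’s models, a high-capacity model fit to a small historical sample can track that sample almost perfectly while degrading badly on data it has not seen:

```python
# Illustrative only: a flexible model fit to a small sample looks excellent
# in-sample but generalizes poorly to "future" data it was not trained on.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, size=(20, 1))         # small historical sample
y_train = np.sin(3 * x_train).ravel() + 0.2 * rng.normal(size=20)
x_test = rng.uniform(-1, 1, size=(200, 1))         # unseen future data
y_test = np.sin(3 * x_test).ravel() + 0.2 * rng.normal(size=200)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    print(f"degree {degree:2d}: train R^2 = {model.score(x_train, y_train):.2f}, "
          f"test R^2 = {model.score(x_test, y_test):.2f}")
```

The wide gap between the in-sample and out-of-sample scores for the more complex fit is the signature of overfitting; the same dynamic, at vastly larger scale, is what can trip up an expensive CCAR modeling effort.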
Meta machine learning approaches. There is a third approach, one that is gaining traction with complex bank holding companies: machine intelligence techniques that draw upon regression, machine learning and other geometric algorithms to rapidly identify the salient features associated with the Fed’s requirements.
Using a technique called topological data analysis (TDA), banks are able to combine a variety of algorithms to present the best “picture” of the data – immediately identifying those features that matter in the analysis. Not only does TDA deliver results more quickly than other approaches, it does so with an exceptionally high level of fidelity. The result is that banks can deploy fewer quantitative resources to this task while having the time to involve line-of-business executives at different steps in the process.
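The article does not spell out the specific machinery, but the Mapper construction at the heart of many TDA pipelines can be sketched in a few dozen lines; the lens, interval and clustering choices below are assumptions for illustration (using numpy and scikit-learn), not any vendor’s actual configuration:

```python
# A minimal Mapper-style sketch: project the data through a "lens", cover the
# lens range with overlapping intervals, cluster within each interval, and
# connect clusters that share data points.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

def mapper_graph(X, n_intervals=10, overlap=0.3, eps=0.5, min_samples=5):
    lens = PCA(n_components=1).fit_transform(X).ravel()   # 1-D lens (filter function)
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for i in range(n_intervals):
        a = lo + i * width - overlap * width               # interval, widened to overlap
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if len(idx) < min_samples:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
        for lab in set(labels) - {-1}:                      # ignore DBSCAN noise points
            members = set(idx[labels == lab])
            for j, other in enumerate(nodes):               # edge if clusters share points
                if members & other:
                    edges.add((j, len(nodes)))
            nodes.append(members)
    return nodes, edges

# Toy usage: the graph summarizes which regions of the data are connected.
X = np.random.default_rng(0).normal(size=(500, 8))
nodes, edges = mapper_graph(X)
print(len(nodes), "nodes,", len(edges), "edges")
```

The resulting graph is the “picture” referred to above: each node is a cluster of similar observations, and edges show how clusters connect through the overlapping cover, which is what lets analysts see which features matter without hand-specifying a single model up front.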
Furthermore, these machine intelligence approaches, adopted out of necessity, are operationalizing some very sophisticated techniques, making those institutions more competitive than their counterparts and introducing an interesting dynamic that the Fed will have to contend with down the road.
Take, for example, a bank that is using machine intelligence techniques to segment customers, match them with appropriate relationship managers, and evaluate market conditions to determine when to approach those clients and with what products. This herculean task is based on reams of data that define a client’s goals, risk profile and other demographic attributes. Each of these discrete tasks (segmentation, market-conditions analysis, risk assessment) can be performed by standard machine learning techniques. Combining all of those elements, however, requires a different approach, at least if the bank wants to complete the task within its clients’ lifetimes.
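As a purely hypothetical sketch of the first of those discrete tasks, a bank might cluster clients on a handful of attributes and route each segment to a class of relationship manager; the features, cluster count and routing labels here are invented for illustration, not drawn from any bank’s practice:

```python
# Hypothetical sketch: segment clients on a few attributes, then route each
# segment to an illustrative relationship-manager category.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# columns: assets under management, risk-tolerance score, age (synthetic data)
clients = np.column_stack([
    rng.lognormal(mean=13, sigma=1, size=1000),
    rng.uniform(0, 1, size=1000),
    rng.normal(50, 12, size=1000),
])

segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(clients))

managers = {0: "private banking", 1: "mass affluent", 2: "retirement", 3: "growth"}
print({managers[s]: int((segments == s).sum()) for s in range(4)})
```

Any one of these pieces is routine; the hard part the paragraph above describes is stitching segmentation, market-conditions analysis and risk assessment together into a single, timely decision for each client.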
The decision to invest in machine intelligence will provide a sustainable competitive advantage for this bank, enabling it to win new clients from traditional and emerging wealth management competitors while retaining existing ones. The reason is simple: computers will find the patterns that humans simply cannot grasp. Putting that technology into the hands of smart business executives will result in better outcomes. That technology will attract the best and the brightest – creating a virtuous cycle that, ironically, favors these highly regulated institutions.
Over time, therefore, the short-term negative effects of CCAR will give way to a sustainable competitive advantage for the banks that use the technology. Whether or not this creates more systemic risk for the financial system as a whole remains to be seen. In any case, however, the large banks are likely to keep getting larger, which is definitely an unintended consequence of CCAR.
Mr. Singh is CEO of Menlo Park, Calif.-based Ayasdi. He can be reached at [email protected].