As the Biden administration gets into full swing, many regulatory agencies are in the process of reviewing and recalibrating their agendas. In few places is this more true than at the Consumer Financial Protection Bureau, which is widely expected to emphasize enforcement and fair lending compliance now and in the years to come.
This creates an opportunity to answer President Biden’s call for racial equity by accelerating the use of new credit-scoring techniques that will bring greater fairness to American consumers. Financial inequality is one of the biggest problems our society faces today and poor access to affordable credit is a key driver.
The CFPB’s likely new director, Rohit Chopra, must be bold and forward-leaning in guiding the agency’s approach to regulating innovation that can reverse this inequity. Done right, new approaches and technologies, like artificial intelligence and machine learning, can be far more transparent than historic lending models and can measurably drive a fairer and more accessible financial system.
Overcoming fears of The Terminator
Chopra has long been a champion of consumers and an advocate for a fairer financial system. He recognizes the benefits that come from using more data in new ways to underwrite loans. Like many regulators, he has also expressed some trepidation about the role of AI in financial services.
In general, there are two big concerns about AI. One is the “Terminator” scenario, where AI becomes too powerful for humans to manage. The other concern is that AI replicates, solidifies and even magnifies human biases.
While it’s easy to understand why these fears exist, they are based on outdated views of AI – views that Chopra and the CFPB can help dispel by taking a “lead by example” approach. This should include further expanding and empowering the agency’s Office of Innovation to drive deeper understanding of new models, which can be quantifiably more transparent than legacy models.
There are simple guardrails that can be put in place to address concerns around AI. To avoid the Terminator scenario, humans can train an AI model up to a point and when that point is reached, the model can be locked, evaluated and monitored by humans to ensure its accuracy and precision. This is hardly Skynet.
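The "train, then lock, then monitor" workflow described above can be sketched in a few lines. This is an illustrative sketch only, not a real credit model or regulatory guidance: the model here is a trivial threshold rule standing in for a trained ML model, and the accuracy baseline and tolerance are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen = the "locked" model: its parameters cannot change after training
class LockedModel:
    cutoff: float  # hypothetical decision threshold fixed at lock time
    def predict(self, score: float) -> bool:
        return score >= self.cutoff

def monitor_accuracy(model, labeled_cases, baseline=0.90, tolerance=0.05):
    """Humans periodically re-evaluate the locked model on fresh labeled
    cases; if accuracy drifts below baseline minus tolerance, the model
    is flagged for human review rather than silently retrained."""
    correct = sum(model.predict(score) == outcome for score, outcome in labeled_cases)
    accuracy = correct / len(labeled_cases)
    return accuracy, accuracy >= baseline - tolerance

model = LockedModel(cutoff=0.6)
cases = [(0.7, True), (0.5, False), (0.9, True), (0.4, False)]
acc, within_bounds = monitor_accuracy(model, cases)
```

The point of the frozen dataclass is the guardrail itself: once locked, the model's behavior is fixed and auditable, and any change has to go back through a human evaluation step.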
Bias concerns could be addressed by insisting that lenders maintain or improve standards of fair lending and disparate impact analysis.
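One standard form of the disparate impact analysis mentioned above is the "four-fifths rule": the approval rate for any protected group should be at least 80% of the rate for the most-approved group. A minimal sketch, using made-up decision data and generic group labels:

```python
def approval_rates(decisions):
    # decisions: list of (group, approved) pairs
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, threshold=0.8):
    """For each group, return (ratio to the best-approved group,
    whether it passes the four-fifths threshold)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (rate / best, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical data: group A approved 8/10, group B approved 5/10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
ratios = adverse_impact_ratios(decisions)
```

Here group B's approval rate is 0.5 against group A's 0.8, a ratio of 0.625, so the check fails and the lender would need to investigate. Real fair lending analysis goes well beyond this single ratio, but the test is cheap enough to run on every model candidate.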
One of the CFPB’s jobs is to ensure that banks communicate to consumers exactly why they were denied credit. Our work at the CFPB’s October tech sprint showed that better ML math can produce more accurate denial reasons when consumers are turned down for credit. Far from being a black box, AI/ML holds the potential to be a model of transparency.
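One common way to generate denial reasons from a scoring model is to rank each input by how much it pulled the applicant's score down relative to a reference point, such as a typical approved applicant. The sketch below does this for a simple linear model; the features, weights, and reference values are all hypothetical, and the tech sprint's actual methods are not described here.

```python
# Hypothetical linear credit model: score = sum of weight * feature value.
WEIGHTS = {"income": 0.5, "utilization": -0.8, "history_len": 0.3}
# Hypothetical reference point: feature values of a typical approved applicant.
REFERENCE = {"income": 0.7, "utilization": 0.3, "history_len": 0.6}

def denial_reasons(applicant, top_n=2):
    """Rank features by their contribution relative to the reference;
    the most negative contributions become the stated denial reasons."""
    deltas = {f: WEIGHTS[f] * (applicant[f] - REFERENCE[f]) for f in WEIGHTS}
    worst = sorted(deltas, key=deltas.get)[:top_n]
    return [f for f in worst if deltas[f] < 0]  # only features that hurt the score

applicant = {"income": 0.4, "utilization": 0.9, "history_len": 0.6}
reasons = denial_reasons(applicant)
# → ["utilization", "income"]: high utilization hurt most, then low income
```

Because every stated reason traces to an explicit, quantified contribution, a regulator or consumer can verify the explanation against the model itself, which is the transparency argument in concrete form.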
Traditional models reinforce discrimination
Today’s consumer is trickier than ever to evaluate. The Advisory Council of the Federal Reserve Board in December warned that, with widespread unemployment and forbearance programs in place, traditional credit scores “do not provide meaningful insights, as the average score in the third quarter was 711, which was the highest since FICO started tracking in 2005.”
Between gig work and stimulus support, the financial situations of Americans fluctuate wildly. Yet many institutions are still using scoring methods built 50 years ago to assess a very different society. This is a recipe for perpetuating systemic racism and discrimination.
With AI, lenders can evaluate hundreds of variables in determining credit risk, instead of the usual 20 or 30. With more signals, lenders can say "yes" more often to people with thin credit files, among them new immigrants and members of underprivileged communities.
We are confident that Chopra and his team at the CFPB will embrace new technologies that advance the standard of care in compliance and fair lending. We are excited about the impact that AI can have. The new administration will need as many tools as it can get its hands on to create a more equitable America.