
AI-based credit risk tools can be ruined by noisy data

Originally posted on The Horizons Tracker.

One’s credit score is hugely important: without a healthy credit rating it is very difficult to secure substantial loans, such as mortgages. It’s increasingly common for financial providers to use AI to produce credit risk scores, but research1 from Stanford University highlights how bad data can cause such systems to go astray.

The study finds that predictive tools are often up to 10% less accurate for minority groups and lower-income families. This isn’t due to any inherent bias in the systems, but rather to the relative paucity of data, which makes the systems less accurate in predicting the creditworthiness of these groups.
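To make the idea of a per-group accuracy gap concrete, here is a minimal sketch (not the study’s actual methodology) that scores a synthetic portfolio and compares predictive accuracy, measured as AUC, across two illustrative demographic groups. All names and numbers below are made up.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def group_accuracy_gap(y_true, y_score, group):
    """Compare predictive accuracy (AUC) across demographic groups.

    y_true  : observed default outcomes (1 = default)
    y_score : model-predicted default probabilities
    group   : demographic group label for each borrower
    """
    aucs = {g: roc_auc_score(y_true[group == g], y_score[group == g])
            for g in np.unique(group)}
    return aucs, max(aucs.values()) - min(aucs.values())

# Illustrative synthetic portfolio: group "B" gets a noisier score,
# mimicking a thinner, less informative credit file.
rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
y_true = rng.binomial(1, 0.1, size=n)
signal_strength = np.where(group == "A", 2.0, 0.7)
y_score = 1 / (1 + np.exp(-(signal_strength * (y_true - 0.1) + rng.normal(0, 1, n))))

aucs, gap = group_accuracy_gap(y_true, y_score, group)
print(aucs, "accuracy gap:", round(gap, 3))
```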

Thin credit history

It’s well known that a thin credit history will often result in higher borrowing costs, simply because lenders don’t have as much data on which to judge your trustworthiness. It can also mean, however, that it doesn’t take much to send your credit rating spiralling in the wrong direction.

“We’re working with data that’s flawed for all sorts of historical reasons,” the researchers say.  “If you have only one credit card and never had a mortgage, there’s much less information to predict whether you’re going to default. If you defaulted one time several years ago, that may not tell much about the future.”

The researchers themselves used AI to analyze vast quantities of consumer data, which allowed them to test various credit-scoring models. They began by analyzing credit data from 50 million people to see whether existing methods were equally accurate for all demographic groups.

Risk assessment

The key challenge with risk assessments is understanding whether people who were rejected for loans would have gone on to default. The researchers were able to examine whether people who had been rejected for one loan kept up with payments on other loans.
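As a rough illustration of that proxy-outcome idea, the hypothetical sketch below joins a table of rejected applications to the same borrowers’ payment records on other loans and flags who subsequently fell seriously behind. The tables, column names, and cut-off are invented for the example; they are not the researchers’ data or definitions.

```python
import pandas as pd

# Hypothetical, made-up data: applications rejected by one lender, plus the
# payment histories those borrowers have on loans from other lenders.
rejected = pd.DataFrame({
    "borrower_id": [1, 2, 3, 4],
    "rejected_score": [540, 610, 580, 650],
})
other_loans = pd.DataFrame({
    "borrower_id": [1, 1, 2, 3, 4],
    "missed_payments_12m": [0, 1, 4, 0, 0],
})

# Proxy outcome: did the rejected borrower fall seriously behind on any other
# loan over the following year? (3+ missed payments is an assumed cut-off.)
proxy = (
    other_loans.groupby("borrower_id")["missed_payments_12m"]
    .max()
    .ge(3)
    .rename("proxy_default")
    .reset_index()
)
print(rejected.merge(proxy, on="borrower_id", how="left"))
```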

The results suggest that credit ratings tended to be much less accurate for minority or low-income borrowers than for other borrowers. The researchers hypothesize that this is because these groups have much noisier, more misleading data in their credit files.

To test this, they tried out a number of alternative scoring models that had been built to respond better to minority and low-income borrowers. These didn’t seem to help; indeed, the scores were even less accurate. This highlighted that the problem was not the models themselves, but the data they rely on.

Limited information

The real problem is that people with poor credit scores often have a very limited financial history, so it’s harder to assess their creditworthiness. This was particularly so for people who had a couple of blemishes on their record.

When the researchers were able to feed additional data into the models, they were able to eliminate around half of the disparity in accuracy.
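A toy simulation of that effect, under assumed numbers rather than the study’s data: thin-file borrowers start with a much noisier bureau signal, and adding a hypothetical supplementary signal (something like rent or utility payment history) narrows the accuracy gap between the groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 30_000

# Unobserved repayment ability and the actual default outcomes it drives.
ability = rng.normal(0, 1, n)
default = (ability + rng.normal(0, 0.7, n) < -1.0).astype(int)

# Thin-file borrowers: their bureau score is a much noisier read on ability.
thin_file = rng.random(n) < 0.3
bureau_score = ability + rng.normal(0, np.where(thin_file, 1.5, 0.4), n)

# Hypothetical supplementary data (e.g. rent or utility payment history).
extra_signal = ability + rng.normal(0, 0.6, n)

def auc_by_group(features):
    """Fit a simple model and report in-sample AUC for each group
    (in-sample is fine for this toy illustration)."""
    X = np.column_stack(features)
    p = LogisticRegression(max_iter=1000).fit(X, default).predict_proba(X)[:, 1]
    return {name: round(roc_auc_score(default[mask], p[mask]), 3)
            for name, mask in [("thin-file", thin_file), ("thick-file", ~thin_file)]}

print("bureau score only:", auc_by_group([bureau_score]))
print("plus extra data:  ", auc_by_group([bureau_score, extra_signal]))
```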

The results clearly illustrate how people from poorer backgrounds may be unfairly rejected for credit, which results in a misallocation of credit and can even perpetuate inequality, as poorer people miss out on the chance to build a credit score and thus grow their wealth.

The researchers accept that there is no straightforward solution to this problem; it may even require financial firms to experiment with extending credit to people with relatively poor credit scores.

“If you’re a bank, you could give loans to people and see who pays,” the authors conclude. “That’s exactly what some fin-tech companies are doing: giving loans and then learning.”
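One way such “lend and learn” behaviour is often sketched is as a small exploration budget on top of a score cut-off: approve everyone the model deems safe, plus a thin random slice of the rest, so that repayment outcomes accumulate for borrowers the model would otherwise never observe. A hypothetical sketch, with made-up threshold and exploration rate:

```python
import numpy as np

rng = np.random.default_rng(2)

def approve(default_probs, threshold=0.15, explore_rate=0.02):
    """Approve applicants the model deems safe, plus a small random slice of
    the rest, so repayment outcomes are observed for borrowers the model
    would otherwise never see (threshold and rate are made-up numbers)."""
    safe = default_probs < threshold
    explore = (~safe) & (rng.random(len(default_probs)) < explore_rate)
    return safe | explore

scores = rng.uniform(0, 0.6, 1_000)          # hypothetical predicted default risk
decisions = approve(scores)
print("approved", int(decisions.sum()), "of", len(scores), "applicants,",
      int((decisions & (scores >= 0.15)).sum()), "of them exploratory")
```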

Article source: AI-Based Credit Risk Tools Can Be Ruined By Noisy Data.

Header image source: Credit Report text with magnifying glass by Marco Verch, CC BY 2.0.

Reference:

  1. Blattner, L., & Nelson, S. (2021). How Costly is Noise? Data and Disparities in Consumer Credit. arXiv preprint arXiv:2105.07554.

Adi Gaskell

I'm an old school liberal with a love of self organizing systems. I hold a masters degree in IT, specializing in artificial intelligence and enjoy exploring the edge of organizational behavior. I specialize in finding the many great things that are happening in the world, and helping organizations apply these changes to their own environments. I also blog for some of the biggest sites in the industry, including Forbes, Social Business News, Social Media Today and Work.com, whilst also covering the latest trends in the social business world on my own website. I have also delivered talks on the subject for the likes of the NUJ, the Guardian, Stevenage Bioscience and CMI, whilst also appearing on shows such as BBC Radio 5 Live and Calgary Today.
