Advancing Greater Fairness and Explainability for AI Across the Banking Industry

In the past few years, artificial intelligence and machine learning (AI/ML) have demonstrated impressive gains in areas where computers were previously thought incapable of excelling, such as playing games like chess and Go, powering intelligent assistants, driving cars, and even running a fully automated fast food restaurant. On the heels of these advancements, others are exploring advanced applications of AI/ML in medicine, law enforcement, college admissions, and other areas that could have a profound impact on people’s everyday lives.

At Capital One, we believe there is great potential to leverage data analysis and machine learning in the banking industry, not least for its ability to transform customers’ experiences and to enhance business processes. Yet many regulated, industry-wide business decisions that can benefit from the application of AI/ML, such as credit decisioning, are tied intimately to explainability: a business must be able to explain why a particular machine learning model made the decisions it did.

How can developers prove that the AI/ML systems making these decisions are doing so ethically, fairly, and in compliance with relevant laws?

Organizations like the FAT/ML conference series, research groups at major technology firms, nonprofits, and academic groups have made institutional commitments to ensuring that AI/ML applications are fair, accountable, transparent, explainable, safe, and secure. And machine learning researchers, including teams here at Capital One, have been advancing efforts to address questions of ethics and explainability in AI/ML development.

At Capital One, we’re leveraging AI/ML across nearly every facet of our business to look out for our customers’ financial well-being, help them become more financially empowered, and better manage their spending. At the same time, we want to balance the use of more sophisticated technology with development practices and applications that ensure fair, unbiased, and explainable outcomes that all of our stakeholders can understand. Further, we want to do more than simply meet regulatory requirements in this field: we want to help set the standard for fair and ethical machine learning development and deployment in financial services broadly.

For Capital One and for the industry as a whole, a foundational requirement for continuing to build transparent, ethical, and fair models for more sophisticated use cases is understanding the black boxes that constitute the many layers of AI/ML algorithms. In use cases like credit decisioning, unpacking these black boxes is critical to understanding and satisfying the industry’s explicit fairness and explainability requirements.

The first clear challenge in undertaking this endeavor is demonstrating that a machine learning model meant to make credit decisions complies with fair lending laws. Laws like the Equal Credit Opportunity Act (ECOA) require all banks to show that the way they extend credit to customers does not discriminate on the basis of protected classes such as race, color, religion, national origin, sex, marital status, and age. However, translating these legal requirements into precise mathematical statements immediately runs into a complication: there is more than one legal notion of fairness.

There is disparate treatment, in which people are treated differently on the basis of a protected attribute, and there is disparate impact, in which the outcomes of a policy can themselves be evidence of discrimination. Banks want to be fair in both senses, with respect to the inputs to a decision as well as its outcomes. However, recent work by machine learning researchers shows that satisfying both notions of fairness simultaneously in practical decision-making can be challenging. Fully mitigating both disparate treatment and disparate impact risks therefore requires a discussion among business leaders, data scientists, and legal experts to determine the best risk management strategy for each application. It also requires a decidedly human-centered approach to instill confidence that machine-generated decisions are being made with the customer’s interest in mind.
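To make these two notions concrete, here is a minimal sketch in Python of how a team might screen a model’s approval decisions for disparate impact; it is illustrative only and is not Capital One’s methodology. The column names, the toy data, and the four-fifths screening threshold are all assumptions introduced for the example.

```python
# Illustrative sketch only, not a production fairness audit.
# Assumes a pandas DataFrame with two hypothetical columns:
#   "approved": 1 if the model approved the application, 0 otherwise
#   "group":    a protected attribute, used here only to audit outcomes
import pandas as pd


def approval_rates(decisions: pd.DataFrame) -> pd.Series:
    """Approval rate for each protected group."""
    return decisions.groupby("group")["approved"].mean()


def adverse_impact_ratio(decisions: pd.DataFrame) -> float:
    """Ratio of the lowest to the highest group approval rate.

    One common screen for disparate impact; ratios below roughly 0.8
    (the "four-fifths rule") are often flagged for further review.
    """
    rates = approval_rates(decisions)
    return rates.min() / rates.max()


# Toy data: group A approved 60% of the time, group B 45% of the time.
decisions = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 45 + [0] * 55,
})

print(approval_rates(decisions))        # A: 0.60, B: 0.45
print(adverse_impact_ratio(decisions))  # 0.75, which would warrant a closer look
```

Avoiding disparate treatment, by contrast, is typically approached by keeping protected attributes out of the model’s inputs entirely; the sketch above uses the protected attribute only to measure outcomes, and even full exclusion does not rule out proxy variables that correlate with protected classes.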

Given the need for explanations that are also demonstrably fair, several questions arise: Can banks use complex models such as deep neural networks? Would a court of law accept the explanation that a bank gave applicant “A” a credit card but not applicant “B” because of the 294,012 coefficients that differed between their inputs to a decisioning algorithm? And how can the bank prove, in both the legal and mathematical senses, that the way it makes these decisions is fair?
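For contrast with the 294,012-coefficient scenario, here is a hedged sketch of the kind of per-applicant explanation a simple linear scorecard affords: the score decomposes exactly into per-feature contributions, and the largest negative contributions can be surfaced as candidate reason codes. The feature names, coefficients, and applicant values below are invented for illustration and do not represent an actual adverse-action process.

```python
# Illustrative sketch only: a tiny linear "scorecard" with made-up numbers.
# A linear model's score decomposes exactly into per-feature contributions,
# so the features pulling a score down the most can become reason codes.
import numpy as np

features = ["utilization", "delinquencies", "credit_age_years", "income_ratio"]
coefficients = np.array([-2.1, -1.5, 0.8, 1.2])  # hypothetical, standardized scale
intercept = 0.3


def explain(applicant: np.ndarray, top_k: int = 2):
    """Return the applicant's score and its largest negative contributions."""
    contributions = coefficients * applicant
    score = intercept + contributions.sum()
    order = np.argsort(contributions)  # most negative contributions first
    reasons = [(features[i], float(contributions[i]))
               for i in order[:top_k] if contributions[i] < 0]
    return score, reasons


applicant = np.array([1.4, 0.9, -0.3, 0.2])  # standardized feature values
score, reasons = explain(applicant)
print(f"score = {score:.2f}")
for name, contribution in reasons:
    print(f"reason code candidate: {name} ({contribution:+.2f})")
```

A deep neural network offers no such exact decomposition out of the box, which is precisely why the industry needs clearer standards for what counts as an acceptable explanation of more complex models.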

The conclusion is that our industry needs a clearer understanding of black box algorithms, so that developers can construct them in ways whose decisions all stakeholders can understand; we also need more precise articulations of what constitutes an acceptable explanation of a model and its decisions. By achieving these goals, the industry can better incorporate transparent, fair, and equitable gears into its models and systems.

Ultimately, customers, courts of law, data scientists, and other stakeholders require different levels of explanation. This is a challenge that we at Capital One are actively pursuing to ensure that we can maintain the highest standards for explainability — in an ethical and fair way that puts humans first — as we develop more advanced models for more use cases.

One of my recent papers, presented at the FATREC Workshop at ACM RecSys, delves into this topic in more depth and summarizes some of the challenges for using AI/ML in the financial services industry that I’ve mentioned here. Researchers at Capital One are also co-hosting a workshop at the upcoming NIPS Conference titled “Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy.” If you’re interested in tackling this problem with us, we invite you to join and contribute to the discussion.


Jiahao Chen, Machine Learning Research Scientist at Capital One

Machine Learning Research Scientist at Capital One, New York. Former Research Scientist at MIT CSAIL. Problematic Developer of the Julia language.