Decoding AI’s Risk of Bias in Finance
Jul 28, 2023
By Jane E. Hood, Advisor and Head of Marketing, FoxyAI
A recent CNBC article by Ryan Browne and MacKenzie Sigalos explored the risk of bias when using Artificial Intelligence (AI) within the financial sector.
I’ve summarized and expanded on a few key takeaways regarding the challenges and risks associated with implementing AI in the financial industry so that we may continue to advance technology for the benefit of all.
The Data Problem
Let’s first examine how AI works. AI systems depend heavily on the quality of their training data, which makes incomplete or biased datasets a critical concern. A Deloitte article likewise underscores the significance of maintaining data integrity for AI’s objectivity and decision-making capabilities.
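To make the data problem concrete, one simple check before training is to compare historical outcome rates across groups in the raw dataset. The records, field names, and groups below are hypothetical; this is a minimal sketch of the idea, not a production bias audit.

```python
# Minimal sketch: surface group-level imbalance in a hypothetical loan dataset.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

def approval_rates(rows):
    """Return the historical approval rate for each group."""
    totals, approvals = {}, {}
    for r in rows:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approvals[g] = approvals.get(g, 0) + r["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)  # per-group approval rates in the training data
print(gap)    # a large gap is a red flag worth investigating before training
```

A model trained on data like this can learn the disparity as if it were signal, which is why checks like this belong at the start of the pipeline rather than the end.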
As AI technology advances rapidly, financial institutions must strike the right balance: keeping pace with tech-savvy competitors while thoughtfully incorporating AI’s transformative potential.
Ending a Long History of Bias in Lending Practices
Sadly, shadows of discriminatory practices from the past still influence AI, perpetuating harm to marginalized communities. This prompts an essential dialogue about confronting bias to foster fair and inclusive lending practices.
As we venture into uncharted waters, we must proceed mindfully. AI’s personalization dimension presents unique challenges, requiring careful selection of relevant features for training AI models and steering them toward desired outcomes.
Enhancing Data Quality with Generative AI
Amidst the AI spotlight, it’s time to shed light on the unsung hero—Generative AI. Unlike traditional AI models, which focus on classification or prediction tasks, Generative AI takes data processing to a whole new level.
At its core, Generative AI seeks to understand the underlying patterns and structures within raw, unstructured data. It goes beyond merely classifying or predicting outcomes; instead, it delves into the essence of the data, aiming to create new, synthetic data that adheres to the same distribution as the original information. In other words, Generative AI takes on the role of a skilled curator, arranging scattered pieces of data with remarkable precision to create meaningful insights.
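To make the “same distribution” idea concrete, here is a toy sketch: fit the mean and standard deviation of a numeric column, then sample synthetic values from that fitted distribution. Real generative models (GANs, diffusion models, large language models) learn far richer structure than a single Gaussian; the column name and values below are hypothetical.

```python
import random
import statistics

# Hypothetical numeric column, e.g. loan amounts in thousands of dollars.
real_amounts = [12.0, 15.5, 14.2, 13.8, 16.1, 15.0, 13.3, 14.7]

# "Learn" the underlying distribution (here, just its mean and std dev).
mu = statistics.mean(real_amounts)
sigma = statistics.stdev(real_amounts)

# Generate synthetic records that follow the fitted distribution.
random.seed(0)  # seeded so the sketch is reproducible
synthetic = [random.gauss(mu, sigma) for _ in range(5)]

print(mu, sigma)
print(synthetic)  # new values, statistically similar to the originals
```

The synthetic values are not copies of any real record, yet they share the original column’s statistical profile, which is the core promise (and, as the next paragraph notes, the core risk) of generative approaches.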
However, Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency, and accountability, stated, “I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy — not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?” This is a critical issue that needs to be addressed. Just this week, seven corporations involved in the space—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—have voluntarily pledged to adhere to new “guardrails” to manage the potential risks associated with AI.
Onward to a More Inclusive Future
The power of AI demands immense responsibility. Banks must proactively address biases and discriminatory risks, fostering a financial landscape that upholds fairness and inclusivity. The key is leveraging AI to be a transformative and responsible agent for change, promoting positive outcomes without perpetuating harmful practices. Transparency and collaboration among industry leaders will enhance the responsible deployment of AI and promote its widespread adoption.