Eliminating Gender Bias in Finance

Guest Blog: Sharon Lee, OneSpan

Artificial Intelligence (AI) and Machine Learning (ML) have given banks and financial institutions (FIs) critical capabilities to act instantaneously in today’s fast-paced digital world. Not so long ago, it would have seemed mind-boggling to many people that these digital tools would be able to detect suspicious, potentially fraudulent activity in enormous amounts of data in real time. 

While these tools excel at pattern recognition in big data and at automating decision-making, such as detecting anomalies and then triggering additional security measures, other uses of the technologies have raised concerns.  

These concerns demonstrate that, while the technologies can be cost-effective and greatly increase operational efficiency, there is still more work to be done to ensure technology does not discriminate against certain minority groups. 

 

Instances of biased technology in society 

The problem right now is that AI and ML models are fed massive amounts of data that may be biased in the first place, either because the data is not a representative sample of the real world or because it simply captures biases in our society. ML models are capable of recognising patterns in that data, including the biases, and can therefore discriminate against a specific group of people. This applies to factors such as gender, age, economic background, and race. For example, universities using AI technology for candidate screening may adversely select prospective students based on personal information such as race, hometown, or household income. Similar algorithmic biases may also be present in the processing of job applications. 

AI and ML algorithms do not understand what discrimination is, so organisations need to be aware of potential AI biases and develop strategies to detect and mitigate them, or to build better models that are more capable of eliminating biases in AI and ML technologies. 
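
To make this concrete, here is a minimal, hypothetical sketch in Python of one way an organisation might begin detecting such bias: comparing approval rates across groups, often called a demographic parity check. The data, column names, and function here are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch (assumed data and names, not the article's method):
# a simple demographic parity check on a model's decisions.
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "gender",
                           outcome_col: str = "approved") -> float:
    """Return the largest difference in approval rates between groups.

    A gap near 0 means groups are approved at similar rates;
    a large gap is a signal that the model's outcomes warrant review.
    """
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Example usage with made-up decision records:
records = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [0,    1,   1,   1,   0,   0],
})
print(f"Demographic parity gap: {demographic_parity_gap(records):.2f}")
```

A check like this does not prove discrimination on its own, but it gives teams a measurable starting point for the detection and mitigation strategies described above.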

 

Improving AI technologies to shape an equal future 

To begin to create a fair and equal future using AI and ML technologies, we need to establish an understanding of why an AI tool makes a certain decision. Why did Apple’s credit card algorithm offer a woman less credit than a man, even when their assets and credit histories were the same? Organisations automating their operations with these technologies require a transparent and accountable way of ensuring that cases of AI bias which could lead to discrimination are swiftly identified and dealt with. 

Work has begun on explainable AI (XAI) models to help shed light on the decision-making processes of such algorithms. In finance, gaining this understanding will allow banks and FIs to identify potential causes of discrimination such as gender bias. AI and ML technologies can certainly cut costs and enhance operational efficiency, but there still needs to be a human element in these processes to ensure that no one is put at a disadvantage because of their gender, or any other identifiable characteristic. 
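
As a hedged illustration of what such explainability tooling can look like in practice, the sketch below uses permutation importance, one common model-agnostic technique available in scikit-learn, to show how much a hypothetical credit model relies on each input feature, including a proxy for a protected attribute. The synthetic data, feature names, and model are assumptions for illustration only, not the article’s method.

```python
# Illustrative sketch (hypothetical data and model): inspecting which
# features drive a credit model's decisions via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "credit_history_len", "existing_debt", "gender_encoded"]
X = rng.normal(size=(500, len(features)))
# Synthetic outcome that is partly driven by the gender proxy.
y = (X[:, 0] - X[:, 2] + 0.5 * X[:, 3] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# If a protected attribute (or a proxy for it) ranks highly,
# that is a red flag worth investigating before deployment.
for name, importance in sorted(zip(features, result.importances_mean),
                               key=lambda p: -p[1]):
    print(f"{name:20s} {importance:.3f}")
```

Outputs like this do not replace human judgement; they give the reviewers described above something concrete to question.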

Regulators tend to lag behind in issuing legislation for such innovative technologies, but it is quickly becoming apparent that a legal framework is needed to guide the real-life application of AI and ML. In April 2021, the European Commission proposed the first-ever AI regulation, which addresses the risks of AI. As an industry, it will be important to continue collaborating with both private and public sector organisations to create a transparent market and a fair society for all. 

 

This article, written by Sharon Lee, was first published on the OneSpan blog.

 

You can read all insights from techUK's AI Week here.

Katherine Holden

Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC), Katherine led AMRC’s policy work on patient data, consent and opt-out.

Katherine has a BSc degree in Biology from the University of Nottingham.

Email: [email protected]
Phone: 020 7331 2019
