Building trustworthy AI
Customers are increasingly interacting with financial services (FS) firms through digital channels. With less human interaction, firms must use AI and data analytics to better understand and serve customer needs. However, the combination of digital channels and AI presents both risks and opportunities, particularly from a governance and compliance perspective.
Take the case of detecting and supporting customer vulnerability in a digital journey. Without AI and data analytics, it is extremely difficult to detect patterns of vulnerable behaviour and therefore to provide timely support. However, the use of AI requires careful consideration: data protection, conduct requirements, and robust review and challenge of customer outcomes are all essential to its safe and successful application.
In addition, there is greater social pressure on firms to serve a purpose beyond pure commercial gain. This brings a third dimension to the use of AI and consumer data - the ethical use of data.
In this report, we explore the alignment, and potential regulatory uncertainty, between conduct and privacy requirements, and set out how ethics interacts with regulation and informs difficult judgment calls and trade-offs when using AI-enabled solutions. We bring this to life through an illustrative case study, identifying and supporting vulnerability in a digital banking journey, and highlight what firms need to do to build trustworthy AI solutions. We conclude by making the case for further regulatory guidance to remove uncertainty and allow firms to innovate with confidence. Our analysis of these issues is designed to inform and support boards, senior management and digital leads who are responsible for AI-enabled solutions as they navigate this complex landscape.
This report builds on our previous paper on AI and risk management, where we explored the dynamic nature of AI models and the resulting risk management implications. We have not repeated the key elements discussed in that paper here, but they remain relevant.
While this report draws on UK regulations, the challenges and solutions proposed for firms will be relevant to other jurisdictions, especially the EU.
Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme.
Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.
While working at the Association of Medical Research Charities (AMRC), Katherine led AMRC's policy work on patient data, consent and opt-out.
Katherine has a BSc degree in Biology from the University of Nottingham.
- [email protected]
- 020 7331 2019
Zoe is a Programme Assistant, supporting techUK's work across Policy, Technology and Innovation.
The team makes the tech case to government and policymakers in Westminster, Whitehall, Brussels and across the UK on the most pressing issues affecting the sector. Zoe also supports the Technology and Innovation team in applying and expanding emerging technologies across business, including geospatial data, quantum computing, AR/VR/XR and edge technologies.
Before joining techUK, Zoe worked as a Business Development and Membership Coordinator at London First. Prior to that, she worked in partnerships at a number of forex and CFD brokerage firms, including Think Markets, ETX Capital and Central Markets.
Zoe holds a BA (Hons) from the University of Westminster. In her spare time, she enjoys travelling, painting, keeping fit and socialising with friends.