30 Apr 2025
by Tess Buckley

Infrastructure for Trust: JP Morgan's Open Letter to Third-Party Suppliers Signalling a New Standard for Technology Procurement

JP Morgan Chase's Chief Information Security Officer, Patrick Opet, has issued an open letter to third-party suppliers outlining new requirements for SaaS delivery models. This landmark communication makes JP Morgan one of the first major financial institutions to call for comprehensive AI assurance documentation from its vendors and suppliers.

JP Morgan is now requiring suppliers to demonstrate responsible AI practices through detailed documentation of their systems, including information on training data, model development processes, fairness assessments, and ongoing monitoring procedures. 

"We stand at a critical juncture," Opet states. "Providers must urgently reprioritise security, placing it equal to or above launching new products. 'Secure and resilient by design' must go beyond slogans—it requires continuous, demonstrable evidence that controls are working effectively, not simply relying on annual compliance checks." 

These requirements apply to any supplier providing JP Morgan with AI-powered solutions or components, establishing a clear threshold for acceptable AI risk management. The letter provides specific documentation templates and requirements for different types of AI systems based on their risk profile and application context. These include:

  • Implementing AI governance frameworks before deployment 

  • Conducting regular red team exercises against AI systems 

  • Establishing clear model documentation standards (a minimal illustration follows this list)

  • Creating dedicated AI security response teams 
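
To make the documentation requirement concrete, the sketch below shows the kind of structured model card a supplier might maintain for each AI system. It is a minimal illustration under assumed fields and example values, not JP Morgan's actual template.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model card. The fields below are assumptions about what
# "clear model documentation" might cover; they are not JP Morgan's template.
@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    training_data_summary: str              # provenance and scope of training data
    fairness_assessments: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)
    monitoring_plan: str = ""               # how drift and degradation are tracked

# Hypothetical example entry for a credit-scoring model.
card = ModelCard(
    name="credit-risk-scorer",
    version="2.1.0",
    intended_use="Pre-screening of retail credit applications",
    training_data_summary="Anonymised application records, 2019-2023",
    fairness_assessments=["Approval-rate parity checked across age bands"],
    known_limitations=["Not validated for commercial lending"],
    monitoring_plan="Monthly drift review against a holdout baseline",
)
print(f"{card.name} v{card.version}: {card.intended_use}")
```

In practice, a record like this would be version-controlled alongside the model itself, so that procurement and audit teams can trace each release back to its documented training data and assessments.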

The UK has been at the forefront of developing AI assurance, contributing to both national and international efforts, and this approach aligns with the work of DSIT's assurance teams. The UK government's recent report "Assuring a Responsible Future for AI" projects that the UK's AI assurance market could add over £6.5 billion in Gross Value Added (GVA) within the next decade, underscoring how these tools make ethical AI more achievable. This priority continues in the UK's AI Opportunities Action Plan, which specifically addresses the need in recommendation 29: "develop the AI assurance ecosystem."

The UK's leading work in risk mitigation for AI systems includes BSI's development of AI-specific standards and contributions to ISO/IEC frameworks, notably ISO/IEC 42001 (published December 2023, covering AI management systems) and ISO/IEC 27001 (revised October 2022, covering information security management). For companies, especially resource-constrained SMEs, DSIT's AI Management Essentials tool offers valuable guidance by synthesising three leading AI governance frameworks (ISO/IEC 42001, the US NIST AI Risk Management Framework, and the EU AI Act) into 13 core self-assessment questions.

The business case for AI assurance continues to strengthen, with JP Morgan's open letter adding momentum to this movement. Justified trust through evidenced action supports adoption and compliance preparation while improving access to capital from investors and to contracts from procurement teams. As Patrick Opet warns, "Companies that prioritise security now will emerge as leaders."

 

Financial Sector Leadership in Responsible AI & Impact on the Technology Industry 

JP Morgan's recent move in the financial services sector creates a notable shift that will likely influence the broader technology landscape. Technology vendors working with financial institutions will need to develop more thorough AI documentation and assurance capabilities, which may gradually reshape their supply chains. Companies that have already invested in thoughtful AI governance practices may find themselves with a natural advantage when seeking financial sector partnerships. 

Smaller technology providers may face challenges in meeting these documentation expectations, which could encourage collaborative approaches or industry partnerships. Over time, JP Morgan's guidelines may evolve into de facto reference points for AI documentation in financial services, shaping industry practice. In response, technology firms are likely to shift resources toward AI governance and documentation to remain competitive in this evolving environment.

The financial sector's emergence as a leader in responsible AI implementation is encouraging and can lead the way for other sectors. Banks and financial institutions operate within highly regulated environments where system failures can undermine markets and consumer trust. This lower risk appetite, combined with existing fiduciary duties and extensive regulatory oversight from bodies like the FCA, the Bank of England and global financial authorities, creates a natural imperative for rigorous AI governance.

The financial sector is uniquely positioned to lead responsible AI adoption due to several interconnected factors. Financial institutions already operate within comprehensive model risk management frameworks, providing a foundation that can be naturally extended to govern AI systems. The interconnected nature of global finance means algorithmic failures could potentially cascade through markets, creating a natural incentive for heightened caution and thorough testing.  

Financial institutions also handle vast amounts of highly sensitive personal and financial information, making privacy and security considerations especially important in their AI implementations. Additionally, many financial decisions—from credit approvals to wealth management advice and insurance underwriting—directly impact individuals' financial wellbeing, elevating the importance of fairness in algorithmic decision-making.  

Finally, the sector has developed robust accountability mechanisms following previous financial crises, establishing organisational structures that can be effectively leveraged for responsible AI governance and oversight. JP Morgan's requirements demonstrate how the financial sector is leveraging this regulatory maturity to establish leadership in responsible AI practices that will likely influence standards across industries. 

 

The Rise of Responsible AI Practitioners 

JP Morgan's supplier requirements coincide with a critical evolution in the professional landscape surrounding AI ethics and governance. As highlighted in techUK's recent paper, a new class of Responsible AI (RAI) practitioners is emerging as essential human infrastructure for operationalising ethical principles and regulatory requirements across the UK economy. 

This professional field stands at a pivotal juncture—transitioning from an emergent discipline into an essential organisational function. These practitioners serve as the critical bridge between abstract ethical principles and concrete implementation, translating regulatory requirements into technical specifications and governance processes like those now mandated by JP Morgan. 

The financial sector's stringent requirements will further accelerate demand for these specialised professionals who can develop, implement, and oversee AI documentation and assurance mechanisms. JP Morgan's supplier letter effectively creates market pull for RAI expertise, potentially catalysing further professionalisation of this emerging discipline. 

Organisations across sectors must now consider how to build or acquire this essential capacity, whether through upskilling existing staff, creating dedicated roles, or engaging specialised consultancies. As the field matures, we can expect to see more formalised career pathways, professional standards, and certification programmes develop to support this crucial workforce.

 

Supporting the Responsible AI and AI Assurance Ecosystem 

JP Morgan's requirements demonstrate the critical relationship between ethics and risk management in AI deployment. The financial institution's approach shows how ethical considerations are increasingly viewed through a risk management lens. By requiring suppliers to document fairness assessments, bias mitigation strategies, and ongoing monitoring, JP Morgan recognises that ethical failures in AI systems directly translate to business and reputational risks. 

This approach aligns with emerging investor perspectives on AI due diligence, as first illustrated in the World Economic Forum's June 2024 "Responsible AI Playbook for Investors". JP Morgan's requirements highlight how mainstream institutional investors increasingly view ethical AI practices as fundamental risk management.

Investors and procurement teams are now asking increasingly sophisticated questions about AI systems: 

  • How are you documenting model development decisions? 

  • What fairness metrics are you tracking? 

  • How are you monitoring for model drift and performance degradation? (one common check is sketched below)

  • What governance mechanisms oversee AI deployment? 

  • How do you ensure transparency with end-users about AI use? 

These questions aren't merely about ethics in the abstract: they represent concrete risk assessment criteria that directly impact valuation, procurement decisions, and access to capital.
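
To ground the drift question, here is a minimal sketch of one widely used check, the population stability index (PSI), which compares a model's production score distribution against its training-time baseline. The data, thresholds and function here are illustrative assumptions, not anything prescribed in JP Morgan's letter.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: compares the production score distribution ('actual') against a
    training-time baseline ('expected'). Common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift worth investigating."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    # Clip production scores into the baseline range so every value is binned.
    a_frac = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) in sparse bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Hypothetical monthly check: baseline scores vs. a shifted production batch.
rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 10_000)
production = rng.normal(0.55, 0.12, 10_000)
print(f"PSI: {population_stability_index(baseline, production):.3f}")
```

A similar periodic loop can track fairness metrics, such as approval-rate parity across demographic groups, supplying the "continuous, demonstrable evidence" that Opet's letter calls for.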

The continued development of AI assurance practices highlights the need for collaborative approaches between investors, technology providers, and ethics experts. 

Join us at our upcoming sector-specific AI Assurance event series and our Investors for Digital Ethics event to explore these themes in greater depth. Industry leaders will discuss practical approaches to integrating ethical considerations into investment decisions and procurement processes.

 

For further information on techUK's financial services programme, visit our website and contact [email protected] for details. For further information on techUK's digital ethics and AI assurance programme, visit our website and contact [email protected] for details.

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

A digital ethicist and musician, Tess holds an MA in AI and Philosophy, specialising in ableism in biotechnologies. Their professional journey includes working as an AI Ethics Analyst on a dataset covering corporate digital responsibility, followed by supporting the development of a specialised model for sustainability disclosure requests. Currently at techUK as programme manager in digital ethics and AI safety, Tess focuses on demystifying and operationalising ethics through assurance mechanisms and standards. Their primary research interests encompass AI music systems, AI fluency, and technology created by and for differently abled individuals. Their overarching goal is to apply philosophical principles to make emerging technologies both explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. 

Email:
[email protected]
Website:
tessbuckley.me
LinkedIn:
https://www.linkedin.com/in/tesssbuckley/


James Challinor

Head of Financial Services, techUK

James leads our financial services programme of activity. He works closely with member firms from across the sector to ensure innovation and technology are fully harnessed and embraced by both industry and regulators. 

Prior to joining us, James worked at other business organisations, including TheCityUK and the Confederation of British Industry (CBI), in roles focused on supporting the financial and related professional services ecosystem, with a particular focus on financial technology and market infrastructure.

He holds degrees from King's College London and Oxford Brookes University, and outside of work enjoys socialising, exercising, and travelling to new locations.

Email:
[email protected]
LinkedIn:
https://www.linkedin.com/in/james-challinor-105212177/


 


Authors

Tess Buckley

Programme Manager, Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where her work revolved around the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks in their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, the Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.

Email:
[email protected]
