Responsible AI: From principles to practice
Responsible AI in practice—essential but not easy
Despite the real value organisations can achieve through Artificial Intelligence (AI), many still struggle to address the risks associated with it.
In a global survey of risk managers, 58% identified AI as the biggest potential cause of unintended consequences over the next two years, yet only 11% said they were fully capable of assessing the risks of organisation-wide AI adoption.
Bias, discrimination, fairness, and explainability are areas of paramount concern. And while there are some specific definitions for these problem areas, translating them into action involves tough decisions and application-specific constraints.
As AI decisions increasingly influence people's lives at scale, so grows the enterprise's responsibility to manage the potential ethical and socio-technical implications of AI adoption.
So how do leaders meet that responsibility, while scaling AI for the exponential business benefits at stake?
Professionalising AI to scale with confidence
Faced with this scenario, many companies have begun to professionalise their approach to AI and data. What we observe is this: companies that put the right structures in place from the start, including Responsible AI, are able to scale with confidence, achieving nearly three times the return on their AI investments compared with those that have not.
But scaling effectively is tough. Many organisations still struggle to scale Responsible AI proofs of concept across their live processes. So what are the challenges? And how can they overcome them—and move from principles to practice?
Practitioner insights: The realities of Responsible AI
To answer these questions, we spoke to Responsible AI practitioners from 19 organisations across four continents. Our analysis indicates that some organisations have struggled to develop a systematic internal approach to convert principles into practice. Our experience shows this is because they underestimate the technical complexity, and the scale of people and process change, that the shift requires.
The four pillars of Responsible AI
Organisations everywhere need to tackle a central challenge: translating ethical principles into practical, measurable metrics that work for them.
To embed these into everyday processes, they also need the right organisational, technical, operational, and reputational scaffolding. Based on our experience delivering Responsible AI solutions to organisations worldwide, we’ve defined four pillars of successful Responsible AI implementations. In short, those pillars are:
- Organisational: Democratise new ways of working and facilitate human+machine collaboration.
- Operational: Set up governance and systems that enable AI to flourish.
- Technical: Make systems and platforms trustworthy and explainable by design.
- Reputational: Articulate the Responsible AI mission and ensure it’s anchored to company values and ethical guardrails.
Read our report for a detailed view on practitioner pain points for each of these, and approaches to smooth those pain points and enable AI scaling with confidence.
Responsible AI: From practice to proof
The value of AI is clear. But it can bring with it new and fast-evolving ethical and social issues. While many organisations have taken the first step and defined AI principles, translating these into practice is far from easy, especially with few standards or regulations to guide them.
We use a set of 25 questions to help our clients benchmark their motivators and challenges, as well as their maturity across people, process and technology, against their peers. Where are you on your Responsible AI journey?
Ray Eitel-Porter, MANAGING DIRECTOR – APPLIED INTELLIGENCE, GLOBAL LEAD FOR RESPONSIBLE AI
Medb Corcoran, MANAGING DIRECTOR – ACCENTURE LABS, GLOBAL RESPONSIBLE AI LEAD FOR TECHNOLOGY INNOVATION
Patrick Connolly, RESEARCH MANAGER – THE DOCK, ACCENTURE RESEARCH
Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme.
Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.
Whilst working at the Association of Medical Research Charities (AMRC) Katherine led AMRC’s policy work on patient data, consent and opt-out.
Katherine has a BSc degree in Biology from the University of Nottingham.
- [email protected]
- 020 7331 2019
Zoe is a Programme Assistant, supporting techUK's work across Policy, Technology and Innovation.
The team makes the tech case to government and policymakers in Westminster, Whitehall, Brussels and across the UK on the most pressing issues affecting this sector. It also supports the Technology and Innovation team in the application and expansion of emerging technologies across business, including Geospatial Data, Quantum Computing, AR/VR/XR and Edge technologies.
Before joining techUK, Zoe worked as a Business Development and Membership Coordinator at London First. Prior to that, she worked in Partnerships at a number of Forex and CFD brokerage firms, including Think Markets, ETX Capital and Central Markets.
Zoe has a BA (Hons) degree from the University of Westminster. In her spare time, she enjoys travelling, painting, keeping fit and socialising with friends.