Responsible AI: From principles to practice

Guest Blog: Ray Eitel-Porter, Medb Corcoran & Patrick Connolly from Accenture #AIWeek2021


Responsible AI in practice—essential but not easy

Despite the real value organisations can achieve through Artificial Intelligence (AI), many still struggle to address the risks associated with it.

In a global survey of risk managers, 58% identify AI as the biggest potential cause of unintended consequences over the next two years. Only 11% say they’re fully capable of assessing risks associated with organisation-wide AI adoption.

Bias, discrimination, fairness, and explainability are areas of paramount concern. And while specific definitions exist for these problem areas, translating them into action involves tough decisions and application-specific constraints.
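
To make the translation concrete: a fairness principle such as demographic parity can be expressed as a measurable check on a model's decisions. The sketch below is a minimal illustration of our own, not something prescribed in the report; the data, column names and tolerance threshold are all hypothetical.

```python
import pandas as pd

def demographic_parity_difference(df, group_col, outcome_col):
    """Absolute gap in favourable-outcome rates between groups.

    0.0 means all groups receive favourable outcomes at the same
    rate; larger values indicate greater disparity.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval decisions (illustrative only).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_difference(decisions, "group", "approved")
print(f"Demographic parity difference: {gap:.2f}")  # 0.33

# The tolerance is a policy choice, not a technical one.
THRESHOLD = 0.10  # hypothetical, set per application
if gap > THRESHOLD:
    print("Flag model for bias review before deployment.")
```

Even with such a metric in hand, choosing an acceptable threshold remains an application-specific judgement, which is exactly where the tough decisions arise.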

As AI decisions increasingly influence and impact people's lives at scale, the enterprise's responsibility to manage the potential ethical and socio-technical implications of AI adoption grows with them.

So how do leaders meet that responsibility, while scaling AI for the exponential business benefits at stake?

Professionalising AI to scale with confidence

Faced with this scenario, many companies have begun to professionalise their approach to AI and data. And what we observe is this: the companies that put the right structures in place from the start, including Responsible AI considerations, are able to scale with confidence, achieving nearly three times the return on their AI investments compared with those that have not.

But scaling effectively is tough. Many organisations still struggle to move Responsible AI proofs of concept into their live processes. So what are the challenges? And how can they overcome them, and move from principles to practice?

Practitioner insights: The realities of Responsible AI

To answer these questions, we spoke to Responsible AI practitioners from 19 organisations across four continents. Our analysis indicates that some organisations have struggled to develop a systematic internal approach to converting principles into practice. And our experience shows this is because they underestimate the technical complexity and the scale of people and process change required.

The four pillars of Responsible AI

Organisations everywhere need to tackle a central challenge: translating ethical principles into practical, measurable metrics that work for them.

To embed these into everyday processes, they also need the right organisational, technical, operational, and reputational scaffolding. Based on our experience delivering Responsible AI solutions to organisations worldwide, we’ve defined four pillars of successful Responsible AI implementations. In short, those pillars are:

  • Organisational: Democratise new ways of working and facilitate human+machine collaboration.
  • Operational: Set up governance and systems that enable AI to flourish.
  • Technical: Make systems and platforms trustworthy and explainable by design (see the sketch after this list).
  • Reputational: Articulate the Responsible AI mission and ensure it’s anchored to company values and ethical guardrails.
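
On the technical pillar, "explainable by design" can begin with favouring models whose behaviour can be inspected directly. The sketch below is our illustration rather than anything mandated by the report; it assumes scikit-learn, uses synthetic data in place of a real dataset, and the feature names are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical credit-scoring features (illustrative only).
feature_names = ["income", "debt_ratio", "tenure_months"]
X, y = make_classification(n_samples=500, n_features=3,
                           n_informative=3, n_redundant=0,
                           random_state=0)

# An inherently interpretable model: with standardised inputs,
# the coefficients are directly comparable global explanations.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
for name, weight in zip(feature_names, coefs):
    print(f"{name:>15}: {weight:+.3f}")
```

Starting from an interpretable baseline like this makes it easier to justify, case by case, where a more powerful but more opaque model is worth the reduced transparency.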

Read our report for a detailed view of the practitioner pain points under each pillar, and of approaches to smooth them and enable scaling AI with confidence.

Responsible AI: From practice to proof

The value of AI is clear. But it can bring with it new and evolving ethical and social issues. While many organisations have taken the first step and defined AI principles, translating these into practice is far from easy, especially with few standards or regulations to guide them.

We use a set of 25 questions to help our clients benchmark their motivators and challenges, along with their maturity across people, process and technology, against their peers. Where are you on your Responsible AI journey?


Authors:

Ray Eitel-Porter, MANAGING DIRECTOR – APPLIED INTELLIGENCE, GLOBAL LEAD FOR RESPONSIBLE AI

Medb Corcoran, MANAGING DIRECTOR – ACCENTURE LABS, GLOBAL RESPONSIBLE AI LEAD FOR TECHNOLOGY INNOVATION

Patrick Connolly, RESEARCH MANAGER – THE DOCK, ACCENTURE RESEARCH


You can read all insights from techUK's AI Week here

Katherine Holden

Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC), Katherine led its policy work on patient data, consent and opt-out.

Katherine has a BSc degree in Biology from the University of Nottingham.

Email: [email protected]
Phone: 020 7331 2019
