For trustworthy AI, explaining "why" is essential

As organisations set their direction of travel for the years to come, harnessing the power of AI and machine learning technologies will undoubtedly play a crucial, pace-setting role. The convergence of large datasets, greater computing power, and advances in algorithmic techniques is unlocking unprecedented potential for better, more efficient, and more personalised products and services.

However, for staff, customers, and the wider public to embrace the growing suite of tools provided by AI technologies, these tools must, first and foremost, be trustworthy. Not only must the results of the tasks that AI systems help to complete be reliable, safe, ethical, and fair, but these outcomes should also be explainable as such. In other words, those affected by AI-assisted decision-making and other forms of automated problem-solving should be able to understand the reasons behind any outcomes of these processes that impact them.

Over the last year, The Alan Turing Institute and the Information Commissioner’s Office (ICO) have been working together to explore how best to tackle the set of challenges that arise from demands for explainable and trustworthy AI. The ultimate product of this joint endeavour — the most comprehensive practical guidance on AI explanation produced anywhere to date — has now been released for consultation. The consultation runs until 24 January 2020, with the final guidance due to be released later in the year.

At the heart of the guidance, which is introduced in greater depth in this extended blog post, is a series of related questions: What makes for a good explanation of decisions supported by AI systems? How can such explanations be reliably extracted and made understandable to a non-technical audience? How should organisations go about providing meaningful explanations of the AI-supported decisions they make? And what do the people affected by these decisions deserve, desire and need to know?

The main focus of the guidance is the need to tailor explanations to the context in which AI systems are used for decision-making. This context includes the domain or sector in which an organisation operates and the individual circumstances of the person receiving the decision.

The guidance stresses a principles-based approach to the governance of AI explanations, presenting four principles of explainability that underpin the guidance and steer the recommendations it proposes. These principles are to be transparent, be accountable, consider context, and reflect on impacts.

Building on these principles, we identify a range of explanation types, each covering a different facet of an explanation: who is responsible; the rationale that led to a particular decision; how data has been collected, curated, and used; and the measures taken across an AI model's design and deployment to ensure fair and safe outcomes.
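
To make these facets concrete, here is a minimal sketch in Python of how an organisation might bundle the explanation types into a single record that travels with each AI-assisted decision. The class and field names are our own illustration and do not appear in the guidance.

```python
from dataclasses import dataclass

# Illustrative only: these names are hypothetical, not taken from the guidance.
@dataclass
class DecisionExplanation:
    """The explanation types a decision recipient might need."""
    responsibility: str        # who is accountable for the decision
    rationale: str             # why this particular outcome was reached
    data: str                  # how data was collected, curated, and used
    fairness_and_safety: str   # measures taken to ensure fair, safe outcomes

explanation = DecisionExplanation(
    responsibility="Credit risk team, with final review by a human underwriter",
    rationale="High debt-to-income ratio was the main factor in the refusal",
    data="Application form data and credit bureau records from the last 6 years",
    fairness_and_safety="Outcomes tested for disparity across protected groups",
)
```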

For organisations, the emphasis is on how to set up and govern the use of AI systems so that they are suitably transparent and accountable, and how to prioritise, where appropriate, inherently explainable AI models over less interpretable 'black box' systems. We aim to help governance and technical teams think about how best to extract explanations from the AI systems their organisations use.
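
As a rough illustration of that distinction, the sketch below uses the open-source scikit-learn library with invented toy data and hypothetical feature names. An inherently interpretable model lets the rationale be read directly from its learned parameters, whereas a black-box model would need additional post-hoc explanation tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: 200 applicants, 3 hypothetical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)
feature_names = ["income", "tenure", "debt_ratio"]

# An inherently interpretable model: the rationale for any decision can be
# read off the learned coefficients (direction and relative weight of each
# feature), supporting a 'rationale' explanation directly.
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# A shallow decision tree is another interpretable option: its decision
# rules can be exported as plain text for a non-technical audience.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```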

The guidance is intended to be a useful and inclusive tool, so the ICO and the Turing welcome comments from members of the public, experts, and practitioners who are developing and deploying AI systems. Details of how to respond to the consultation are available on the ICO's website. Maximising the explainability of AI systems and their outcomes is a considerable technical, social, and organisational challenge, but only by meeting it can AI be designed and deployed in ways that make it worthy of public trust.

Katherine Mayes
Programme Manager | Cloud, Data, Analytics and AI
T 020 7331 2019