For trustworthy AI, explaining "why" is essential

As organisations set their direction of travel for the years to come, harnessing the power of AI and machine learning technologies will undoubtedly play a crucial, pace-setting role. The convergence of large datasets, increased computing power, and advances in algorithmic techniques is unlocking unprecedented potential for better, more efficient, and more personalised product and service delivery.

However, for staff, customers, and the general public to embrace the growing suite of tools provided by AI technologies, these tools must, first and foremost, be trustworthy. Not only must the results of the tasks that AI systems help to complete be reliable, safe, ethical and fair, but it should also be possible to explain that they are so. In other words, those who are affected by AI-assisted decision-making systems and other forms of automated problem-solving should be able to understand the reasons underlying any outcomes of these processes that affect them.

Over the last year, The Alan Turing Institute and the Information Commissioner’s Office (ICO) have been working together to explore how best to tackle the set of challenges that arise from demands for explainable and trustworthy AI. The ultimate product of this joint endeavour — the most comprehensive practical guidance on AI explanation produced anywhere to date — has now been released for consultation. The consultation runs until 24 January 2020, with the final guidance due to be released later in the year.

At the heart of the guidance, which is introduced in greater depth in this extended blog post, is a series of related questions: What makes for a good explanation of decisions supported by AI systems? How can such explanations be reliably extracted and made understandable to a non-technical audience? How should organisations go about providing meaningful explanations of the AI-supported decisions they make? And what do the people affected by these decisions deserve, desire and need to know?

The main focus of the guidance is the need to tailor explanations to the context in which AI systems are used for decision-making. This vital contextual aspect includes the domain or sector in which an organisation operates, and the individual circumstances of the person receiving the decision.

The guidance takes a principles-based approach to the governance of AI explanations, presenting four principles of explainability that underpin it and steer its recommendations: be transparent, be accountable, consider context, and reflect on impacts.

Building on these principles, we identify a range of explanation types, each covering a different facet of an explanation: who is responsible; the rationale that led to a particular decision; how data has been collected, curated, and used; and the measures taken across an AI model's design and deployment to ensure fair and safe outcomes.
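By way of illustration only (the field names below are our own shorthand, not terms mandated by the guidance), an organisation might record these facets as a simple structured record alongside each AI-assisted decision:

```python
# Illustrative sketch: one way to capture the explanation types named
# above for a single AI-assisted decision. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    responsibility: str        # who is accountable for the decision
    rationale: str             # why the system reached this outcome
    data: str                  # how data was collected, curated, and used
    fairness_and_safety: str   # steps taken to ensure fair, safe outcomes

example = DecisionExplanation(
    responsibility="Reviewed and signed off by the credit-risk team",
    rationale="Long credit history and no recent defaults weighed in favour",
    data="Trained on five years of anonymised application records",
    fairness_and_safety="Model audited quarterly for disparate impact",
)
```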

For organisations, the emphasis is on how to set up and govern the use of AI systems to be suitably transparent and accountable, and to ensure that these systems prioritise, where appropriate, inherently explainable AI models over less interpretable ‘black box’ systems. We aim to help governance and technical teams think about how best to extract explanations from the AI systems their organisations use.
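As a purely illustrative sketch, not drawn from the guidance itself, the following assumes a scikit-learn-style workflow and a hypothetical credit-scoring dataset. It shows how an inherently explainable model lets a team read off a simple rationale explanation for an individual decision:

```python
# Illustrative sketch only: favouring an inherently interpretable model
# and extracting a rationale explanation from it. The dataset and
# feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "credit_history_years", "num_defaults"]
X = np.array([[42.0, 7.0, 0.0],
              [18.0, 2.0, 3.0],
              [65.0, 12.0, 1.0]])  # toy applicant records
y = np.array([1, 0, 1])            # toy outcomes: 1 = approved

# A linear model is inherently explainable: each prediction is a
# weighted sum of the inputs, so the weights can be read directly.
model = LogisticRegression().fit(X, y)

def rationale(instance):
    """Rank features by their contribution to this particular decision."""
    contributions = model.coef_[0] * instance
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order]

# The features that weighed most heavily in the second applicant's decision:
print(rationale(X[1]))
```

A 'black box' alternative, such as a gradient-boosted ensemble, would instead need a post-hoc feature-attribution technique to produce a comparable rationale, which is exactly the trade-off the guidance asks teams to weigh.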

The guidance is intended to be a useful and inclusive tool, so the ICO and the Turing welcome comments from members of the public, experts, and practitioners who are developing and deploying AI systems. You can find details on responding to the consultation here. Maximising the explainability of AI systems and their outcomes is a considerable technical, social and organisational challenge — but it is only by solving this challenge that AI can be designed and deployed in ways that make it worthy of public trust.

Katherine Mayes
Programme Manager | Cloud, Data, Analytics and AI
T 020 7331 2019
