For trustworthy AI, explaining "why" is essential

As organisations set their direction of travel for the years to come, harnessing the power of AI and machine learning will undoubtedly play a crucial, pace-setting role. The convergence of large datasets, increased computing power and advances in algorithmic techniques is unlocking unprecedented potential for better, more efficient and more personalised product and service delivery.

However, for staff, customers and the public at large to embrace the growing suite of tools provided by AI technologies, these tools must, first and foremost, be trustworthy. Not only must the results of the tasks that AI systems help to complete be reliable, safe, ethical and fair; it should also be possible to explain that they are so. In other words, those affected by AI-assisted decision-making systems and other forms of automated problem-solving should be able to understand the reasons behind any outcomes of these processes that impact them.

Over the last year, The Alan Turing Institute and the Information Commissioner’s Office (ICO) have been working together to explore how best to tackle the set of challenges that arise from demands for explainable and trustworthy AI. The ultimate product of this joint endeavour — the most comprehensive practical guidance on AI explanation produced anywhere to date — has now been released for consultation. The consultation runs until 24 January 2020, with the final guidance due to be released later in the year.

At the heart of the guidance, which is introduced in greater depth in this extended blog post, is a series of related questions: What makes for a good explanation of decisions supported by AI systems? How can such explanations be reliably extracted and made understandable to a non-technical audience? How should organisations go about providing meaningful explanations of the AI-supported decisions they make? And what do the people affected by these decisions deserve, desire and need to know?

The main focus of the guidance is the need to tailor explanations to the context in which AI systems are used for decision-making. This vital contextual aspect includes the domain or sector in which an organisation operates, and the individual circumstances of the person receiving the decision.

The guidance takes a principles-based approach to the governance of AI explanations, setting out four principles of explainability that underpin it and steer the recommendations it proposes: be transparent, be accountable, consider context, and reflect on impacts.

Building on these principles, we identify a range of explanation types, each covering a different facet of an explanation: who is responsible; the rationale that led to a particular decision; how data has been collected, curated and used; and the measures taken across an AI model’s design and deployment to ensure fair and safe outcomes.

For organisations, the emphasis is on how to set up and govern the use of AI systems to be suitably transparent and accountable, and to ensure that these systems prioritise, where appropriate, inherently explainable AI models over less interpretable ‘black box’ systems. We aim to help governance and technical teams think about how best to extract explanations from the AI systems their organisations use.
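To make the distinction between interpretable and ‘black box’ models concrete, here is a minimal illustrative sketch in Python (our own, using scikit-learn; it is not drawn from the guidance). It trains an inherently explainable logistic regression whose standardised coefficients can be read off directly as a rationale explanation, something a black-box model cannot offer without additional post-hoc tooling.

```python
# Illustrative sketch only (not from the guidance): an "inherently
# explainable" model whose learned weights double as a rationale
# explanation, in contrast to a black box whose internals are opaque.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Example dataset; in a real deployment this would be the
# organisation's own decision data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardising the features makes the coefficients comparable, so
# their relative magnitudes can be reported to a decision recipient.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# The coefficients are the model's "rationale": each one says how
# strongly a feature pushes decisions in one direction or the other.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(X.columns, coefs), key=lambda p: -abs(p[1]))[:5]
for name, weight in top:
    print(f"{name}: {weight:+.2f}")
```

Running this prints the handful of features that most strongly push the model’s decisions one way or the other, which is the raw material for the kind of rationale explanation a non-technical decision recipient could be given.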

The guidance is intended to be a useful and inclusive tool, so the ICO and the Turing welcome comments from members of the public, experts, and practitioners who are developing and deploying AI systems. You can find details on responding to the consultation here. Maximising the explainability of AI systems and their outcomes is a considerable technical, social and organisational challenge — but it is only by solving this challenge that AI can be designed and deployed in ways that make it worthy of public trust.

Katherine Mayes
Programme Manager | Cloud, Data, Analytics and AI
    T 020 7331 2019
