Interpretability for the safe, ethical deployment of AI

Guest Blog: Humanising Autonomy as part of techUK's AI Week #AIWeek2021

As a behaviour AI company that deals exclusively with camera footage, our role in the modern privacy narrative is clear. Although many dismiss privacy as a thing of the past, we believe that individual privacy and data integrity are not only possible in today’s society, but something that all artificial intelligence and analysis companies should strive for, no matter the domain or markets they’re active in. The solution is simple: interpretable, explainable AI. We’ve built this into our proprietary Behaviour AI Platform to ensure that our approach to AI actively works towards an ethical society in which the value of machines is measured by how well they promote the safety and well-being of people. Here’s how we do it.

Interpretable AI for transparent, trustworthy and ethical AI

Using end-to-end deep learning for pedestrian crossing prediction is the preferred approach in industry, but it is not without drawbacks. Though the method is exceptionally powerful, it imposes very few constraints on the structure of the model, which may have billions of parameters. That complexity makes it near impossible to understand how decisions are made, and it prevents us from obtaining reliable and valid uncertainty estimates, making these systems black-box models. In addition to their lack of explainability, these deep learning models are known for overconfident predictions (see Adversarial Examples). These shortcomings make end-to-end deep learning difficult to justify in safety-critical applications such as autonomous driving, driver-assist products, I4.0 use cases, and many other domains.
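To make the uncertainty point concrete, here is a minimal, hypothetical sketch (in PyTorch, not code from our platform): a toy end-to-end classifier can assign near-certain softmax scores to an input far outside anything it was trained on, while a simple Monte Carlo dropout pass exposes the disagreement that a single score hides. The architecture, input and sample count are illustrative assumptions only.

```python
# Illustrative sketch only: why a raw softmax score from an end-to-end network
# is a poor substitute for a calibrated uncertainty estimate.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy classifier standing in for a crossing / not-crossing model.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(64, 2),
)

x_ood = torch.randn(1, 16) * 10.0   # an input far from any plausible training data

# Single deterministic pass: softmax can still be near-certain on junk input.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(x_ood), dim=-1)
print("single-pass softmax:", probs.numpy().round(3))

# Monte Carlo dropout: keep dropout active and sample several forward passes.
# Disagreement across samples is a rough proxy for the epistemic uncertainty
# that the single softmax score hides.
model.train()
with torch.no_grad():
    samples = torch.stack(
        [torch.softmax(model(x_ood), dim=-1) for _ in range(50)]
    )
print("MC-dropout mean:", samples.mean(0).numpy().round(3))
print("MC-dropout std: ", samples.std(0).numpy().round(3))
```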

In contrast to end-to-end models, we believe the missing link in intent prediction is a true understanding of human cognitive processes. This theory of mind helps to identify when further communication is required in road scenarios to establish a smooth interaction. Moreover, this perspective is necessary to bridge the critical safety gap in many industries’ current approaches to prediction models. Our methodology combines behavioural psychology, statistical AI and novel deep learning algorithms to understand, infer and predict the full spectrum of human behaviours.

This interpretable, white-box approach to AI enables the transparency, trust and model improvements necessary to maintain safety, personal privacy and integrity. Our white-box architecture gives customers the opportunity to understand how decisions are made. With interpretable AI, the science underpinning our intention engine framework is clear, making functional safety more attainable than ever before. It means trustworthy AI. Modularity enables easy optimisation for new environments, accounting for local customs and ultimately reducing bias. Finally, interpretable AI enables constantly improving models that become stronger and more accurate each day.
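As a rough illustration of what modularity buys, the hypothetical sketch below separates a perception stage from an interpretable intent-scoring stage: every intermediate output is human-readable, and the scoring weights could be retuned for local road customs without touching the rest of the pipeline. The stage names, cues and weights are illustrative assumptions, not the actual Behaviour AI Platform.

```python
# Hypothetical white-box intent pipeline: each stage exposes an inspectable,
# human-readable intermediate, so decisions can be traced and individual
# modules swapped or retuned independently.
from dataclasses import dataclass

@dataclass
class BehaviouralCues:
    looking_at_vehicle: bool
    at_kerb: bool
    walking_speed_mps: float

def extract_cues(frame) -> BehaviouralCues:
    """Placeholder perception stage; in practice this would wrap detection and pose models."""
    return BehaviouralCues(looking_at_vehicle=True, at_kerb=True, walking_speed_mps=1.4)

def crossing_intent(cues: BehaviouralCues) -> float:
    """Transparent scoring stage with auditable, illustrative weights
    that could be tuned per deployment region."""
    score = 0.2
    if cues.at_kerb:
        score += 0.4
    if cues.looking_at_vehicle:
        score += 0.2
    score += min(cues.walking_speed_mps / 2.0, 1.0) * 0.2
    return min(score, 1.0)

cues = extract_cues(frame=None)
print(f"cues: {cues}")                               # every intermediate is inspectable
print(f"crossing intent: {crossing_intent(cues):.2f}")
```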

Understanding risks is critical to our approach

As with any safety-critical camera-based perception application, understanding the risks and ensuring you are able to respond accordingly is essential. In addition to accounting for functional safety requirements when assessing appropriate applications of our software, we ensure that personal data is collected on a lawful basis. Proper data management and protection measures to prevent negative unintended consequences are built into all of our applications and made a top priority across our talented team. This is relevant across all domains - be it manufacturing, smart city, automotive, insurance, logistics, fleets or retail analytics - and ensures that people’s safety is consistently put first.

 

Author:

Humanising Autonomy

 

You can read all insights from techUK's AI Week here

 

Katherine Holden


Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC), Katherine led AMRC’s policy work on patient data, consent and opt-out.

Katherine has a BSc degree in Biology from the University of Nottingham.

Email:
[email protected]
Phone:
020 7331 2019
