Scaling and Accelerating AI - The Assurance Challenge

David Lawton, Technical Director at Informed Solutions, discusses the challenge of assessing the fitness of systems that employ Machine Learning (ML) and Artificial Intelligence (AI) in highly complex and regulated industries. As part of AI Week 2021 #AIWeek2021

Artificial Intelligence (AI) is one of the fastest-growing and most widely adopted data-driven technologies in use across the globe.

According to analyst house Gartner, just 10% of companies had used or were about to use AI technology in 2015. Fast forward to 2020 and that number had risen to almost 40%, with over 90% of multinationals questioned planning to invest in AI technologies for efficiency and productivity gains.

Today, all around the globe, organisations of all sizes, from governments and large enterprises to small online businesses, are adopting AI with increasing pace and scale. This surge in interest and uptake is being driven by the exponential growth of available data which, set alongside the huge increases in computational power provided by cloud computing and the commoditisation of AI software tools and frameworks, makes AI and Machine Learning (ML) technologies ever more attractive.

Now, the benefits of AI can be seen almost daily. Organisations are taking advantage of AI technologies to improve efficiency, productivity and decision making. Consumers are showing increasing willingness to share their data and adopt AI to improve their experience of services, from streaming music to choosing financial products and more. Alongside this, the societal benefits of such technologies are becoming ever more evident, as seen in advances in the speed and accuracy of disease diagnosis.

This enthusiasm must be balanced with some caution over the potential for AI and ML technologies to be wrongly configured, mishandled or abused, and over the unpredictability of the technology if its evolution proceeds without the necessary safety and bias checks and balances.

So, whilst the benefits and advancements are clear, the introduction of AI and ML-based technologies should not be rushed, particularly for organisations operating in complex or highly regulated environments. Indeed, for the regulators that serve and protect those industries, a dual challenge is presented: how do regulators best regulate AI across industry segments, and how can they themselves take advantage of AI to regulate those industries and assure the safe and ethical deployment of autonomous systems?

One of the key considerations for the assurance of AI systems relates to data. This is where adoption of AI introduces unique challenges: training data (the data used to develop machine learning models) can be flawed, introducing error, duplication or even bias into a system.
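To make this concrete, the sketch below shows the kind of simple automated checks that can surface duplication, gaps and label imbalance before training begins. It is illustrative only: the pandas DataFrame, the 'outcome' label column and the file name are assumptions for the example, not a prescribed method.

```python
# A minimal sketch of pre-training data checks, assuming a pandas
# DataFrame with a hypothetical 'outcome' label column.
import pandas as pd

def audit_training_data(df: pd.DataFrame, label_col: str = "outcome") -> dict:
    """Report duplication, gaps and label imbalance before training."""
    return {
        # Exact duplicate rows silently over-weight some examples.
        "duplicate_rows": int(df.duplicated().sum()),
        # Missing values can hide collection errors or systematic gaps.
        "missing_values": int(df.isna().sum().sum()),
        # A heavily skewed label distribution is an early warning of bias.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Usage: flag problems before they are baked into a model.
df = pd.read_csv("training_data.csv")  # hypothetical file
print(audit_training_data(df))
```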

Data degradation can result in ‘Model Drift’, which fundamentally changes system performance and results: the AI degrades over time because its algorithms and data no longer adequately reflect changes in the real world. The result is that an organisation may make bad decisions, and the potential for the AI to be unfair or discriminatory increases.
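Drift of this kind can be monitored statistically. The sketch below is illustrative rather than a recommended standard: it uses a two-sample Kolmogorov-Smirnov test to ask whether a feature seen in live traffic still looks like the data the model was trained on.

```python
# A minimal sketch of drift monitoring, assuming numeric feature samples
# from the original training set and from recent live traffic.
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(train_values: np.ndarray, live_values: np.ndarray,
                  alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_values, live_values)
    # A small p-value means the real world no longer looks like the data
    # the model learned from, so its results may be degrading.
    return p_value < alpha

# Usage with synthetic data: the live feature has shifted upwards.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.6, scale=1.0, size=5_000)
print(feature_drift(train, live))  # True: review or retraining is warranted
```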

AI assurance, and making sure AI systems are safe, requires a hands-on approach: managing processes and people to get the best results. So, what are some best practices to consider? What can be done to put together a good framework for AI governance? If you already have a data policy in place, then you have the right start. The relationship between data and AI is inseparable: What data do you have? Where is it coming from? How is the data being altered? By whom? Clear-cut requirements and principles must be adopted.
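Those four questions become answerable in practice when provenance is recorded alongside every dataset change. The sketch below is purely illustrative, assuming a simple in-application audit record rather than any particular governance product; all names are hypothetical.

```python
# An illustrative sketch of recording data provenance; the record fields
# mirror the governance questions above. Names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Answers the core governance questions for one dataset change."""
    dataset: str          # what data do you have?
    source: str           # where is it coming from?
    transformation: str   # how is the data being altered?
    altered_by: str       # by whom?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[ProvenanceRecord] = []
audit_log.append(ProvenanceRecord(
    dataset="referrals_2021",          # hypothetical dataset name
    source="hospital booking system",  # hypothetical source
    transformation="removed duplicate patient rows",
    altered_by="data-engineering team",
))
```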

Our work at Informed puts us at the heart of the design and delivery of digital services for complex and highly regulated environments. Whether in healthcare, justice and emergency services or energy and the environment, the application of AI and ML technologies that bring life and meaning to complex data must be carefully considered in light of policy direction.

The development of smarter regulation can accompany the growth and acceptance of emerging technology, protecting consumers, citizens or patients whilst helping drive innovation that sits naturally and effectively with policy. In the last ten years, however, policymakers have given industry players something of a free rein to deploy emerging technologies. Smart policy development for complex markets that wish to take advantage of AI and ML relies on regulators having greater access to, and understanding of, the potential and challenges brought by these technologies. This in turn will help establish effective future frameworks that foster innovation and growth whilst the public remains protected.

So where can organisations or regulators struggling with AI assurance challenges go to best understand and manage the complexities involved in regulating AI and ML technologies for complex markets? Today, one of the world’s leading safety assurance programmes is run by an increasingly influential group led by Professor John McDermid, Director of the Assuring Autonomy International Programme at the University of York.

In 2016, the Lloyd’s Register Foundation Foresight Review of Robotics and Autonomous Systems (RAS), a category which includes AI and ML, identified that the biggest obstacle to gaining the benefits of RAS was their assurance and regulation. The Assuring Autonomy International Programme was created in response and is now actively addressing these global challenges.

It is leading state-of-the-art research to provide industry, regulators and others with guidance on assuring and regulating RAS. It breaks safety assurance into three elements: communication, confidence and trust.

  • Communication – A system that communicates the appropriate information to you so that you understand how it is making decisions.
  • Confidence – A system that is built using tools, techniques and data that give you confidence in its decisions, actions, abilities, safety, and limitations.
  • Trust – A structured way to understand the system and evaluate whether its safety assurance is sufficient and can be trusted.

“Safety assurance is about bringing these three elements together in a structured and evidenced way. The work we’re undertaking, through collaborations across the globe, is providing guidance for developers and regulators to ensure that we can all benefit from the safe, assured and regulated introduction and adoption of RAS,” said Professor McDermid.

At Informed, our investment in AI skills doesn’t simply rely on an intimate understanding of methods, algorithms and software capabilities. Our true understanding of the capabilities and limits of positively disruptive technology relies on long-term engagement with innovators, academics and industry stakeholders, so that we can develop and deliver the methods and practices that assure the safe and ethical implementation of AI and ML in sensitive environments.

Ultimately, a structured, effective and measurable approach to assuring AI and ML technologies for regulated industries will enable gains on two fronts: industry-wide understanding of best practice, and evidence that new systems using AI and ML are safe. This in turn will help regulators better understand how to protect and serve industries, through a greater understanding of the risks and of the limits within which autonomous technologies can be guaranteed to perform safely.

Author:

David Lawton, Technical Director, Informed Solutions

You can read all insights from techUK's AI Week here
