30 Jun 2021

Reflections from Roger Taylor, outgoing Chair of the Centre for Data Ethics and Innovation

Guest blog from Roger Taylor, outgoing Chair of the Centre for Data Ethics and Innovation.

In March 2019 the centre was launched at No 11 Downing Street with the brief to position the UK as a world leader in the safe and responsible adoption of AI. This week I finish my term as Chair. It is a moment to review where we have got to. 

Much good work has been done. There are major policy reports on topics such as algorithmic bias and microtargeting.

There are intelligence gathering reports on barriers to innovation and public attitudes. Less obvious but just as important has been the work that the CDEI has done with partners in the public and private sectors to help design effective mechanisms of governance - working with the police, the recruitment industry and the Cabinet Office to name just a few.

This is all welcome. But there is a sense in which we have lost ground over recent years. In 2019, the UK set itself the ambition of leading the world in the debate about how to manage AI for the benefit of our society and economy. It is an achievable goal, given our leading academic expertise, our strong data economy and our position as a global centre for law, regulation and dispute resolution.

The CDEI was set up on the view that AI is a general purpose technology and that effective regulation depends on the context. The governance issues for automated vehicles are very different from the issues relating to online harms or AI driven diagnostics. In each area sector regulators should lead, supported by a central expert body providing support and co-ordination. 

In the last three years support has grown for the more drastic option of new overarching laws and new regulators for AI. This has been led by the EU, but with increasingly vocal support in many countries. I still believe that the UK model is right. However, we need to do much more to demonstrate the effectiveness of an alternative model based on safe data sharing mechanisms, standards for auditing and competent sector regulators working within internationally agreed norms.

A lot has happened in the last three years - a pandemic, a change of government, Brexit. Attentions have understandably been focused elsewhere. But as we emerge from these pressing issues, we have an opportunity to get back on track in developing the UK as a world leading AI and data economy.

With that in mind, I have four pieces of advice to pass on to the new board about how to make this work.

  1. The delivery is the policy 

The CDEI works to turn policy on data driven technology into practice on the ground by working with partners. In this rapidly developing area, policy means nothing without implementation. A policy to encourage new data institutions such as data trusts is of little value in the abstract. The same is true for new models of governance and audit. It is only in the delivery of these models in specific contexts that a policy to support ‘responsible innovation’ means anything. There is still work to do to close the gap between policy and delivery.

  2. Governance is a driver of innovation 

AI is like a drug, not like a bicycle. With some products the buyer can work out for themselves if it works - a bicycle for example. With pharmaceuticals it would be hopeless if people had to take pills first and then try to work out if it was making them better. AI is similar. You need lots of hard data to properly judge whether it is working or not. The more that this evidence conforms to agreed standards, the greater will be the ability of the market to generate real value rather than snake oil.

Fear of stifling regulation causes some people to put questions of ethics and governance into a separate box from questions of encouraging innovation. To regard ‘responsible governance’ as an overlay on top of activity to promote innovation is to fundamentally misunderstand the way in which the two are closely connected in data driven technologies. Good governance is key to driving effective innovation. Uncertainty about what good governance looks like is a major barrier to adoption. As CDEI surveys have shown, belief in effective regulation is key to public trust.

Too often, work on innovation pays too little regard to governance, and work on regulation pays too little heed to innovation.

For example, the current work on online harms aims to address risks from social media and AI driven content recommendation systems. But in deciding how we do this, it is essential that we also think about how any such actions will enable innovative and beneficial uses of such targeting systems. As we bring in regulation to, say, address concerns that social media can harm the mental health of young people, we need to be thinking equally about how our efforts are enabling this same technology to support people to look after their mental and physical health. 

The ability of AI systems to identify aspects of an individual and make recommendations that influence their behaviour has been one of the biggest drivers of economic growth over the last decade and is responsible for much of the value in the large tech companies. But this remarkable technology is being used at the moment primarily to sell things. We have only begun to scratch the surface of how this might be used to help people better care for their health, discover new career opportunities or plan effectively for retirement.

It is right that we are looking at how to regulate online harms. But at the same time, we need to be thinking equally about how we create the market conditions that encourage online goods.

  3. Make people happy 

This advice relates to government and the use of AI in the public sector. The CDEI sentiment tracking surveys through the pandemic show two things. Most people in the UK think the government should be using data driven technologies to address their concerns. Most people in the UK do not believe that current government use of data benefits them personally. 

That second statistic is remarkable. Only 8 per cent of people believe that the government use of data about them is of any direct benefit to them. This more than anything drives what the CDEI has labelled ‘tenuous trust’ - a willingness to trust government to use data, but trust that quickly fractures when concerns are raised.

Why does this happen? One reason, I believe, is the tendency within government to use data to address the things that seem to them most pressing - and which are among the most difficult and contentious things to do with data driven systems: for example the use of predictive analytics in policing or child protection. 

This is not wrong in principle. But using AI in high stakes decision making is never likely to build public trust and support. The public sector is getting much better at digital transactional services - as anyone who has renewed their car tax, or received their vaccination status in their NHS app, will know. We need to turn our minds to how data driven systems and recommender algorithms could be used to help citizens with their everyday problems - finding training opportunities and jobs, managing their pensions, or navigating government information online.

‘Show, don’t tell’ is the age-old advice to anyone trying to tell a story. The same goes for telling the public about the benefits of data driven technology in the public sector. They need to see it and feel it - not just hear about it.

  4. Set national priorities 

There is much good work going on across UK government to support the introduction of responsible use of AI - for example within NHSx or the work of the Department for Transport on autonomous vehicles. In other areas, such as education, there is much less happening. It is not clear how these differences fit into a considered set of national priorities.

In any one area, the responsible adoption of AI and data driven technologies can engage many different parts of government - BEIS leads on data portability, DCMS on data intermediaries, GDS on digital ID management and individual departments on their own areas of responsibility. However, there is limited coordination across government to ensure these efforts add up to more than the sum of the parts by focusing on agreed national priorities.

If we are to make up for lost time and deliver on the ambitions set out when the CDEI was founded, we would benefit from an approach more akin to that in Singapore, where clear national areas of priority are identified and government efforts in governance, data availability, procurement, grants and so on are organised to enable these industries to flourish in ways that demonstrate how responsible innovation can benefit us all.