In December I had the opportunity to attend and speak at techUK’s Digital Ethics Summit. This felt like something of a watershed moment, as without doubt the ethical implications of technology have gone from a niche debate to public discourse over the past 12 months. Partly this has been driven by events, notably the Cambridge Analytica scandal, which has exposed the ethical void at the heart of the business model of some of the world’s biggest companies. Partly this has been a conscious effort to create frameworks and institutions to start addressing some of the clear shortcomings. These new bodies were very much in evidence at the summit, with high-level contributions from Government, the Information Commissioner, the new Centre for Data Ethics and Innovation, the Ada Lovelace Institute, the Alan Turing Institute and many others.
For me personally, having spent much of the past few years engaged in what has sometimes seemed a lonely journey to embed ethics into police digital capability development, it was heartening, and surprisingly moving, to see such tangible evidence of a group of people, from a variety of backgrounds and sectors, with a shared determination to address this challenge.
The underlying question posed at the summit was: what does this mean in practice? In policing there is both a new challenge and some lessons from the past that may be relevant. The British policing model is founded on the principle of Policing by Consent: the notion, set out in Peel's principles published at the advent of the Metropolitan Police in 1829, that the police are the public and the public are the police. The coercive powers that the police are entrusted with must be accountable, both to the law and ultimately to the public they serve, in order to have legitimacy. This historic concept remains surprisingly relevant today.
Data has always been at the heart of policing, even if it hasn't always been explicitly recognised as such. Recording details of people (victims, suspects, witnesses), objects (stolen or damaged property, weapons, drugs etc), places (crime scenes, addresses of persons of interest, public and private spaces) and events (crimes, incidents) has always been critical to policing. Policing has always gathered a lot of data, and a lot of sensitive personal data at that. It's hard to imagine much more sensitive personal data than details of domestic abuse allegations, for example.
What has changed in recent years is that the volume of that data has grown exponentially, the variety of sources of data that are relevant to investigations has mushroomed, and the types of policing capabilities that are now in development all have the capacity to totally transform policing as we know it.
Of course this raises questions about how such developments are going to be overseen to maintain that bond of trust that is so fundamental to the entire construct of policing. Recent examples of this issue include Cardiff University's independent evaluation of the impact of automated facial recognition within South Wales Police, the London Policing Ethics Panel's interim evaluation of the Met facial recognition trial, and the joint Alan Turing Institute/Independent Digital Ethics Panel for Policing (IDEPP) evaluation of the national analytics solution developed by West Midlands Police. All of these frame the risks and benefits of new capabilities in terms of public consent and legitimacy, and highlight the need for an ethical framework to be in place at the outset, rather than considered after the event.
However, current policing capability to develop and test these concepts is limited. As the joint RUSI/University of Winchester policy review on machine learning and police decision making has recently highlighted, the lack of national guidelines has left police forces to feel their way through the complexities of this fast-evolving area, risking a loss of public trust and legitimacy in the process. The danger is that this hampers innovation that could significantly benefit public safety.
This is clearly unsatisfactory, but it is not an inevitable state of affairs. Some practical steps could make a radical difference, and there are encouraging signs. The West Midlands Police and Crime Commissioner has sponsored the creation of a high-level ethics committee to support greater independent oversight and scrutiny of projects from the design stage. In London the Mayor has a similar panel, and there is good work being carried out across the country. But it is clearly unrealistic to expect every individual force to have access to the expertise required to address what are inevitably complex and technical issues. Building on the experience of IDEPP and creating a network of expertise that could advise and support projects from the initiation stage would be a major step forward. So too would be the development of a set of guidelines that could help frame and address the foreseeable challenges that will emerge. And across the service there needs to be better education and advice, not just for practitioners but for decision makers and those in oversight roles.
This will require modest investment and some changes in behaviour to embrace a more collaborative approach, within and beyond the police service itself. These steps are consistent with the direction of travel in a host of other sectors, from healthcare to education, finance to transport, all of which face parallel challenges from the data revolution. The good news is that there is an appetite to embrace this challenge and a growing network of expertise to draw upon, and techUK has a key role to play in convening and supporting this debate. We owe it to the public to ensure that we grasp this opportunity.
For further information on techUK's work in this area, take a look at techUK's Digital Ethics in 2019 paper.