21 Mar 2022

Why ethics might be the most important factor in AI enabled policing

Matthew Cheesman, Accenture explores ethics as part of techUK's emerging Tech in Policing Week

Whether we realise it or not, automation and AI are having a significant impact on how public services in the UK are delivered. You only need to hear about the automation and AI journey undertaken at DWP to understand that these technologies affect us all and are becoming a critical business resource supporting the day-to-day running of the public sector. Policing is no different.

Whilst this notion may conjure up images of Robocop or scenes from Minority Report, for better or worse, these films aren’t an accurate reflection of automation and AI in policing. However, these technologies are having a direct impact on the day-to-day work of police officers and the general public in ways you might not imagine.

Automation in Policing 

Automation is already being used to unlock the value of the human workforce at a large policing authority that has partnered with Accenture. By automating the mundane, high-volume, repetitive tasks that were previously carried out by skilled police officers, teams have saved hundreds of hours per month. From a business case perspective, this saving seems impressive, but the important lesson learned is to look beyond time savings to appreciate how automation adds real value for the police.

By relieving highly skilled, trained officers of admin-heavy, repetitive tasks that consume their time, there are a number of potential benefits:  

  1. Improved service delivered to the public: The authority has been able to reduce the time to serve from 24 hours to 45 minutes in one automated business process. A reduced time to serve doesn’t just benefit police operations; it can also help improve public perceptions of the police. Officers can spend more time on public-facing roles that no robot or algorithm can carry out – delivering a public service with empathy.

  2. Increased compliance: On routine tasks, automations work faster than human beings and, with the right architecture in place, can be scaled (unlike human teams) to meet fluctuations in process volume. At this authority, automation has helped teams meet SLAs by reducing the time taken to complete processes and has mitigated the risk of hefty regulatory fines.

  3. Improved staff morale: Automating mundane tasks can help people find more intrinsic value in their daily work and focus on the reason many of them joined the police: making a difference in their community.

AI and Machine Learning in Policing 

Beyond automation, artificial intelligence and machine learning are already being used by many police authorities in the UK. To name a couple of examples, Durham Constabulary’s Harm Assessment Risk Tool predicts how likely an offender is to re-offend, and the Metropolitan Police Service is trialling facial recognition software to automatically identify people in CCTV footage. Objectively, the benefits of using these tools over manual processing appear large, and there is little doubt that the use of AI and machine learning will continue to grow. The outgoing Met Commissioner Cressida Dick has placed on record her support for allowing these solutions to help police officers solve crimes.

The potential benefits of using AI in policing are vast, and a number of use cases are already emerging. For example, it could help improve officer safety by training officers for difficult situations using VR, or help map out criminal networks using data analytics. However, the technology also presents a number of challenges. As is widely reported, the way AI and algorithms are developed can introduce or exacerbate non-conscious biases.

Through a policing lens, there is a risk that if historic crime data is used to train an AI solution, it may inadvertently absorb historic biases and reinforce them today. With this in mind, it should be a priority to prevent those risks becoming serious issues for police forces and the public.

Currently, there is no AI-specific regulation in the UK, although the UK government’s recently published AI strategy is a step in that direction. In the meantime, a good place for police authorities to start could be four key areas:

  1. Governance: An independent governance board, sitting across all UK police forces, could be established to regulate new AI solutions. Within specific authorities, data ethics teams could be set up, such as the one recently formed at West Midlands Police. That team is responsible for ensuring ethics and people’s rights are at the centre of any AI solution delivered. At both levels, independence is key to ensure a disconnect from any internal bias that may exist.

  2. Process: Ensure robust delivery models for new AI solutions, including stress testing through red teams and algorithmic impact assessments. This can help prevent bias from affecting the stakeholders the technology will impact.

  3. Technology: Creating explainable AI solutions by design, and making use of tools and frameworks such as IBM’s AI Fairness 360 or Google’s What-If Tool, can help identify and mitigate biases if they arise.

  4. People: Increasing diversity across the force is already a key goal, but in this area it could be especially important. By bringing lived experience to their work, people may be better able to recognise where biases might manifest and challenge teams to do more to tackle them.
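To make the Technology point above a little more concrete: one of the simplest checks that fairness toolkits such as AI Fairness 360 perform is the disparate impact ratio, comparing how often a model flags members of one group versus another. The sketch below is purely illustrative (the group names and data are hypothetical, not from any police system), but it shows the kind of measurement involved:

```python
def selection_rate(outcomes):
    """Fraction of positive (flagged) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group, reference_group):
    """Ratio of one group's selection rate to a reference group's.
    A common rule of thumb treats ratios below 0.8 as a warning sign
    that the model may be treating the groups unequally."""
    return selection_rate(group) / selection_rate(reference_group)

# Hypothetical model outputs (1 = flagged as high risk) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # selection rate 0.7
group_b = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # selection rate 0.3

ratio = disparate_impact(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.3 / 0.7 ≈ 0.43
```

A ratio this far below 1.0 would not prove bias on its own, but it is exactly the kind of signal that should trigger the red-team review and algorithmic impact assessment described under Process.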

The future of Automation, AI and Machine Learning is exciting, but police authorities in the UK need to be transparent with the public about their work in this area and ensure they get their governance structures right. 

This content is provided for general information purposes and is not intended to be used in place of consultation with our professional advisors. This document may refer to marks owned by third parties. All such third-party marks are the property of their respective owners. No sponsorship, endorsement, or approval of this content by the owners of such marks is intended, expressed, or implied. 

 

Further reading 

AI in Policing and Security, UK Government 

AI ethics & governance, Accenture  

Author:

Matthew Cheesman, Public Safety, Accenture UK

Georgie Morgan


Head of Justice and Emergency Services, techUK

Georgie joined techUK as the Justice and Emergency Services (JES) Programme Manager in March 2020, then becoming Head of Programme in January 2022.

Georgie leads techUK's engagement and activity across our blue light and criminal justice services, engaging with industry and stakeholders to unlock innovation, problem solve, future gaze and highlight the vital role technology plays in the delivery of critical public safety and justice services. The JES programme represents suppliers by creating a voice for those who are selling or looking to break into and navigate the blue light and criminal justice markets.

Prior to joining techUK, Georgie spent 4 and a half years managing a Business Crime Reduction Partnership (BCRP) in Westminster. She worked closely with the Metropolitan Police and London borough councils to prevent and reduce the impact of crime on the business community. Her work ranged from the impact of low-level street crime and anti-social behaviour on the borough, to critical incidents and violent crime.

Email:
[email protected]
LinkedIn:
https://www.linkedin.com/in/georgie-henley/
