21 Mar 2022

The four success factors for effective adoption of AI in policing

Evie Dineva, Senior Analytics & AI Consultant, Agilisys, as part of techUK's Emerging Tech in Policing Week

Data and information are at the heart of modern policing. Yet, when it comes to the use of novel technologies such as artificial intelligence (AI), there is undoubtedly enormous untapped potential to do more for the benefit of frontline officers and police staff alike. 

In our daily lives, we take for granted the devices and applications that make life easier. Whether it’s Netflix, Alexa or our smartwatches, they all utilise an element of data insight or artificial intelligence. Yet while we are so well served in the comfort of our own homes, in policing we still use technology largely as a commodity, rather than to push the boundaries of service provision or to transform how we fight and prevent crime.

With that in mind, what does it take to successfully implement AI and reap its benefits in policing? How can we avoid repeating the recent headline-grabbing example from the US, where the use of a criminal risk assessment algorithm resulted in the wrongful arrest of a citizen? Based on our experience of delivering data-led transformation across policing and the wider public sector, below are four mission-critical factors that will enable us to unlock the numerous benefits of AI across policing.

1. Don’t do artificial intelligence for the sake of it; link it to your outcomes

AI is powerful when it empowers citizens, operational and frontline staff to realise the mission of policing – to prevent crime, not just fight it. To achieve this, police forces should nurture a culture of innovation and of data- and value-driven thinking from the outset.

Decision intelligence underpins this: starting with the outcomes and agreeing how success will be measured is pivotal for any force that wishes to adopt AI successfully. It is a powerful way to approach becoming data- and AI-driven, because it connects investment in the technology to the force’s strategy and helps to ensure value for money. For example, the desired outcome could be preventing crime in certain areas through an early tactical intervention strategy, with AI deployed to proactively detect and analyse crime patterns and highlight the areas where incidents are most likely to occur. The important thing, however, is not to forget continuous evaluation and learning, and to feed the results objectively back through the value chain.
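To make that idea concrete, here is a minimal sketch in Python – an illustration only, not any force’s actual model – of how a simple area-scoring baseline could be tied directly to an agreed success measure. The column names (area_id, week, incidents), the integer week index and the top-N threshold are all assumptions for the example.

```python
import pandas as pd

def score_areas(history: pd.DataFrame, lookback_weeks: int = 8) -> pd.Series:
    # Rank areas by their average weekly incident count over a recent window.
    # Assumes hypothetical columns: area_id, week (integer index), incidents.
    recent = history[history["week"] >= history["week"].max() - lookback_weeks + 1]
    return recent.groupby("area_id")["incidents"].mean().sort_values(ascending=False)

def hit_rate(scores: pd.Series, next_week_incidents: pd.Series, top_n: int = 10) -> float:
    # Outcome-linked success measure: the share of next week's incidents that
    # occur in the top-N areas flagged for early tactical intervention.
    flagged = scores.head(top_n).index
    return next_week_incidents[next_week_incidents.index.isin(flagged)].sum() / next_week_incidents.sum()
```

The point of the sketch is the evaluation step: success is measured against the outcome the force agreed up front, and that measure is fed back through the value chain.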

2. Invest in data foundations, otherwise AI will fail

Establishing solid data foundations is pivotal to the success of AI in policing. Often, this comes in the form of a joined-up data platform built on cloud-first infrastructure that allows operational and front-line teams to access data from anywhere.

Storing data securely in one central platform prevents data silos and helps forces build a single version of the truth and a single nominal view, which builds confidence in the data and its integrity.

Each force’s performance and insight teams can then focus on data observability – using the toolsets available in the cloud to automate monitoring of data assets and fully understand the health of the data in their systems. Data assets become trusted, enabling teams to build and deploy enterprise-grade machine learning and AI systems at speed, fostering a culture of innovation and the ability to ‘fail fast’ without huge cost and overhead implications.
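As a simple illustration of what automated data observability could look like, the sketch below checks the volume, completeness and freshness of a data asset held as a pandas DataFrame. The hypothetical updated_at column and the 24-hour freshness threshold are assumptions for the example, not recommendations.

```python
import pandas as pd

def health_report(df: pd.DataFrame, expected_min_rows: int, updated_col: str = "updated_at") -> dict:
    # Simple health signals for a data asset: volume, completeness and freshness.
    latest_update = pd.to_datetime(df[updated_col]).max()
    return {
        "row_count_ok": len(df) >= expected_min_rows,
        "null_rate_per_column": df.isna().mean().round(3).to_dict(),
        "fresh_within_24h": (pd.Timestamp.now() - latest_update) <= pd.Timedelta(hours=24),
    }
```

Checks like these can be scheduled in the cloud platform so that problems with a data asset surface before they reach an AI model that depends on it.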

For an Operational Uplift Manager who is responsible for deciding where new officer recruits from the national uplift programme are placed, that could mean leveraging evidence-based insights underpinned by highly available, high-quality data, together with AI-based demand and resource forecasting models that give an onward projection. The decisions made ultimately affect a force’s delivery of service to the public and the welfare of its officers.
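By way of illustration only, the sketch below shows one very simple way an ‘onward projection’ could be produced from a hypothetical series of monthly demand figures, using a straight-line trend as a stand-in for a force’s actual forecasting model.

```python
import numpy as np
import pandas as pd

def project_demand(monthly_demand: pd.Series, months_ahead: int = 6) -> pd.Series:
    # Fit a straight-line trend to historic monthly demand and extend it forward.
    # Real forecasting models would account for seasonality and uncertainty.
    x = np.arange(len(monthly_demand))
    slope, intercept = np.polyfit(x, monthly_demand.to_numpy(dtype=float), deg=1)
    future_x = np.arange(len(monthly_demand), len(monthly_demand) + months_ahead)
    return pd.Series(slope * future_x + intercept, name="projected_demand")
```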

3. Build your skills and capability to drive value from your investment 

Police forces should empower well-resourced performance and data science teams. As we increasingly rely on data-driven decision making, the ability to use and understand data becomes mission-critical to building resilient AI systems and algorithms.

Domain knowledge and context are everything in policing. Forces therefore need to invest in individuals who understand the data, know how to manipulate and appraise it, and can work with internal users and citizens to translate it into actionable insights and drive long-term operational adoption. For instance, for a Detective in Public Protection, that could mean working collaboratively with the force’s data science team to identify domestic abuse survivors who are most at risk of further abuse, so that the force can develop intervention strategies for those individuals and help protect them.
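Purely as an illustration of that collaboration, the sketch below shows how a data science team might prototype an interpretable risk model whose drivers domain experts can inspect and challenge. The feature and label names are hypothetical, and any real use of such a model would require rigorous ethical, legal and professional oversight.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names; real features would be defined with domain experts.
FEATURES = ["prior_reports", "days_since_last_report", "escalation_flag"]

def fit_risk_model(cases: pd.DataFrame, label: str = "further_abuse_within_6_months"):
    # An interpretable model is chosen so its coefficients can be challenged
    # against professional judgement rather than treated as a black box.
    model = LogisticRegression(max_iter=1000)
    model.fit(cases[FEATURES], cases[label])
    return model, pd.Series(model.coef_[0], index=FEATURES)
```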

4. Transparency and trust

What do we mean by that? I’ve heard a number of public sector organisations say, ‘we use artificial intelligence but the ins and outs of it are a black box to us, so we can’t fully trust that the outputs aren’t biased’. 

I deliberately placed transparency and trust together, as one could argue whether we can ever objectively trust something whose inner workings we don’t quite understand. The truth is, we don’t have to know everything about all the technology and devices we use daily, because we are driven by the notion of proven capability – does something do what it says on the tin? With that in mind, however, because AI systems at their core operate on the data they have been trained on, it is difficult for AI to be truly objective. Biased data equals biased AI. In policing this is an even more pertinent topic. A recently published blog by the Centre for Data Ethics and Innovation (CDEI) makes it clear that ‘the public needs to be brought into the conversation on ethical and transparent technology use in policing, now and in the future.’ The CDEI continues to work actively with police officers, national policing bodies, policymakers, regulators and academics to develop an ethical framework and feed recommendations into Government, to ensure technology in policing is incorporated into decision-making in a way that is fair and does not discriminate.

Forces must not forget that the creation, training and deployment of AI systems lie in their hands. Technology serves a purpose, and its governance and responsible use are well within our remit. But we ourselves, whether citizens, front-line officers or operational policing staff, come with a set of biases, both conscious and unconscious. Forces therefore need to ensure they are objective when using such technology and dive into the ethics and governance behind responsible AI, keeping in mind that the accountability lies on our shoulders – we can’t blame the algorithms!
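One practical way to put that accountability into practice is to run routine disparity checks on model outputs. The sketch below is a minimal, assumed example of one such check, comparing false positive rates across groups in a held-out set of results; it is one check among many, not a full responsible-AI framework.

```python
import pandas as pd

def false_positive_rate_by_group(results: pd.DataFrame, group_col: str,
                                 actual_col: str = "actual",
                                 predicted_col: str = "predicted") -> pd.Series:
    # Among cases where the outcome did not occur (actual == 0), the share
    # flagged as high risk, broken down by group. Assumes 0/1 columns with
    # hypothetical names; large gaps between groups warrant scrutiny.
    negatives = results[results[actual_col] == 0]
    return negatives.groupby(group_col)[predicted_col].mean()
```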

For a performance and insight team in a UK force, that could mean using advanced analytics and AI to bring data from surveillance cameras and detection systems together with analysis of previous crime patterns, to proactively identify individuals with a high propensity to re-offend before they do so. Working collaboratively with the Information Governance team, and under the guidance of a suitable ethical framework, to bring transparency and ethical conversations to the forefront will allow operational and front-line officers to be confident that their force is acting with legitimacy.

As a technology, AI can help forces prevent crime and drive more resilient, optimised processes, transforming the horizon of policing. However, we must lay the right foundations to enable sustainable AI, whilst placing people at the centre of decision making about its use and deployment. That way, we will realise our ambition responsibly and successfully across the police and criminal justice sector.

 

Author:

Evie Dineva, Senior Analytics & AI Consultant, Agilisys

 

Georgie Morgan

Head of Justice and Emergency Services, techUK

Georgie joined techUK as the Justice and Emergency Services (JES) Programme Manager in March 2020, then becoming Head of Programme in January 2022.

Georgie leads techUK's engagement and activity across our blue light and criminal justice services, engaging with industry and stakeholders to unlock innovation, problem solve, future gaze and highlight the vital role technology plays in the delivery of critical public safety and justice services. The JES programme represents suppliers by creating a voice for those who are selling or looking to break into and navigate the blue light and criminal justice markets.

Prior to joining techUK, Georgie spent 4 and a half years managing a Business Crime Reduction Partnership (BCRP) in Westminster. She worked closely with the Metropolitan Police and London borough councils to prevent and reduce the impact of crime on the business community. Her work ranged from the impact of low-level street crime and anti-social behaviour on the borough, to critical incidents and violent crime.

Email:
[email protected]
LinkedIn:
https://www.linkedin.com/in/georgie-henley/
