08 Sep 2023

Large Language Models and Generative AI in policing – Data quality, transparency and trust

techUK’s Justice and Emergency Services Programme hosted a hybrid roundtable in partnership with the Office of the Police Chief Scientific Adviser (OPCSA) covering the risks and opportunities of using Large Language Models (LLMs) and Generative AI (GAI) in policing and security.

OPCSA’s vision is to deliver the most science- and technology-led police service in the world, aiming to engage widely, evolve strategically and embed the best S&T in a way that is trusted by the public.

“Science and technology sits at the heart of delivering a better and trusted police service. I expect rapid progress this year in areas that include how we communicate with the public, how we use data to deliver effective investigations, and how we engineer technologies to prevent crime in the first place.” – Professor Paul Taylor

Understanding the landscape

Emerging technologies are reshaping the landscape of policing in the UK. While LLMs and GAI present numerous opportunities for improving law enforcement, they also raise a number of unresolved questions about their implementation. Key concerns include how to use the technology effectively, how to guarantee data protection and accuracy, and how to maintain transparency and community trust.

Addressing these issues requires communication between law enforcement bodies and industry. Open dialogue and a shared understanding of the challenges on both sides are essential to ensure that any use of AI in policing aligns with accountability principles. Alongside the technical obstacles, it is essential to tackle cultural perceptions and AI-related stereotypes.

The event provided an opportunity for industry and OPCSA to start an open discussion on the opportunities and risks that LLMs and GAI present for policing, addressing challenges and barriers as well as good practices.

Summary of discussion

Barriers to adopting LLMs and GAI

The event identified numerous challenges and obstacles that policing must recognise when incorporating LLMs and GAI. These include the need to educate users, equipping them with appropriate training and guidance, as well as educating the public about the benefits of AI and cultivating trust in policing through transparent communication.

Additionally, there is a need to select the most suitable LLM and GAI tools, considering factors such as commercial versus open-source models. A clear understanding of the context of AI use in policing is essential in order to select data models, address biases, ensure efficiency and navigate legal considerations.

A unified approach to LLMs and GAI in policing is missing – a result of differences across the 43 police forces in terms of budget, time and resources. Effective engagement with police forces is crucial, involving concrete examples of policing case studies to foster discussion.

Transparency and explainability

Any decision to adopt AI tools in policing to deliver improved outcomes for the public must be underpinned by principles of transparency and explainability. The discussion explored how conscious and unconscious biases may influence the outcomes of decisions.

Related to this is the concern that the lack of transparency of the algorithms within some LLM and GAI models makes biases harder to detect, raising both ethical and legal concerns.

For these reasons, it is crucial from a decision-making perspective that, when using LLM or GAI models, the original source of information is always checked before a final decision is made.

Training and AI knowledge regarding how these tools operate are essential in policing, both to maximise their utility and to position them as “information providers” rather than “decision makers”. Participants agreed that these tools should not substitute human decision-making authority.

The discussion also focused on deepfakes. They are a concern that extends beyond law enforcement and need to be addressed collaboratively with experts and suppliers who have deepfake expertise, which requires the police to be educated on the why and what of deepfakes.

Outcomes

The event was a successful occasion to start a conversation with techUK members on the risks and opportunities of using LLMs and GAI in policing and security. Law enforcement can learn a great deal from other sectors, such as education and healthcare, concerning the use of LLMs and GAI – one example is the use of sandboxes, not yet adopted in policing.

Both policing and industry are keen to ensure that the conversation started leads to a real partnership. With the right tools, the use of LLMs and GAI in policing could bring greater objectivity and potentially greater trust and confidence in policing decision-making.

Both techUK and OPCSA are early in their thinking in this area and intend to continue running sessions so that policing and industry fully understand what LLMs and GAI look like, and the barriers, risks and opportunities associated with them.

Please contact [email protected] if you are interested in finding out more about techUK’s work in this area.

 

Cinzia Miatto

Programme Manager - Justice & Emergency Services, techUK

Cinzia joined techUK in August 2023 as the Justice and Emergency Services (JES) Programme Manager.

The JES programme represents suppliers, championing their interests in the blue light and criminal justice markets, whether they are established entities or newcomers seeking to establish their presence.

Prior to joining techUK, Cinzia held positions within the third and public sectors, managing international and multi-disciplinary projects and funding initiatives. Cinzia holds a double MA degree in European Studies from the University of Göttingen (Germany) and the University of Udine (Italy), with a focus on politics and international relations.

Email: [email protected]
