This is a hybrid event with limited in-person tickets available. Please register as soon as possible to secure your place. Location – 10 St Bride Street, London, EC4A 4AD.
techUK is hosting this event in partnership with the Office of the Police Chief Scientific Advisor (OPCSA), bringing policing and industry together to discuss and better understand the risks and opportunities, particularly in the near to medium term, of using Large Language Models (LLMs) and Generative AI (GAI) in policing and security.
Objectives
- For policing to gain a clearer understanding of the current pace and timelines for LLM and GAI integration into the products and services that policing does, or will, consume.
- For industry to be aware of the risks and considerations these technologies raise from a policing and security perspective, particularly:
- Data sovereignty
- Transparency
- Explainability
- Ethics
- Reliability of insights for the criminal justice system
- Risk of user over-reliance on AI generated material
- To understand the timescales for developing the capability to reliably detect criminal use of GAI (e.g. deepfakes, phishing scams) and to apply watermarking that differentiates real from generated material.
Prompt Questions
- What are the biggest barriers to policing effectively and securely adopting LLM and GAI capabilities?
- How do we square the need for transparency and explainability of AI capabilities with the inherent complexity of GAI and LLMs?
- Where should policing be focussing its investment in LLM and GAI, both in terms of:
- Ability to harness the technology; and
- Capability to combat its criminal use.
- What does policing need to provide industry so that industry can develop safe, secure, and reliable LLM and GAI products and services for policing to consume?
- Where can policing learn from other sectors?