techUK position paper – Governance for an AI future

In collaboration with our members, techUK has summarised four top priorities for a successful AI governance regime in the UK.


The UK is among the global leaders in AI, ranking highly in international comparisons of peer-reviewed publications and private investment.

AI technologies have the potential to drive economic growth, help improve many of the services we interact with daily and even contribute to solving some of the most complex social and environmental challenges facing the modern world.

Yet a recent global poll found that the population of Great Britain is among the most sceptical of AI use, with only 35% saying they trust a company using AI as much as they trust a company which does not.

One way to help secure greater public trust is to adopt a clear and transparent approach to AI governance, one that facilitates informed engagement. In the National AI Strategy published last year, the government established that a governance regime which “supports scientists, researchers and entrepreneurs while ensuring consumer and citizen confidence in AI technologies” is fundamental to securing the UK’s ongoing position as a global AI superpower. The strategy also announced that the government is preparing a white paper that is expected to set out its chosen path for the UK’s future AI governance framework.

To support this work, techUK has prepared this short paper setting out the key elements we believe the white paper must include if we are to build a proportionate, innovation-friendly and effective AI governance regime. We encourage the government to:

  1. Take a risk-based approach; tier AI governance requirements by the estimated level of risk posed by a given AI model or application, informed by clear criteria and categories.
  2. Consider the entire AI lifecycle; clarify at what stages, from AI planning and procurement to ongoing use, risks can reasonably be expected to be addressed.
  3. Encourage and oversee the development of an effective AI assurance market; work with industry to develop consistent and transparent requirements catering to different levels of risk and AI lifecycle stages.
  4. Acknowledge the role of existing regulation; ensure that any potential new regulation or governance mechanisms do not replicate or contradict existing regulation.

Emilie Sundorph

Programme Manager, Digital Ethics and Artificial Intelligence, techUK

Emilie joined techUK in June 2021 as the Programme Manager for Digital Ethics & AI.

Prior to techUK, she worked as the Policy Manager at the education charity Teach First and as a Researcher at the Westminster think tank Reform. She is passionate about the potential of technology to change people's lives for the better, and about working with the tech industry, the public sector and citizens to achieve this.

Emilie holds a master's degree in Philosophy and Public Policy from LSE. In her spare time she is currently trying to learn Persian and improve her table tennis skills.

Email: [email protected]
Phone: +44 (0) 7523 481 331
Twitter: @ESundorph
LinkedIn: https://uk.linkedin.com/in/emilie-sundorph
