techUK will go through the AI White Paper consultation's questions, and you will be able to raise any comments you may have. Please also feel free to send us your own written response at [email protected]. The Government's consultation closes on 21 June. It can be found here.

Please let us know if you have any questions, and see the consultation's questions below.

Questions:

The revised cross-sectoral AI principles 

1. Do you agree that requiring organisations to make it clear when they are using AI would adequately ensure transparency? 

2. What other transparency measures would be appropriate, if any? 

3. Do you agree that current routes to contestability or redress for AI-related harms are adequate? 

4. How could routes to contestability or redress for AI-related harms be improved, if at all? 

5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by AI technologies? 

6. What, if anything, is missing from the revised principles? 

A statutory duty to regard 

7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation? 

8. Is there an alternative statutory intervention that would be more effective? 

New central functions to support the framework 

9. Do you agree that the functions outlined in section 3.3.1 would benefit our AI regulation framework if delivered centrally? 

10. What, if anything, is missing from the central functions? 

11. Do you know of any existing organisations that should deliver one or more of our proposed central functions?

12. Are there additional activities that would help businesses confidently innovate and use AI technologies? 

12.1. If so, should these activities be delivered by government, regulators or a different organisation? 

13. Are there additional activities that would help individuals and consumers confidently use AI technologies? 

13.1. If so, should these activities be delivered by government, regulators or a different organisation? 

14. How can we avoid overlapping, duplicative or contradictory guidance on AI issued by different regulators? 

Monitoring and evaluation of the framework 

15. Do you agree with our overall approach to monitoring and evaluation? 

16. What is the best way to measure the impact of our framework? 

17. Do you agree that our approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework? 

18. Do you agree that regulators are best placed to apply the principles and government is best placed to provide oversight and deliver central functions? 

Regulator capabilities 

19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way? 

20. Do you agree that a pooled team of AI experts would be the most effective way to address capability gaps and help regulators apply the principles? 

Tools for trustworthy AI 

21. Which non-regulatory tools for trustworthy AI would most help organisations to embed the AI regulation principles into existing business processes? 

Final thoughts 

22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework. 

Legal responsibility for AI

L1. What challenges might arise when regulators apply the principles across different AI applications and systems? How could we address these challenges through our proposed AI regulatory framework? 

L2.i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for AI across the life cycle? 

L2.ii. How could it be improved, if at all?

L3. If you are a business that develops, uses, or sells AI, how do you currently manage AI risk including through the wider supply chain? How could government support effective AI-related risk management? 

Foundation models and the regulatory framework 

F1. What specific challenges will foundation models such as large language models (LLMs) or open-source models pose for regulators trying to determine legal responsibility for AI outcomes? 

F2. Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models? 

F3. Are there other approaches to governing foundation models that would be more effective? 

AI sandboxes and testbeds 

S1. Which of the sandbox models described in section 3.3.4 would be most likely to support innovation? 

S2. What could government do to maximise the benefit of sandboxes to AI innovators?

S3. What could government do to facilitate participation in an AI regulatory sandbox? 

S4. Which industry sectors or classes of product would most benefit from an AI sandbox?