As part of Europe’s approach to building an AI “ecosystem of excellence and trust”, we have an update on two key policy initiatives:
- Publication of the European Commission’s AI High-Level Expert Group Assessment List
- Snapshot of consultation survey responses to the EU AI White Paper
Publication of the European Commission’s AI High-Level Expert Group Assessment List
Following two years of work, the AI High-Level Expert Group has presented its final Assessment List for Trustworthy Artificial Intelligence.
The Assessment List for Trustworthy AI (ALTAI) is a checklist that businesses and organisations can use to self-assess the trustworthiness of AI systems they are developing. The ALTAI is now available both as a document and as a prototype web-based tool.
The assessment list provides an overview of the main requirements first set out in the Ethics Guidelines for Trustworthy AI and offers a step-by-step guide for organisations considering how to deploy AI. We encourage techUK members to take a look at the list and see how it could be adapted and used in their own processes. It is worth noting that the Assessment List is entirely voluntary and has no regulatory implications of any kind.
Snapshot of consultation survey responses to the EU AI White Paper
Last month the European Commission ran a public consultation to gather feedback on their proposed AI White Paper. This paper sets out a series of policy and regulatory options to help promote the uptake of AI across the EU economy and build towards an “ecosystem of excellence and trust”.
The Commission received over 1,250 responses and has so far analysed the preliminary trends from the online questionnaire responses. The qualitative responses are still being analysed and will be considered as part of the final report.
The initial findings show that most respondents support the high-level actions proposed by the Commission. Concepts such as working with member states, developing partnerships with the private sector and focusing on AI skills development were all, unsurprisingly, well received. However, the devil will be in the detail: determining how this long list of high-level actions should be prioritised and executed will most likely divide opinion.
One of the key topics in the survey was the different legislative options for AI. From the initial survey analysis, most respondents requested either a new regulatory framework for AI or a modification of current legislation. However, there was considerable disagreement among respondents as to whether new compulsory requirements should be limited to high-risk applications. In many cases, respondents did not seem to have a clear view of what "high-risk" means. Defining "high-risk" applications is inherently difficult: it is often context-dependent and influenced by the risk/benefit trade-off attributed to the AI application. It may therefore be better to envisage an approach that is context-based rather than technology-specific.
For AI applications that do not qualify as "high-risk", there was a high level of support for a voluntary labelling scheme, although how this would work in practice requires further consideration. Among the conformity assessment mechanisms suggested, there was general support for a combined ex-ante and ex-post system of market surveillance. Such a combination may be workable, provided ex-ante mechanisms are limited to self-assessment only. It will also be important to consider how these mechanisms would apply across sectors that are already regulated.
This report is simply a snapshot of the views and opinions expressed by some of the respondents. Many of the topics raised in the survey are highly complex and heavily dependent on context, and the survey format and subsequent statistical analysis make it difficult for nuanced arguments to cut through. An in-depth analysis of the qualitative responses is therefore necessary before conclusions are drawn and proposals are recommended.