The independent High-Level Expert Group on AI today presented its ethics guidelines for Trustworthy AI. In response, the European Commission has announced next steps for taking this work forward.
In summer 2019, the Trustworthy AI assessment list presented in Chapter III will undergo a piloting phase by stakeholders to gather practical feedback. A revised version of the assessment list, taking into account the feedback gathered through the piloting phase, will be presented to the European Commission in early 2020.
Furthermore, by autumn 2019 the Commission will: launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces. The Commission has also outlined plans to link up internationally, such as in the framework of the G7 and G20.
Today's plans are a deliverable under the AI strategy of April 2018, which aims to increase public and private investment to at least €20 billion annually over the next decade, make more data available, foster talent and ensure trust.
In June 2018 the Commission set up the High-Level Expert Group on AI (AI HLEG), which consists of 52 independent experts representing academia, industry, and civil society. The AI HLEG was tasked with producing practical guidelines to promote Trustworthy AI. A draft version of the guidelines was published in December 2018 as part of a public consultation, to which techUK responded through its Digital Ethics Working Group.
The final guidance comprises three chapters, moving from the most abstract in Chapter I to the most concrete in Chapter III:
- Chapter I - Foundations of Trustworthy AI: sets out the foundations of Trustworthy AI by laying out its fundamental-rights-based approach. It identifies and describes the ethical principles that must be adhered to in order to ensure ethical and robust AI.
- Chapter II - Realising Trustworthy AI: translates these ethical principles into seven key requirements that AI systems should implement and meet throughout their entire life cycle. In addition, it offers both technical and non-technical methods that can be used for their implementation.
- Chapter III - Assessing Trustworthy AI: sets out a Trustworthy AI assessment list to operationalise the requirements of Chapter II, offering AI practitioners practical guidance. This assessment should be tailored to the particular system's application.
Commenting on the launch of the new ethics guidelines for trustworthy AI, Antony Walker, deputy CEO of techUK, said:
“techUK welcomes the High-Level Expert Group on AI’s ambition to produce a set of Ethical Guidelines for Trustworthy AI that can be commonly adopted across the European Union.
We agree with the group’s aspiration to create practical, voluntary AI guidelines that can be operationalised. However, we are concerned that, in its current form, the granularity of the guidelines would make them difficult for many companies, particularly SMEs, to implement.
We are pleased to see that the Commission has today announced that the assessment list presented in Chapter III will undergo a piloting phase, and we look forward to continuing to provide feedback, on behalf of our members, over the coming months.”