07 Oct 2021

The EU’s AI Act: an initial assessment

Guest blog: Ray Eitel-Porter, Global Lead for Responsible AI at Accenture, considers the implications of the European Commission’s proposal for an Artificial Intelligence Act from the perspective of AI systems providers.

On 21 April 2021, the European Commission published its proposal for an Artificial Intelligence Act, with the objective of setting the global standard for the development of secure, trustworthy and ethical AI.

Accenture was delighted to contribute to the process leading to this first major proposal to comprehensively regulate AI. Among other things, we participated in the work of the High-Level Expert Group on AI as one of the organisations that piloted its Assessment List for Trustworthy Artificial Intelligence (ALTAI).

This piece considers the implications of the proposal from the perspective of AI systems providers. As the proposal will likely change before it becomes law, our assessment is preliminary and based only on the proposal as it stands.

The proposal in brief

We believe that the proposal sets a clear ambition that AI applications in the future will be innovative, equitable, trustworthy and human-centric.

The proposed regulation would categorise AI systems based on the associated level of risk (Unacceptable, High, and Low or Minimal). Compliance requirements and guidance for providers, users, importers and distributors of AI systems would then vary according to the associated risk level. The rules would also impose transparency obligations on AI systems that interact with humans in non-obvious ways: deep fakes and chatbots, for instance, would need to be disclosed as such.
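
To make the tiered structure easier to picture, here is a minimal sketch of how a provider might represent the risk categories and the obligations attached to them internally. The tier names follow the proposal, but the mapping itself, and every identifier in the code, is our own illustration rather than anything specified by the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as described in the Commission's proposal."""
    UNACCEPTABLE = "unacceptable"      # prohibited outright
    HIGH = "high"                      # subject to the Title III, Chapter 2 requirements
    LOW_OR_MINIMAL = "low_or_minimal"  # few or no new obligations

# Illustrative mapping only: the obligations actually owed are those
# defined in the final text of the Act, not this summary.
PROVIDER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the market"],
    RiskTier.HIGH: [
        "risk management system",
        "data quality and data governance",
        "technical documentation and record keeping",
        "transparency and provision of information to users",
        "human oversight",
        "robustness, accuracy and cybersecurity",
    ],
    RiskTier.LOW_OR_MINIMAL: [],
}

# Transparency duties (e.g. for chatbots and deep fakes) attach to systems that
# interact with humans in non-obvious ways, on top of the tier-based obligations.
```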

As with the EU’s General Data Protection Regulation (GDPR), the proposed act would impose stiff fines (reaching a maximum of 6% of global annual turnover or €30 million, whichever is higher) for those in breach of certain provisions of the regulation.
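
On our reading of the penalty provisions, the headline ceiling works out as in the short sketch below. The function name is ours, and while the figures are those in the proposal as it stands, both could change before the Act is adopted:

```python
def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious breaches under the proposal:
    6% of global annual turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * global_annual_turnover_eur, 30_000_000)

# A provider with EUR 2bn in global annual turnover faces a ceiling of EUR 120m,
# since 6% of turnover exceeds the EUR 30m floor. A provider with EUR 100m
# turnover still faces the EUR 30m floor (6% would only be EUR 6m).
print(maximum_fine_eur(2_000_000_000))  # 120000000.0
print(maximum_fine_eur(100_000_000))    # 30000000
```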

The EU’s proposal marks a turning point for AI technology, with the focus now as much on what should be done as it is on what could be done. It could also serve to catalyse the AI market in the EU, which is lagging behind other global regions, by giving organisations the confidence to invest in AI technology – whether for use in their own businesses or to take to market.

Scaling AI with confidence

When it comes to AI deployments, many enterprises have likely been “stuck in neutral”, unwilling to risk potential harm to end users, and to their reputation, without clear consensus on how to implement the technology in a safe and responsible manner. This is particularly true of companies in high-risk industries such as financial services, health and the public sector. Although many details still need to be confirmed before the proposal becomes law, developers of AI systems (‘providers’) and organisations which procure and make use of these AI systems (‘users’) now have a much clearer sense of the potential requirements that will underpin AI adoption in the EU.

This is important because many of the organisations we work with are taking a build approach to AI, developing their own models rather than using third-party options. For providers of AI systems categorised as high risk, proposed minimum legal requirements are set out in Title III, Chapter 2, covering data quality and data governance, technical documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and cybersecurity.

The good news is that the approaches needed to meet these requirements are already embedded in the advanced Responsible AI frameworks deployed by forward-thinking providers and other industry stakeholders. With the regulatory direction of travel now clear, companies can start to put these principles into practice.

From principles to practice

In our experience, it has been this move from principles to practice that many organisations have found most challenging. Operationalising Responsible AI is not easy, as demonstrated by a recent Accenture survey of Responsible AI practitioners which indicated that many have struggled to develop a systematic internal approach to convert their principles into practice.

In particular, the importance of active top-down governance, a Responsible AI culture, and robust operational and organisational procedures, processes and controls is often overlooked:

  • Organisational. Our research highlights an unwillingness by leadership to engage deeply with issues such as fairness, transparency and the potential for discrimination. In some cases, little value was placed on risk mitigation, while time pressures meant that short-term product success was prioritised ahead of the long-term benefits of Responsible AI.
  • Operational. Our research also revealed that companies consistently struggle with stakeholder misalignment, frustrating bureaucracy, conflicting agendas and a lack of clarity on processes or ownership when it comes to Responsible AI.

In this context, it is important to highlight two central requirements of the proposed regulation. In Chapter 2, providers are required to implement a risk management system for high-risk AI systems to mitigate potential risks across the full AI lifecycle.

Chapter 3 sets out clear horizontal obligations for providers, which can be interpreted as a wide-ranging governance framework. If adopted, these obligations will require providers to operationalise their Responsible AI efforts through the development of a sound quality management system, detailed documentation in the form of written policies, procedures and instructions, and the establishment of a robust post-market monitoring system.

The quality management system will incorporate elements such as a compliance strategy; techniques, procedures and systematic actions to be used for the design of AI systems; test and validation procedures; systems and procedures for data management; a risk management system; a post-market monitoring system; and record keeping, amongst others.
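
As a sketch of what operationalising the record-keeping and post-market monitoring elements might look like in code, consider the minimal example below. The schema and every name in it are hypothetical, since the proposal specifies outcomes rather than implementations:

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("post_market_monitoring")

@dataclass
class PredictionRecord:
    """One logged decision from a high-risk AI system, retained to support
    record keeping and post-market monitoring (hypothetical schema)."""
    model_id: str
    model_version: str
    timestamp: str
    inputs_digest: str    # e.g. a hash of the inputs, rather than raw personal data
    prediction: str
    human_reviewed: bool  # helps evidence the human-oversight requirement

def log_prediction(record: PredictionRecord) -> None:
    """Append the record to the monitoring log as structured JSON."""
    logger.info(json.dumps(asdict(record)))

log_prediction(PredictionRecord(
    model_id="credit-scoring-model",
    model_version="1.4.2",
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs_digest="sha256:0f3a...",  # placeholder digest
    prediction="application declined",
    human_reviewed=True,
))
```

Hashing inputs rather than storing them raw is one way a provider might reconcile record keeping with data protection obligations; the right retention and access controls will depend on the system and sector.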

Readying for regulation

The publication of this proposal marks a new generation of Responsible AI, one that we expect will be driven by regulation. With countries like Australia, Canada, Singapore and New Zealand taking their first steps towards similar trustworthy, human-rights-led approaches to AI governance, we may see some of the EU requirements replicated globally, as was the case with GDPR. In the US, a recent blog post from the Federal Trade Commission suggests that it may take more aggressive action to protect consumers from biased algorithms.

The EU’s proposal could mark the start of a new chapter in the story of AI, with the focus on governance, risk management, data quality and accountability. Providers looking to build momentum ahead of the regulation being finalised can start by:

  1. Assessing their existing governance, organisational and operational structures, and
  2. Performing gap analyses and impact assessments to determine if they are managing the associated risks across the full AI lifecycle.

In our experience, this varies significantly by sector: for example, financial services organisations already have well-developed risk controls, whereas some other sectors have hitherto had less need for comprehensive risk management. This phase will likely require that businesses invest resources in managing this change, for example by:

  • Forming an AI ethics committee or council,
  • Creating a cross-functional governance team and operating model,
  • Building frameworks for data ethics and risk management, and
  • Establishing new roles and responsibilities within the enterprise.

With this foundation in place, companies can move up the maturity curve to a fully operationalised end-to-end approach that meets the necessary accountability, transparency and traceability requirements for Responsible AI compliance.  

The starting gun has sounded, and the age of Responsible AI can now begin in earnest.

Author:

Ray Eitel-Porter, Global Lead for Responsible AI, Accenture

About Accenture

Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Interactive, Technology and Operations services — all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 569,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com. 

This content is provided for general information purposes and is not intended to be used in place of consultation with our professional advisors.

Copyright © Accenture 2021. All Rights Reserved. Accenture and its logo are registered trademarks of Accenture.

Katherine Holden

Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC), Katherine led AMRC’s policy work on patient data, consent and opt-out.

Katherine has a BSc degree in Biology from the University of Nottingham.

Email: [email protected]
Phone: 020 7331 2019
