Guest Blog: Robert Bond, Partner, Bristows LLP - AI, ethics and trust

As part of techUK's AI Week, Robert Bond, Partner at Bristows LLP, a techUK member, has provided a blog on 'AI, ethics and trust'.

The increasing reliance upon algorithms and artificial intelligence (AI) to produce outcomes and profiles in big data projects (such as humanitarian actions, healthcare plans, financing decisions, connected autonomous vehicle infrastructures, and marketing and advertising generally) raises ethical and trust questions about the risks associated with the absence of emotional, human intervention.

A recent report by the Alan Turing Institute in London and the University of Oxford has suggested that there is a need for the creation of an AI watchdog to act as an independent third party that can intervene where automated decisions create discrimination. The report indicates that where there is no human intervention in an outcome based on algorithmic automated decisions, then the results may be flawed or discriminatory because the data samples are too small or based upon incorrect or incomplete assumptions or statistics.

Leaving aside the question of whether AI needs the watchdog that the report calls for, there is the further question of whether individuals have the right to know how the algorithms that may affect their data protection, human and consumer rights actually work. There are such rights under the current Data Protection Act 1998: Section 12 gives individuals the right to understand the methodology applied to automated decisions about matters such as performance at work, creditworthiness, reliability or conduct. However, this right has seldom been exercised, and historically there has always been human intervention in profiling activities. Now, advances in profiling technology mean that AI functions more and more without human intervention. The report, and guidance from the Information Commissioner's Office, reinforce the need for individuals to have enforceable rights and for data controllers to comply with those rights.

The EU General Data Protection Regulation (GDPR) deals specifically with automated decision making in Article 22, although the right it confers is limited: an individual can object only where the algorithm or AI produces a legal or similarly significant outcome that adversely affects them. There is no right to object where the profiling is necessary for entering into a contract or where the individual has expressly consented to the automated decision making. GDPR does, however, place strict obligations on businesses that use AI to put in place security by design and privacy by default to protect the rights and privacy of individuals. Where AI and profiling use sensitive data such as biometrics, religious and philosophical beliefs, health data and criminal records, the business must, in addition to ensuring security and privacy, have obtained explicit consent to the processing.

Whilst GDPR focuses on aspects of automated decision making, it leaves controllers to take responsibility for compliance and ethics in the use of AI and profiling. Article 22 does not expand on how privacy by design or privacy impact assessments must be applied to automated decision making practices, although GDPR does generally require adherence to privacy by design, security measures and privacy impact assessments. Controllers and, in some cases, processors must therefore put in place policies and procedures in anticipation of individuals exercising their rights, not only under current law but also under GDPR from 2018.

As individuals begin to understand their enhanced data subject rights under GDPR, such as the right to object to automated decision making and the rights to erasure, rectification and information, they will also realise that they have rights to compensation for not only material but also emotional damage where their personal data is abused. We may therefore see a growth in compensation claims by aggrieved individuals who feel that AI and profiling have unfairly discriminated against them, and businesses that are not prepared to respond to such claims may find themselves not only embarrassed in court but also subject to further investigation by the relevant data protection authority.


