Artificial Intelligence and the problem with privacy

Technology is changing our day-to-day lives, and we should embrace this development. When it comes to AI, the issue is not innovation or the pace of technological improvement. The real problem is its governance: the ethics underpinning it, the boundaries we set for it and, within that, who is responsible for defining solutions to these problems.

Now, when it comes to privacy, can we really teach AI technology to embrace data protection principles? We have seen over the past few weeks, with Facebook being a prime example, how privacy (or the lack of it) is a major issue that we must address. With the General Data Protection Regulation (GDPR) coming into force in May, data breaches will become increasingly expensive for organisations, and AI needs to be able to adhere to the regulation. Is this possible?

Looking at some of the core principles that sit at the heart of the GDPR, there is clearly scope for bringing AI into line with data protection.

Let’s take the right to fairness as an example. As defined by the GDPR, this requires all processing of personal information to be conducted with respect for the data subject’s interests, and that the data be used in accordance with what he or she might reasonably expect. The principle also requires the data controller to implement measures to prevent arbitrary discriminatory treatment of individuals, and not to emphasise information that would lead to such treatment. If enforced, the GDPR could therefore lead to a review of the documentation underpinning the methods an AI system employs to select data, an examination of how the algorithm was developed, and an assessment of whether it was properly tested before it came into use. This is particularly important because AI is trained on data produced by humans, and humans (to varying degrees) carry a natural bias, which AI amplifies, an issue recently explained in the Guardian.
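
To make such a fairness review concrete, here is a minimal sketch of the kind of check an auditor might run, assuming a hypothetical sample of automated decisions labelled by group. The GDPR prescribes no specific metric; the data, group names and interpretation below are invented purely for illustration.

```python
# Minimal sketch of a demographic-parity check on automated decisions.
# Everything here is hypothetical; the GDPR mandates no particular test.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favourable) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

# Invented audit sample: 80% favourable outcomes for group_a, 50% for group_b.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 50 + [("group_b", False)] * 50)

rates = selection_rates(sample)
parity_ratio = min(rates.values()) / max(rates.values())
print(rates)                                 # {'group_a': 0.8, 'group_b': 0.5}
print(f"parity ratio: {parity_ratio:.2f}")   # 0.62 here; a low ratio flags possible bias
```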

Another key principle is data minimisation, which would force developers to consider how to enable AI to achieve a set objective in the way that is least invasive for the data subjects. This goes alongside the principle of purpose limitation, which is intended to ensure that the data subject retains control over his or her own personal information.
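
In engineering terms, data minimisation often begins in the ingestion pipeline. The sketch below is illustrative only, with invented field names: it keeps just the attributes the stated purpose requires and replaces the direct identifier with a salted pseudonym before a record ever reaches a model.

```python
# Hypothetical minimisation step: keep only purpose-relevant fields and
# pseudonymise the direct identifier. Note that pseudonymised data is
# still personal data under the GDPR, but the record is far less invasive.
import hashlib

REQUIRED_FIELDS = {"age_band", "postcode_district"}  # purpose-specific allow-list

def minimise(record: dict, salt: bytes) -> dict:
    pseudonym = hashlib.sha256(salt + record["email"].encode()).hexdigest()[:16]
    return {"id": pseudonym,
            **{k: v for k, v in record.items() if k in REQUIRED_FIELDS}}

raw = {"email": "jane@example.com", "full_name": "Jane Doe",
       "age_band": "30-39", "postcode_district": "EC2",
       "browsing_history": ["..."]}

print(minimise(raw, salt=b"rotate-this-salt"))
# Only the pseudonymous id, age_band and postcode_district survive.
```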

The transparency-in-processing requirement stipulated in the GDPR may prove trickier to adhere to, as advanced AI systems are often too complex to understand and explain. Similarly, black box learning* makes it practically impossible to explain how information is correlated and weighted in a specific process. Furthermore, commercially sensitive information may also be involved, which makes it harder to inform the data subject. However, enforcing the GDPR means organisations must adopt a pragmatic approach so that machines can meet this transparency principle. To that end, the legislation is very clear and potentially very effective, especially in relation to automated decision making.
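
One pragmatic route to transparency is a model-agnostic probe: query the black box, perturb one input at a time, and report how much the output moves for the decision in question. The sketch below is only an illustration of that idea, with an invented scoring function standing in for a deployed model; it is not a method the GDPR itself prescribes.

```python
# Invented stand-in for a deployed model: we can query it, not inspect it.
def black_box_score(applicant: dict) -> float:
    return (0.5 * applicant["income"] / 50_000
            + 0.3 * applicant["tenure_years"] / 10
            - 0.2 * applicant["missed_payments"] / 5)

def sensitivity(applicant: dict, feature: str, delta: float = 0.1) -> float:
    """Change in score when one feature is nudged up by 10%."""
    perturbed = dict(applicant, **{feature: applicant[feature] * (1 + delta)})
    return black_box_score(perturbed) - black_box_score(applicant)

applicant = {"income": 40_000, "tenure_years": 4, "missed_payments": 1}
for feature in applicant:
    print(f"{feature}: {sensitivity(applicant, feature):+.4f}")
# Signed sensitivities give a crude, human-readable account of which
# inputs pushed this particular decision up or down.
```

Probes of this kind only approximate the model’s behaviour around one decision, but they offer the data subject something meaningful even when the underlying system cannot be opened up.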

It was disappointing to see that the right to an explanation** did not make it into the GDPR itself. It is mentioned only in the recitals, which are not binding and cannot of themselves grant a right to an explanation. Irrespective of that, however, the legislation does seem to suggest that the data controller must provide as much information as possible. The debate remains open, and court cases will determine the extent of this right. Public pressure will be crucial in shaping some of these decisions.

What is good to see is that practical steps, focused on a privacy by design approach, can be implemented to ensure that AI meets the GDPR and respects the right to privacy. Although the legislation does not go as far as it could, it is the first step we need on the road to defining the principles governing the machines that, some say, are governing us.

 

*When rules are applied, an AI system performs a great deal of complex mathematics. This mathematics often cannot be followed by humans, yet the system outputs useful information. When this happens, it is called black box learning: we do not really care how the computer arrived at the decision it has made, because we know what rules it was given to get there.

**The right to an explanation refers to the right to know the algorithm underpinning a decision. It did not make it into the GDPR in its original form.

 

To read more from techUK AI Week, visit our landing page.

Sue Daley
Associate Director | Technology & Innovation
T 020 7331 2055