Artificial Intelligence and the problem with privacy

Technology is changing our day-to-day lives, and we should embrace this development. When it comes to AI, the issue is not innovation or the pace of technological improvement. The real problem is its governance: the ethics underpinning it, the boundaries we set for it and, within that, who is responsible for defining solutions to these problems.

Now, when it comes to privacy: can we really teach AI technology to embrace data protection principles? We have seen over the past few weeks, with Facebook as a prime example, how privacy (or the lack of it) is a major issue that we must address. With the General Data Protection Regulation coming into force in May, data breaches will become increasingly expensive for organisations, and AI will need to adhere to the GDPR. Is this possible?

Looking at some of the core principles that sit at the heart of the GDPR, there is clearly scope for bringing AI into compliance.

Let’s take the right to fairness as an example. As defined by the GDPR, this requires all processing of personal information to be conducted with respect for the data subject’s interests, and the data to be used in accordance with what he or she might reasonably expect. The principle also requires the data controller to implement measures to prevent arbitrary discriminatory treatment of individuals, and not to emphasise information that would lead to such treatment. If enforced, the GDPR could lead to a review of the documentation underpinning the methods an AI system employs to select data, an examination of how the algorithm was developed, and a check on whether it was properly tested before coming into use. This is particularly important because AI is trained on data supplied by humans, and humans carry (to varying degrees) a natural bias that AI can amplify, an issue recently explained in the Guardian.
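How a model can amplify human bias can be sketched in a few lines. The data and decision rule below are illustrative assumptions, not from the article or any real system: a naive model trained on historically skewed decisions turns a statistical imbalance into a categorical rule.

```python
# Toy illustration (assumed data): historical loan decisions in which
# group "a" was approved far more often than group "b" for otherwise
# identical applicants.
history = ([("a", True)] * 80 + [("a", False)] * 20 +
           [("b", True)] * 30 + [("b", False)] * 70)

def train(records):
    """Learn the approval rate per group -- a stand-in for a classifier
    that picks up group membership as a predictive feature."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [ok for g, ok in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def decide(model, group, threshold=0.5):
    """Approve whenever the learned group approval rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
# Group "a" (80% historical approvals) always clears the threshold;
# group "b" (30%) is now rejected outright -- the historical skew has
# hardened into a blanket rule, which is amplification of the bias.
```

Reviewing documentation of how such a model selects and weighs data, as the fairness principle would require, is precisely what could surface this kind of hardened bias before deployment.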

Another key principle is data minimisation, which would force developers to consider how AI can achieve a given objective in the way that is least invasive for the data subjects. This goes hand in hand with the principle of purpose limitation, which is intended to ensure that the data subject retains control over his or her own personal information.
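In code, data minimisation can be as simple as a filtering step applied before any record reaches a model. The field names below are illustrative assumptions, not drawn from the GDPR text:

```python
# Hedged sketch: keep only the fields the stated purpose actually
# requires, discarding everything else before processing begins.
REQUIRED_FIELDS = {"age_band", "postcode_area"}  # assumed minimum for the task

def minimise(record):
    """Drop every field not needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

applicant = {
    "name": "J. Smith",
    "full_address": "1 High Street, London",
    "postcode_area": "SW1",
    "age_band": "30-39",
    "email": "j.smith@example.com",
}

minimal = minimise(applicant)
# Only age_band and postcode_area survive; the directly identifying
# fields never enter the AI pipeline at all.
```

Designing the pipeline so that the invasive fields are stripped at the boundary, rather than merely ignored downstream, is one concrete reading of "least invasive for the data subjects".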

The transparency-in-processing requirement stipulated in the GDPR may prove trickier to meet, as the underlying technology is often too complex to understand and explain. In particular, black box learning* makes it practically impossible to explain how information is correlated and weighted in a specific process. Commercially confidential information may also be involved, which makes it harder to inform the data subject. However, enforcing the GDPR means organisations must adopt a pragmatic approach so that machine-driven processing can meet the transparency principle. On that point the legislation is very clear and potentially very effective, especially in relation to automated decision-making.
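One pragmatic route to the transparency principle is to prefer models whose decisions decompose into per-feature contributions that can be shown to a data subject. The weights, features and threshold below are assumptions for illustration only:

```python
# Hedged sketch: a linear scoring model is a "white box" -- each feature's
# contribution to the automated decision can be reported directly.
WEIGHTS = {"income_band": 2.0, "years_at_address": 0.5, "missed_payments": -3.0}
THRESHOLD = 4.0  # assumed approval cut-off

def score_with_explanation(features):
    """Return the decision together with each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income_band": 3, "years_at_address": 4, "missed_payments": 1}
)
# "why" can be surfaced to the data subject, e.g. that missed_payments
# contributed -3.0 to the final score -- something a black box cannot offer.
```

The trade-off is real: such interpretable models may be less accurate than black box alternatives, which is exactly the tension between capability and transparency the paragraph above describes.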

It was disappointing to see that the right to an explanation** did not make it into the binding text of the GDPR itself. It is mentioned in the recitals, which are not binding and cannot of themselves grant such a right. Irrespective of that, the legislation does seem to suggest that the data controller must provide as much information as possible. The debate is open, and court cases will determine its extent. Pressure from the public will be crucial in shaping some of these decisions.

What is good to see is that practical steps, focused on a privacy-by-design approach, can be implemented to ensure that AI meets the GDPR and upholds the right to privacy. Although the legislation does not go as far as it could, it is the first step we need on the road to defining the principles governing the machines that, some say, are governing us.

 

*When an AI system applies the rules it has learned, it performs a great deal of complex mathematics. That mathematics often cannot be followed by a human, yet the system still produces useful output. This is called black box learning: we know what data and rules the system started from, and we can see the decision it reaches, but we cannot trace how it arrived there.

**The right to an explanation refers to the right to know the logic underpinning an automated decision. It did not make it into the binding text of the GDPR.

 

To read more from techUK AI Week, visit our landing page.
