07 Dec 2021
by Imogen Parker

How has public trust in data fared over the last five years?

Imogen Parker, Associate Director at the Ada Lovelace Institute, outlines what we have learned about public trust in data over the past five years.

The inauguration of the techUK Digital Ethics Summit reflected a growing interest in addressing emerging technologies’ effects on society. While the UK Government was creating the Centre for Data Ethics and Innovation (CDEI), the Nuffield Foundation – with partners including techUK, the Alan Turing Institute and the Royal Society – announced their intention to create an independent body with a mission to ensure that data and AI worked for people and society. A year later, that became the Ada Lovelace Institute.

The drive to create Ada recognised that rapid advances in data, algorithms and AI were affecting core principles and shaping our understanding of individual and societal wellbeing. The potential benefits of emerging technologies were evident, but there was a growing undercurrent of unease at the unknown implications of innovations as they accelerated across society, from consumer spaces to public services; from policing to the classroom.  

Evidence was accumulating that individuals lacked agency over how the data about them was being collected and used, and power imbalances between people and corporations had allowed extractive practices to become the norm. The Royal Society identified a ‘data trust deficit’ – where trust in institutions to use data consistently scored lower than trust in those same institutions in general.

So how have we fared in the intervening years? As data and AI are embedded through products, systems and services, these issues are increasingly significant. We’ve seen a range of ethics and governance-related interventions, including a proliferation of ethical principles, new organisations arriving and others closing, parliamentary inquiries and major government strategies that place data and AI centre stage.  

Governments have brought forward new and forthcoming legislation – with the EU’s GDPR adopted in the UK and now under review, a promised strategy on AI regulation, and the potential for the new European AI Act (AIA) to become a bigger global force than GDPR in shaping industry practice. Across different contexts we’ve seen a rise in recognition of the importance of public deliberation, which aims to empower people to understand and shape the development and use of data technologies – and to ensure public voices are heard by policymakers.

But we’ve also seen a steady drumbeat of public disquiet about how data is being used. The Cambridge Analytica scandal became an anchor point for any discussion of trust in technologies. The Government was forced to withdraw its visa algorithm following a legal challenge from Foxglove, who dubbed the approach ‘Speedy boarding for white people’. And Philip Alston, the then UN Special Rapporteur on extreme poverty and human rights, in his review of data-driven systems in social protection and assistance – undertaken in response to the rising use of these technologies to ‘automate, predict, identify, surveil, detect, target and punish’ – warned of a grave risk of ‘stumbling zombie-like into a digital welfare dystopia’.

More recently, the public protests over the awarding of A Level grades by algorithm, the firing of Timnit Gebru from Google and Frances Haugen’s testimony on Facebook offer persuasive arguments as to why the public should be wary of how data and AI are being used. No wonder around half of Britons surveyed in the 2020 Doteveryone People, Power and Technology survey felt little agency when it came to the use of their data online, and felt pessimistic about the impact of technology on their lives and on society in the future.

Rather than try to weigh these two against each other, I want to pick out a few lessons we have learned in the last five years about building trustworthiness into technologies, to support public trust.  

  1. Trustworthiness, not trust, must be the goal. While everyone should by now be acquainted with Onora O’Neill’s lecture on trust and trustworthiness, we still see a tendency in policy and industry to pitch the upsides of data to encourage public trust, rather than acknowledging the trade-offs between relative benefits, risks and harms, and actively mitigating them to ensure technologies are more trustworthy.

  2. Use should not be confused with trust. We should not be complacent that, because people feel unable to opt out of systems (what has been described as digital resignation), they are comfortable with those systems. When Apple introduced App Tracking Transparency, asking users’ permission to track them across multiple apps for data collection and ad targeting, 96% opted out – puncturing the narrative that people saw benefit in invasive tracking.

  3. Use can change. The evidence that 3 million people have opted out of health data sharing as part of GPDPR should offer a cautionary tale about growing fears of a loss of control over data, even in sectors that are seen as trusted and societally beneficial. We should not be complacent that people will continue to accept their data being as available as it has been over the last two decades.

  4. There is not a broad social licence to use data. In our research, including public deliberations on the use of biometric technologies and vaccine passports, we have consistently found that people want strong governance (through regulation and legislation), specified and delimited purposes, transparency and accountability, and rights and routes to redress over their data, even in emergencies.

  5. Power needs to be part of the conversation. There’s a tendency to think about data issues through the lens of technical, legal or rights frameworks. While these are vital, we need greater attention paid to the societal and political issues arising from data use, how they affect existing power structures and what approaches we need to rebalance or mediate power.

The public is central to the success or failure of any data-driven innovation, because, for the most part, data is both generated by and about people and their activities.  

Five years ago, concerns were raised that public backlash could trigger a data winter, undermining the real opportunities that data can facilitate for societal benefit and value-generating innovation. In conclusion, I propose that we need to be as attentive now as we were then to public trust, and to ensure that we have the governance structures in place to enable responsible, stable, sustainable data practices that support trustworthiness. This will move us closer to the data futures we all want – those that work for people and society.

Authors

Imogen Parker

Associate Director, The Ada Lovelace Institute

Imogen is Associate Director at the Ada Lovelace Institute, where she is responsible for creating social change through developments to policy, law, regulation and public service delivery. She is a Policy Fellow at Cambridge University’s Centre for Science and Policy.

Her career has been at the intersection of social justice, technology and research. In her previous role as Head of the Nuffield Foundation’s programmes on Justice, Rights and Digital Society she worked in collaboration with the founding partner organisations to create the Institute. Prior to that she was acting Head of Policy Research for Citizens and Democracy at Citizens Advice, Research Fellow at the Institute for Public Policy Research (IPPR) and worked with Baroness Kidron to create the children’s digital rights charity 5Rights.

Twitter:
@ImogenParker
