On Wednesday 11 December, techUK held its third annual Digital Ethics Summit to assess the progress that’s been made over the last 12 months to operationalise ethical principles and turn them into real action that delivers genuine benefits to people’s everyday lives.
This event was organised by techUK in partnership with the Leverhulme Centre for the Future of Intelligence, Digital Catapult, Open Data Institute, Royal Society, University of Oxford’s Digital Ethics Lab, Wellcome, the Royal Statistical Society, British Academy, Ada Lovelace Institute, The Alan Turing Institute and The Institute for Ethical AI in Education. It was sponsored by Microsoft, Intel, Visa, Splunk and QuantumBlack.
Kicking off the Summit, Antony Walker, Deputy CEO at techUK, highlighted that the day was all about asking whether the focus on digital ethics is giving us the tools we need to ensure safe and responsible innovation in powerful new technologies.
In her opening address, reflecting on the year since the last Summit, Elizabeth Denham, Information Commissioner at the ICO, highlighted the progress that is being made but stressed that there are some tough questions ahead. She raised three key questions to frame the day’s discussion: How are we as a digital community monitoring whether a digital underclass is forming? Could data ethics provide a common language for greater global convergence? And finally, are there enough voices in the data ethics conversation?
Phil David Harvey, Senior Cloud Solution Architect for Data & AI in One Commercial Partner at Microsoft UK, delivered a powerful opening keynote on humanising data and the steps Microsoft has taken to operationalise its AI principles. He explained the need to think about the next generation of skills required for ethical AI, including the incorporation of ethics, sociology and philosophy into AI education, and highlighted human skills as the most important skills in data.
The opening plenary panel, ‘Digital Ethics - Solution or Sideshow?’, focused on whether digital ethics is providing answers to some of the biggest societal challenges raised by technology. John Thornhill, Innovation Editor at the Financial Times, spoke in his introductory provocation of the recent explosion of different AI principles, raising the question: when we talk about digital ethics, whose ethics do we mean and whose values are we going to upload?
Lord Tim Clement-Jones highlighted that we’re making progress with new institutions focusing on digital ethics but that compliance mechanisms and governance need more scrutiny. Heather Patterson, Senior Research Scientist at Intel, observed a shift in her own organisation from determining ethical principles to building them into technology by inserting checkpoints throughout the product development lifecycle. Simon McDougall, Executive Director at the ICO, mentioned that there are plenty of good actors investing money and resource in getting digital ethics right but recognised that there are many smaller organisations that are less sophisticated and performing less well in this area. Sue Daley, Associate Director at techUK, highlighted that SMEs want to get this right and need guidance that is practical and operable.
We then moved into the track sessions, focusing on key ethical challenges, the progress that’s been made and where current gaps exist. Full recordings of each session can be found here.
Track 1 - Fairness and Transparency
To begin this session, chaired by Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI), Carly Kind, Director at the Ada Lovelace Institute, initiated the discussion, arguing that we’re construing fairness too narrowly, seeking tech solutions to tech problems when in fact we need socio-economic, political and systemic solutions. Luke Woollen, Head of Partnerships at MeVitae, advocated for enhancing public engagement mechanisms and community involvement in the development of ethical solutions and frameworks. In addition to capturing public perspectives, Chris Todd, Chief Superintendent, West Midlands Police, advocated the creation of independent data ethics committees that can be built into existing operations and processes. Lee Glazier, Head of Service Integrity, R² Data Labs, Rolls-Royce, spoke about the importance of baking inclusive fairness and transparency considerations into decision-making processes early on, whilst Orlando Machado, Chief Data Scientist at Aviva, spoke about Aviva’s decision to publish a data charter to highlight how customer data is used to improve products and services in the insurance sector.
Track 2 - Inclusivity and Diversity
For this track, chaired by Trish Shaw, CEO of Beyond Reach, Jessica Lennard, Director of External Affairs at Visa, highlighted in her opening provocation that “the future is already here, it’s just not evenly distributed”. According to Krissie Barrick, Head of Digital Influencing at Scope, 98% of websites do not meet a global minimum accessibility standard; proactively hiring more disabled people and ensuring the workforce is diverse is the first step to improving this. Katie Alpin, Head of Research and Policy, Money and Mental Health, argued that more needs to be done to consider people’s mental health when building technology. Mike Clancy, General Secretary and Chief Executive, Prospect Union, highlighted the importance of culture and of involving workers in the conversation on digital ethics, as well as the risk of not using new and emerging technologies. He called on people to be brave and not “shy away” from the big questions but to work through them.
Track 3 - Ethics and the Tech for Good Debate
Following an opening provocation from James Hodge, Chief Technical Advisor at Splunk, the chair, Jonnie Penn from the University of Cambridge, introduced the other panellists: Maria Axente, AI for Good Lead at PwC; Ian Caveney, Head of Tech for Good at BT Group; Sherif Elsayed-Ali, Director of AI for Good at Element AI; and Sutin Yang, Portfolio Manager at the Social Tech Trust. The first question raised in this session was the need to define what is meant by tech for good, and indeed what ‘good’ means. Definitions of good and morality differ globally according to a variety of socio-cultural practices, and with the rise of AI technologies in particular, the tech industry and policy teams need to work towards establishing a universal definition to mitigate the risk of bias and discrimination. One way of ascertaining universality is through the inclusion of users in the development of AI technology, as user engagement allows companies to check whether their technology is correctly targeting the problems they are trying to solve.
Track 4 - Data Privacy and Governance
Track 5 - Public Engagement and Democracy
This session, chaired by Natalie Banner of the Wellcome Trust, discussed the relationship between public engagement, democratic processes and the development of ethical foresight in the UK. Kicking off the panel, Steve Ginnis, Research Director at Ipsos MORI, suggested that the UK should move beyond heterogeneous sampling towards more digitally enabled, targeted research methods, which could enable decision-makers to make more accurate assumptions about public attitudes towards future interventions. Simon Burall, Senior Associate at Involve, suggested the need to reframe the public debate to focus on outcomes, rather than broader policy objectives. Helen Margetts, Professor of Society and the Internet at the University of Oxford and Director of the Public Policy Programme at The Alan Turing Institute, argued that policy makers must be aware that some individuals are more capable than others at assessing the ethicality of advanced digital systems. Without acknowledgement of the underlying reasons for this inequity, the legitimacy of public engagement methods may be undermined. Elena Sinel, Founder at Teens in AI, argued that the key to making the digital ethics debate democratic and inclusive lies in demystifying complex issues and communicating insights to diverse audiences. Over time, this will support the co-development of ethical solutions that serve the interests of future generations.
Track 6 - Accountability and Explainability
The session kicked off with a provocation from Richard Tomsett, Emerging Technology Specialist at IBM Research, who explored the difference between explainability and interpretability, and the assumption of a trade-off between explainability and accuracy. Following this opening, the panel chair, Hetan Shah, Executive Director of the Royal Statistical Society, introduced Julie Dawson, Director of Regulatory and Policy, Yoti; Ben Gilburt, AI Ethics Lead, Sopra Steria; and Helena Quinn, Senior Policy Officer, Information Commissioner’s Office (ICO).
The panel discussed how accountability requires more than just the availability of data: it also requires involvement in decision-making to help create “public legitimacy”. It asked whether the accountability and explainability debate is missing participation, and how to encourage more voices in this important debate. The panel highlighted the ICO’s work on Project ExplAIn and the draft guidance now out for consultation. Discussion also touched on whether explainability is in fact a distraction from talking about accountability in the broader sense. A question raised to the panel was what individuals could do to get this right and what the tech sector may need from users. The panel stressed the importance of communication, the need to bring people together, and clarity in the terminology being used.
Following the track sessions, the audience reconvened in the main room to see the results of our ‘optimism barometer’. A rapporteur from each of the sessions reported back to the conference on the level of confidence they’d felt in their session that we were moving along the right trajectory.
Afternoon fireside chat
Next, in a short fireside chat with Antony Walker, Stephanie Hare highlighted the challenges that innovation in data-driven digital technologies is posing to people and society globally. She stressed the need to ensure the UK remains a centre for ethical technology development and deployment. She also highlighted the essential role of education, particularly of engineers, in ensuring that future generations are aware of the ethical issues and challenges they will be resolving.
Plenary panel - What’s coming next?
In the final plenary panel, chaired by techUK’s Sue Daley, the panellists agreed that more voices are needed in this debate and discussion. Kate Devlin from King’s College London highlighted that technology is not neutral: once decisions are made about it, judgement comes into play. Mike Philipps of Microsoft explained that we face a socio-technical challenge: to programme fairness into AI systems, more practical testing and application needs to happen, and checklists are an important tool for addressing fairness in systems, giving people the ability to check the needed precautions carefully. Jeni Mundy, Regional Managing Director, UK & Ireland, Visa, suggested we need to make sure we are all using the same words in the same ways to universalise actions and avoid contradictions. She argued that we need to decide what kind of world we want to live in, excluding and controlling for previous biases and prejudices, and that legal and policy teams should be included in formulating these terms. The subject of digital ethics affects all of us, so we need to move it into action. Finally, Catherine Miller, new Chief Executive at Doteveryone, argued that we need to move the conversation forwards: instead of framing the questions, we need to take steps to address them, whilst Tom Meakin, COO Europe at QuantumBlack, agreed that we are moving in the right direction but not fast enough.
Bringing the Summit to a close, Antony Walker thanked all the speakers, partners and sponsors for making this year’s Summit a great day. Reflecting on the day as a whole, he summarised that while the UK is doing some very good things at a micro level, we are a long way from where we need to be at a macro level. Now is the time to be sceptical, but not cynical. He called on everyone at the Summit to remain focused in 2020 on what can be achieved, and stressed techUK’s ongoing commitment to working with the businesses driving innovation and with civil society to work through the complicated and nuanced issues that lie ahead and, together, find the answers and solutions to getting this right.