Digital Ethics Summit 2020 Day One - Lessons to be learnt from 2020
On Wednesday 9 December we kicked off techUK’s fourth annual Digital Ethics Summit. The overarching theme for day one of the Summit was lessons to be learnt from 2020. Through a series of keynote speeches, breakout sessions and plenary panels, our speakers from across industry, government, academia and civil society assessed the role and effectiveness of the digital ethics debate during the critical events that occurred this year.
To kick-start the day we were joined by the Minister of State for Media and Data, John Whittingdale. In his keynote speech the Minister set out the Government’s ambition for the upcoming National Data Strategy and highlighted work already underway, starting with the publication of the Government Data Quality Framework. The Minister stressed that public trust underpins this strategy and highlighted the need to bring the public along in these important conversations. He also recognised the importance of digital skills, pointing to the government’s £13 million investment in a bid to boost them. The Minister finished by reaffirming the importance of collaboration between the public and private sectors, as well as the need for the UK to remain active in the global debate on digital ethics.
In her keynote address the Information Commissioner, Elizabeth Denham, reflected on the year, highlighting the accelerated adoption of many digital technologies and suggesting we’d seen greater collaboration on privacy issues this year than ever before. The Commissioner provided three takeaways to consider as we enter the new year: first, that data ethics will continue to be the lexicon for these shared discussions; second, that there is a real need to focus on the people affected by data decisions; and third, that the role of data ethics boards deserves consideration. The golden thread running through all three is trust. When asked where she’d like to see more focus in 2021, the Commissioner said it comes down to doing more work for children, including in areas of algorithmic decision-making.
Plenary session - Lessons to be learnt from 2020
In the Summit’s first plenary panel session, titled ‘Lessons learnt in 2020’, speakers included Maria Axente, Responsible AI and AI for Good Lead at PwC; Edwina Dunn OBE, Board Member at the Centre for Data Ethics and Innovation; Stephen Metcalfe MP, Co-Chair of the APPG on AI; and Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. Building on the Commissioner’s keynote, Maria Axente added that we’re building on a strong foundation when it comes to businesses establishing ethics boards; it’s now key that businesses connect their various internal initiatives to see progress. Stephen Metcalfe acknowledged the impact of the current digital divide in society and suggested the need for a fresh look at access to digital equipment. Ensuring we’ve got the right digital infrastructure in place, including access to broadband, will be key in 2021. Kay Firth-Butterfield agreed that we need to grow the infrastructure that supports wider adoption of AI. Drawing on her own experiences, she highlighted that countries are starting to develop a better understanding of the work that needs to be done on digital ethics, with many learning from others’ experience. Finally, Edwina Dunn OBE highlighted the work of the CDEI this year and stressed the importance of balancing innovation against what we are allowed to do. The more data we create, the more we can do, she said, but does that mean we should?
Following the plenary session, the Summit broke out into three breakout sessions focusing on three key issues:
Fairness and equality - Data and technology and the fight against systemic discrimination
This breakout session was chaired by Jessica Lennard, Senior Director, Global Data and AI Initiatives, Visa. Panellists included: Ashleigh Ainsley, Co-founder, Colorintech, Brhmie Balaram, Head of AI Research & Ethics, NHSX AI Lab, Renée Cummings, Data Activist in Residence, The School of Data Science, University of Virginia & Community Scholar Columbia University, Dr Kanta Dihal, Senior Research Fellow & Principal Investigator, Global AI Narratives, Leverhulme Centre for the Future of Intelligence, University of Cambridge and Allyn L. Shaw, President & CTO, Recycle Track Systems and Founding Partner, Deed.Partners.
Whilst discussing some of the greatest challenges we’ve faced this year, Allyn L. Shaw highlighted that current training data sets still fail to represent underrepresented communities, and, in addition, many of these communities lack access to technology, causing a cascading effect when it comes to fairness and discrimination.
On the topic of defining fairness, Ashleigh Ainsley preferred to approach it from an equality perspective. He asked: who are the people responsible for creating algorithms and making decisions? What opportunities are there to have equitable input into those systems and to challenge the decisions made? He stated that people should have the opportunity to get involved and exert influence from the very beginning of the process.
The public has become more conscious of the impact of AI this year, and Dr Kanta Dihal suggested that in 2020 naïve techno-optimism became inexcusable. Brhmie Balaram commented that when we think of fairness, we must ask: fairness for whom? Fairness will look different for different people. She also added that it’s not just the tools but also the culture: what’s happening around the way rules are implemented and the way they can be challenged? Renée Cummings highlighted the need to consider due diligence, duty of care and due process when it comes to the application of data in any kind of system.
Reflections on data privacy and governance in 2020
This breakout session began with a short fireside chat between Jen Rodvold, Head of Digital Ethics & Tech for Good at Sopra Steria, and Ben Jones, Head of Digital, and Jonathan Milbourn, Director of Customer Services & Modernisation, both from Harrow Council. During this session Harrow Council discussed some of the ethical questions they had to consider as part of making their website more personalised for residents, and their experience of embedding a digital ethics framework within their organisation.
Following the fireside chat, Sue Daley, Associate Director, Technology and Innovation at techUK, chaired a panel to reflect further on data privacy and governance in 2020. Panellists included: Ellis Parry, Data Ethics Advisor, Technology and Innovation from the ICO, Charles Radclyffe, Partner, Ethics Grade, Jen Rodvold, Head of Digital Ethics & Tech for Good, Sopra Steria, Chris Todd, Chief Superintendent, West Midlands Police & Board Member, Ada Lovelace Institute and Adrian Weller, Programme Director for AI, The Alan Turing Institute.
From this breakout discussion it was clear that industry recognises data governance as an important part of its investor relations. What was once a corporate social responsibility mechanism matured into a GDPR compliance issue and is now a more strategic tool. Organisations are learning that their own response needs to mature accordingly.
The critical events of this year have highlighted the importance of accountability, and that it should always be people who are held responsible when things go wrong. Transparency is also vital: wherever possible, discussions must be had with all affected stakeholders, including conversations about their hopes and fears, in order to put anticipatory governance in place.
Data sharing and data quality - Getting this right going forward
This session started with a fireside chat between Simon Persoff, Partner at Clifford Chance, and Chris James, Head of Digital Legal, UK and Europe, at HSBC, about data sharing frameworks, the pandemic and the role of the UK National Data Strategy.
This was followed by a panel session chaired by Jeni Tennison, Vice President and Chief Strategy Adviser, Open Data Institute. The panel included: Rachel Coldicutt, Director, Careful Industries, Stephen Docherty, Industry Executive, Health, Microsoft, Arnav Joshi, Senior Associate, Clifford Chance and Sam Roberts, Head of Open Data & Open Government, Cabinet Office.
This breakout discussed government’s role in encouraging data sharing and making data sets available. Stephen Docherty highlighted the timely publication of the draft National Data Strategy and the importance of data standards and interoperability in the health sector. Sam Roberts focused on the need for a clear purpose and outcome, such as improving public services, as stated in the Digital Economy Act 2017. Rachel Coldicutt argued that there has to be a vision for the kind of world we want to live in, not just a list of data sets that can be joined up and used. She would like to see engagement with communities likely to be underrepresented in data sets addressed as a matter of urgency.
The panel also discussed how we can encourage people to share their data. Rachel Coldicutt highlighted the need for greater trust and transparency: our data is used, but we get no feedback and it’s not clear how it has been used. There’s a need here for purposeful engagement. Arnav Joshi suggested that where businesses collect and aggregate huge amounts of data, they have a responsibility to share it where it’s safe and ethical to do so, not just for their own benefit but for the benefit of the people the data is about and of wider society.
Headline sponsor keynote - Natasha Crampton, Chief Responsible AI Officer, Microsoft
In the afternoon keynote Natasha Crampton, who leads Microsoft’s Office of Responsible AI as the company’s first Chief Responsible AI Officer, highlighted some of the work Microsoft has done recently to operationalise and embed its ethical principles. The company has created AI champions: multidisciplinary engineers, data scientists, designers, users and researchers united by the desire to respond to the pandemic in a way that upholds Microsoft’s principles and ethics. It has also introduced mandatory AI ethics training and Envision AI, an interactive workshop presenting real-life scenarios and giving participants the tools to apply their learning to an impact assessment. Natasha also spoke about assessing the safeguards that allow Microsoft to maximise the opportunities presented by AI assistants. In future the company will focus on three things: scaling its efforts to develop AI responsibly, building a focus on responsible AI, and sharing what it learns with the community.
Democracy in the Age of AI Panel
The final plenary of the day, on ‘democracy in the age of AI’, was chaired by Antony Walker, Deputy CEO at techUK. Panellists included: Andy Parsons, Director for CAI, Adobe, Nina Schick, Author and Broadcaster, Damian Collins, MP for Folkestone & Hythe, Anna-Sophie Harling, Managing Director for Europe, NewsGuard and Professor Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton and Chair, Ada Lovelace Institute.
The panel discussed the impact of deepfakes and mis/disinformation on our society and democracy. Nina Schick began by highlighting how the internet ecosystem has rapidly transformed politics and society over the past 30 years and that we’re now facing a monumental crisis of bad information. She went on to explain the difference between disinformation and misinformation, the former being the deliberate dissemination of misleading information.
Damian Collins argued that the question is not just about regulating content itself, but also about how content is amplified by the decisions people or organisations make. From a regulatory perspective, transparency is key: social media platforms should have a duty to disclose how their algorithms are made, and by whom. One panellist mentioned the need for better audit trails, as the root of the issue is often not knowing where the information has come from.
Dame Wendy Hall argued that a better internet ecosystem is not solely about reforming industry, it also comes down to moral responsibility and individuals. Anna-Sophie Harling highlighted that one way to fight the spread of disinformation and misinformation is to empower users by providing them with the skills and tools to accurately monitor online content and understand who is behind said content, which is what they’re doing at NewsGuard.
Although many of the AI tools mentioned can be used in inauthentic ways, Andy Parsons from Adobe highlighted that they can also be used to create remarkable creative works in photography, video and beyond, and that enabling people to use these AI tools can help them create faster and more effectively. We should therefore empower creatives to use this technology responsibly, which involves being transparent and showing your work.
In rounding off the session, panellists were optimistic about 2021, suggesting that the heavy lifting on these issues has already begun. However, the panel stressed that conceptualising and understanding these issues as they evolve will take a real joint effort. Everyone has a role to play, with different stakeholder groups taking on different roles and responsibilities.
Click here for our round-up of day two at techUK’s virtual Digital Ethics Summit 2020.